Show simple item record

dc.contributor.author	Djenouri, Youcef
dc.contributor.author	Hatleskog, Johan
dc.contributor.author	Hjelmervik, Jon M.
dc.contributor.author	Bjorne, Elias
dc.contributor.author	Utstumo, Trygve
dc.contributor.author	Mobarhan, Milad
dc.date.accessioned	2022-08-26T12:21:09Z
dc.date.available	2022-08-26T12:21:09Z
dc.date.created	2021-12-02T10:26:53Z
dc.date.issued	2021
dc.identifier.citation	Applied intelligence (Boston). 2021, 52, 8101-8117.	en_US
dc.identifier.issn	0924-669X
dc.identifier.uri	https://hdl.handle.net/11250/3013801
dc.description.abstract	In the heavy-asset industry, such as oil & gas, offshore personnel need to locate various equipment on the installation on a daily basis for inspection and maintenance purposes. However, locating equipment in such GPS-denied environments is very time-consuming due to the complexity of the environment and the large amount of equipment. To address this challenge, we investigate an alternative approach to the navigation problem based on visual imagery data instead of current ad-hoc methods in which engineering drawings or large CAD models are used to find equipment. In particular, this paper investigates the combination of deep learning and decomposition for the image retrieval problem, which is central to visual navigation. A convolutional neural network is first used to extract relevant features from the image database. The database is then decomposed into clusters of visually similar images, where several algorithms have been explored to make the clusters as independent as possible. The Bag-of-Words (BoW) approach is then applied to each cluster to build a vocabulary forest. During the search process, the vocabulary forest is exploited to find the images most relevant to the query image. To validate the usefulness of the proposed framework, extensive experiments have been carried out using both standard datasets and images from industrial environments. We show that the suggested approach outperforms BoW-based image retrieval solutions in terms of both computing time and accuracy. We also demonstrate the applicability of this approach in real industrial scenarios by applying the model to imagery data from offshore oil platforms.	en_US
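The retrieval pipeline outlined in the abstract (CNN features, clustering-based decomposition, a per-cluster retrieval structure, cluster-routed search) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: random 8-D vectors stand in for CNN descriptors, plain k-means stands in for the decomposition algorithms explored in the paper, and raw nearest-neighbour ranking stands in for the BoW vocabulary forest.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    # Plain k-means, used here as a stand-in for the paper's decomposition step
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    # Recompute labels so they are consistent with the final centroids
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return centroids, dists.argmin(axis=1)

# Stand-ins for CNN descriptors of 100 database images (8-D for brevity;
# real descriptors would come from a trained convolutional network)
features = rng.normal(size=(100, 8))

# 1) Decompose the feature database into clusters of visually similar images
centroids, labels = kmeans(features, k=4)

# 2) One retrieval structure per cluster (a real system would build a
#    BoW vocabulary per cluster here; we keep the raw member indices)
forest = {j: np.flatnonzero(labels == j) for j in range(4)}

# 3) Search: route the query to its nearest cluster, then rank only that
#    cluster's members instead of scanning the whole database
def search(q, top=3):
    j = np.linalg.norm(centroids - q, axis=1).argmin()
    members = forest[j]
    order = np.linalg.norm(features[members] - q, axis=1).argsort()
    return members[order[:top]]

hits = search(features[0])  # image 0 should be its own best match
```

The speed-up claimed in the abstract comes from step 3: only one cluster's vocabulary is searched per query, so cost scales with cluster size rather than database size.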
dc.language.iso	eng	en_US
dc.publisher	Springer	en_US
dc.rights	Navngivelse 4.0 Internasjonal	*
dc.rights.uri	http://creativecommons.org/licenses/by/4.0/deed.no	*
dc.subject	Information retrieval	en_US
dc.subject	Deep learning	en_US
dc.subject	Decomposition	en_US
dc.subject	Place recognition	en_US
dc.title	Deep learning based decomposition for visual navigation in industrial platforms	en_US
dc.type	Peer reviewed	en_US
dc.type	Journal article	en_US
dc.description.version	publishedVersion	en_US
dc.rights.holder	© The Author(s) 2021	en_US
dc.source.pagenumber	8101-8117	en_US
dc.source.volume	52	en_US
dc.source.journal	Applied intelligence (Boston)	en_US
dc.identifier.doi	10.1007/s10489-021-02908-z
dc.identifier.cristin	1963189
cristin.ispublished	true
cristin.fulltext	original
cristin.qualitycode	2


Associated file(s)


This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International