Show simple item record

dc.contributor.author  Ruotsalainen, Laura
dc.contributor.author  Morrison, Aiden J
dc.contributor.author  Makela, Maija
dc.contributor.author  Rantanen, Jesperi
dc.contributor.author  Sokolova, Nadezda
dc.date.accessioned  2022-04-29T12:02:59Z
dc.date.available  2022-04-29T12:02:59Z
dc.date.created  2021-08-22T19:36:04Z
dc.date.issued  2021
dc.identifier.citation  IEEE Sensors Journal. 2021, 22 (6), 4816-4826.  en_US
dc.identifier.issn  1530-437X
dc.identifier.uri  https://hdl.handle.net/11250/2993421
dc.description.abstract  Collaborative navigation is the most promising technique for infrastructure-free indoor navigation by a group of pedestrians, such as rescue personnel. Infrastructure-free navigation means using a system that can localize itself independently of any equipment pre-installed in the building, relying instead on various sensors that monitor the motion of the user. The most feasible navigation sensors are inertial sensors and a camera, which provides motion information through a computer vision method called visual odometry. Collaborative indoor navigation poses challenges for computer vision: the navigation environment is often poor in trackable features, other pedestrians in front of the camera interfere with motion detection, and size and cost constraints rule out the best-quality cameras, resulting in measurement errors. We have developed an improved computer-vision-based collaborative navigation method that addresses these challenges by using a depth (RGB-D) camera and a deep-learning-based detector, both to avoid using features found on other pedestrians and to control the inconsistency of object depth detection, which would otherwise degrade the accuracy of the visual odometry solution. We have compared our visual odometry solution to one obtained using the same low-cost RGB-D camera but no corrections, and find the solution much improved. Finally, we show the result of computing the solution using visual odometry and inertial sensor fusion for the individual and UWB ranging for collaborative navigation.  en_US
dc.language.iso  eng  en_US
dc.publisher  IEEE  en_US
dc.rights  Attribution 4.0 International
dc.rights.uri  http://creativecommons.org/licenses/by/4.0/deed.no
dc.subject  Maskinlæring  en_US
dc.subject  Machine learning  en_US
dc.subject  Navigasjon  en_US
dc.subject  Navigation  en_US
dc.title  Improving Computer Vision-Based Perception for Collaborative Indoor Navigation  en_US
dc.type  Peer reviewed  en_US
dc.type  Journal article  en_US
dc.description.version  publishedVersion  en_US
dc.rights.holder  CCBY - IEEE is not the copyright holder of this material. Please follow the instructions via https://creativecommons.org/licenses/by/4.0/ to obtain full-text articles and stipulations in the API documentation.  en_US
dc.subject.nsi  VDP::Informasjons- og kommunikasjonssystemer: 321  en_US
dc.subject.nsi  VDP::Information and communication systems: 321  en_US
dc.source.pagenumber  4816-4826  en_US
dc.source.volume  22  en_US
dc.source.journal  IEEE Sensors Journal  en_US
dc.source.issue  6  en_US
dc.identifier.doi  10.1109/JSEN.2021.3106257
dc.identifier.cristin  1927867
cristin.ispublished  true
cristin.fulltext  original
cristin.qualitycode  2


Files in this item


This item appears in the following Collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International