Efficient and precise visual location estimation by effective priority matching-based pose verification in edge-cloud collaborative IoT

Castiglione, Aniello;
2024-01-01

Abstract

A robust visual location estimation scheme for large-scale complex scenes is critical for location-relevant Internet of Things (IoT) applications such as autonomous vehicles and intelligent robots. However, it is challenging due to viewpoint changes, weak textures, and large-view scenes. To address the location ambiguities that arise in complex scenes and to ensure timely positioning, we propose an efficient and precise location estimation method based on priority matching-based pose verification. Priority matching-based pose verification consists of two modules, scene semantic verification and 3D-2D keypoint filtering, which together improve the performance of visual localization. Scene semantic verification effectively retrieves the 3D points that conform to the semantics of the query image: it compares the 3D points corresponding to each generated candidate pose with the query semantic image to check semantic consistency, overcoming the ambiguity of scene descriptors in large-view and weak-texture scenes. 3D-2D keypoint filtering considerably strengthens pose verification and consequently improves pose accuracy in scenes with viewpoint changes: it selects 3D and 2D keypoints uniformly distributed over the scene by projection and voting, and then backward matches them with the query image. Experimental results show that our method achieves an average 78.86% probability of 1 m accuracy under viewpoint changes, weak textures, and large-view scenes on the public InLoc indoor dataset, and improves the accuracy of related state-of-the-art methods by 11.6% on three public outdoor datasets: Aachen Day-Night, RobotCar Seasons, and CMU Seasons. These results demonstrate that our method provides a robust visual localization solution for edge-cloud collaborative IoT in complex and large-scale scenes.
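
The abstract describes semantic verification as projecting the 3D points associated with a candidate pose into the query view and checking that their labels agree with the query semantic image. The sketch below illustrates that idea under common assumptions (pinhole camera model, world-to-camera pose, per-pixel class ids); the function name, pose convention, and fraction-of-matches scoring rule are illustrative choices, not the authors' exact formulation.

```python
# Hypothetical sketch of the scene semantic verification idea: project the
# 3D points tied to a candidate pose into the query view and measure how well
# their semantic labels agree with the query's semantic segmentation.
import numpy as np

def semantic_consistency_score(points_3d, point_labels, R, t, K, query_semantic_map):
    """Fraction of visible 3D points whose label matches the query semantic image.

    points_3d          : (N, 3) world coordinates of retrieved 3D points
    point_labels       : (N,)   semantic class id of each 3D point
    R, t               : candidate camera pose (world -> camera)
    K                  : (3, 3) camera intrinsics
    query_semantic_map : (H, W) per-pixel class ids of the query image
    """
    # Transform points into the camera frame and keep those in front of the camera.
    cam = points_3d @ R.T + t
    in_front = cam[:, 2] > 1e-6
    cam, labels = cam[in_front], point_labels[in_front]

    # Project with the pinhole model and round to integer pixel coordinates.
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    # Discard projections that fall outside the query image.
    h, w = query_semantic_map.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    if not np.any(inside):
        return 0.0

    # A candidate pose is semantically consistent when its projected 3D points
    # land on query pixels of the same semantic class.
    matches = query_semantic_map[v[inside], u[inside]] == labels[inside]
    return float(np.mean(matches))
```

In such a scheme, candidate poses could be re-ranked by this consistency score before the final 3D-2D keypoint filtering and backward matching step described above.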


Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4862794