Monocular Vision-aided Depth Measurement from RGB Images for Autonomous UAV Navigation

Fabio Narducci; Carmen Bisogni

2024-01-01

Abstract

Monocular vision-based 3D scene understanding has been an integral part of many machine vision applications. The objective is invariably to measure depth with a single RGB camera at an accuracy on par with dedicated depth cameras. In this regard, monocular vision-guided autonomous navigation of robots is rapidly gaining popularity in the research community. We propose an effective monocular vision-assisted method to measure the depth of an Unmanned Aerial Vehicle (UAV) from an impending frontal obstacle, followed by collision-free navigation in unknown, GPS-denied environments. Our approach builds upon the fundamental principle of perspective vision that the size of an object relative to the field of view (FoV) increases as the center of projection moves closer to the object. Our contribution involves modeling this depth and realizing it through scale-invariant SURF features. Noisy depth measurements, arising from external wind or turbulence of the UAV, are rectified with a constant-velocity Kalman filter. Control commands are then derived from the rectified depth to avoid the obstacle before collision. Rigorous experiments with scale-invariant SURF features reveal an overall accuracy of 88.6% across varying obstacles, in both indoor and outdoor environments. © 2023 Association for Computing Machinery.
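No full text is linked from this record, so the Python sketch below only illustrates the perspective principle stated in the abstract; it is not the authors' implementation. Under a pinhole model, an object of true width W at depth Z projects to w = f*W/Z pixels, so between two frames w2/w1 = Z1/Z2; with a known forward displacement d (Z1 - Z2 = d), the depth at the first frame is Z1 = d*s/(s - 1), where s = w2/w1. The function names, the Hessian threshold, the Lowe ratio of 0.7, and the median aggregation of matched SURF keypoint scales are illustrative assumptions; note that cv2.xfeatures2d requires an opencv-contrib build with the non-free modules enabled.

import cv2
import numpy as np

def surf_scale_ratio(img1, img2, hessian=400):
    """Median ratio s = w2/w1 of matched SURF keypoint scales between two
    grayscale frames; s > 1 means the frontal obstacle is getting closer."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    ratios = [kp2[m.trainIdx].size / kp1[m.queryIdx].size
              for m, n in matcher.knnMatch(des1, des2, k=2)
              if m.distance < 0.7 * n.distance]   # Lowe's ratio test
    return float(np.median(ratios))

def depth_from_scale(s, d):
    """Depth at the first frame from the scale ratio s = w2/w1 and the
    forward distance d travelled between the two frames."""
    if s <= 1.0:
        raise ValueError("obstacle must appear larger in the second frame")
    return d * s / (s - 1.0)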
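The constant-velocity Kalman filter mentioned in the abstract can be written as a standard 1-D filter over the state x = [depth, depth rate], with transition F = [[1, dt], [0, 1]] and observation H = [1, 0]. The sketch below is a generic formulation under those assumptions; the process and measurement noise values q and r are placeholders, not the paper's tuning.

import numpy as np

class ConstantVelocityKF:
    """1-D constant-velocity Kalman filter smoothing noisy depth readings."""
    def __init__(self, z0, dt, q=1e-2, r=0.5):
        self.x = np.array([z0, 0.0])                 # state: [depth, rate]
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        self.H = np.array([[1.0, 0.0]])              # only depth is observed
        self.Q = q * np.eye(2)                       # process noise
        self.R = np.array([[r]])                     # measurement noise

    def step(self, z_meas):
        # Predict with the constant-velocity motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the new (noisy) depth measurement.
        y = z_meas - self.H @ self.x                 # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return float(self.x[0])                      # rectified depth

Feeding each raw depth estimate (e.g., the output of depth_from_scale above) through step() yields the rectified depth from which avoidance commands can then be derived.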

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4845151

Citations
  • PMC: N/A
  • Scopus: 1
  • Web of Science (ISI): 1