
A human-like description of scene events for a proper UAV-based video content analysis

Cavaliere Danilo; Loia Vincenzo; Saggese Alessia; Senatore Sabrina; Vento Mario
2019-01-01

Abstract

In the Video Surveillance age, monitoring activity, especially from unmanned vehicles, needs some degree of autonomy in scenario interpretation. Video Analysis tasks are crucial for target tracking and recognition; nevertheless, it would be desirable for a further level of understanding to provide a comprehensive, high-level scene description, reflecting the human cognitive capability of giving a concise account of a scene based on the analysis of the relationships and actions of the objects involved. This paper presents a smart system that automatically identifies mobile scene objects, such as people and vehicles, by analyzing videos acquired by drones in flight, along with the activities those objects carry out, so as to depict what happens in the scene from a high-level perspective. The system uses Artificial Vision methods to detect and track the mobile objects and the area where they move, and Semantic Web technologies to provide a high-level description of the scenario. Spatio-temporal relations among the tracked objects, as well as simple object activities (events), are described. By semantic reasoning, the system is able to connect the simple activities into more complex activities that better reflect a human-like description of a portion of the scenario. Tests conducted on several videos, showing scenarios set in different environments, return convincing results that confirm the effectiveness of the proposed approach.
Files in this product:

File: KNOSYS2019.pdf
Open Access since 28/04/2021
Description: accepted version
Type: Post-print document (version subsequent to peer review and accepted for publication)
License: Creative Commons
Size: 3.96 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4724079
Warning: the displayed data have not been validated by the university.

Citations
  • Scopus: 25
  • Web of Science (ISI): 19