
Virtual human centered design: an affordable and accurate tool for motion capture in mixed reality

Fontana, Carlotta; Califano, Rosaria; Cappetti, Nicola; Naddeo, Alessandro (Members of the Collaboration Group)
2022

Abstract

The introduction of Digital Human Modeling and Virtual Production in the industrial field has made it possible to place the user at the center of the design process, in order to guarantee the safety and well-being of workers in the performance of any activity. Traditional motion capture methods are unable to represent the user's interaction with the environment: the user runs a simulation without realistic objects, so their behavior and movements are inaccurate due to the lack of real interaction. Mixed reality, by combining real objects with a virtual environment, increases human-object interaction and improves the accuracy of the simulation. A real-time motion capture system offers considerable advantages: the action performed by the simulator can be modified in real time, the user's posture can be adjusted with immediate feedback, and the acquired data can be used directly, without first post-processing the recorded animation. These developments have brought Motion Capture (MoCap) technology into industrial applications, where it is used for assessing occupational safety risks, maintenance procedures, and assembly steps. However, real-time motion capture techniques are very expensive due to the required equipment. The aim of this work, therefore, is to create an inexpensive MoCap tool that maintains high accuracy in the acquisition. The potential of the Unreal Engine software for ergonomic simulations was analyzed first. Subsequently, a case study was carried out inside a vehicle passenger compartment, simulating an infotainment reachability test and acquiring the law of motion. The procedure was performed with two low-cost MoCap techniques: an optical system based on ArUco markers and a markerless optical system based on the Microsoft Kinect® depth sensor. The comparison of the results showed an average difference between the two methodologies, in terms of calculated angles, of about 2.5 degrees. Thanks to this small error, the developed methods allow a mixed-reality simulation with the user's presence and offer an accurate analysis of the performed movements.
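
The abstract does not include implementation details of the marker-based pipeline. As a purely illustrative sketch, the following minimal Python example shows how joint angles could be estimated from ArUco markers with OpenCV (aruco module, version 4.7+ API); the camera index, the marker dictionary, and the assignment of marker IDs 0, 1, and 2 to shoulder, elbow, and wrist are assumptions for illustration, not the authors' setup.

    import cv2
    import numpy as np

    # ArUco dictionary and detector (OpenCV >= 4.7 API).
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    def joint_angle(a, b, c):
        # Angle at joint b (degrees) formed by the 2D marker centres a, b, c.
        v1, v2 = a - b, c - b
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    cap = cv2.VideoCapture(0)  # assumed webcam index
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        corners, ids, _ = detector.detectMarkers(frame)
        if ids is not None and {0, 1, 2}.issubset(ids.flatten()):
            # Hypothetical marker placement: 0 = shoulder, 1 = elbow, 2 = wrist.
            centres = {i: c.reshape(4, 2).mean(axis=0)
                       for i, c in zip(ids.flatten(), corners)}
            elbow = joint_angle(centres[0], centres[1], centres[2])
            cv2.aruco.drawDetectedMarkers(frame, corners, ids)
            cv2.putText(frame, f"elbow: {elbow:.1f} deg", (20, 40),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("aruco mocap", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

A markerless Kinect pipeline would replace the marker centres with skeleton joints from the depth sensor's body-tracking SDK; comparing the two angle time series, for example as a mean absolute difference, is how a figure like the reported 2.5-degree discrepancy can be obtained.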
ISBN: 9781958651261

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4817775