Enhancing Activity Recognition of Self-Localized Robot Through Depth Camera and Wearable Sensors

Manzi, Alessandro; Moschetti, Alessandra; Limosani, Raffaele; Fiorini, Laura; Cavallo, Filippo
2018-01-01

Abstract

Robots will become part of our everyday life as helpers and companions, sharing the environment with us. Robots should therefore become social, able to interact naturally with users. Recognizing human activities and behaviors will enhance a robot's ability to plan appropriate actions and tailor its approach to what the user is doing. This paper addresses the problem of providing mobile robots with the ability to recognize common daily activities. The fusion of heterogeneous data gathered by multiple sensing strategies, namely wearable inertial sensors, a depth camera, and location features, is proposed to improve human activity recognition. In particular, the proposed work aims to recognize 10 activities using data from a depth camera mounted on a mobile robot able to self-localize in the environment, together with customized sensors worn on the hand. Twenty users performed the selected activities in two different positions relative to the robot while the robot was moving. The analysis considered different combinations of sensors to evaluate how fusing the different technologies improves recognition. The results show a 13% improvement in F-measure when heterogeneous sensors are fused, compared with using the robot's sensors alone. Moreover, the system recognizes not only the performed activity but also the user's relative position, enhancing the robot's ability to interact with users.
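The abstract does not specify the fusion scheme or the classifier used; purely as an illustration, the sketch below shows one common approach, feature-level fusion, in which per-window feature vectors from the depth camera, the wearable inertial sensors, and the robot's location estimate are concatenated and fed to a single classifier, evaluated with the macro-averaged F-measure (the harmonic mean of precision and recall, averaged over the 10 activity classes). All variable names, feature dimensions, the synthetic data, and the choice of a Random Forest are assumptions for the sketch, not the paper's implementation.

```python
# Hypothetical sketch of feature-level sensor fusion for activity
# recognition. The classifier, feature dimensions, and all names are
# assumptions; synthetic data stands in for real recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows = 200  # number of fixed-length time windows

# Per-window features from each sensing modality (dimensions made up):
depth_feats = rng.normal(size=(n_windows, 30))     # e.g. skeleton joint statistics
inertial_feats = rng.normal(size=(n_windows, 12))  # e.g. accelerometer/gyroscope statistics
location_feats = rng.normal(size=(n_windows, 2))   # e.g. robot pose in the map

labels = rng.integers(0, 10, size=n_windows)  # 10 daily activities

# Feature-level fusion: concatenate the heterogeneous feature vectors.
fused = np.hstack([depth_feats, inertial_feats, location_feats])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Macro-averaged F-measure over the 10 activity classes.
print("F-measure:", f1_score(y_test, clf.predict(X_test), average="macro"))
```

Dropping one of the three feature blocks from the concatenation gives the single-modality and pairwise baselines against which a fusion gain, such as the 13% reported above, would be measured.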
Files in this product:

File: IP041 - Enhancing activity recognition of self-localized robot through depth camera and wearable sensors.pdf
Access: open access
Type: Pre-print/Submitted manuscript
License: Public domain
Size: 612.24 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11382/525955
Warning: the displayed data have not been validated by the university.

Citations
  • Scopus: 19