ICub, clean the table! A robot learning from demonstration approach using deep neural networks
Kim, Jaeseok; Cauli, Nino; Cavallo, Filippo
2018-01-01
Abstract
Autonomous service robots have become a key research topic in robotics, particularly for household chores. A typical home scenario is highly unconstrained, and a service robot needs to adapt constantly to new situations. In this paper, we address the problem of autonomous cleaning tasks in uncontrolled environments. In our approach, a human instructor uses kinesthetic demonstrations to teach a robot how to perform different cleaning tasks on a table. Then, we use Task-Parametrized Gaussian Mixture Models (TP-GMMs) to encode the variability of the demonstrations, while providing appropriate generalization abilities. TP-GMMs extend Gaussian Mixture Models with an auxiliary set of reference frames, in order to extrapolate the demonstrations to different task parameters such as movement location, amplitude, or orientation. However, the reference frames (which parametrize the TP-GMMs) can be very difficult to extract in practice, as this may require segmenting cluttered images of the working table-top. Instead, in this work the reference frames are automatically extracted from the robot's camera images, using a deep neural network trained during human demonstrations of a cleaning task. This approach has two main benefits: (i) it takes the human completely out of the loop while performing complex cleaning tasks; and (ii) the network is able to identify the specific task to be performed directly from image data, thus also enabling automatic task selection from a set of previously demonstrated tasks. The system was implemented on the iCub humanoid robot. During the tests, the robot was able to successfully clean a table with two different types of dirt (wiping a marker's scribble or sweeping clusters of lentils).
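As background for the abstract above, the following is a minimal sketch of the standard TP-GMM formulation found in the literature (the notation follows Calinon's widely used tutorial formulation and is not taken from this paper). Each reference frame j is described by an affine transformation (A_j, b_j), and each mixture component i stores frame-local parameters (μ_j^(i), Σ_j^(i)) learned from the demonstrations. For a new situation, the components are mapped into the global frame and the P frames are fused by a product of Gaussians:

```latex
% Frame-local Gaussian components mapped into the global frame using the
% current task parameters {A_j, b_j} (one affine transform per frame j):
\hat{\mu}^{(i)}_{j}    = A_j \, \mu^{(i)}_{j} + b_j , \qquad
\hat{\Sigma}^{(i)}_{j} = A_j \, \Sigma^{(i)}_{j} \, A_j^{\top}

% Fusion of the P frames by a product of Gaussians, giving the
% task-adapted component i used when reproducing the motion:
\Sigma^{(i)} = \Bigg( \sum_{j=1}^{P} \big( \hat{\Sigma}^{(i)}_{j} \big)^{-1} \Bigg)^{-1} , \qquad
\mu^{(i)}    = \Sigma^{(i)} \sum_{j=1}^{P} \big( \hat{\Sigma}^{(i)}_{j} \big)^{-1} \hat{\mu}^{(i)}_{j}
```

In the approach summarized in the abstract, the frame parameters (A_j, b_j) would be produced automatically by the deep network from camera images rather than supplied by a human; the exact network architecture and frame definitions are detailed in the paper itself.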
File | Access | Type | License | Size | Format
---|---|---|---|---|---
C039 - ICub clean the table.pdf | Open access | Pre-print/Submitted manuscript | Public domain | 4 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.