RL-Based Adaptive Controller for High Precision Reaching in a Soft Robot Arm
Nazeer M. S.; Laschi C.; Falotico E.
2024-01-01
Abstract
High-precision control of soft robots is challenging due to their stochastic behavior and material-dependent nature. While reinforcement learning (RL) has been applied in soft robotics, precision in task execution remains a long way off. RL traditionally requires substantial data for convergence, often obtained from a training environment. Yet, despite achieving high accuracy in the training environment, RL policies often fall short in reality due to the training-to-reality gap, and this degradation is exacerbated by the stochastic nature of soft robots. This study paves the way for the implementation of RL for soft robot control to achieve high precision in task execution. Two sample-efficient adaptive control strategies that leverage the RL policy are proposed. These schemes overcome stochasticity, bridge the training-to-reality gap, and attain the desired accuracy even in challenging tasks, such as obstacle avoidance. In addition, deliberate and reversible damage is induced in the pneumatic actuation chamber, altering the soft robot's behavior, to test the adaptability of our solutions. Despite the damage, the desired accuracy was achieved in most scenarios without retraining the RL policy.
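The paper's two adaptive strategies are not reproduced in this record, but the general pattern the abstract describes, querying a frozen, pretrained RL policy inside an outer corrective loop that compensates for the training-to-reality mismatch, can be sketched roughly as follows. Everything in this sketch (the policy stub, the simulated robot response, the proportional target-shifting scheme, and all names, gains, and thresholds) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch only: a frozen RL policy (stand-in here) wrapped by a
# simple online corrective loop for a reaching task. The policy interface,
# gain, and tolerance are assumptions, not the method from the paper.

def rl_policy(target_xyz):
    """Placeholder for a pretrained policy mapping a target tip position
    to actuation commands (e.g. normalized chamber pressures)."""
    return np.clip(0.1 * target_xyz, 0.0, 1.0)

def read_tip_position(actuation):
    """Placeholder for the real robot and its sensing: returns the measured
    end-effector position produced by the applied actuation (stochastic)."""
    return 9.5 * actuation + np.random.normal(0.0, 0.002, size=3)

def adaptive_reach(target, gain=0.5, tol=1e-3, max_iters=50):
    """Iteratively shift the target fed to the frozen policy so that the
    measured tip position converges to the true target, without retraining."""
    virtual_target = target.copy()
    action, tip = rl_policy(virtual_target), None
    for _ in range(max_iters):
        action = rl_policy(virtual_target)           # query the frozen RL policy
        tip = read_tip_position(action)              # execute and measure
        error = target - tip                         # training-to-reality mismatch
        if np.linalg.norm(error) < tol:              # desired accuracy reached
            break
        virtual_target = virtual_target + gain * error  # shift the commanded target
    return action, tip

if __name__ == "__main__":
    goal = np.array([0.10, 0.05, 0.20])
    final_action, final_tip = adaptive_reach(goal)
    print("residual error:", np.linalg.norm(goal - final_tip))
```

In practice the corrective loop would act on the physical arm's measured tip position rather than a simulated response, and the authors' two sample-efficient schemes are refinements of this general idea rather than this exact loop.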
| File | Size | Format |
|---|---|---|
| RL-Based_Adaptive_Controller_for_High_Precision_Reaching_in_a_Soft_Robot_Arm.pdf (open access; publisher's PDF; publisher's copyright) | 3.13 MB | Adobe PDF |