TY - JOUR
T1 - Residual Learning From Demonstration
T2 - Adapting DMPs for Contact-Rich Manipulation
AU - Davchev, Todor
AU - Luck, Kevin Sebastian
AU - Burke, Michael
AU - Meier, Franziska
AU - Schaal, Stefan
AU - Ramamoorthy, Subramanian
PY - 2022/4/1
Y1 - 2022/4/1
N2 - Manipulation skills involving contact and friction are inherent to many robotics tasks. Using the class of motor primitives for peg-in-hole-like insertions, we study how robots can learn such skills. Dynamic Movement Primitives (DMPs) are a popular way of extracting such policies through behaviour cloning (BC) but can struggle in the context of insertion. Policy adaptation strategies such as residual learning can help improve the overall performance of policies in the context of contact-rich manipulation. However, it is not clear how best to do this with DMPs. We therefore consider several possible ways of adapting a DMP formulation and propose 'residual Learning from Demonstration' (rLfD), a framework that combines DMPs with Reinforcement Learning (RL) to learn a residual correction policy. Our evaluations suggest that applying residual learning directly in task space and operating on the full pose of the robot can significantly improve the overall performance of DMPs. We show that rLfD offers a gentle-to-the-joints solution that improves the task success and generalisation of DMPs and enables transfer to different geometries and frictions through few-shot task adaptation. The proposed framework is evaluated on a set of tasks in which a simulated robot and a physical robot must successfully insert pegs, gears and plugs into their respective sockets.
AB - Manipulation skills involving contact and friction are inherent to many robotics tasks. Using the class of motor primitives for peg-in-hole-like insertions, we study how robots can learn such skills. Dynamic Movement Primitives (DMPs) are a popular way of extracting such policies through behaviour cloning (BC) but can struggle in the context of insertion. Policy adaptation strategies such as residual learning can help improve the overall performance of policies in the context of contact-rich manipulation. However, it is not clear how best to do this with DMPs. We therefore consider several possible ways of adapting a DMP formulation and propose 'residual Learning from Demonstration' (rLfD), a framework that combines DMPs with Reinforcement Learning (RL) to learn a residual correction policy. Our evaluations suggest that applying residual learning directly in task space and operating on the full pose of the robot can significantly improve the overall performance of DMPs. We show that rLfD offers a gentle-to-the-joints solution that improves the task success and generalisation of DMPs and enables transfer to different geometries and frictions through few-shot task adaptation. The proposed framework is evaluated on a set of tasks in which a simulated robot and a physical robot must successfully insert pegs, gears and plugs into their respective sockets.
UR - http://www.scopus.com/inward/record.url?scp=85124736650&partnerID=8YFLogxK
U2 - 10.1109/LRA.2022.3150024
DO - 10.1109/LRA.2022.3150024
M3 - Article
SN - 2377-3766
VL - 7
SP - 4488
EP - 4495
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 2
ER -