State-Only Imitation Learning for Dexterous Manipulation

Ilija Radosavovic¹
Xiaolong Wang¹,²
Lerrel Pinto¹,³
Jitendra Malik¹
¹UC Berkeley
²UC San Diego
³New York University
[paper]


Abstract: Modern model-free RL has demonstrated impressive results on a number of problems. However, complex domains like dexterous manipulation remain a challenge for RL due to high sample complexity. To address this, current approaches employ expert demonstrations in the form of state-action pairs, which are difficult to obtain in real-world settings such as learning from videos. In this work, we move toward a more realistic setting and explore state-only imitation learning. To tackle this setting, we train an inverse dynamics model and use it to predict actions for state-only demonstrations. The inverse dynamics model and the policy are trained jointly. Our method performs on par with state-action approaches and considerably outperforms RL alone. By not relying on expert actions, we are able to learn from demonstrations with different dynamics, morphologies, and objects.
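
To make the idea concrete, below is a minimal sketch of the core mechanism the abstract describes: fit an inverse dynamics model on the agent's own (state, action, next state) transitions, then use it to label state-only demonstration transitions with pseudo-actions that an imitation term can use alongside the RL objective. This is an illustrative sketch, not the authors' released implementation; the class and function names, network sizes, and dimensions are assumptions.

# Minimal sketch (PyTorch) of inverse-dynamics labeling for state-only demos.
# All names and dimensions here are illustrative assumptions.

import torch
import torch.nn as nn


class InverseDynamicsModel(nn.Module):
    """Predicts the action a_t that moves the agent from s_t to s_{t+1}."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, s_t: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s_t, s_next], dim=-1))


def inverse_dynamics_loss(model, s_t, a_t, s_next):
    """Supervised regression loss on the agent's own transitions."""
    return ((model(s_t, s_next) - a_t) ** 2).mean()


def label_demonstrations(model, demo_s_t, demo_s_next):
    """Infer pseudo-actions for state-only demonstration transitions."""
    with torch.no_grad():
        return model(demo_s_t, demo_s_next)


if __name__ == "__main__":
    state_dim, action_dim = 24, 6  # placeholder dimensions
    idm = InverseDynamicsModel(state_dim, action_dim)
    opt = torch.optim.Adam(idm.parameters(), lr=1e-3)

    # Toy batch standing in for transitions collected by the current policy.
    s_t, a_t, s_next = (torch.randn(128, state_dim),
                        torch.randn(128, action_dim),
                        torch.randn(128, state_dim))
    loss = inverse_dynamics_loss(idm, s_t, a_t, s_next)
    opt.zero_grad(); loss.backward(); opt.step()

    # Pseudo-actions for state-only demo transitions; in the joint training
    # loop these would drive an imitation term alongside the RL objective.
    demo_s_t, demo_s_next = torch.randn(32, state_dim), torch.randn(32, state_dim)
    pseudo_a = label_demonstrations(idm, demo_s_t, demo_s_next)
    print(pseudo_a.shape)  # torch.Size([32, 6])

In the joint training described in the abstract, the inverse dynamics model would be refit as the policy gathers new transitions, so the pseudo-action labels improve together with the policy.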

Paper

[arXiv]
[bibtex]