Abhishek Gupta


Ph.D. student, UC Berkeley EECS
Office & Mailing Address:
750 Sutardja Dai Hall
Berkeley, CA 94720
Email:
GitHub:
https://github.com/abhishekunique
I am a Ph.D. student in EECS at UC Berkeley, advised by Professor Pieter Abbeel and Professor Sergey Levine in the Berkeley Artificial Intelligence Research (BAIR) Lab. Previously, I was an undergraduate EECS major at UC Berkeley, working with Professor Pieter Abbeel.

My main research goal is to develop algorithms that enable robotic systems to learn to perform complex tasks quickly and efficiently in a variety of unstructured environments. I am currently working on transfer learning and fast adaptation for deep reinforcement learning algorithms applied to robotic systems, and on enabling robotic hands to learn a variety of complex dexterous skills with deep reinforcement learning. In the past, I have worked on video prediction, learning from demonstration, and hierarchical planning.

Preprints and Tech Reports

YuXuan Liu*, Abhishek Gupta*, Pieter Abbeel, Sergey Levine
Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation
[PDF][Video][arXiv]

This work addresses the problem of learning behaviors by observing raw video demonstrations. We aim to enable a robot to learn complex manipulation behaviors by observing demonstration videos of a task performed by a human demonstrator in a different context (e.g., viewpoint, lighting conditions, distractors) than the one in which the robot has to perform the task. We learn a context-aware translation model that captures these context changes, and we use a simple feature-tracking perceptual reward to enable imitation from arbitrary contexts. We present a variety of experiments, both in simulation and on a real 7-DoF Sawyer robotic arm, to illustrate our method.
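
To give a concrete flavor of the perceptual reward described above, here is a minimal Python sketch. It assumes a translation model has already mapped the demonstration into the robot's context; all names and data are illustrative stand-ins, not the paper's implementation.

import numpy as np

def imitation_reward(demo_features, robot_features, weight=1.0):
    # Negative squared distance between translated demonstration features
    # and the robot's observed features at the same time step.
    return -weight * float(np.sum((demo_features - robot_features) ** 2))

# Example: per-time-step rewards along a rollout of length T.
T, d = 50, 32
translated_demo = np.random.randn(T, d)  # stand-in for translated demo features
robot_rollout = np.random.randn(T, d)    # stand-in for robot observation features
rewards = [imitation_reward(translated_demo[t], robot_rollout[t]) for t in range(T)]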

Publications

Vikash Kumar, Abhishek Gupta, Emanuel Todorov, Sergey Levine
Learning Dexterous Manipulation Policies from Experience and Imitation
Accepted to the IJRR Special Issue on Deep Learning.
[PDF][Video][arXiv]

This paper presents simulated and real-world experiments on learning dexterous manipulation policies for a 5-finger robotic hand. Complex skills such as in-hand manipulation and grasping are learned through a combination of autonomous experience and imitation of a human operator. The skills can be represented as time-varying linear-Gaussian controllers, ensembles of time-varying controllers indexed via a nearest neighbor method, and deep neural networks.
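
As a rough illustration of the simplest of these representations, here is a toy time-varying linear-Gaussian controller in Python: at each step t, the action is sampled as u_t ~ N(K_t x_t + k_t, Sigma_t). The dimensions and random parameters are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
T, dx, du = 100, 24, 5  # horizon, state dim, action dim (illustrative)
K = rng.normal(size=(T, du, dx)) * 0.01                 # time-varying feedback gains
k = np.zeros((T, du))                                   # time-varying feedforward terms
Sigma = np.stack([np.eye(du) * 0.1 for _ in range(T)])  # per-step exploration noise

def act(t, x):
    # Sample an action from the time-indexed linear-Gaussian controller.
    mean = K[t] @ x + k[t]
    return rng.multivariate_normal(mean, Sigma[t])

u0 = act(0, np.zeros(dx))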

Abhishek Gupta*, Coline Devin*, Yuxuan Liu, Pieter Abbeel, Sergey Levine
Learning Invariant Feature Spaces to Transfer Skills with Reinforcement Learning
Published as a conference paper at ICLR 2017.
[PDF][Video][OpenReview]

In this paper, we examine how reinforcement learning algorithms can transfer knowledge between morphologically different agents. We introduce a problem formulation in which two agents are tasked with learning multiple skills by sharing information. Our method uses the skills already learned by both agents to train invariant feature spaces, which can then be used to transfer other skills from one agent to another. We evaluate our transfer learning algorithm on two simulated robotic manipulation tasks.
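
For intuition, the sketch below shows the core pairing objective in Python: states from the two agents, time-aligned on a skill both already know, are embedded by per-agent encoders into a common space, and the encoders are trained so that paired embeddings match. The linear encoders and random data are stand-ins for the learned networks in the paper.

import numpy as np

rng = np.random.default_rng(0)
T, dA, dB, dF = 40, 10, 14, 8        # time steps, agent state dims, feature dim
W_A = rng.normal(size=(dF, dA))      # encoder for agent A (stand-in)
W_B = rng.normal(size=(dF, dB))      # encoder for agent B (stand-in)
states_A = rng.normal(size=(T, dA))  # time-aligned rollouts of the shared skill
states_B = rng.normal(size=(T, dB))

def pairing_loss(W_A, W_B, s_A, s_B):
    # Squared distance between the two agents' embeddings at matched steps;
    # minimizing this over the encoder weights yields an invariant feature space.
    f_A = s_A @ W_A.T
    f_B = s_B @ W_B.T
    return float(np.mean(np.sum((f_A - f_B) ** 2, axis=1)))

loss = pairing_loss(W_A, W_B, states_A, states_B)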

Abhishek Gupta*, Coline Devin*, Yuxuan Liu, Pieter Abbeel, Sergey Levine
Learning Modular Neural Network Policies for Multi-Task and Multi-Robot Transfer
ICRA 2017.
[PDF][Video][arXiv]

We propose modular policy networks, a general approach for transferring components of neural network policies between robots, tasks, and other degrees of variation. Modular policy networks consist of modules that can be mixed and matched to perform new robot-task combinations (or, in general, other combinations of the degrees of variation). For example, a module for opening a drawer can be combined with a module for controlling a four-link robot arm to enable the four-link arm to open the drawer. We demonstrate that modular policy networks can transfer knowledge to new tasks and even perform zero-shot learning for new task-robot combinations.
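
The toy Python sketch below illustrates the mix-and-match structure, with simple linear "modules" standing in for trained networks: a task module processes the task observation, and a robot module maps that intermediate representation, together with the robot's own state, to an action. Swapping either module yields a new task-robot pairing.

import numpy as np

rng = np.random.default_rng(0)
d_task, d_mid, d_robot, d_act = 6, 8, 4, 3  # illustrative dimensions

def make_module(d_in, d_out):
    # Stand-in for a trained neural network module.
    W = rng.normal(size=(d_out, d_in)) * 0.1
    return lambda x: np.tanh(W @ x)

drawer_task = make_module(d_task, d_mid)             # task module: open drawer
four_link_arm = make_module(d_mid + d_robot, d_act)  # robot module: 4-link arm

def policy(task_obs, robot_state):
    mid = drawer_task(task_obs)                      # task module output
    return four_link_arm(np.concatenate([mid, robot_state]))

action = policy(rng.normal(size=d_task), rng.normal(size=d_robot))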

Abhishek Gupta, Clemens Eppner, Sergey Levine, Pieter Abbeel
Learning Dexterous Manipulation for a Soft Robotic Hand from Human Demonstration
IROS 2016.
[PDF][Video][arXiv]

In this work, we present a method for learning dexterous manipulation skills for a low-cost, soft robotic hand. We show how a variety of motion skills can be learned from object-centric human demonstrations: demonstrations in which the human manipulates an object with their own hand, and the robot then learns to track the trajectory of the object. By tracking a variety of human demonstrations with different initial conditions, the robot can acquire a generalizable neural network policy that carries out the demonstrated behavior under new conditions. Control is performed directly at the level of inflation and deflation commands to the soft hand, and we demonstrate the method on a range of tasks, including turning a valve, moving the beads on an abacus, and grasping a bottle.
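
The object-centric tracking objective can be sketched very simply in Python: the robot is penalized for letting the object's pose deviate from the pose trajectory recorded in the human demonstration. Poses are reduced to 3-D positions here, and all names and data are illustrative.

import numpy as np

def tracking_cost(object_traj, demo_traj):
    # Per-step squared distance between the object's actual trajectory
    # and the demonstrated object trajectory.
    return np.sum((object_traj - demo_traj) ** 2, axis=1)

T = 60
demo_traj = np.cumsum(np.full((T, 3), 0.01), axis=0)  # demonstrated object path
object_traj = demo_traj + np.random.default_rng(0).normal(scale=0.005, size=(T, 3))
costs = tracking_cost(object_traj, demo_traj)         # one cost per time step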

Rohan Chitnis, Dylan Hadfield-Menell, Abhishek Gupta, Siddharth Srivastava, Edward Groshev, Christopher Lin, Pieter Abbeel
Guided Search for Task and Motion Plans Using Learned Heuristics
ICRA 2016.
[PDF][Video]

Task and motion planning (TAMP) methods integrate logical search over high-level actions with geometric reasoning to solve long-horizon robot planning problems. We present an algorithm that searches the space of possible task and motion plans and uses statistical machine learning to guide the search process. Our contributions are as follows: 1) we present a complete algorithm for TAMP; 2) we present a randomized local search algorithm for plan refinement that is easily formulated as a Markov decision process (MDP); 3) we apply reinforcement learning (RL) to learn a policy for this MDP; 4) we learn from expert demonstrations to efficiently search the space of high-level task plans, given options that address different infeasibilities; and 5) we run experiments to evaluate our system in a variety of simulated domains.
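
To make contribution 2) concrete, here is a hypothetical Python skeleton of plan refinement cast as an MDP: the state is a partially refined plan plus information about the most recent failure, an action picks which refinement to try next, and success means a fully refined plan. This is a toy stand-in, not the paper's implementation.

import random

def try_refinement(plan, choice):
    # Toy stand-in: a refinement attempt succeeds with some probability and
    # otherwise reports which plan step was infeasible.
    if random.random() < 0.2:
        return True, None
    return False, random.randrange(len(plan))

def random_policy(state):
    # Stand-in for the learned policy: the RL component would use the
    # failure information in the state to make this choice intelligently.
    return random.randrange(3)

def refine(plan, policy, max_steps=100):
    state = {"plan": plan, "failure": None}
    for _ in range(max_steps):
        choice = policy(state)
        ok, failure = try_refinement(state["plan"], choice)
        state["failure"] = failure
        if ok:
            return state["plan"]  # fully refined plan found
    return None

result = refine(["pick", "move", "place"], random_policy)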

Alex Lee, Abhishek Gupta, Henry Lu, Sergey Levine, Pieter Abbeel
Learning from Multiple Demonstrations using Trajectory-Aware Non-Rigid Registration with Applications to Deformable Object Manipulation
IROS 2015.
[PDF]

Trajectory transfer using point cloud registration is a powerful tool for learning from demonstration, but it is typically unaware of which elements of the scene are relevant to the task. In this work, we determine relevance by considering the demonstrated trajectory, and we perform registration with a trajectory-aware method to improve generalization.
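
A minimal Python sketch of the idea: when scoring a candidate registration, scene points near the demonstrated end-effector trajectory receive higher weight, so the task-relevant parts of the scene dominate the fit. The Gaussian weighting and data below are illustrative, not the paper's exact formulation.

import numpy as np

def trajectory_weights(points, traj, sigma=0.05):
    # Weight each scene point by its proximity to the demonstrated trajectory.
    d = np.min(np.linalg.norm(points[:, None, :] - traj[None, :, :], axis=2), axis=1)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

def weighted_registration_cost(warped_points, target_points, weights):
    # Weighted sum of squared correspondence errors under a candidate warp.
    return float(np.sum(weights * np.sum((warped_points - target_points) ** 2, axis=1)))

rng = np.random.default_rng(0)
source = rng.uniform(size=(200, 3))   # demonstration scene point cloud
traj = rng.uniform(size=(30, 3))      # demonstrated end-effector path
w = trajectory_weights(source, traj)
cost = weighted_registration_cost(source, source + 0.01, w)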

Alex X. Lee, Henry Lu, Abhishek Gupta, Sergey Levine, Pieter Abbeel
Learning Force-Based Manipulation of Deformable Objects from Multiple Demonstrations
ICRA 2015.
[PDF]

This paper combines trajectory transfer via point cloud registration with variable impedance control in order to improve generalization on tasks that require a mix of precise, high-gain motion and force-driven behaviors, such as straightening a towel. Multiple demonstrations are analyzed to determine which parts of the motion should emphasize precise positioning and which parts require matching the demonstrated forces. The method is demonstrated on rope tying, towel folding, and erasing a whiteboard.
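
The gain-scheduling intuition can be sketched in a few lines of Python: where the demonstrated positions agree across demonstrations, track position stiffly; where positions disagree but forces agree, lower the stiffness and track force instead. The variance-based rule below is an illustrative stand-in for the paper's analysis.

import numpy as np

def schedule_gains(positions, forces, k_max=300.0, eps=1e-6):
    # positions, forces: arrays of shape (n_demos, T). Returns one stiffness
    # gain per time step that falls as position variance rises relative to
    # force variance across the demonstrations.
    pos_var = positions.var(axis=0)
    force_var = forces.var(axis=0)
    return k_max * force_var / (force_var + pos_var + eps)

rng = np.random.default_rng(0)
demos_pos = rng.normal(size=(5, 80)) * np.linspace(0.001, 0.05, 80)
demos_force = rng.normal(size=(5, 80)) * 0.01
gains = schedule_gains(demos_pos, demos_force)  # one stiffness per time step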

Siddharth Srivastava, Shlomo Zilberstein, Abhishek Gupta, Pieter Abbeel, Stuart Russell
Tractability of Planning with Loops
AAAI 2015.
[PDF][Video]

In this work, we create a unified framework for analyzing and synthesizing plans with loops for solving problems with non-deterministic numeric effects and a limited form of partial observability. Three different action models, with deterministic, qualitative non-deterministic, and Boolean non-deterministic semantics, are handled using a single abstract representation. We establish the conditions under which the correctness and termination of solutions, represented as abstract policies, can be verified. We also examine the feasibility of learning abstract policies from examples. We demonstrate our techniques on several planning problems and show that they apply to challenging real-world tasks such as doing the laundry with a PR2 robot.
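
For a toy illustration of why loops matter, consider the laundry example as Python pseudocode: a single looping plan handles any number of items, which is exactly what makes verifying correctness and termination across object counts nontrivial.

def do_laundry(dirty_items):
    washed = []
    while dirty_items:            # one loop covers any (unknown) item count
        item = dirty_items.pop()  # abstract action: pick some dirty item
        washed.append(item)       # abstract action: wash it
    return washed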

Research Support

National Science Foundation Graduate Research Fellowship, 2016-present