I am interested in how learning algorithms can enable machines to acquire general notions of intelligence, allowing them to autonomously learn a variety of complex sensorimotor skills in real-world settings. This includes learning deep representations of complex skills from raw sensory inputs, enabling machines to learn on their own without human supervision, and allowing systems to build upon what they've learned previously to acquire new capabilities from small amounts of experience.
We wrote a blog post describing our latest work on learning a single model from unsupervised interaction that can be used to accomplish many different tasks. Our approach enables robots to use visual foresight to plan to achieve goals.
My colleagues and I have released the robotic grasping and pushing data used in Levine et al. '16 (ISER) and Finn et al. '16 (NIPS): Google Brain Robotics Data.
Invited Talks and Lectures
At NeurIPS 2018, I gave invited talks at the Continual Learning Workshop (slides here), the workshop on Learning to Model the Physical World (slides here), and the workshop on Spatiotemporal Modeling (slides here).
In December 2018, I gave a tutorial on model-based reinforcement learning at the CIFAR LMB program meeting (slides here).
In September 2018, I gave a 3-minute talk at EmTech (video here).
In July 2018, I gave a talk at Google DeepMind with Sergey Levine on meta-learning frontiers. (slides here)
We propose CACTUs, an unsupervised meta-learning algorithm that constructs tasks from unlabeled data and learns to learn from them. CACTUs leads to significantly more effective downstream learning and enables few-shot learning without requiring labeled meta-learning datasets.
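As a rough illustration of the task-construction idea (the embeddings, cluster count, and the make_task helper below are hypothetical stand-ins, not details from the paper): cluster unlabeled embeddings and treat cluster assignments as pseudo-labels from which few-shot tasks are sampled for meta-learning.

```python
# Hypothetical sketch: build few-shot tasks from unlabeled embeddings by
# clustering and treating cluster assignments as pseudo-labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 32))          # stand-in for learned embeddings
labels = KMeans(n_clusters=50, n_init=10, random_state=0).fit_predict(Z)

def make_task(n_way=5, k_shot=1, n_query=5):
    # Sample n_way clusters with enough members, then k_shot support and
    # n_query query examples per cluster: an N-way, K-shot task.
    need = k_shot + n_query
    ok = [c for c in np.unique(labels) if (labels == c).sum() >= need]
    classes = rng.choice(ok, size=n_way, replace=False)
    support, query = [], []
    for y, c in enumerate(classes):
        idx = rng.choice(np.where(labels == c)[0], size=need, replace=False)
        support += [(Z[i], y) for i in idx[:k_shot]]
        query += [(Z[i], y) for i in idx[k_shot:]]
    return support, query
```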
We aim to learn multi-stage vision-based tasks on a real robot from a single video of a human performing the task. We propose a method that learns both how to learn primitive behaviors from video demonstrations and how to dynamically compose these behaviors to perform multi-stage tasks by "watching" a human demonstrator.
We propose a technique that uses time-reversal to learn goals and how to reach them. In particular, our approach explores outward from a set of goal states and learns to predict these trajectories in reverse, which provides a high-level plan toward the goals.
While meta-learning enables fast learning of new tasks, it requires a human to specify a distribution over tasks for meta-training. In effect, meta-learning shifts the design burden from the algorithm to the task distribution. We propose to automate the design of tasks for meta-learning, describing a family of unsupervised meta-reinforcement learning algorithms that are truly automated.
Learning the objective underlying example behavior is a challenging, under-defined problem, particularly when only a few demonstrations are available. However, there is structure among the types of behaviors that we might want agents to learn. We learn this structure from demonstrations across many tasks, acquiring a prior over intentions, and use this learned prior to infer reward functions for new tasks from only a few demonstrations.
We propose a method that learns how to adapt online to new situations and perturbations through meta-reinforcement learning. Unlike prior meta-RL methods,
our approach is model-based, making it sample-efficient during meta-training and thus practical for real-world problems.
We combine latent variable models with adversarial training to build a video prediction model that produces predictions that look more realistic to human raters and better cover the range of possible futures.
Few-shot learning problems can be ambiguous. We propose a modification of the MAML algorithm that handles ambiguity by sampling multiple plausible classifiers. Our approach uses a Bayesian formulation of meta-learning, building upon prior work on hierarchical Bayesian models and variational inference.
We develop a clear and formal definition of the meta-learning problem, its terminology, and desirable properties of meta-learning algorithms. Building upon these foundations, we present a class of model-agnostic meta-learning methods that embed gradient-based optimization into the learner. Finally, we show how these methods can be extended for applications in motor control by combining elements of meta-learning with techniques for deep model-based reinforcement learning, imitation learning, and inverse reinforcement learning.
Specifying a reward or objective in the real world is hard. We propose a method that enables a robot to learn an objective from a few images of success by leveraging a dataset of positive and negative examples of previous tasks. We show how the objectives learned with our method can be used for both planning in the real world and reinforcement learning in simulation.
Planning with video prediction models trained on self-supervised data allows robots to learn diverse manipulation skills. However, to recover from disturbances and inaccurate predictions, we need to track pixels continuously to evaluate the planning objective at each timestep. We propose a self-supervised image-to-image registration model that enables robust behavior.
We propose to embed differentiable planning within a goal-directed policy, integrating planning and representation learning. Our approach optimizes for
representations that lead to effective goal-based planning for visual tasks. Our results show that the learned representations not only allow for effective goal-based
planning through imitation, but also transfer to more complex robot morphologies and action spaces.
We develop a domain-adaptive meta-learning method that allows for one-shot learning under domain shift. We show that our method can enable a robot to learn to manipulate a new object after seeing just
one video of a human performing the task with that object.
We show that model-agnostic meta-learning (MAML), which embeds gradient descent into the meta-learning algorithm, can be as expressive as black-box meta-learners: both can approximate any learning algorithm.
Furthermore, we empirically show that MAML consistently finds learning strategies that generalize to new tasks better than recurrent meta-learners.
We reformulate the model-agnostic meta-learning algorithm (MAML) as a method for probabilistic inference in a hierarchical Bayesian model.
Unlike prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to large function approximators, like neural networks.
Our interpretation sheds light on the meta-learning procedure and allows us to derive an improved version of the MAML algorithm.
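As a rough sketch of the correspondence (the notation below is mine, not the paper's): with meta-parameters θ acting as a prior over per-task parameters, MAML's outer objective approximates a marginal likelihood over tasks, and the inner gradient step supplies a point estimate of each task's parameters.

```latex
% Sketch (notation mine): MAML's meta-objective as an approximate
% marginal likelihood over tasks j, with \theta serving as the prior.
p(\mathcal{D} \mid \theta)
  = \prod_j \int p(\mathcal{D}_j^{\mathrm{val}} \mid \phi_j)\,
              p(\phi_j \mid \theta)\, d\phi_j
  \approx \prod_j p(\mathcal{D}_j^{\mathrm{val}} \mid \hat{\phi}_j),
\quad
\hat{\phi}_j = \theta
  + \alpha \nabla_{\theta} \log p(\mathcal{D}_j^{\mathrm{tr}} \mid \theta).
```

Under this reading, the truncated inner gradient descent acts as approximate MAP inference under an implicit prior centered at θ.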
We present a stochastic video prediction method, SV2P, that builds upon the conditional variational autoencoder to make stochastic predictions of future video.
We find that pretraining is crucial for enabling stochasticity. Our experiments demonstrate stochastic multi-frame predictions on three real world video datasets.
We propose a simulated benchmark for robotic grasping that emphasizes off-policy learning and generalization to unseen objects.
Our results indicate that several simple methods are surprisingly strong
competitors to popular deep RL algorithms such as double Q-learning, and our analysis sheds light on the relative tradeoffs between the methods.
Using demonstration data from a variety of tasks, our method enables a real robot to learn a new related skill, trained end-to-end, using a single visual demonstration of the skill. Our approach also allows for the provided demonstration to be a raw video, without access to the joint trajectory or controls applied to the robot arm.
We present three simple improvements to our prior work on self-supervised visual foresight that lead to substantially better visual planning capabilities. Our
method can perform tasks that require longer-term planning and involve multiple objects.
We propose a model-agnostic algorithm for meta-learning, where a model's parameters
are trained such that a small number of gradient updates with a small amount of training data from a new task
will produce good generalization performance on that task. Our method learns a classifier that can recognize
images of new characters using only a few examples, and a policy that can rapidly adapt
its behavior in simulated locomotion tasks.
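A minimal sketch of the inner/outer loop on toy sinusoid regression (the network size, learning rates, and task sampler are illustrative choices, not the paper's experimental settings):

```python
# Minimal MAML sketch on toy sinusoid regression.  All hyperparameters
# and the tiny MLP are illustrative assumptions.
import math
import torch

def net(params, x):
    # Two-layer MLP applied with explicit parameters, so that we can
    # differentiate through the inner-loop update.
    w1, b1, w2, b2 = params
    return torch.tanh(x @ w1 + b1) @ w2 + b2

def sample_task():
    # A task is a sinusoid with random amplitude and phase.
    amp = torch.empty(1).uniform_(0.1, 5.0)
    phase = torch.empty(1).uniform_(0.0, math.pi)
    def draw(n=10):
        x = torch.empty(n, 1).uniform_(-5.0, 5.0)
        return x, amp * torch.sin(x + phase)
    return draw

torch.manual_seed(0)
params = [(torch.randn(1, 40) * 0.1).requires_grad_(),
          torch.zeros(40, requires_grad=True),
          (torch.randn(40, 1) * 0.1).requires_grad_(),
          torch.zeros(1, requires_grad=True)]
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01

for step in range(1000):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                        # meta-batch of tasks
        draw = sample_task()
        # Inner loop: one gradient step on the task's training set,
        # keeping the graph so the meta-gradient can flow through it.
        x_tr, y_tr = draw()
        loss = ((net(params, x_tr) - y_tr) ** 2).mean()
        grads = torch.autograd.grad(loss, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer objective: post-update loss on held-out data.
        x_val, y_val = draw()
        meta_loss = meta_loss + ((net(adapted, x_val) - y_val) ** 2).mean()
    meta_loss.backward()
    meta_opt.step()
```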
We formalize the problem of semi-supervised reinforcement learning (SSRL), motivated by real-world scenarios where reward information
is only available in a limited set of scenarios such as when a human supervisor is present, or in a controlled laboratory setting.
We develop a simple algorithm for SSRL based on inverse reinforcement learning and show that it can improve performance by using the experience collected in settings where reward information is unavailable.
We combine an action-conditioned predictive model of images, "visual foresight," with model-predictive control for planning how
to push objects. The method is entirely self-supervised, requiring minimal human involvement.
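A schematic of the planning loop (to keep the sketch self-contained, the learned video model and pixel-space cost are replaced by a toy point-mass predictor and a Euclidean goal cost; the sampling-based optimizer and all hyperparameters are illustrative assumptions):

```python
# Schematic of model-predictive control with a learned forward model.
# Here a toy point-mass stands in for the action-conditioned video
# prediction model, and distance-to-goal stands in for the pixel cost.
import numpy as np

HORIZON, N_SAMPLES, N_ELITE, N_ITERS = 10, 200, 20, 3

def predict(state, actions):
    # Stand-in for the learned model: roll the state forward under a
    # candidate action sequence and return the predicted trajectory.
    traj = [state]
    for a in actions:
        traj.append(traj[-1] + 0.1 * a)
    return np.stack(traj[1:])

def plan(state, goal, rng):
    # Sampling-based optimization (CEM-style): sample action sequences,
    # keep the elites, refit the distribution, return the first action.
    mu, sigma = np.zeros((HORIZON, 2)), np.ones((HORIZON, 2))
    for _ in range(N_ITERS):
        acts = rng.normal(mu, sigma, size=(N_SAMPLES, HORIZON, 2))
        costs = np.array([np.linalg.norm(predict(state, a)[-1] - goal)
                          for a in acts])
        elites = acts[np.argsort(costs)[:N_ELITE]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-3
    return mu[0]

rng = np.random.default_rng(0)
state, goal = np.zeros(2), np.array([1.0, 1.0])
for t in range(30):
    # Execute only the first planned action, then replan (MPC).
    state = state + 0.1 * plan(state, goal, rng)
print("final distance to goal:", np.linalg.norm(state - goal))
```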
We present a new guided policy search algorithm that can be used in domains where the initial conditions are stochastic, which makes the method
more applicable to general reinforcement learning problems and improves generalization performance in our robotic manipulation experiments.
We show that a sample-based algorithm for maximum entropy inverse reinforcement learning (MaxEnt IRL) corresponds to a generative adversarial network (GAN) with a particular choice of discriminator.
Since MaxEnt IRL is simply an energy-based model (EBM) for behavior, we further show that GANs optimize EBMs with the corresponding discriminator,
pointing to a simple and scalable EBM training procedure using GANs.
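Concretely, the correspondence hinges on giving the discriminator a particular parameterization. In notation of my own choosing (cost c_θ, generator/sampler density q over trajectories τ, and partition-function estimate Z):

```latex
% Sketch (notation mine): discriminator form under which GAN training
% recovers MaxEnt IRL; q is the sampler's trajectory density.
D_\theta(\tau) =
  \frac{\tfrac{1}{Z}\exp(-c_\theta(\tau))}
       {\tfrac{1}{Z}\exp(-c_\theta(\tau)) + q(\tau)}
```

With this form, the discriminator's loss matches the MaxEnt IRL objective for the energy-based model p_θ(τ) ∝ exp(−c_θ(τ)), with the generator providing importance-sampled negatives.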
We propose a technique for learning an active learning strategy by combining one-shot learning and reinforcement learning, and allowing the model
to decide, during classification, which examples are worth labeling. Our experiments demonstrate that our model can trade off
accuracy and label requests based on the reward function provided.
Our video prediction method predicts a transformation to apply to the previous image, rather than pixel values directly, leading to significantly improved multi-frame video prediction. We also introduce
a dataset of 50,000 robotic pushing sequences, consisting of over 1 million frames.
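A toy illustration of predicting a transformation rather than pixels (the fixed translation flow and nearest-neighbor warp are simplifications of mine; the paper's models output the transformation, e.g. per-pixel kernels or masks, from a convolutional network):

```python
# Toy sketch: synthesize the next frame by warping the previous frame
# with a predicted per-pixel flow, instead of predicting raw pixels.
import numpy as np

def warp(frame, flow):
    # Backward warp: each output pixel samples the previous frame at
    # the location the flow points to (nearest-neighbor for brevity).
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + flow[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs + flow[..., 1], 0, w - 1).astype(int)
    return frame[src_y, src_x]

frame = np.zeros((8, 8))
frame[2, 2] = 1.0
flow = np.full((8, 8, 2), -1.0)   # shift content down-right by one pixel
print(warp(frame, flow)[3, 3])    # -> 1.0
```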
Collecting real-world robotic experience for learning an initial visual representation can be expensive. Instead, we show that it is possible to learn
a suitably good initial representation using data collected largely in simulation.
We propose a method for Inverse Reinforcement Learning (IRL) that can handle unknown dynamics and scale to flexible, nonlinear cost functions. We evaluate our algorithm on a series of simulated tasks and real-world robotic manipulation problems, including pouring and inserting dishes into a rack.
We learn a lower-dimensional visual state space without supervision using deep spatial autoencoders, and use it to learn nonprehensile manipulation
tasks, such as pushing a Lego block and scooping a bag into a bowl.
We propose a method for learning recurrent neural network policies with continuous memory states. Using trajectory optimization, the method learns both to store information in the memory states
and to use it. Our method outperforms vanilla RNN and LSTM baselines.
We consider the problem of selecting which demonstration to transfer to the current test scenario.
We frame the problem as an options Markov decision process (MDP) and develop an approach to learn a Q-function from expert demonstrations.
Our results show significant improvement over nearest-neighbor selection.