Chelsea Finn
cbfinn at eecs dot berkeley dot edu

I am a PhD student in CS at UC Berkeley, where I work on machine learning for robotic perception and control. I am part of the Berkeley AI Research (BAIR) Lab, advised by Pieter Abbeel and Sergey Levine. I recently spent time at Google Brain.

Before graduate school, I received a Bachelor's degree in EECS from MIT, where I worked on several research projects, including an assistive technology project in CSAIL under Seth Teller and an animal biometrics project under Sai Ravela. I have also spent time at Counsyl, Google, and Sandia National Labs.

CV  /  Google Scholar  /  GitHub

Research

My research is at the intersection of machine learning, perception, and control for robotics. In particular, I'm interested in how learning algorithms can enable robots to autonomously acquire complex sensorimotor skills.

Recent Preprints


Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm
Chelsea Finn, Sergey Levine
arXiv

We show that model-agnostic meta-learning, which embeds gradient descent into the meta-learning algorithm, can be as expressive as black-box meta-learners: both can approximate any learning algorithm. Furthermore, we empirically show that gradient-based meta-learners consistently find learning strategies that generalize to new tasks better than recurrent meta-learners.

Stochastic Variational Video Prediction
Mohammad Babaeizadeh, Chelsea Finn, Dumitru Erhan, Roy Campbell, Sergey Levine
arXiv / code (coming soon) / video results

We present SV2P, a video prediction method that builds on the conditional variational autoencoder to make stochastic predictions of future frames. We find that pretraining is crucial for enabling stochasticity. Our experiments demonstrate stochastic multi-frame predictions on three real-world video datasets.

All Papers


One-Shot Visual Imitation Learning via Meta-Learning
Chelsea Finn*, Tianhe Yu*, Tianhao Zhang, Pieter Abbeel, Sergey Levine
Conference on Robot Learning (CoRL), 2017 (Long Talk)
arXiv / code (coming soon) / video

Using demonstration data from a variety of tasks, our method enables a real robot to learn a new, related skill from a single visual demonstration, with the whole system trained end-to-end. The approach also allows the provided demonstration to be a raw video, without access to the joint trajectory or the controls applied to the robot arm.

Self-Supervised Visual Planning with Temporal Skip Connections
Frederik Ebert, Chelsea Finn, Alex Lee, Sergey Levine
Conference on Robot Learning (CoRL), 2017 (Long Talk)
arXiv / code / video results and data

We present three simple improvements to our prior work on self-supervised visual foresight that lead to substantially better visual planning capabilities. Our method can perform tasks that require longer-term planning and involve multiple objects.

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
International Conference on Machine Learning (ICML), 2017
arXiv / blog post / code / video results

We propose a model-agnostic algorithm for meta-learning, where a model's parameters are trained such that a small number of gradient updates with a small amount of training data from a new task will produce good generalization performance on that task. Our method learns a classifier that can recognize images of new characters using only a few examples, and a policy that can rapidly adapt its behavior in simulated locomotion tasks.
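
For illustration, a minimal sketch of the meta-objective in JAX: the inner update is an ordinary gradient step on a task's training data, and the meta-gradient is taken through that step. The toy linear model and single inner step are simplifications, not the architectures or settings used in the paper.

import jax
import jax.numpy as jnp

def predict(params, x):
    # Toy linear model standing in for the deep networks used in practice.
    w, b = params
    return x @ w + b

def loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

def inner_update(params, x_train, y_train, alpha=0.01):
    # Task adaptation: one gradient step on the task's training data.
    grads = jax.grad(loss)(params, x_train, y_train)
    return jax.tree_util.tree_map(lambda p, g: p - alpha * g, params, grads)

def maml_objective(params, task):
    # Loss of the adapted parameters on held-out data from the same task.
    x_train, y_train, x_val, y_val = task
    adapted = inner_update(params, x_train, y_train)
    return loss(adapted, x_val, y_val)

# Meta-gradient: differentiate through the inner gradient step, then average
# over a batch of tasks and apply an ordinary optimizer to the initial params.
meta_grad_fn = jax.grad(maml_objective)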


Generalizing Skills with Semi-Supervised Reinforcement Learning
Chelsea Finn, Tianhe Yu, Justin Fu, Pieter Abbeel, Sergey Levine
International Conference on Learning Representations (ICLR), 2017
arXiv / video results / code

We formalize the problem of semi-supervised reinforcement learning (SSRL), motivated by real-world settings where reward information is available only in a limited set of scenarios, such as when a human supervisor is present or in a controlled laboratory setting. We develop a simple algorithm for SSRL based on inverse reinforcement learning and show that it can improve performance by using 'unlabeled' experience.

Deep Visual Foresight for Planning Robot Motion
Chelsea Finn, Sergey Levine
International Conference on Robotics and Automation (ICRA), 2017
Best Cognitive Robotics Paper Finalist
arXiv / video

We combine an action-conditioned predictive model of images, "visual foresight," with model-predictive control for planning how to push objects. The method is entirely self-supervised, requiring minimal human involvement.
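
Schematically, the planner couples the learned prediction model with sampling-based model-predictive control. Below is a rough sketch of one planning step; random-shooting action selection is used here for simplicity, and predict_video and cost_fn are hypothetical stand-ins for the learned model and the pixel-based cost.

import jax
import jax.numpy as jnp

def plan_step(rng, context_frames, goal, predict_video, cost_fn,
              horizon=10, num_candidates=200, action_dim=4):
    # Sample candidate action sequences.
    actions = jax.random.normal(rng, (num_candidates, horizon, action_dim))
    # Roll each candidate through the learned video-prediction model ("visual foresight").
    futures = jax.vmap(lambda a: predict_video(context_frames, a))(actions)
    # Score each imagined future by how well it achieves the goal (e.g. how far
    # designated pixels move toward their target positions).
    costs = jax.vmap(lambda frames: cost_fn(frames, goal))(futures)
    # Execute only the first action of the best sequence, then replan (MPC).
    return actions[jnp.argmin(costs), 0]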

Reset-Free Guided Policy Search: Efficient Deep Reinforcement Learning with Stochastic Initial States
William Montgomery*, Anurag Ajay*, Chelsea Finn, Pieter Abbeel, Sergey Levine
International Conference on Robotics and Automation (ICRA), 2017
arXiv / video / code

We present a new guided policy search algorithm that handles stochastic initial conditions, which makes the method applicable to more general reinforcement learning problems and improves generalization performance in our robotic manipulation experiments.


A Connection Between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models
Chelsea Finn*, Paul Christiano*, Pieter Abbeel, Sergey Levine
NIPS Workshop on Adversarial Training, 2016
arXiv

We show that a sample-based algorithm for maximum entropy inverse reinforcement learning (MaxEnt IRL) corresponds to a generative adversarial network (GAN) with a particular choice of discriminator. Since MaxEnt IRL is simply an energy-based model (EBM) for behavior, we further show that GANs optimize EBMs with the corresponding discriminator, pointing to a simple and scalable EBM training procedure using GANs.
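
The specific discriminator form is the crux of the correspondence. A sketch of that form, in my own notation and omitting the partition-function estimate that the full method also maintains: with cost c and sampler density q, the discriminator outputs exp(-c) / (exp(-c) + q).

import jax.numpy as jnp

def irl_discriminator(cost, log_q):
    # D(tau) = exp(-c(tau)) / (exp(-c(tau)) + q(tau)): the discriminator that makes
    # GAN training correspond to sample-based MaxEnt IRL, where exp(-c) plays the
    # role of the (unnormalized) data density of the energy-based model and q is
    # the density of the sampling policy. Computed in log space for stability.
    log_p = -cost
    return jnp.exp(log_p - jnp.logaddexp(log_p, log_q))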


Active One-Shot Learning
Mark Woodward, Chelsea Finn
NIPS Deep Reinforcement Learning Workshop, 2016
arXiv / video description / poster

We propose a technique for learning an active learning strategy by combining one-shot learning and reinforcement learning, allowing the model to decide, during classification, which examples are worth labeling. Our experiments demonstrate that the model can trade off accuracy and label requests based on the reward function provided.
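
Concretely, at each step the agent either requests the true label or commits to a prediction, and the reward trades correctness against labeling cost. A small illustrative sketch; the specific reward values here are examples, not necessarily those used in the paper.

def step_reward(requested_label, prediction, true_label, label_cost=0.05):
    # Requesting the label incurs a small penalty but reveals the answer;
    # committing to a prediction earns +1 if correct and -1 if wrong.
    if requested_label:
        return -label_cost
    return 1.0 if prediction == true_label else -1.0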

Unsupervised Learning for Physical Interaction through Video Prediction
Chelsea Finn, Ian Goodfellow, Sergey Levine
Neural Information Processing Systems (NIPS), 2016
arXiv / videos / data / code

Our video prediction method predicts a transformation to apply to the previous image, rather than pixel values directly, leading to significantly improved multi-frame video prediction. We also introduce a dataset of 50,000 robotic pushing sequences, consisting of over 1 million frames.
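
The core idea is that the next frame is produced by transforming the previous one, so the model moves existing pixels rather than regenerating them. A simplified sketch with a single predicted convolution kernel per image (the paper's models predict multiple kernels plus compositing masks):

import jax.numpy as jnp
from jax.scipy.signal import convolve2d

def transform_frame(prev_frame, kernel):
    # prev_frame: (H, W) image; kernel: (k, k) transformation predicted by the
    # network. Normalizing the kernel and convolving the previous frame with it
    # copies/moves existing pixels instead of predicting pixel values directly.
    kernel = kernel / (jnp.sum(kernel) + 1e-8)
    return convolve2d(prev_frame, kernel, mode="same")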

Adapting Deep Visuomotor Representations with Weak Pairwise Constraints
Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Pieter Abbeel, Sergey Levine, Kate Saenko, Trevor Darrell
Workshop on the Algorithmic Foundations of Robotics (WAFR), 2016
arXiv

Collecting real-world robotic experience for learning an initial visual representation can be expensive. Instead, we show that it is possible to learn a suitably good initial representation using data collected largely in simulation.

Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization
Chelsea Finn, Sergey Levine, Pieter Abbeel
International Conference on Machine Learning (ICML), 2016
Oral presentation at the NIPS 2016 Deep Learning Symposium
arXiv / video results / talk video

We propose a method for inverse reinforcement learning (IRL) that can handle unknown dynamics and scale to flexible, nonlinear cost functions. We evaluate our algorithm on a series of simulated tasks and real-world robotic manipulation problems, including pouring and inserting dishes into a rack.

End-to-End Training of Deep Visuomotor Policies
Sergey Levine*, Chelsea Finn*, Trevor Darrell, Pieter Abbeel
Journal of Machine Learning Research (JMLR), 2016
CCC Blue Sky Ideas Award
arXiv / video / project page / code

We demonstrate a deep neural network trained end-to-end, from perception to controls, for robotic manipulation tasks.

Deep Spatial Autoencoders for Visuomotor Learning
Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, Pieter Abbeel
International Conference on Robotics and Automation (ICRA), 2016
arXiv / video

We learn a low-dimensional visual state space without supervision using deep spatial autoencoders, and use it to learn nonprehensile manipulation tasks, such as pushing a Lego block and scooping a bag into a bowl.
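
The encoder's bottleneck is a spatial soft-argmax: each convolutional feature map is reduced to an expected image coordinate, giving a compact set of "feature points" to use as the visual state. A minimal sketch of that operation, simplified relative to the full architecture:

import jax
import jax.numpy as jnp

def spatial_soft_argmax(feature_maps, temperature=1.0):
    # feature_maps: (C, H, W) conv activations. Softmax over spatial positions in
    # each channel, then take the expected (x, y) location -> (C, 2) feature points.
    c, h, w = feature_maps.shape
    probs = jax.nn.softmax(feature_maps.reshape(c, h * w) / temperature, axis=-1)
    probs = probs.reshape(c, h, w)
    ys, xs = jnp.meshgrid(jnp.linspace(-1.0, 1.0, h), jnp.linspace(-1.0, 1.0, w),
                          indexing="ij")
    return jnp.stack([jnp.sum(probs * xs, axis=(1, 2)),
                      jnp.sum(probs * ys, axis=(1, 2))], axis=-1)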


Learning Deep Neural Network Policies with Continuous Memory States
Marvin Zhang, Zoe McCarthy, Chelsea Finn, Sergey Levine, Pieter Abbeel
International Conference on Robotics and Automation (ICRA), 2016
arXiv / video

We propose a method for learning recurrent neural network policies with continuous memory states. Using trajectory optimization, the method learns both to store information in the memory states and to use it. Our method outperforms vanilla RNN and LSTM baselines.

Bridging Text Spotting and SLAM with Junction Features
Hsueh-Cheng Wang, Chelsea Finn, Liam Paull, Michael Kaess, Ruth Rosenholtz, Seth Teller, John Leonard
International Conference on Intelligent Robots and Systems (IROS), 2015

We develop a method that integrates text spotting with simultaneous localization and mapping (SLAM), using text in the environment to determine loop closures.


Beyond Lowest-Warping Cost Action Selection in Trajectory Transfer
Dylan Hadfield-Menell, Alex X. Lee, Chelsea Finn, Eric Tzeng, Sandy Huang, Pieter Abbeel
International Conference on Robotics and Automation (ICRA), 2015

We consider the problem of selecting which demonstration to transfer to the current test scenario. We frame the problem as an options Markov decision process (MDP) and develop an approach to learn a Q-function from expert demonstrations. Our results show significant improvement over nearest-neighbor selection.

Teaching

CS294-112: Deep Reinforcement Learning - Spring 2017
Co-Instructor

CS188: Introduction to Artificial Intelligence - Spring 2015
Graduate Student Instructor (GSI)

6.S080: Introduction to Inference - Spring 2014
Teaching Assistant (TA)

6.141: Robotics: Science and Systems - Spring 2013
Lab Assistant (LA)

6.02: Digital Communication Systems - Spring 2012
Lab Assistant (LA)


This guy makes a nice webpage.