Sergey Levine


Assistant Professor, UC Berkeley, EECS
Address:
754 Sutardja Dai Hall
UC Berkeley
Berkeley, CA 94720-1758
Email:
Prospective students: please read this before contacting me.


I am an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. My research focuses on the intersection between control and machine learning, with the aim of developing algorithms that allow machines to autonomously acquire the skills needed to execute complex tasks. In particular, I am interested in how learning can be used to acquire complex behavioral skills, in order to endow machines with greater autonomy and intelligence. To see a more formal biography, click here.

News and Announcements

July 21, 2017 Google Research Blog article about our robotics work posted!
July 15, 2017 Five new preprints on deep robotic learning posted!
July 12, 2017 Our IJRR article extending the work on self-supervised grasping is now available, including a new experiment with a million additional grasps and transfer between two types of robots.
June 14, 2017 Collective Robot Reinforcement Learning with Distributed Asynchronous Guided Policy Search accepted at IROS 2017!
June 1, 2017 Four papers accepted at ICML 2017: Modular Multitask Reinforcement Learning with Policy Sketches, Reinforcement Learning with Deep Energy-Based Policies, Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning and Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.
April 28, 2017 Two papers accepted at RSS 2017: (CAD)2RL: Real Single-Image Flight without a Single Real Image and Unsupervised Perceptual Rewards for Imitation Learning.
April 9, 2017 Two new papers posted: Goal-Driven Dynamics Learning via Bayesian Optimization and Learning Visual Servoing with Deep Features and Fitted Q-Iteration (accepted to ICLR 2017).
March 14, 2017 Six new papers on deep reinforcement learning posted: Uncertainty-Aware Reinforcement Learning, Model-Agnostic Meta-Learning, Reinforcement Learning with Deep Energy-Based Policies, Exploration with Exemplar Models, Combining Model-Based and Model-Free Updates, and Learning Invariant Feature Spaces (accepted to ICLR 2017).
March 1, 2017 Two new papers on deep robotic learning posted: Cognitive Mapping, accepted at CVPR 2017, and Rope Manipulation, accepted at ICRA 2017.
February 17, 2017 Videos of the lectures from our NIPS 2016 Workshop on Deep Learning for Action and Interaction are posted here.
February 10, 2017 Five papers accepted at the International Conference on Learning Representations (ICLR), including one oral presentation! Three are available below; the rest are coming soon.
January 21, 2017 Nine papers accepted at the International Conference on Robotics and Automation (ICRA)! Eight are available below; the ninth is coming soon.
January 14, 2017 Two new preprints on deep reinforcement learning posted!

Upcoming Events and Talks from Lab Members

July 29, 2017 SCA 2017: talk by Sergey Levine
August 6, 2017 ICML 2017 Tutorial on Deep Reinforcement Learning: Sergey Levine and Chelsea Finn
August 7, 2017 ICML 2017: presentation by Chelsea Finn
August 8, 2017 ICML 2017: presentation by Marvin Zhang and collaborators

Recent Talk (2017)

This talk, given in April 2017 at the CMU RI Seminar Series, summarizes some of the work in my group. An older talk from 2015, focusing primarily on guided policy search, is available here.

Representative Publications

These recent papers provide an overview of my research, including large-scale robotic learning, deep reinforcement learning algorithms, and deep learning of robotic sensorimotor skills.

Deep Visual Foresight for Planning Robot Motion.
Chelsea Finn, Sergey Levine. ICRA 2017.
[PDF] [Video] [arXiv]
In this paper, we present a method for using video prediction models to plan and execute robotic manipulation skills. Specifically, we show that nonprehensile pushing behaviors can be generated automatically using a deep neural network video prediction model trained in a self-supervised manner using a large dataset of automatically generated robotic pushes. Control is performed by optimizing for actions for which the model predicts the desired outcome, which is specified by a user command. The results show that learned video prediction models perform nontrivial reasoning about physical interactions, and allow basic pushing skills to be executed with minimal manual engineering and no prior knowledge about physics or objects.
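The planning procedure described above, choosing actions for which a learned video prediction model forecasts the desired outcome, can be illustrated with a minimal random-shooting sketch. This is not the paper's implementation: `predict_pixel_motion` is a hypothetical stand-in for the trained video prediction CNN, and the action space and cost are simplified for illustration.

```python
import numpy as np

def plan_action(predict_pixel_motion, current_frame, goal_pixel,
                num_samples=100, horizon=5):
    """Random-shooting planning over a learned prediction model.

    predict_pixel_motion(frame, actions) -> predicted positions of a
    designated pixel over the horizon (hypothetical interface standing in
    for the learned video prediction model).
    """
    best_cost, best_actions = np.inf, None
    for _ in range(num_samples):
        # Sample a candidate action sequence (e.g., end-effector displacements).
        actions = np.random.uniform(-1.0, 1.0, size=(horizon, 2))
        predicted = predict_pixel_motion(current_frame, actions)
        # Cost: distance between the predicted final pixel position and the
        # user-specified goal position.
        cost = np.linalg.norm(predicted[-1] - goal_pixel)
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    # Execute only the first action, then replan (model-predictive control).
    return best_actions[0]
```

In the actual method, the sampling distribution is refined iteratively rather than drawn once, and the cost is defined over predicted pixel flow from the video prediction network.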
Deep Reinforcement Learning for Robotic Manipulation with Asynchronous Off-Policy Updates.
Shixiang Gu*, Ethan Holly*, Timothy Lillicrap, Sergey Levine. ICRA 2017.
[PDF] [Video] [arXiv]
In this work, we explore how deep reinforcement learning methods based on normalized advantage functions (NAF) can be used to learn real-world robotic manipulation skills, with multiple robots simultaneously pooling their experiences. Our results show that we can obtain faster training and, in some cases, converge to a better solution when training on multiple robots, and we show that we can learn a real-world door opening skill with deep neural network policies using about 2.5 hours of total training time with two robots.
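The normalized advantage function (NAF) mentioned above makes Q-learning tractable in continuous action spaces by restricting the advantage to be quadratic in the action. A minimal sketch of that parameterization follows; in NAF the value, mean, and lower-triangular entries are all outputs of a neural network, whereas here they are passed in directly for illustration.

```python
import numpy as np

def naf_q_value(value, mu, tril_entries, action):
    """Normalized advantage function: Q(s,a) = V(s) + A(s,a), with
    A(s,a) = -1/2 (a - mu)^T P (a - mu) and P = L L^T for a
    lower-triangular L. Because P is positive definite, the greedy
    action argmax_a Q(s,a) is simply a = mu."""
    dim = len(action)
    L = np.zeros((dim, dim))
    L[np.tril_indices(dim)] = tril_entries
    # Exponentiate the diagonal so P = L L^T is positive definite.
    L[np.diag_indices(dim)] = np.exp(L[np.diag_indices(dim)])
    P = L @ L.T
    diff = action - mu
    advantage = -0.5 * diff @ P @ diff
    return value + advantage
```

This structure is what allows the greedy action to be read off in closed form, which in turn makes asynchronous off-policy updates from multiple robots straightforward.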
Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection.
Sergey Levine, Peter Pastor, Alex Krizhevsky, Julian Ibarz, Deirdre Quillen. IJRR, 2017.
[PDF (IJRR)] [Video] [Google Research Blog] [Data]
This paper presents an approach for learning grasping with continuous servoing by using large-scale data collection on a cluster of up to 14 individual robots. We collected about 800,000 grasp attempts, which we used to train a large convolutional neural network to predict grasp success given an image and a candidate grasp vector. We then construct a continuous servoing mechanism that uses this network to continuously make decisions about the optimal motor command to maximize the probability of grasp success. We evaluate our approach by grasping objects that were not seen at training time, and compare to an open-loop variant that does not perform continuous feedback control.
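The servoing mechanism described above repeatedly scores candidate motor commands with the learned grasp success predictor and executes the best one. A minimal sketch of one step of that loop is below; `grasp_success_prob` is a hypothetical stand-in for the paper's large convolutional network, and the candidate set is simplified (the actual method optimizes over candidates with a cross-entropy-style procedure).

```python
import numpy as np

def servo_step(grasp_success_prob, image, candidate_motions):
    """One step of continuous visual servoing for grasping.

    grasp_success_prob(image, motion) -> predicted probability in [0, 1]
    that executing `motion` leads to a successful grasp (stand-in for the
    trained CNN). Returns the best-scoring candidate motor command.
    """
    scores = [grasp_success_prob(image, m) for m in candidate_motions]
    best = int(np.argmax(scores))
    # After executing the chosen motion, a new image is captured and the
    # loop repeats, giving closed-loop feedback control.
    return candidate_motions[best], scores[best]
```

The closed-loop structure is what distinguishes this from the open-loop baseline in the paper, which commits to a grasp pose from the initial image.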
End-to-End Training of Deep Visuomotor Policies.
Sergey Levine*, Chelsea Finn*, Trevor Darrell, Pieter Abbeel. JMLR 17, 2016.
[PDF] [Video] [arXiv] [Code]
This paper presents a method for training visuomotor policies that perform both vision and control for robotic manipulation tasks. The policies are represented by deep convolutional neural networks with about 92,000 parameters. By learning to perform vision and control together, the vision system can adapt to the goals of the task, essentially performing goal-driven perception. Experimental results on a PR2 robot show that this method achieves substantial improvements in the accuracy of the final policy.
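A key architectural component of these visuomotor policies is the spatial softmax layer, which converts each convolutional feature map into an expected 2D image position ("feature point"); this compact spatial representation is part of why the network needs only about 92,000 parameters. A NumPy sketch of that layer, simplified from the paper's architecture:

```python
import numpy as np

def spatial_softmax(feature_maps):
    """Spatial softmax: map each feature map to an expected (x, y)
    image position in [-1, 1].

    feature_maps: array of shape (channels, height, width)
    returns: array of shape (channels, 2)
    """
    c, h, w = feature_maps.shape
    # Softmax over all pixels of each channel.
    flat = feature_maps.reshape(c, -1)
    flat = flat - flat.max(axis=1, keepdims=True)  # numerical stability
    probs = (np.exp(flat) / np.exp(flat).sum(axis=1, keepdims=True))
    probs = probs.reshape(c, h, w)
    # Expected image coordinates under the per-channel distributions.
    xs = np.linspace(-1.0, 1.0, w)
    ys = np.linspace(-1.0, 1.0, h)
    fx = (probs.sum(axis=1) * xs).sum(axis=1)  # marginal over columns
    fy = (probs.sum(axis=2) * ys).sum(axis=1)  # marginal over rows
    return np.stack([fx, fy], axis=1)
```

The resulting feature points feed into the fully connected layers that output motor torques, so the vision and control portions of the network can be trained end to end.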

Recent Preprints

Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation.
YuXuan Liu, Abhishek Gupta, Pieter Abbeel, Sergey Levine. arXiv 1707.03374.
[Overview] [PDF] [Video] [arXiv]
Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-to-End Learning from Demonstration.
Rouhollah Rahmatizadeh, Pooya Abolghasemi, Ladislau Boloni, Sergey Levine. arXiv 1707.02920.
[Overview] [PDF] [Video] [arXiv]
End-to-End Learning of Semantic Grasping.
Eric Jang, Sudheendra Vijaynarasimhan, Peter Pastor, Julian Ibarz, Sergey Levine. arXiv 1707.01932.
[Overview] [PDF] [Video] [arXiv]
Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning.
Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard Turner, Bernhard Scholkopf, Sergey Levine. arXiv 1706.00387.
[Overview] [PDF] [arXiv]
Time-Contrastive Networks: Self-Supervised Learning from Multi-View Observation.
Pierre Sermanet*, Corey Lynch*, Jasmine Hsu, Sergey Levine. arXiv 1704.06888.
[Overview] [PDF] [Video] [arXiv]
EX2: Exploration with Exemplar Models for Deep Reinforcement Learning.
Justin Fu*, John D. Co-Reyes*, Sergey Levine. arXiv 1703.01260.
[Overview] [PDF] [Video] [arXiv]
Uncertainty-Aware Reinforcement Learning for Collision Avoidance.
Gregory Kahn, Adam Villaflor, Vitchyr Pong, Pieter Abbeel, Sergey Levine. arXiv 1702.01182.
[Overview] [PDF] [Video] [arXiv]
Learning Dexterous Manipulation Policies from Experience and Imitation.
Vikash Kumar, Abhishek Gupta, Emanuel Todorov, Sergey Levine. arXiv 1611.05095.
[Overview] [PDF] [Video] [arXiv]

All Papers and Articles

2017

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.
Chelsea Finn, Pieter Abbeel, Sergey Levine. ICML 2017.
[Overview] [PDF] [Video] [arXiv]
Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning.
Yevgen Chebotar*, Karol Hausman*, Marvin Zhang*, Gaurav Sukhatme, Stefan Schaal, Sergey Levine. ICML 2017.
[Overview] [PDF] [Video] [arXiv]
Reinforcement Learning with Deep Energy-Based Policies.
Tuomas Haarnoja*, Haoran Tang*, Pieter Abbeel, Sergey Levine. ICML 2017.
[Overview] [PDF] [Video] [arXiv]
Modular Multitask Reinforcement Learning with Policy Sketches.
Jacob Andreas, Dan Klein, Sergey Levine. ICML 2017.
[Overview] [PDF] [arXiv]
Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection.
Sergey Levine, Peter Pastor, Alex Krizhevsky, Julian Ibarz, Deirdre Quillen. IJRR, 2017.
[Overview] [PDF (IJRR)]
Goal-Driven Dynamics Learning via Bayesian Optimization.
Somil Bansal, Roberto Calandra, Ted Xiao, Sergey Levine, Claire J. Tomlin. CDC 2017.
[Overview] [PDF] [arXiv]
Collective Robot Reinforcement Learning with Distributed Asynchronous Guided Policy Search.
Ali Yahya, Adrian Li, Mrinal Kalakrishnan, Yevgen Chebotar, Sergey Levine. IROS 2017.
[Overview] [PDF] [Video] [arXiv]
Unsupervised Perceptual Rewards for Imitation Learning.
Pierre Sermanet, Kelvin Xu, Sergey Levine. RSS 2017.
[Overview] [PDF] [arXiv]
(CAD)2RL: Real Single-Image Flight without a Single Real Image.
Fereshteh Sadeghi, Sergey Levine. RSS 2017.
[Overview] [PDF] [Video] [arXiv]
Cognitive Mapping and Planning for Visual Navigation.
Saurabh Gupta, James Davidson, Sergey Levine, Rahul Sukthankar, Jitendra Malik. CVPR 2017.
[Overview] [PDF] [Video] [arXiv]
Learning Visual Servoing with Deep Features and Fitted Q-Iteration.
Alex X. Lee, Sergey Levine, Pieter Abbeel. ICLR 2017.
[Overview] [PDF] [Video] [arXiv]
Learning Invariant Feature Spaces to Transfer Skills with Reinforcement Learning.
Abhishek Gupta*, Coline Devin*, Yu Xuan Liu, Pieter Abbeel, Sergey Levine. ICLR 2017.
[Overview] [PDF] [Video] [arXiv]
Generalizing Skills with Semi-Supervised Reinforcement Learning.
Chelsea Finn, Tianhe Yu, Justin Fu, Pieter Abbeel, Sergey Levine. ICLR 2017.
[Overview] [PDF] [Video] [arXiv]
Q-Prop: Sample-Efficient Policy Gradient with an Off-Policy Critic.
Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard Turner, Sergey Levine. ICLR 2017.
[Overview] [PDF] [arXiv]
EPOpt: Learning Robust Neural Network Policies Using Model Ensembles.
Aravind Rajeswaran, Sarvjeet Ghotra, Sergey Levine, Balaraman Ravindran. ICLR 2017.
[Overview] [PDF] [Video] [arXiv]
Combining Self-Supervised Learning and Imitation for Vision-Based Rope Manipulation.
Ashvin Nair, Dian Chen, Pulkit Agrawal, Phillip Isola, Pieter Abbeel, Jitendra Malik, Sergey Levine. ICRA 2017.
[Overview] [PDF] [Video] [arXiv]
Deep Visual Foresight for Planning Robot Motion.
Chelsea Finn, Sergey Levine. ICRA 2017.
[Overview] [PDF] [Video] [arXiv]
Reset-Free Guided Policy Search: Efficient Deep Reinforcement Learning with Stochastic Initial States.
William Montgomery*, Anurag Ajay*, Chelsea Finn, Pieter Abbeel, Sergey Levine. ICRA 2017.
[Overview] [PDF] [Video] [arXiv]
Deep Reinforcement Learning for Robotic Manipulation with Asynchronous Off-Policy Updates.
Shixiang Gu*, Ethan Holly*, Timothy Lillicrap, Sergey Levine. ICRA 2017.
[Overview] [PDF] [Video] [arXiv]
Path Integral Guided Policy Search.
Yevgen Chebotar, Mrinal Kalakrishnan, Ali Yahya, Adrian Li, Stefan Schaal, Sergey Levine. ICRA 2017.
[Overview] [PDF] [Video] [arXiv]
Deep Reinforcement Learning for Tensegrity Robot Locomotion.
Xinyang Geng*, Marvin Zhang*, Jonathan Bruce*, Ken Caluwaerts, Massimo Vespignani, Vytas SunSpiral, Pieter Abbeel, Sergey Levine. ICRA 2017.
[Overview] [PDF] [Video] [arXiv]
Learning Modular Neural Network Policies for Multi-Task and Multi-Robot Transfer.
Coline Devin*, Abhishek Gupta*, Trevor Darrell, Pieter Abbeel, Sergey Levine. ICRA 2017.
[Overview] [PDF] [Video] [arXiv]
Learning from the Hindsight Plan - Episodic MPC Improvement.
Aviv Tamar, Garrett Thomas, Tianhao Zhang, Sergey Levine, Pieter Abbeel. ICRA 2017.
[Overview] [PDF] [Video] [arXiv]
PLATO: Policy Learning using Adaptive Trajectory Optimization.
Gregory Kahn, Tianhao Zhang, Sergey Levine, Pieter Abbeel. ICRA 2017.
[Overview] [PDF] [Video] [arXiv]

2016

A Connection Between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models.
Chelsea Finn*, Paul Christiano*, Pieter Abbeel, Sergey Levine. NIPS 2016 Workshop on Adversarial Training.
[Overview] [PDF] [arXiv]
Towards Adapting Deep Visuomotor Representations from Simulated to Real Environments.
Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Pieter Abbeel, Sergey Levine, Kate Saenko, Trevor Darrell. WAFR 2016.
[Overview] [PDF] [arXiv]
Guided Policy Search as Approximate Mirror Descent.
William Montgomery, Sergey Levine. NIPS 2016.
[Overview] [PDF] [arXiv] [Code]
Learning to Poke by Poking: Experiential Learning of Intuitive Physics.
Pulkit Agrawal*, Ashvin Nair*, Pieter Abbeel, Jitendra Malik, Sergey Levine. NIPS 2016.
[Overview] [PDF] [Video] [arXiv]
Unsupervised Learning for Physical Interaction through Video Prediction.
Chelsea Finn, Ian Goodfellow, Sergey Levine. NIPS 2016.
[Overview] [PDF] [Video] [arXiv] [Data] [Code]
Backprop KF: Learning Discriminative Deterministic State Estimators.
Tuomas Haarnoja, Anurag Ajay, Sergey Levine, Pieter Abbeel. NIPS 2016.
[Overview] [PDF] [arXiv]
Value Iteration Networks.
Aviv Tamar, Sergey Levine, Pieter Abbeel. NIPS 2016.
[Overview] [PDF] [arXiv]
Learning Dexterous Manipulation for a Soft Robotic Hand from Human Demonstration.
Abhishek Gupta, Clemens Eppner, Sergey Levine, Pieter Abbeel. IROS 2016.
[Overview] [PDF] [Video] [arXiv]
One-Shot Learning of Manipulation Skills with Online Dynamics Adaptation and Neural Network Priors.
Justin Fu, Sergey Levine, Pieter Abbeel. IROS 2016.
[Overview] [PDF] [Video] [arXiv]
Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection.
Sergey Levine, Peter Pastor, Alex Krizhevsky, Deirdre Quillen. ISER 2016.
[Overview] [PDF (extended)] [PDF (ISER)] [Video] [arXiv (extended)] [Google Research Blog] [Data]
End-to-End Training of Deep Visuomotor Policies.
Sergey Levine*, Chelsea Finn*, Trevor Darrell, Pieter Abbeel. JMLR 17, 2016.
[Overview] [PDF] [Video] [arXiv] [Code]
Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization.
Chelsea Finn, Sergey Levine, Pieter Abbeel. ICML 2016.
[Overview] [PDF] [Video] [arXiv]
Continuous Deep Q-Learning with Model-based Acceleration.
Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, Sergey Levine. ICML 2016.
[Overview] [PDF] [arXiv]
MuProp: Unbiased Backpropagation for Stochastic Neural Networks.
Shixiang Gu, Sergey Levine, Ilya Sutskever, Andriy Mnih. ICLR 2016.
[Overview] [PDF] [arXiv]
Learning Visual Predictive Models of Physics for Playing Billiards.
Katerina Fragkiadaki*, Pulkit Agrawal*, Sergey Levine, Jitendra Malik. ICLR 2016.
[Overview] [PDF] [arXiv]
High-Dimensional Continuous Control Using Generalized Advantage Estimation.
John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan, Pieter Abbeel. ICLR 2016.
[Overview] [PDF] [arXiv]
Deep Spatial Autoencoders for Visuomotor Learning.
Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, Pieter Abbeel. ICRA 2016.
[Overview] [PDF] [Video] [arXiv]
Learning Deep Control Policies for Autonomous Aerial Vehicles with MPC-Guided Policy Search.
Tianhao Zhang, Gregory Kahn, Sergey Levine, Pieter Abbeel. ICRA 2016.
[Overview] [PDF] [Video] [arXiv]
Optimal Control with Learned Local Models: Application to Dexterous Manipulation.
Vikash Kumar, Emanuel Todorov, Sergey Levine. ICRA 2016.
[Overview] [PDF] [Video]
Model-Based Reinforcement Learning with Parametrized Physical Models and Optimism-Driven Exploration.
Christopher Xie, Sachin Patil, Teodor Moldovan, Sergey Levine, Pieter Abbeel. ICRA 2016.
[Overview] [PDF] [arXiv]
Learning Deep Neural Network Policies with Continuous Memory States.
Marvin Zhang, Zoe McCarthy, Chelsea Finn, Sergey Levine, Pieter Abbeel. ICRA 2016.
[Overview] [PDF] [arXiv]

2015

Recurrent Network Models for Human Dynamics.
Katerina Fragkiadaki, Sergey Levine, Panna Felsen, Jitendra Malik. ICCV 2015.
[Overview] [PDF] [Video] [arXiv]
Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models.
Bradly C. Stadie, Sergey Levine, Pieter Abbeel. arXiv 1507.00814. 2015.
[Overview] [PDF] [arXiv]
Learning Compound Multi-Step Controllers under Unknown Dynamics.
Weiqiao Han, Sergey Levine, Pieter Abbeel. IROS 2015.
[Overview] [PDF] [Video]
Learning from Multiple Demonstrations using Trajectory-Aware Non-Rigid Registration with Applications to Deformable Object Manipulation.
Alex X. Lee, Abhishek Gupta, Henry Lu, Sergey Levine, Pieter Abbeel. IROS 2015.
[Overview] [PDF] [Video]
Trust Region Policy Optimization.
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel. ICML 2015.
[Overview] [PDF] [Video] [arXiv]
Learning Contact-Rich Manipulation Skills with Guided Policy Search.
Sergey Levine, Nolan Wagener, Pieter Abbeel. ICRA 2015.
[Overview] [PDF] [Video]
Learning Force-Based Manipulation of Deformable Objects from Multiple Demonstrations.
Alex X. Lee, Henry Lu, Abhishek Gupta, Sergey Levine, Pieter Abbeel. ICRA 2015.
[Overview] [PDF] [Video]
Optimism-Driven Exploration for Nonlinear Systems.
Teodor Mihai Moldovan, Sergey Levine, Michael I. Jordan, Pieter Abbeel. ICRA 2015.
[Overview] [PDF]

2014

Learning Neural Network Policies with Guided Policy Search under Unknown Dynamics.
Sergey Levine, Pieter Abbeel. NIPS 2014.
[Overview] [PDF] [Video]
Learning Complex Neural Network Policies with Trajectory Optimization.
Sergey Levine, Vladlen Koltun. ICML 2014.
[Overview] [PDF] [Video]
Motor Skill Learning with Local Trajectory Methods.
Sergey Levine. Ph.D. thesis, Stanford University, 2014.
[Overview] [PDF]
Offline Policy Evaluation Across Representations with Applications to Educational Games.
Travis Mandel, Yun-En Liu, Sergey Levine, Emma Brunskill, Zoran Popović. AAMAS 2014.
[Overview] [PDF] [Website]

2013

Exploring Deep and Recurrent Architectures for Optimal Control.
Sergey Levine. NIPS Workshop on Deep Learning 2013.
[Overview] [PDF]
Variational Policy Search via Trajectory Optimization.
Sergey Levine, Vladlen Koltun. NIPS 2013.
[Overview] [PDF]
Inverse Optimal Control for Humanoid Locomotion.
Taesung Park, Sergey Levine. RSS Workshop on Inverse Optimal Control & Robotic Learning from Demonstration, 2013.
[Overview] [PDF]
Guided Policy Search.
Sergey Levine, Vladlen Koltun. ICML 2013.
[Overview] [PDF] [Video]

2012

Continuous Inverse Optimal Control with Locally Optimal Examples.
Sergey Levine, Vladlen Koltun. ICML 2012.
[Overview] [PDF] [Video/Code]
Continuous Character Control with Low-Dimensional Embeddings.
Sergey Levine, Jack M. Wang, Alexis Haraux, Zoran Popović, Vladlen Koltun. ACM SIGGRAPH 2012.
[Overview] [PDF] [Video/Code]
Physically Plausible Simulation for Character Animation.
Sergey Levine, Jovan Popović. SCA 2012.
[Overview] [PDF] [Video]

2011

Nonlinear Inverse Reinforcement Learning with Gaussian Processes.
Sergey Levine, Zoran Popović, Vladlen Koltun. NIPS 2011.
[Overview] [PDF] [Poster] [Video/Code]
Space-Time Planning with Parameterized Locomotion Controllers.
Sergey Levine, Yongjoon Lee, Vladlen Koltun, Zoran Popović. ACM Transactions on Graphics 30 (3). 2011.
[Overview] [PDF] [Video]

2010

Feature Construction for Inverse Reinforcement Learning.
Sergey Levine, Zoran Popović, Vladlen Koltun. NIPS 2010.
[Overview] [PDF] [Poster] [Website]
Gesture Controllers.
Sergey Levine, Philipp Krähenbühl, Sebastian Thrun, Vladlen Koltun. ACM SIGGRAPH 2010.
[Overview] [PDF] [Video]

2009

Real-Time Prosody-Driven Synthesis of Body Language.
Sergey Levine, Christian Theobalt, Vladlen Koltun. ACM SIGGRAPH Asia 2009.
[Overview] [PDF] [Video]
Modeling Body Language from Speech in Natural Conversation.
Sergey Levine. Master's research report, Stanford University, 2009.
[Overview] [PDF] [Video]
Body Language Animation Synthesis from Prosody.
Sergey Levine. Undergraduate thesis, Stanford University, 2009.
[Overview] [PDF] [Video]

Research Group

Roberto Calandra Postdoc
JD Co-Reyes Ph.D. student
Coline Devin Ph.D. student
Chelsea Finn Ph.D. student
Justin Fu Ph.D. student
Abhishek Gupta Ph.D. student
Gregory Kahn Ph.D. student
Alex X. Lee Ph.D. student
William Montgomery Ph.D. student
Vitchyr Pong Ph.D. student
Kate Rakelly Ph.D. student
Fereshteh Sadeghi Ph.D. student
Avi Singh Ph.D. student
Marvin Zhang Ph.D. student
Frederik Ebert Visiting MS student
Kurtland Chua Undergraduate student
Jasmine Deng Undergraduate student
Justin Lin Undergraduate student
Russell Mendonca Undergraduate student
Soroush Nasiriany Undergraduate student
Larry Yang Undergraduate student
Aurick Zhou Undergraduate student

Research Support

National Science Foundation, 2016 - present
Google, 2016 - present
Honda, 2017 - present
Berkeley DeepDrive (BDD), 2016 - present
Office of Naval Research, 2016 - present
NVIDIA, 2016 - present
Siemens, 2017 - present
Berkeley AI Research (BAIR), 2016 - present
© 2009-2016 Sergey Levine.