Jianlan Luo


Email: jianlanluo [at] eecs [dot] berkeley [dot] edu

I am a Postdoctoral Scholar at the Berkeley Artificial Intelligence Research (BAIR) Lab working with Prof. Sergey Levine. Before moving back full-time to academia in 2022, I spent two years as a researcher at Google [X] working with Prof. Stefan Schaal. I received my MS/Ph.D. from UC Berkeley in 2020. I have also spent time at DeepMind and Everyday Robots.


Google Scholar   |   Twitter   |   GitHub

News


Research

My research vision centers on enabling robust long-term autonomy for complex real-world systems such as industrial or household robots, drawing on tools and concepts from machine learning, controls, and robotics. Toward that end, my research agenda focuses on developing algorithms and establishing foundational principles for the design of high-performance, learning-based robotic systems, with the ultimate goal of deploying them successfully in complex, real-world environments.

Selected Publications [ Full List ]


Octo: An Open-Source Generalist Robot Policy
Octo Model Team
arXiv preprint
Paper | Code

SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning
Jianlan Luo*, Zheyuan Hu*, Charles Xu, Siri Gadipudi, Archit Sharma, Rehaan Ahmad, Stefan Schaal, Chelsea Finn, Abhishek Gupta, Sergey Levine
International Conference on Robotics and Automation (ICRA) 2024
arXiv | Video | Code | Media Coverage

FMB: A Functional Manipulation Benchmark for Generalizable Robotic Learning
Jianlan Luo*, Charles Xu*, Fangchen Liu, Liam Tan, Zipeng Lin, Jeffrey Wu, Pieter Abbeel, Sergey Levine
Under review at the International Journal of Robotics Research (IJRR)
arXiv | Video | Data

RLIF: Interactive Imitation Learning as Reinforcement Learning
Jianlan Luo*, Perry Dong*, Yuexiang Zhai, Yi Ma, Sergey Levine
International Conference on Learning Representations (ICLR) 2024
arXiv | Video | Code | Media Coverage 1, 2, 3, 4

Open X-Embodiment: Robotic Learning Datasets and RT-X Models
Open X-Embodiment Collaboration
International Conference on Robotics and Automation (ICRA) 2024
arXiv | Blog Post | Dataset

Multi-Stage Cable Routing through Hierarchical Imitation Learning
Jianlan Luo*, Charles Xu*, Xinyang Geng, Gilbert Feng, Kuan Fang, Liam Tan, Stefan Schaal, Sergey Levine
IEEE Transactions on Robotics (T-RO) 2024
arXiv | T-RO version | Video | Code | Dataset | Data in tfds

Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning
Jianlan Luo, Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, Sergey Levine
Conference on Robot Learning (CoRL) 2023
arXiv | Code

REBOOT: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation
Zheyuan Hu*, Aaron Rovinsky*, Jianlan Luo, Vikash Kumar, Abhishek Gupta, Sergey Levine
Conference on Robot Learning (CoRL) 2023
arXiv | Video

Offline Meta-Reinforcement Learning for Industrial Insertion
Tony Z. Zhao*, Jianlan Luo*, Oleg Sushkov, Rugile Pevceviciute, Nicolas Heess, Jon Scholz, Stefan Schaal, Sergey Levine
International Conference on Robotics and Automation (ICRA) 2022
arXiv | Video | Media Coverage

Robust Multi-Modal Policies for Industrial Assembly via Reinforcement Learning and Demonstrations: A Large-Scale Study
Jianlan Luo*, Oleg Sushkov*, Rugile Pevceviciute*, Wenzhao Lian, Chang Su, Mel Vecerik, Stefan Schaal, Jon Scholz
Robotics: Science and Systems (RSS) 2021
arXiv | Video | Media Coverage

Action-Image Representation: Learning Scalable Deep Grasping Policies with Zero Real World Data
Mohi Khansari, Daniel Kappler, Jianlan Luo, Jeff Bingham, Mrinal Kalakrishnan
International Conference on Robotics and Automation (ICRA) 2020
arXiv

Reinforcement Learning on Variable Impedance Controller for High-Precision Robotic Assembly
Jianlan Luo, Eugen Solowjow, Chengtao Wen, Juan Aparicio Ojea, Alice M Agogino, Aviv Tamar, Pieter Abbeel
International Conference on Robotics and Automation (ICRA) 2019
arXiv | Video | Media Coverage

Website template courtesy