Nathan O. Lambert

I am a PhD candidate at the University of California, Berkeley, in the Department of Electrical Engineering and Computer Sciences. I have the pleasure of being advised by Professor Kristofer Pister in the Berkeley Autonomous Microsystems Lab. In the summer of 2019, I worked with Roberto Calandra at Facebook AI Research, a collaboration that continues today. I completed my undergraduate education at Cornell University's College of Engineering in 2017. While there, I worked with the Lab of Plasma Studies and the SonicMEMs Lab.

Links to external materials:

Email  /  CV  /  Google Scholar  /  LinkedIn  /  Github  /  Blog

I am looking to recruit undergrads from underrepresented groups. If this describes you and you are interested in my work, please email me directly.

I'm interested in the intersection of machine learning and control, with applications to experimental robotics. With Kris, I am working on direct synthesis of robot controllers with model-based reinforcement learning, requiring no prior system knowledge. For an overview of my recent work, you can find a shortened version of my qualifying exam slides here, or a private recording here.

My high-level interests:
  1. Novel Robotics: I want to be able to build useful robots from whatever pieces an engineer has.

  2. Model-based Reinforcement Learning: I am optimistic about interpretable learning for robot locomotion.

  3. Robot Learning in Weak-sensor Environments: As a practical roboticist (or a data scientist), I want to make systems that work in all parts of the world.

Representative Publications

Objective Mismatch in Model-based Reinforcement Learning
Nathan Lambert, Brandon Amos, Omry Yadan, Roberto Calandra
Learning for Dynamics and Control (L4DC), 2020.
Paper   /  Workshop Presentation   /  More

We study the numerical effects of the dual optimization problem in model-based reinforcement learning: optimizing model accuracy carries no guarantee of improving task performance!
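As a toy illustration of the idea (my own sketch, not the paper's setup; dynamics, models, and the one-step planner are all hypothetical): a model with lower one-step prediction error on the training distribution can still yield a worse controller when used for planning.

```python
# Toy objective-mismatch sketch (hypothetical example, not from the paper).
# True dynamics: x' = x + u. Two imperfect learned "models":
#  - model_gain: wrong action gain, but small error on the small
#    exploratory actions in the training data
#  - model_bias: constant offset, larger training error
def true_step(x, u):
    return x + u

def model_gain(x, u):
    return x + 2.0 * u

def model_bias(x, u):
    return x + u + 0.065

def dataset_mse(model):
    # "Training" data: small exploratory actions u in [-0.1, 0.1],
    # as in early random data collection.
    pts = [(x / 10.0, u / 100.0) for x in range(11) for u in range(-10, 11)]
    return sum((model(x, u) - true_step(x, u)) ** 2 for x, u in pts) / len(pts)

def plan(model, x, goal=0.0):
    # One-step planner: pick the action whose *predicted* next state
    # is closest to the goal, over a grid of allowed actions.
    cands = [u / 100.0 for u in range(-100, 101)]
    return min(cands, key=lambda u: abs(model(x, u) - goal))

x0 = 0.5  # task: drive the state to 0 in one step
for name, m in [("gain-error model", model_gain), ("bias model", model_bias)]:
    u = plan(m, x0)
    print(f"{name}: dataset MSE={dataset_mse(m):.4f}, "
          f"true cost |x'|={abs(true_step(x0, u)):.2f}")
```

Here the gain-error model has the lower training MSE, yet the planner using it lands farther from the goal, because its error is concentrated exactly where the controller acts.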

Low Level Control of a Quadrotor with Deep Model-Based Reinforcement Learning
Nathan Lambert, Daniel Drew, Joseph Yaconelli, Roberto Calandra, Sergey Levine, Kristofer Pister
IEEE Robotics and Automation Letters (RA-L), 2019.
Paper  /  website

We used deep model-based reinforcement learning to have a quadrotor learn to hover from less than 7 minutes of entirely experimental training data. No system knowledge was needed for these experiments: the controller reads raw sensor values and commands motor PWMs directly.

Toward Controlled Flight of the Ionocraft: A Flying Microrobot Using Electrohydrodynamic Thrust With Onboard Sensing and No Moving Parts
Daniel Drew, Nathan Lambert, Craig Schindler, Kris Pister
IEEE Robotics and Automation Letters (RA-L), 2018.
Paper  /  website

A collection of steps towards controlled flight of The Ionocraft, a completely silent microrobot with ion thrust!

Last updated 10 June 2020, this guy makes a nice website.