Nick Rhinehart

CV | Bio | Google Scholar | Github | Twitter

Welcome to my academic website! I'm a Postdoctoral Scholar working with Sergey Levine within the UC Berkeley Artificial Intelligence Research lab.

For systems to be generally intelligent, they must be able to reason about the future: how should we learn, interpret, quantify, and leverage models that reason about the future? Toward this question and others, I work on reinforcement learning (RL) and imitation learning (IL) methods at the interface of computer vision and machine learning. I'm specifically interested in building decision-theoretic models that leverage rich perception sources to inform forecasting and control tasks.

I received a Ph.D. in Robotics working with Kris Kitani at Carnegie Mellon University. I've also worked with Paul Vernaza and Manmohan Chandraker at NEC Labs America, and Drew Bagnell at Uber ATG and Carnegie Mellon. I graduated from Swarthmore College with degrees in CS and Engineering. See here for a more formal bio.


Publications
   The '*' character denotes co-first authorship.

Can Autonomous Vehicles Identify, Recover from, and Adapt to Distribution Shifts?

A. Filos*, P. Tigas*, R. McAllister, N. Rhinehart, Sergey Levine, Yarin Gal

ICML 2020 | pdf | show abs | code


Unsupervised Sequence Forecasting of 100,000 Points for Unsupervised Trajectory Forecasting

X. Weng, J. Wang, S. Levine, K. Kitani, N. Rhinehart

arXiv 2020 | pdf | show abs | show bib

Mini abstract: "LiDAR streams are a bountiful source of unlabeled data. We learn a model to forecast LiDAR point-cloud sequences several seconds into the future, and find that by running a simple object detector on the result, we match SoTA trajectory forecasting approaches. Our model requires no trajectory labels, in contrast to those approaches."


Generative Hybrid Representations for Activity Forecasting with No-Regret Learning
Oral Presentation, CVPR 2020

J. Guan, Y. Yuan, K. M. Kitani, N. Rhinehart

CVPR 2020 | pdf | show abs | show bib | supp

Mini abstract: "Some activities are best represented discretely, others continuously. We learn a deep likelihood-based generative model to jointly forecast discrete and continuous activities, and show how to tweak the model to learn efficiently online."


Deep Imitative Models for Flexible Inference, Planning, and Control

N. Rhinehart, R. McAllister, S. Levine

ICLR 2020 | pdf | show abs | show bib | code | project page | talk video

Mini abstract: "We learn a deep conditional distribution of human driving behavior to guide planning and control of an autonomous car in simulation, without any trial-and-error data. We show that the approach can be adapted to execute tasks that were never demonstrated, including safely avoiding potholes; is robust to misspecified goals that would cause it to violate its model of the rules of the road; and achieves state-of-the-art performance on the CARLA benchmark."
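The planning idea can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: it scores a handful of candidate trajectories by combining an imitation-prior term with a Gaussian goal likelihood, whereas the paper optimizes over a learned density. `trajectories`, `log_q`, and `sigma` are hypothetical stand-ins for the learned model.

```python
import numpy as np

def imitative_plan(trajectories, log_q, goal, sigma=1.0):
    """Toy sketch of imitative planning: among candidate trajectories,
    pick the one maximizing the imitation prior log q(traj) plus a
    goal-likelihood term on the trajectory endpoint."""
    def score(traj, lq):
        # Unnormalized Gaussian log-likelihood of reaching the goal.
        goal_ll = -0.5 * np.sum((traj[-1] - goal) ** 2) / sigma ** 2
        return lq + goal_ll
    scores = [score(t, lq) for t, lq in zip(trajectories, log_q)]
    return int(np.argmax(scores))
```

A trajectory that ends at the goal but has a mediocre imitation prior can beat one that is more "human-like" but misses the goal, which is how goal-directed behavior emerges without goal demonstrations.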


SMiRL: Surprise Minimizing RL in Dynamic Environments

G. Berseth, D. Geng, C. Devin, N. Rhinehart, C. Finn, D. Jayaraman, S. Levine

arXiv 2019 | pdf | show abs | show bib | project page

Mini abstract: "We propose that a search for order amidst chaos might offer a unifying principle for the emergence of useful behaviors in artificial agents, and formalize this idea as an unsupervised reinforcement learning method called surprise minimizing RL (SMiRL). The resulting agents acquire several proactive behaviors to seek and maintain stable states. We demonstrate that our surprise-minimizing agents can successfully play Tetris, play Doom, and control a humanoid to avoid falls, without any task-specific reward supervision."
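The surprise-minimizing reward can be illustrated with a running Gaussian density over visited states. This is a minimal numpy sketch with an assumed API, not the authors' code: the reward for a new state is its log-density under the current estimate, so the agent is paid for keeping its observations predictable.

```python
import numpy as np

class SMiRLReward:
    """Sketch of a surprise-minimizing reward: track a diagonal Gaussian
    over visited states, and reward the agent with the log-density of
    each new state under that running estimate."""

    def __init__(self, state_dim, eps=1e-6):
        self.n = 0
        self.mean = np.zeros(state_dim)
        self.m2 = np.zeros(state_dim)   # running sum of squared deviations
        self.eps = eps                  # variance floor for stability

    def update(self, state):
        # Welford's online update of per-dimension mean and variance.
        self.n += 1
        delta = state - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (state - self.mean)

    def reward(self, state):
        # Reward = log p(state) under the running diagonal Gaussian.
        var = self.m2 / max(self.n - 1, 1) + self.eps
        return float(-0.5 * np.sum((state - self.mean) ** 2 / var
                                   + np.log(2 * np.pi * var)))
```

States near the running mean earn higher reward than outliers, so maximizing this reward drives the agent toward stable, familiar configurations.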


PRECOG: PREdiction Conditioned On Goals in Visual Multi-Agent Settings

N. Rhinehart, R. McAllister, K. M. Kitani, S. Levine

Best Paper, ICML 2019 Workshop on AI for Autonomous Driving
Oral, Baylearn 2019
ICCV 2019 | pdf | show abs | show bib | project page | code | visualization code | iccv pdf | iccv talk slides (pdf) | Baylearn talk (youtube)

Mini abstract: "We perform deep conditional forecasting with multiple interacting agents: when you control one of them, you can use its goals to better predict what nearby agents will do. The model also outperforms state-of-the-art methods on the more standard task of unconditional forecasting."


Directed-Info GAIL: Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information

M. Sharma*, A. Sharma*, N. Rhinehart, K. M. Kitani

ICLR 2019 | pdf | show abs | show bib | project page

Mini abstract: "Many behaviors are naturally composed of sub-tasks. Our approach learns to imitate behaviors with subtasks by discovering topics of latent behavior to influence its imitation."


First-Person Activity Forecasting from Video with Online Inverse Reinforcement Learning

N. Rhinehart, K. Kitani

TPAMI 2018 | pdf | show abs | show bib | project page

Mini abstract: "We continuously model and forecast long-term goals of a first-person camera wearer through our Online Inverse RL algorithm. We show, both in theory and in practice, that our approach learns efficiently in the continuous online setting."


R2P2: A ReparameteRized Pushforward Policy for Diverse, Precise Generative Path Forecasting

N. Rhinehart, K. M. Kitani, P. Vernaza

ECCV 2018 | pdf | show abs | show bib | project page | supplement | blog post (third-party)

Mini abstract: "We designed an objective to jointly maximize diversity and precision for generative models, and designed a deep autoregressive flow to efficiently optimize this objective for the task of motion forecasting. Unlike many popular generative models, ours can exactly evaluate its probability density function for arbitrary points."
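The "exact density evaluation" property follows from the change-of-variables formula for an invertible autoregressive map. Below is a simplified 1-D sketch, where `mu_fn` and `sigma_fn` are assumed placeholders for learned networks (the paper uses a learned flow over 2-D positions): sampling pushes Gaussian noise forward through the map, and the density of any trajectory is recovered exactly by inverting it.

```python
import numpy as np

def pushforward_sample(mu_fn, sigma_fn, T, rng):
    """Sample a T-step 1-D trajectory by pushing Gaussian noise through
    an autoregressive affine map: x_t = mu(x_{<t}) + sigma(x_{<t}) * z_t."""
    x = []
    for _ in range(T):
        z = rng.standard_normal()
        x.append(mu_fn(x) + sigma_fn(x) * z)
    return np.array(x)

def exact_log_density(x, mu_fn, sigma_fn):
    """Exact log p(x) via change of variables: invert the map to recover
    the noise z_t, then log p(x) = sum_t [log N(z_t; 0, 1) - log sigma_t]."""
    logp = 0.0
    hist = []
    for x_t in x:
        sigma = sigma_fn(hist)
        z = (x_t - mu_fn(hist)) / sigma       # invert the affine step
        logp += -0.5 * (z ** 2 + np.log(2 * np.pi)) - np.log(sigma)
        hist.append(x_t)
    return logp
```

Because both directions are cheap, the same model supports sampling diverse trajectories and scoring arbitrary trajectories under its exact density, unlike GAN- or VAE-style forecasters.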


Learning Neural Parsers with Deterministic Differentiable Imitation Learning

T. Shankar, N. Rhinehart, K. Muelling, K. M. Kitani

CORL 2018 | pdf | show abs | show bib | code

Mini abstract: "We developed and applied a new imitation learning approach for the task of sequential visual parsing."


Human-Interactive Subgoal Supervision for Efficient Inverse Reinforcement Learning

X. Pan, E. Ohn-Bar, N. Rhinehart, Y. Xu, Y. Shen, K. M. Kitani

AAMAS 2018 | pdf | show abs | show bib

Mini abstract: "We analyze the benefit of incorporating a notion of subgoals into Inverse Reinforcement Learning (IRL) within a Human-In-The-Loop (HITL) framework, and find that our approach requires less demonstration data than a baseline Inverse RL approach."


N2N Learning: Network to Network Compression via Policy Gradient Reinforcement Learning

A. Ashok, N. Rhinehart, F. Beainy, K. Kitani

ICLR 2018 | pdf | show abs | show bib | code

Mini abstract: "We designed a principled method to perform neural model compression: we trained a compression agent via RL on the sequential task of compressing large networks while maintaining high performance. The compressing agent was able to generalize to compress previously-unseen networks."
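The compression-as-RL idea can be illustrated with a toy REINFORCE bandit over per-layer keep/remove decisions. This is a hypothetical sketch, not the paper's two-stage removal/shrinkage policy: `accuracies` is an assumed stand-in for evaluating the compressed network, and the reward trades accuracy against model size.

```python
import numpy as np

def reinforce_compression(layer_costs, accuracies, n_iters=2000, lr=0.1, seed=0):
    """Toy REINFORCE sketch: an independent-Bernoulli policy decides
    keep/remove per layer; reward = accuracy of the pruned network
    minus the size cost of the kept layers."""
    rng = np.random.default_rng(seed)
    logits = np.zeros(len(layer_costs))
    for _ in range(n_iters):
        p = 1.0 / (1.0 + np.exp(-logits))        # keep probabilities
        keep = rng.random(len(p)) < p            # sample keep/remove actions
        reward = accuracies(keep) - layer_costs @ keep
        # Score-function (REINFORCE) gradient for Bernoulli actions.
        logits += lr * (keep - p) * reward
    return 1.0 / (1.0 + np.exp(-logits))
```

With an essential cheap layer and a redundant expensive one, the policy learns to keep the former and drop the latter; the paper's agent makes the analogous decision sequentially over a real network's layers.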


Predictive-State Decoders: Encoding the Future Into Recurrent Neural Networks

A. Venkataraman*, N. Rhinehart*, W. Sun, L. Pinto, M. Hebert, B. Boots, K. Kitani, J. A. Bagnell

NIPS 2017 | pdf | show abs | show bib

Mini abstract: "We use the idea of Predictive State Representations to guide learning of RNNs: by encouraging the hidden-state of the RNN to be predictive of future observations, we found it to improve RNN performance on various tasks in probabilistic filtering, imitation learning, and reinforcement learning."
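The core mechanism is an auxiliary decoder from the RNN's hidden state to upcoming observations, whose error is added to the task loss. A minimal numpy sketch with assumed linear decoders `W_task` and `W_psd` (the paper applies the same idea inside filtering, imitation learning, and RL objectives):

```python
import numpy as np

def psd_loss(hidden_states, targets, future_obs, W_task, W_psd, alpha=0.5):
    """Sketch of a predictive-state decoder loss: alongside the task loss,
    a linear decoder W_psd maps each RNN hidden state to the next k
    observations, and its squared error is an auxiliary loss weighted by alpha."""
    task_pred = hidden_states @ W_task
    psd_pred = hidden_states @ W_psd          # decode future observations
    task_loss = np.mean((task_pred - targets) ** 2)
    aux_loss = np.mean((psd_pred - future_obs) ** 2)
    return task_loss + alpha * aux_loss
```

Training the RNN on this combined loss pressures the hidden state to summarize information that is sufficient to predict the future, the defining property of a predictive state representation.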


First-Person Activity Forecasting with Online Inverse Reinforcement Learning

N. Rhinehart, K. Kitani

Best Paper Honorable Mention (3 of 2,143 submissions), ICCV 2017
ICCV 2017 | pdf | show abs | show bib | project page | code

Mini abstract: "We continuously model and forecast long-term goals of a first-person camera wearer through our Online Inverse RL algorithm. In contrast to motion forecasting, our approach reasons about semantic states and future goals that are potentially far away in space and time."


Learning Action Maps of Large Environments Via First-Person Vision

N. Rhinehart, K. Kitani

CVPR 2016 | pdf | show abs | show bib

Mini abstract: "We developed an approach that learns to associate visual cues with sparse observed behaviors in order to make dense predictions of functionality in both seen and unseen environments."


Visual Chunking: A List Prediction Framework for Region-Based Object Detection

N. Rhinehart, J. Zhou, M. Hebert, J. A. Bagnell

ICRA 2015 | pdf | show abs | show bib

Mini abstract: "We developed a principled imitation learning approach for object detection, which is best described as a sequence prediction problem. Our approach reasons sequentially about objects and, unlike common object detection frameworks, requires no heuristics such as non-maximum suppression to filter its predictions."



Unrefereed Work

Flight Autonomy in Obstacle-Dense Environments

N. Rhinehart, D. Dey, J. A. Bagnell

Robotics Institute Summer Scholars Symposium, August 2011;
Sigma Xi Research Symposium, October 2011 | poster (pdf) | youtube

Fast SFM-Based Localization of Temporal Sequences and Ground-Plane Hypothesis Consensus

Project for 16-822 Geometry Based Methods in Computer Vision, May 2015

pdf | video (mp4)

Online Anomaly Detection in Video

Project for 16-831 Statistical Techniques in Robotics, December 2014

Autonomous Localization and Navigation of Humanoid Robot

Swarthmore College Senior Thesis Project, May 2012

© 2012-2020 Nick Rhinehart