Publications

* denotes equal contribution and joint lead authorship.


2023

  1. Diagnosing and Augmenting Feature Representations in Correctional Inverse Reinforcement Learning

    Conference on Decision and Control (CDC), 2023

    Robots have become increasingly better at performing tasks for humans by learning from their feedback, but they still often suffer from model misalignment due to missing or incorrectly learned features. When the features the robot needs to learn to perform its task are missing or do not generalize well to new settings, the robot will not be able to learn the task the human wants and, even worse, may learn a completely different and undesired behavior. Prior work shows how the robot can detect when its representation is missing some feature and can, thus, ask the human to be taught about the new feature; however, these works do not differentiate between features that are completely missing and those that exist but do not generalize to new environments. In the latter case, the robot would detect misalignment and simply learn a new feature, leading to an arbitrarily growing feature representation that can, in turn, lead to spurious correlations and incorrect learning down the line. In this work, we separate the two sources of misalignment: we propose a framework for determining whether a feature the robot needs is incorrectly learned and fails to generalize to new environment setups, or is entirely missing from the robot's representation. Once we detect the source of error, we show how the human can initiate the realignment process for the model: if the feature is missing, we follow prior work for learning new features; however, if the feature exists but does not generalize, we use data augmentation to expand its training and, thus, complete the correction. We demonstrate the proposed approach in experiments with a simulated 7DoF robot manipulator and physical human corrections.
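    The diagnostic logic above can be summarized in a short sketch. The helper callables below (`fits`, `augment`, `learn_new`) are illustrative stand-ins for the paper's actual tests and learners, not its API:

    ```python
    from typing import Callable, List

    def realign(
        features: List[object],
        correction: object,
        env: object,
        fits: Callable[[object, object, object], bool],  # hypothetical generalization test
        augment: Callable[[object, object], None],       # hypothetical augmentation/retraining step
        learn_new: Callable[[object], object],           # hypothetical new-feature learner (prior work)
    ) -> List[object]:
        """Sketch of the diagnose-then-realign loop implied by the abstract."""
        for feature in features:
            if fits(feature, correction, env):
                # The feature exists but failed to generalize: expand its
                # training data to the new environment setup via augmentation.
                augment(feature, env)
                return features
        # No existing feature explains the correction: it is entirely missing,
        # so learn it from scratch following prior feature-learning work.
        features.append(learn_new(correction))
        return features
    ```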
  2. Diagnosis, Feedback, Adaptation: A Human-in-the-Loop Framework for Test-Time Policy Adaptation

    International Conference on Machine Learning (ICML), 2023

    Policies often fail at test time due to distribution shifts---changes in the state and reward that occur when an end user deploys the policy in environments different from those seen in training. Data augmentation can help models be more robust to such shifts by varying specific concepts in the state, e.g., object color, that are task-irrelevant and should not impact desired actions. However, the designers training the agent often do not know which concepts are irrelevant a priori. We propose a human-in-the-loop framework that leverages feedback from the end user to quickly identify and augment task-irrelevant visual state concepts. Our framework generates counterfactual demonstrations that allow users to quickly isolate shifted state concepts and identify whether they should impact the desired task; if not, they can be augmented using existing actions. We present experiments validating our full pipeline on discrete and continuous control tasks with real human users. Our method better enables users to (1) understand agent failure, (2) improve the sample efficiency of demonstrations required for finetuning, and (3) adapt the agent to their desired reward.
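    A minimal sketch of the augmentation step described above, assuming the user has flagged one visual concept as task-irrelevant; `vary_concept` is a hypothetical function that resamples that concept (e.g., object color) in a state:

    ```python
    from typing import Callable, List, Tuple

    def augment_irrelevant_concept(
        demos: List[Tuple[object, object]],        # (state, action) pairs
        vary_concept: Callable[[object], object],  # hypothetical: resample the flagged concept
        n_copies: int = 8,
    ) -> List[Tuple[object, object]]:
        """Replicate each demonstration with the task-irrelevant concept varied
        while keeping the existing action fixed (a sketch, not the paper's code)."""
        augmented = list(demos)
        for state, action in demos:
            for _ in range(n_copies):
                augmented.append((vary_concept(state), action))
        return augmented
    ```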
  3. SIRL: Similarity-based Implicit Representation Learning

    ACM/IEEE International Conference on Human Robot Interaction (HRI), 2023

    When robots learn reward functions using high-capacity models that take raw state directly as input, they need to learn both a representation for what matters in the task -- the task "features" -- and how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as what aspects of behavior can be compressed together versus not. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, because we want to learn the representations that people use, so that we can capture their preferences and objectives, we rely on their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
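    Given the contrastive-learning parallel drawn above, one plausible instantiation is a triplet objective over trajectory embeddings, where the user marks which two of three behaviors are most similar. A minimal PyTorch sketch (the encoder architecture and query format are assumptions, not the paper's exact setup):

    ```python
    import torch
    import torch.nn as nn

    class TrajectoryEncoder(nn.Module):
        """Hypothetical encoder mapping flattened trajectories to embeddings."""
        def __init__(self, traj_dim: int, embed_dim: int = 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(traj_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim)
            )

        def forward(self, traj: torch.Tensor) -> torch.Tensor:
            return self.net(traj)

    def similarity_step(encoder, optimizer, anchor, positive, negative):
        """One gradient step on a batch of similarity queries: the user judged
        (anchor, positive) as the similar pair and negative as the odd one out."""
        loss = nn.TripletMarginLoss(margin=1.0)(
            encoder(anchor), encoder(positive), encoder(negative)
        )
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    ```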

2022

  1. Time-Efficient Reward Learning via Visually Assisted Cluster Ranking

    Workshop on Human in the Loop Learning, NeurIPS 2022

    One of the most successful paradigms for reward learning uses human feedback in the form of comparisons. Although these methods hold promise, human comparison labeling is expensive and time-consuming, constituting a major bottleneck to their broader applicability. Our insight is that we can greatly improve how effectively human time is used in these approaches by batching comparisons together, rather than having the human label each comparison individually. To do so, we leverage data dimensionality-reduction and visualization techniques to provide the human with an interactive GUI displaying the state space, in which the user can label subportions of the state space. Across some simple MuJoCo tasks, we show that this high-level approach holds promise and is able to greatly increase the performance of the resulting agents, given the same amount of human labeling time.
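    A minimal sketch of the batching idea, assuming a scikit-learn pipeline (the actual GUI and clustering choices in the paper may differ): project states to 2D for display, cluster them, and let a single user label per cluster stand in for many individual comparison labels:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def batch_label_states(states: np.ndarray, n_clusters: int = 10):
        """Embed states for display and group them so one label covers a cluster."""
        coords = PCA(n_components=2).fit_transform(states)  # 2D view for the GUI
        clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(states)
        # The user ranks or labels clusters in the GUI; each label then
        # propagates to every member state, yielding batched comparisons.
        return coords, clusters
    ```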
  2. Aligning Robot Representations with Humans
    Andreea Bobu and Andi Peng.

    Workshop on Collaborative Robots and the Work of the Future, ICRA 2022

    As robots are increasingly deployed in real-world scenarios, a key question is how to best transfer knowledge learned in one environment to another, where shifting constraints and human preferences render adaptation challenging. A central challenge remains that often, it is difficult (perhaps even impossible) to capture the full complexity of the deployment environment, and therefore the desired tasks, at training time. Consequently, the representation, or abstraction, of the tasks the human hopes for the robot to perform in one environment may be misaligned with the representation of the tasks that the robot has learned in another. We postulate that because humans will be the ultimate evaluators of system success in the world, they are best suited to communicating the aspects of the tasks that matter to the robot. Our key insight is that effective learning from human input requires first explicitly learning good intermediate representations and then using those representations for solving downstream tasks. We highlight three areas where we can use this approach to build interactive systems and offer directions for future work on creating advanced collaborative robots.
  3. Teaching Robots to Span the Space of Functional Expressive Motion

    IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022

    Our goal is to enable robots to perform functional tasks in emotive ways, be it in response to their users' emotional states, or expressive of their confidence levels. Prior work has proposed learning an independent cost function from user feedback for each target emotion, so that the robot may optimize it alongside task- and environment-specific objectives for any situation it encounters. However, this approach is inefficient when modeling multiple emotions and unable to generalize to new ones. In this work, we leverage the fact that emotions are not independent of each other: they are related through a latent space of Valence-Arousal-Dominance (VAD). Our key idea is to learn a model for how trajectories map onto VAD with user labels. Considering the distance between a trajectory's mapping and a target VAD allows this single model to represent cost functions for all emotions. As a result, (1) all user feedback can contribute to learning about every emotion; (2) the robot can generate trajectories for any emotion in the space instead of only a few predefined ones; and (3) the robot can respond emotively to user-generated natural language by mapping it to a target VAD. We introduce a method that interactively learns to map trajectories to this latent space and test it in simulation and in a user study. In experiments, we use a simple vacuum robot as well as the Cassie biped.
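    The cost construction described above can be written compactly. A sketch, with $f_\phi$ denoting the learned trajectory-to-VAD map and $v_e$ the target VAD for emotion $e$ (notation assumed, not taken from the paper):

    ```latex
    % Single emotive cost: distance in VAD space to the target emotion.
    \[
      C_e(\xi) \;=\; \bigl\lVert f_\phi(\xi) - v_e \bigr\rVert^2
    \]
    % Emotive trajectories then trade off task and emotive objectives:
    \[
      \xi^* \;=\; \arg\min_{\xi}\; C_{\text{task}}(\xi) \;+\; \lambda\, C_e(\xi)
    \]
    ```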
  4. Learning Perceptual Concepts by Bootstrapping from Human Queries

    IEEE Robotics and Automation Letters (RA-L), 2022

    Most robot tasks can be thought of as relating one or more objects, and learning new tasks by necessity involves teaching the robot new concepts relating objects to one another. However, learning new concepts that operate on high-dimensional data – like that coming from a robot’s sensors – is impractical, because it requires an unrealistic amount of labeled human input. In this work, we observe that by using a simulator at training time we can get access to significant privileged information – things like object poses and bounding boxes – that allows for learning a low-dimensional variant of the concept with much less human input. The robot can then use this low-dimensional concept to automatically label large amounts of high-dimensional data in the simulator. This enables learning perceptual concepts that work with real sensor input where no privileged information is available. We evaluate our Perceptual Concept Bootstrapping (PCB) approach by learning spatial concepts that describe object state or multi-object relationships. We show that our approach improves sample complexity when compared to learning concepts directly in the high-dimensional space. We also demonstrate the utility of the learned concepts in motion planning tasks on a 7-DoF Franka Panda robot.
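    A minimal sketch of the two-stage bootstrapping idea under assumed interfaces (the `privileged`, `render`, and `fit` callables are illustrative, not the paper's API):

    ```python
    from typing import Callable, List, Tuple

    def bootstrap_perceptual_concept(
        sim_states: List[object],
        privileged: Callable[[object], object],   # simulator poses / bounding boxes
        render: Callable[[object], object],       # raw sensor observation of the same state
        human_labels: List[Tuple[object, bool]],  # small human-labeled low-dimensional set
        fit: Callable[[List[Tuple[object, bool]]], Callable[[object], bool]],
    ) -> Callable[[object], bool]:
        """Stage 1: learn a low-dimensional concept from cheap privileged features.
        Stage 2: use it to auto-label rendered sensor data at scale, then train a
        perceptual concept that needs no privileged information at test time."""
        low_dim_concept = fit(human_labels)
        auto_labeled = [(render(s), low_dim_concept(privileged(s))) for s in sim_states]
        return fit(auto_labeled)
    ```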
  5. Inducing Structure in Reward Learning by Learning Features

    The International Journal of Robotics Research (IJRR)

    Reward learning enables robots to learn adaptable behaviors from human input. Traditional methods model the reward as a linear function of hand-crafted features, but that requires specifying all the relevant features a priori, which is impossible for real-world tasks. To get around this issue, recent deep Inverse Reinforcement Learning (IRL) methods learn rewards directly from the raw state, but this is challenging because the robot has to simultaneously learn, implicitly, which features are important and how to combine them. Instead, we propose a divide-and-conquer approach: focus human input specifically on learning the features separately, and only then learn how to combine them into a reward. We introduce a novel type of human input for teaching features and an algorithm that utilizes it to learn complex features from the raw state space. The robot can then learn how to combine them into a reward using demonstrations, corrections, or other reward learning frameworks. We demonstrate our method in settings where all features have to be learned from scratch, as well as where some of the features are known. By first focusing human input specifically on the feature(s), our method decreases sample complexity and improves generalization of the learned reward over a deep IRL baseline. We show this in experiments with a physical 7DOF robot manipulator, as well as in a user study conducted in a simulated environment.
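    One way to picture the feature-teaching input (a trace from states where a feature is highly expressed to states where it is not) is as a monotonicity constraint on the learned feature. A PyTorch sketch; the hinge penalty below is one plausible loss, not necessarily the paper's exact objective:

    ```python
    import torch
    import torch.nn as nn

    def trace_loss(feature_net: nn.Module, trace: torch.Tensor) -> torch.Tensor:
        """`trace` is a (T, state_dim) sequence collected by the human, moving
        from high feature expression to low. The learned feature should
        therefore decrease monotonically along it; penalize any increase."""
        values = feature_net(trace).squeeze(-1)  # (T,) predicted feature values
        increases = values[1:] - values[:-1]     # should all be <= 0
        return torch.relu(increases).mean()      # hinge penalty on violations
    ```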

2021

  1. Situational Confidence Assistance for Lifelong Shared Autonomy
    Matthew Zurek*, Andreea Bobu*, Daniel S. Brown, and Anca D. Dragan.

    IEEE International Conference on Robotics and Automation (ICRA), 2021

    Shared autonomy enables robots to infer user intent and assist in accomplishing it. But when the user wants to do a new task that the robot does not know about, shared autonomy will hinder their performance by attempting to assist them with something that is not their intent. Our key idea is that the robot can detect when its repertoire of intents is insufficient to explain the user's input, and give them back control. This then enables the robot to observe unhindered task execution, learn the new intent behind it, and add it to this repertoire. We demonstrate with both a case study and a user study that our proposed method maintains good performance when the human's intent is in the robot's repertoire, outperforms prior shared autonomy approaches when it isn't, and successfully learns new skills, enabling efficient lifelong learning for confidence-based shared autonomy.
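    The key decision in the approach above reduces to a confidence test over the robot's known intents. A sketch under assumed modeling choices (`likelihoods[i]` scores how well intent i explains the user's recent input):

    ```python
    import numpy as np

    def assist_or_yield(likelihoods: np.ndarray, threshold: float):
        """If no known intent explains the input well enough, give control back
        to the user and observe the unhindered execution to learn a new intent."""
        best = int(np.argmax(likelihoods))
        if likelihoods[best] < threshold:
            return None  # yield control; record the demonstration for learning
        return best      # assist toward the best-explaining intent
    ```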
  2. Dynamically Switching Human Prediction Models for Efficient Planning

    IEEE International Conference on Robotics and Automation (ICRA), 2021

    As environments involving both robots and humans become increasingly common, so does the need to account for people during planning. To plan effectively, robots must be able to respond to and sometimes influence what humans do. This requires a human model which predicts future human actions. A simple model may assume the human will continue what they did previously; a more complex one might predict that the human will act optimally, disregarding the robot; whereas an even more complex one might capture the robot's ability to influence the human. These models make different trade-offs between computational time and performance of the resulting robot plan. Using only one model of the human either wastes computational resources or is unable to handle critical situations. In this work, we give the robot access to a suite of human models and enable it to assess the performance-computation trade-off online. By estimating how an alternate model could improve human prediction and how that may translate to performance gain, the robot can dynamically switch human models whenever the additional computation is justified. Our experiments in a driving simulator showcase how the robot can achieve performance comparable to always using the best human model, but with greatly reduced computation.
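    The switching rule above can be sketched as a simple net-value comparison under assumed interfaces (`estimate_gain` and `compute_cost` stand in for the paper's online estimates):

    ```python
    def pick_human_model(models, current, estimate_gain, compute_cost):
        """Switch human models only when the estimated improvement in plan
        performance outweighs the extra computation (a sketch of the idea)."""
        best, best_value = current, 0.0  # staying with the current model is free
        for model in models:
            value = estimate_gain(model, current) - compute_cost(model)
            if value > best_value:
                best, best_value = model, value
        return best
    ```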
  3. Feature Expansive Reward Learning: Rethinking Human Input

    ACM/IEEE International Conference on Human Robot Interaction (HRI), 2021
    Best Paper Award Finalist

    When a person is not satisfied with how a robot performs a task, they can intervene to correct it. Reward learning methods enable the robot to adapt its reward function online based on such human input, but they rely on handcrafted features. When the correction cannot be explained by these features, recent work in deep Inverse Reinforcement Learning (IRL) suggests that the robot could ask for task demonstrations and recover a reward defined over the raw state space. Our insight is that rather than implicitly learning about the missing feature(s) from demonstrations, the robot should instead ask for data that explicitly teaches it about what it is missing. We introduce a new type of human input in which the person guides the robot from states where the feature being taught is highly expressed to states where it is not. We propose an algorithm for learning the feature from the raw state space and integrating it into the reward function. By focusing the human input on the missing feature, our method decreases sample complexity and improves generalization of the learned reward over the above deep IRL baseline. We show this in experiments with a physical 7DOF robot manipulator, as well as in a user study conducted in a simulated environment.

2020

  1. LESS is More: Rethinking Probabilistic Models of Human Behavior

    ACM/IEEE International Conference on Human Robot Interaction (HRI), 2020
    Best Paper Award Winner

    Robots need models of human behavior for both inferring human goals and preferences, and predicting what people will do. A common model is the Boltzmann noisily-rational decision model, which assumes people approximately optimize a reward function and choose trajectories in proportion to their exponentiated reward. While this model has been successful in a variety of robotics domains, its roots lie in econometrics, and in modeling decisions among different discrete options, each with its own utility or reward. In contrast, human trajectories lie in a continuous space, with continuous-valued features that influence the reward function. We propose that it is time to rethink the Boltzmann model, and design it from the ground up to operate over such trajectory spaces. We introduce a model that explicitly accounts for distances between trajectories, rather than only their rewards. Rather than each trajectory affecting the decision independently, similar trajectories now affect the decision together. We start by showing that our model better explains human behavior in a user study. We then analyze the implications this has for robot inference, first in toy environments where we have ground truth and find more accurate inference, and finally for a 7DOF robot arm learning from user demonstrations.
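    For reference, the standard Boltzmann model mentioned above, followed by one illustrative way to make it similarity-aware; the second expression is a sketch of the idea that similar trajectories share probability mass, not the paper's exact model:

    ```latex
    % Standard Boltzmann noisily-rational model over a trajectory set Xi:
    \[
      P(\xi \mid \theta) \;=\;
        \frac{\exp\bigl(\beta\, R_\theta(\xi)\bigr)}
             {\sum_{\xi' \in \Xi} \exp\bigl(\beta\, R_\theta(\xi')\bigr)}
    \]
    % Illustrative similarity-aware variant, with k(.,.) a trajectory
    % similarity kernel discounting trajectories with many close neighbors:
    \[
      P(\xi \mid \theta) \;\propto\;
        \frac{\exp\bigl(\beta\, R_\theta(\xi)\bigr)}
             {\sum_{\xi' \in \Xi} k(\xi, \xi')}
    \]
    ```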
  2. Detecting Hypothesis Space Misspecification in Robot Learning from Human Input
    Andreea Bobu.

    HRI Pioneers Companion of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2020

    Learning from human input has enabled autonomous agents to perform increasingly more complex tasks that are otherwise difficult to carry out automatically. To this end, recent works have studied how robots can incorporate such input - like demonstrations or corrections - into objective functions describing the desired behaviors. While these methods have shown progress in a variety of settings, from semi-autonomous driving, to household robotics, to automated airplane control, they all suffer from the same crucial drawback: they implicitly assume that the person's intentions can always be captured by the robot's hypothesis space. We call attention to the fact that this assumption is often unrealistic, as no model can completely account for every single possible situation ahead of time. When the robot's hypothesis space is misspecified, human input can be unhelpful - or even detrimental - to the way the robot is performing its tasks. Our work tackles this issue by proposing that the robot should first explicitly reason about how well its hypothesis space can explain human inputs, then use that situational confidence to inform how it should incorporate them.

2019

  1. Quantifying Hypothesis Space Misspecification in Learning from Human-Robot Demonstrations and Physical Corrections

    IEEE Transactions on Robotics (T-RO)
    Best Paper Award Honorable Mention

    Human input has enabled autonomous systems to improve their capabilities and achieve complex behaviors that are otherwise challenging to generate automatically. Recent work focuses on how robots can use such input - like demonstrations or corrections - to learn intended objectives. These techniques assume that the human’s desired objective already exists within the robot’s hypothesis space. In reality, this assumption is often inaccurate: there will always be situations where the person might care about aspects of the task that the robot does not know about. Without this knowledge, the robot cannot infer the correct objective. Hence, when the robot’s hypothesis space is misspecified, even methods that keep track of uncertainty over the objective fail because they reason about which hypothesis might be correct, and not whether any of the hypotheses are correct. In this paper, we posit that the robot should reason explicitly about how well it can explain human inputs given its hypothesis space and use that situational confidence to inform how it should incorporate human input. We demonstrate our method on a 7 degree-of-freedom robot manipulator in learning from two important types of human input: demonstrations of manipulation tasks, and physical corrections during the robot’s task execution.
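    One compact way to express the situational-confidence idea is to give the observation model an explicit confidence parameter and infer it alongside the objective (notation assumed; a sketch, not the paper's full derivation):

    ```latex
    % Observation model with situational confidence beta: how rational does
    % the human input u look under hypothesis theta?
    \[
      P(u \mid \theta, \beta) \;=\;
        \frac{\exp\bigl(\beta\, R_\theta(u)\bigr)}{Z(\theta, \beta)}
    \]
    % A low inferred beta means no hypothesis in the space explains u well,
    % signaling possible misspecification; the robot then discounts its update.
    ```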

2018

  1. Learning under Misspecified Objective Spaces

    Conference on Robot Learning (CoRL), 2018
    Invited to Special Issue

    Learning robot objective functions from human input has become increasingly important, but state-of-the-art techniques assume that the human's desired objective lies within the robot's hypothesis space. When this is not true, even methods that keep track of uncertainty over the objective fail because they reason about which hypothesis might be correct, and not whether any of the hypotheses are correct. We focus specifically on learning from physical human corrections during the robot's task execution, where not having a rich enough hypothesis space leads to the robot updating its objective in ways that the person did not actually intend. We observe that such corrections appear irrelevant to the robot, because they are not the best way of achieving any of the candidate objectives. Instead of naively trusting and learning from every human interaction, we propose robots learn conservatively by reasoning in real time about how relevant the human's correction is for the robot's hypothesis space. We test our inference method in an experiment with human interaction data, and demonstrate that this alleviates unintended learning in an in-person user study with a 7DoF robot manipulator.
  2. Adapting to Continuously Shifting Domains
    Andreea Bobu, Eric Tzeng, Judy Hoffman, and Trevor Darrell.

    International Conference on Learning Representations (ICLR) Workshop, 2018

    Domain adaptation typically focuses on adapting a model from a single source domain to a target domain. However, in practice, this paradigm of adapting from one source to one target is limiting, as different aspects of the real world, such as illumination and weather conditions, vary continuously and cannot be effectively captured by two static domains. Approaches that attempt to tackle this problem by adapting from a single source to many different target domains simultaneously are consistently unable to learn across all domain shifts. Instead, we propose an adaptation method that exploits the continuity between gradually varying domains by adapting in sequence from the source to the most similar target domain. By incrementally adapting while efficiently regularizing against prior examples, we obtain a single strong model capable of recognition within all observed domains. Our method is applicable to a wide variety of learning settings, including visual classification and reinforcement learning in a video game domain.
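    A minimal sketch of the incremental scheme under assumed interfaces (the `replay` buffer and `loss_fn` are illustrative; the paper's regularizer may differ):

    ```python
    import torch

    def adapt_sequentially(model, domains, loss_fn, replay, lam=0.1, lr=1e-4):
        """Adapt to each target domain in order of similarity to the source,
        while a replay term regularizes against forgetting earlier domains."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for domain in domains:               # ordered: most similar domain first
            for batch in domain:             # each domain yields training batches
                loss = loss_fn(model, batch) + lam * loss_fn(model, replay.sample())
                opt.zero_grad()
                loss.backward()
                opt.step()
            replay.add(domain)               # keep examples for future regularization
        return model
    ```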

2016

  1. Patch-Based Discrete Registration of Clinical Brain Images

    Patch-based Techniques in Medical Imaging at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI-PATCHMI), 2016
    Best Paper Award Winner

    We introduce a method for registration of brain images acquired in clinical settings. The algorithm relies on three-dimensional patches in a discrete registration framework to estimate correspondences. Clinical images present significant challenges for computational analysis: fast acquisition often results in images with sparse slices, severe artifacts, and variable fields of view. Yet, large clinical datasets hold a wealth of clinically relevant information. Despite significant progress in image registration, most algorithms make strong assumptions about the continuity of image data, failing when presented with clinical images that violate these assumptions. In this paper, we demonstrate a non-rigid registration method for aligning such images. The method explicitly models the sparsely available image information to achieve robust registration. We demonstrate the algorithm on clinical images of stroke patients. The proposed method outperforms state-of-the-art registration algorithms and avoids the catastrophic failures often caused by these images. We provide a freely available open-source implementation of the algorithm.
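    The core correspondence step is easy to sketch: score a discrete set of candidate displacements by patch dissimilarity and keep the best. The NumPy fragment below shows only this unary term (the full method also regularizes across control points and models missing slices explicitly):

    ```python
    import numpy as np

    def best_patch_displacement(fixed, moving, center, radius, candidates):
        """Pick the candidate displacement whose patch in `moving` best matches
        the patch around `center` in `fixed`; NaNs stand in for missing slices."""
        def patch(img, c, d):
            sl = tuple(slice(c[i] + d[i] - radius, c[i] + d[i] + radius + 1)
                       for i in range(3))
            return img[sl]
        ref = patch(fixed, center, (0, 0, 0))
        costs = [np.nanmean((ref - patch(moving, center, d)) ** 2)
                 for d in candidates]
        return candidates[int(np.argmin(costs))]
    ```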