Kate Rakelly

I am a PhD student at UC Berkeley, where I am co-advised by Sergey Levine and Alyosha Efros as part of BAIR.

Previously, I graduated with a Bachelor's in EECS from UC Berkeley, where I worked with Shiry Ginosar and Alyosha Efros in computer vision as well as Insoon Yang and Claire Tomlin in control.

Email  /  CV  /  Google Scholar  /  LinkedIn


I'm interested in designing learning algorithms that can learn from varying amounts of supervision and scale to complex visual and robotic manipulation tasks. To that end, my work over the past year has focused on learning latent task embeddings, a way to capture knowledge about the task at hand so that the agent can quickly apply that knowledge in the future. Previously, I worked on a method for semantic segmentation in video using dynamic CNN architectures, as well as domain adaptation for segmentation and detection models. As an undergraduate, I contributed to a project applying semi-supervised learning techniques to historical photographs to discover trends in fashion and hairstyle over the past century.

Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables
Kate Rakelly*, Aurick Zhou*, Deirdre Quillen, Chelsea Finn, Sergey Levine
ICML, 2019
Code  /  Slides

We leverage off-policy learning and a probabilistic belief over the task to make meta-RL 20-100X more sample efficient. PEARL performs online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience. This probabilistic interpretation enables posterior sampling for structured and efficient exploration during adaptation. Unlike prior approaches, our method integrates easily with existing off-policy RL algorithms, greatly improving meta-training sample efficiency.

Few-Shot Segmentation Propagation with Guided Networks
Kate Rakelly*, Evan Shelhamer*, Trevor Darrell, Alyosha Efros, Sergey Levine
Preprint, 2018

Few-shot learning meets segmentation: given a few labeled pixels from few images, segment new images accordingly. Our guided network extracts a latent task representation from any amount of supervision and is optimized end-to-end for fast, accurate segmentation of new inputs. We show state-of-the-art results for speed and amount of supervision on three segmentation problems that are usually treated separately: interactive, semantic, and video object segmentation. Our method is fast enough to perform real-time interactive video object segmentation.

Clockwork Convnets for Video Semantic Segmentation
Evan Shelhamer*, Kate Rakelly*, Judy Hoffman*, Trevor Darrell
Video Semantic Segmentation Workshop at the European Conference on Computer Vision (ECCV), 2016

A fast video recognition framework that relies on two key observations: 1) while pixels may change rapidly from frame to frame, the semantic content of a scene evolves more slowly, and 2) execution can be viewed as an aspect of architecture, yielding purpose-fit computation schedules for networks. We define a novel family of "clockwork" convnets driven by fixed or adaptive clock signals that schedule the processing of different layers at different update rates according to their semantic stability.

A Century of Portraits: A Visual Historical Record of American High School Yearbooks
Shiry Ginosar, Kate Rakelly, Sarah Sachs, Brian Yin, Alyosha Efros
IEEE Transactions on Computational Imaging, 2017
Extreme Imaging Workshop, International Conference on Computer Vision (ICCV), 2015
Project Page
Portrait Dating Code

What were the tell-tale fashions of the 1950s? Did everyone in the 70s really have long hair? In this age of selfies, is it true that we smile in photos more than we used to? And can a CNN pick up on all these trends to accurately date an old portrait? We address these questions and many others using data-driven semi-supervised learning techniques on a novel dataset of American high school yearbook photos from the past 100 years.


An Overview of Meta-Reinforcement Learning - guest lecture in CS294 at UC Berkeley

Exploration in Meta-RL - guest lecture in CS330 at Stanford University

Efficient Meta-RL with Probabilistic Context Embeddings - contributed talk to the Workshop on Structure and Priors in RL at ICLR 2019


Learning to Learn with Probabilistic Task Embeddings - a BAIR blog post about our work on off-policy meta-RL.


A collection of collateral damage from doing research that might be useful to others.

pytorch-maml - a PyTorch implementation of Model-Agnostic Meta-Learning (MAML) for supervised learning.


CS294-112 - Fall 2018 (Head Teaching Assistant)

Deep Reinforcement Learning is a special topics course covering modern deep reinforcement learning techniques.

CS70 - Summer 2014 (Teaching Assistant)

Discrete Mathematics for Computer Science covers proof techniques, modular arithmetic, polynomials, and probability.

EE40 - Summer 2013 (Teaching Assistant)

Introduction to Circuits covers analyzing, designing, and building electronic circuits using op amps and passive components. (Note that this class, along with EE20, has been replaced by the EE16A/B series as of Fall 2015.)

(this guy makes a nice website)