I'm currently a postdoc at UC Berkeley working with Anca Dragan and Ken Goldberg. I received my PhD in 2020 from the CS department at UT Austin, where I was advised by Scott Niekum. I'm interested in safe reward inference and inverse reinforcement learning. In particular, I develop methods that give a robot or other autonomous agent the ability to: provide high-confidence bounds on performance when learning a policy from a limited number of demonstrations; ask risk-aware queries to better resolve ambiguities; perform robust policy optimization from human demonstrations; learn more efficiently from informative demonstrations; learn from ranked suboptimal demonstrations, even when rankings are unavailable; and perform fast Bayesian reward inference for visual control tasks.
Before starting my PhD, I worked as a research scientist at the Air Force Research Lab's Information Directorate in Rome, New York, where I studied classification and discovery of collective behaviors in swarm robotics, risk-sensitive physical search, and multi-agent planning problems.
I earned my master's degree in Computer Science from Brigham Young University under the advisement of Mike Goodrich, where I studied controllability and observability problems in human-swarm interaction. I also obtained my bachelor's degree in Mathematics at Brigham Young University, where I completed an honors thesis on investment portfolio optimization under the advisement of Sean Warnick.