Hi there! I'm a second-year PhD student here at Berkeley, working on autonomous robotics and optimal control. I am advised by Prof. Claire Tomlin in the Hybrid Systems Lab. I am also a member of the Berkeley AI Research lab, and I am supported by an NSF Graduate Research Fellowship. I did my undergrad in electrical engineering at Princeton (Class of '15).
Outside of research, I like to play squash and ultimate frisbee, read historical fiction, and lately I've started getting back into acoustic guitar.
The best way to reach me is by email. My address is: dfk at eecs dot berkeley dot edu.
I am fundamentally interested in a couple of different research areas and how they intersect with the broad field of robotics:
Basically, I'm interested in how robots should make good decisions. In optimal control, we pose this question as an optimization problem over some time horizon. Typically we assume some model of the system dynamics, but sometimes that model is learned or adapted over time, and sometimes it is probabilistic.
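To make the "optimization over a time horizon" framing concrete, here is a toy sketch (my own illustrative example, not tied to any particular project) of finite-horizon LQR, where the backward Riccati recursion turns the optimization into a sequence of state-feedback gains:

```python
import numpy as np

# Finite-horizon discrete-time LQR for x_{t+1} = A x_t + B u_t,
# minimizing sum_t (x_t' Q x_t + u_t' R u_t) + x_T' Qf x_T.
def lqr_gains(A, B, Q, R, Qf, T):
    P = Qf
    gains = []
    for _ in range(T):
        # Backward Riccati recursion: the optimal control is u_t = -K_t x_t.
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # reorder so the list reads K_0, ..., K_{T-1}

# Example system: a double integrator discretized with dt = 0.1.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
gains = lqr_gains(A, B, np.eye(2), np.eye(1), 10 * np.eye(2), T=50)

# Simulate the closed loop from x0 = [1, 0]; the state is driven toward 0.
x = np.array([[1.0], [0.0]])
for K in gains:
    x = A @ x - B @ (K @ x)
```

The key point is that, with a known linear model and quadratic cost, the whole horizon-long optimization collapses into one backward pass.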
Reinforcement learning is another approach to robot decision making. Although different people draw the line between optimal control and reinforcement learning in different places, for me the key distinction is that reinforcement learning aims to perform well over long time horizons, taking into account potentially complicated or unknown system dynamics and reward structures.
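As a minimal illustration of the "unknown dynamics and reward" side of that distinction, here is a tabular Q-learning sketch on a made-up one-dimensional corridor (a hypothetical toy environment I'm inventing purely for illustration): the agent never sees a model, only sampled transitions and rewards.

```python
import random

# Toy "corridor" environment (hypothetical): states 0..4, actions -1/+1,
# reward 1 for reaching state 4, which ends the episode.
def step(s, a):
    s2 = min(4, max(0, s + a))
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

Q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
alpha, gamma, eps = 0.5, 0.9, 0.3
random.seed(0)

for _ in range(300):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection from the current Q estimates.
        if random.random() < eps:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best action at the next state.
        best_next = 0.0 if done else max(Q[(s2, -1)], Q[(s2, 1)])
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy should prefer "right" (+1) everywhere.
policy = {s: max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(4)}
```

No model of `step` ever enters the update rule, which is exactly what lets this style of method cope with dynamics and rewards we can't write down.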
Information theory was probably my favorite grad class at Berkeley, and ever since taking it I've had this strange feeling that every problem in robotics is somehow related to channel coding or rate distortion... One current application I've been working on is using rate-distortion theory to bound controller performance in a decentralized control problem (click on the "Research" tab above).
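For anyone curious what "rate distortion" means here, the standard definition of the rate-distortion function is the minimum number of bits per symbol needed to describe a source within a given distortion budget:

```latex
% Rate-distortion function of a source X under distortion measure d:
% the smallest mutual information achievable by any reconstruction
% channel p(\hat{x} | x) whose expected distortion is at most D.
R(D) = \min_{p(\hat{x} \mid x)\,:\;\mathbb{E}[d(X, \hat{X})] \le D} I(X; \hat{X})
```

The connection to control is the intuition that a controller acting on compressed or quantized state information can't do better than what this tradeoff allows.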