About me


Hi there! I'm a second-year PhD student here at Berkeley, working on autonomous robotics and optimal control. I am advised by Prof. Claire Tomlin in the Hybrid Systems Lab. I am also a member of the Berkeley AI Research lab, and I am supported by an NSF Graduate Research Fellowship. I did my undergrad in electrical engineering at Princeton (Class of '15).

Outside of research, I play squash and ultimate frisbee, read historical fiction, and have lately started getting back into acoustic guitar.

Contact info

The best way to reach me is by email. My address is: dfk at eecs dot berkeley dot edu.

Research interests

I am fundamentally interested in a few research areas and how they intersect with the broad field of robotics:

Optimal control

Basically, I'm interested in how robots should make good decisions. In optimal control, we pose this question as an optimization problem over some time horizon, hand it to a solver, and execute the resulting plan.
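
As a rough illustration (not code from any of my projects), here is how that "pose an optimization over a horizon, call the solver" recipe might look for a 1-D double integrator using cvxpy; the dynamics, horizon, cost weights, and actuator limits are all made up for the sketch.

    import numpy as np
    import cvxpy as cp

    T = 30                                   # horizon length (illustrative)
    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])    # state: [position, velocity]
    B = np.array([[0.0], [dt]])

    x = cp.Variable((2, T + 1))              # state trajectory
    u = cp.Variable((1, T))                  # control trajectory

    cost = 0
    constraints = [x[:, 0] == np.array([1.0, 0.0])]   # start at position 1, at rest
    for t in range(T):
        # quadratic penalty on state deviation and control effort
        cost += cp.sum_squares(x[:, t]) + 0.1 * cp.sum_squares(u[:, t])
        # dynamics constraint: x_{t+1} = A x_t + B u_t
        constraints += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t]]
    constraints += [cp.abs(u) <= 2.0]        # actuator limits

    problem = cp.Problem(cp.Minimize(cost), constraints)
    problem.solve()                          # "call the solver, and go"
    print("optimal cost:", problem.value)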

Reinforcement learning

Reinforcement learning is another approach to robot decision making. Here, rather than solve an explicit optimization problem, we pose the robot's interaction with its environment as a Markov Decision Process (MDP) and leverage tools from dynamic programming and supervised learning (like deep neural networks) to solve the MDP.
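
To make the dynamic-programming part concrete, here is a minimal, purely illustrative sketch: value iteration on a tiny randomly generated tabular MDP. RL methods can be viewed as approximating this kind of Bellman backup when the transition model is unknown or the state space is too large to enumerate.

    import numpy as np

    n_states, n_actions, gamma = 4, 2, 0.9
    rng = np.random.default_rng(0)

    # Random MDP for illustration: P[a, s, s'] are transition probabilities,
    # R[s, a] are expected rewards.
    P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
    R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

    V = np.zeros(n_states)
    for _ in range(1000):
        # Bellman optimality backup: Q(s,a) = R(s,a) + gamma * E_{s'}[V(s')]
        Q = R + gamma * np.einsum("asn,n->sa", P, V)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < 1e-8:
            V = V_new
            break
        V = V_new

    policy = Q.argmax(axis=1)    # greedy policy with respect to the converged values
    print("values:", V)
    print("greedy policy:", policy)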

Information theory

Information theory was probably my favorite grad class at Berkeley, and ever since taking it I've had this strange feeling that every problem in robotics is somehow related to channel coding or rate distortion... Currently I'm using rate-distortion theory to derive a bound on the performance of controllers operating on a power grid (click on the "Research" tab above).
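
For context, the textbook rate-distortion function (the object behind that kind of bound; the power-grid result itself isn't reproduced here) is the minimum mutual information between a source and its reconstruction subject to an average distortion budget:

    R(D) \;=\; \min_{p(\hat{x} \mid x)\,:\; \mathbb{E}\,[d(X, \hat{X})] \le D} \; I(X; \hat{X})

Roughly, R(D) is the fewest bits per source symbol needed to describe X if we tolerate expected distortion at most D.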

Links

GitHub

Google Scholar

LinkedIn