Hi there! I'm a fifth-year PhD student here at Berkeley, working on autonomous robotics and optimal control. I am advised by Prof. Claire Tomlin in the Hybrid Systems Lab. I'm also a member of the Berkeley AI Research lab, and I am supported by an NSF Graduate Research Fellowship. I did my undergrad in electrical engineering at Princeton (2015).
Outside of research, I like to play squash and frisbee, read fantasy novels, and play acoustic guitar.
The best way to reach me is by email. My address is: dfk at eecs dot berkeley dot edu.
I am generally interested in optimal control, motion planning, and safe autonomy. So far, I have primarily worked on game-theoretic techniques for motion planning that directly encode robustness to external disturbances and to the actions of other agents, and that operate in real time. My work falls into two main categories: efficient motion planning with reachability-based safety guarantees, and solving large-scale multi-player nonlinear differential games. I have also worked on a number of other projects related to reinforcement learning, distributed control, adaptive receding horizon control, and active search.
In motion planning and control, there is often a division between methods that run in real time and methods that provide strict safety guarantees. For example, iterative LQR (iLQR) is a popular way to generate smooth optimal control sequences in real time for relatively high-dimensional systems; yet iLQR offers no robustness guarantee, e.g. against environmental disturbances. By contrast, Hamilton-Jacobi (HJ) reachability provides hard safety guarantees and disturbance rejection, but for general nonlinear systems it is only tractable in fairly low dimensions.
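To make the iLQR side of this trade-off concrete, here is a minimal sketch of the algorithm on a toy double-integrator with quadratic drag. Everything here (the dynamics, cost weights, and line-search schedule) is illustrative, not taken from any particular paper:

```python
import numpy as np

dt, T, c = 0.1, 40, 0.5        # timestep, horizon, drag coefficient
w, r, wT = 0.1, 0.1, 10.0      # running state, control, and terminal cost weights
x_goal = np.array([1.0, 0.0])  # reach position 1 at rest

def f(x, u):
    """Unit mass with quadratic drag; u is a scalar force."""
    p, v = x
    return np.array([p + dt * v, v + dt * (u - c * v * abs(v))])

def jacobians(x, u):
    v = x[1]
    A = np.array([[1.0, dt], [0.0, 1.0 - dt * c * 2.0 * abs(v)]])
    B = np.array([[0.0], [dt]])
    return A, B

def rollout(x0, us):
    xs = [x0]
    for u in us:
        xs.append(f(xs[-1], u))
    return np.array(xs)

def cost(xs, us):
    run = sum(w * (x - x_goal) @ (x - x_goal) + r * u**2
              for x, u in zip(xs[:-1], us))
    return run + wT * (xs[-1] - x_goal) @ (xs[-1] - x_goal)

def ilqr(x0, us, iters=50):
    xs = rollout(x0, us)
    for _ in range(iters):
        # Backward pass: propagate a quadratic value-function approximation.
        Vx = 2 * wT * (xs[-1] - x_goal)
        Vxx = 2 * wT * np.eye(2)
        ks, Ks = [], []
        for t in reversed(range(T)):
            A, B = jacobians(xs[t], us[t])
            Qx = 2 * w * (xs[t] - x_goal) + A.T @ Vx
            Qu = 2 * r * us[t] + B.T @ Vx
            Qxx = 2 * w * np.eye(2) + A.T @ Vxx @ A
            Quu = 2 * r + B.T @ Vxx @ B
            Qux = B.T @ Vxx @ A
            k = -Qu / Quu[0, 0]                 # feedforward term
            K = -Qux / Quu[0, 0]                # feedback gain
            Vx = Qx + K.T @ Quu @ k + K.T @ Qu + Qux.T @ k
            Vxx = Qxx + K.T @ Quu @ K + K.T @ Qux + Qux.T @ K
            ks.append(k); Ks.append(K)
        ks.reverse(); Ks.reverse()
        # Forward pass with a simple backtracking line search.
        for alpha in (1.0, 0.5, 0.25, 0.1):
            xs_new, us_new = [x0], []
            for t in range(T):
                u = us[t] + alpha * ks[t][0] + (Ks[t] @ (xs_new[-1] - xs[t]))[0]
                us_new.append(u)
                xs_new.append(f(xs_new[-1], u))
            if cost(np.array(xs_new), us_new) < cost(xs, us):
                xs, us = np.array(xs_new), np.array(us_new)
                break
    return xs, us
```

The key point is that each iteration only requires linearizing the dynamics and solving a time-varying LQR problem, which is why the method scales to much higher-dimensional systems than grid-based HJ reachability; the price is that nothing in this loop certifies robustness to disturbances.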
FaSTrack is a recent approach that uses an offline reachability computation to inform an online motion planner like iLQR, blending the best of these two types of algorithms. I have worked on several projects extending FaSTrack. In Planning, Fast and Slow, we broaden the concept of motion planning to allow multiple planning algorithms (with potentially different notions of state) to run concurrently and stitch their plans together seamlessly, while retaining the original FaSTrack safety guarantee. We have also tested a neural network-based HJ reachability solver and used it to compute conservative approximations of reachable sets for higher-dimensional systems. Finally, I recently built a high-level, graph-based wrapper around kinodynamic planners that extends the modular FaSTrack guarantees to a priori unknown environments while retaining recursive safety and liveness.
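A caricature of the FaSTrack recipe in code: the offline safety analysis is compressed into a single tracking error bound (TEB), and the online planner simply inflates obstacles by that bound before planning with a simple model. The TEB value, the disc obstacles, and the grid A* planner below are all stand-ins for illustration; in practice the TEB comes from an HJ reachability computation between the full-order tracking model and the simple planning model.

```python
import heapq
import math

# Hand-picked stand-in for the tracking error bound that FaSTrack
# would precompute offline via HJ reachability.
TEB = 0.3

def plan(start, goal, obstacles, step=0.25, bound=5.0):
    """A* on a grid, with a point-mass planning model.

    Obstacles are discs (cx, cy, r), inflated by TEB. Any returned path
    keeps the planned reference at least TEB away from every obstacle,
    so the real system, which is guaranteed to stay within TEB of the
    reference, never collides.
    """
    def free(p):
        return all(math.hypot(p[0] - cx, p[1] - cy) > r + TEB
                   for cx, cy, r in obstacles)

    def snap(p):  # quantize onto the grid so dict keys are consistent
        return (round(p[0] / step) * step, round(p[1] / step) * step)

    start, goal = snap(start), snap(goal)
    frontier = [(0.0, start)]
    came, g = {start: None}, {start: 0.0}
    while frontier:
        _, p = heapq.heappop(frontier)
        if math.hypot(p[0] - goal[0], p[1] - goal[1]) < step / 2:
            path = []
            while p is not None:
                path.append(p)
                p = came[p]
            return path[::-1]
        for dx in (-step, 0, step):
            for dy in (-step, 0, step):
                q = snap((p[0] + dx, p[1] + dy))
                if q == p or abs(q[0]) > bound or abs(q[1]) > bound or not free(q):
                    continue
                ng = g[p] + math.hypot(dx, dy)
                if ng < g.get(q, float("inf")):
                    g[q], came[q] = ng, p
                    h = math.hypot(q[0] - goal[0], q[1] - goal[1])
                    heapq.heappush(frontier, (ng + h, q))
    return None  # no safe path found
```

The appeal of this decomposition is that the online planner can be almost anything (the modularity mentioned above): as long as plans respect the inflated obstacles, the offline guarantee carries over unchanged.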
When a motion planner says the system will remain safe, we would like to trust that, at runtime, the system actually does remain safe despite potential modeling errors. That is precisely why robust control matters. However, an autonomous system may also encounter other moving agents at runtime, and when it does it must make assumptions about how they will behave. Unfortunately, any model we construct to predict the motion of these other agents will, in general, fail systematically in some situations. To be robust to this inevitability, we proposed a novel Bayesian framework for inferring our model confidence online and dynamically making predictions more conservative whenever that confidence drops. We took a first step toward incorporating uncertain predictions of a pedestrian into FaSTrack in a paper at RSS 2018, later extended in IJRR, and we have since generalized the approach to multiple pedestrians and multiple robots.
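The mechanism can be sketched in a few lines: maintain a belief over a model-confidence parameter (here, the inverse temperature beta of a Boltzmann action model for a pedestrian), update it by Bayes' rule from observed actions, and marginalize over it when predicting. The grid of beta values, the grid-world action set, and the distance-to-goal objective below are all illustrative stand-ins, not the models from the papers:

```python
import math

# Candidate model-confidence values: high beta means the pedestrian follows
# the modeled objective closely; low beta means actions look nearly random.
BETAS = [0.1, 1.0, 10.0]
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]

def action_utility(pos, action, goal):
    """Stand-in objective: negative distance to goal after the action."""
    nx, ny = pos[0] + action[0], pos[1] + action[1]
    return -math.hypot(goal[0] - nx, goal[1] - ny)

def action_likelihood(pos, action, goal, beta):
    """Boltzmann model: P(a) proportional to exp(beta * utility(a))."""
    z = sum(math.exp(beta * action_utility(pos, a, goal)) for a in ACTIONS)
    return math.exp(beta * action_utility(pos, action, goal)) / z

def update_belief(belief, pos, observed_action, goal):
    """One Bayesian update of the belief over beta from an observed action."""
    post = [b * action_likelihood(pos, observed_action, goal, beta)
            for b, beta in zip(belief, BETAS)]
    z = sum(post)
    return [p / z for p in post]

def predict(belief, pos, goal):
    """Predictive action distribution, marginalizing over model confidence."""
    return [sum(b * action_likelihood(pos, a, goal, beta)
                for b, beta in zip(belief, BETAS))
            for a in ACTIONS]
```

When the pedestrian moves toward the modeled goal, belief mass shifts to high beta and predictions stay sharp; when observed actions contradict the model, mass shifts to low beta and the marginalized prediction flattens toward uniform, which is exactly the conservative behavior the planner should fall back on.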
Differential games are a widely applicable mathematical tool and offer an attractive alternative to traditional formulations of motion planning problems. Motion planning problems are often posed either in static environments or in dynamic environments where the predicted motion of other agents is completely independent of the robot's planned trajectory. Unfortunately, this places an undue burden on the predictive model to be precise despite enormous uncertainty. Differential game theory offers an exciting alternative: rather than fixing a prediction beforehand, we can presume that other agents are optimizing some known (or estimated) objectives, and solve a differential game for a local equilibrium. By coupling prediction and planning, this approach shifts the enormous burden of making accurate predictions to the potentially more straightforward task of modeling short-term dynamic objectives.
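Written out, the formulation I have in mind is a standard N-player general-sum differential game (generic notation, not tied to any specific paper):

```latex
\begin{aligned}
\dot{x}(t) &= f\bigl(t, x(t), u_1(t), \dots, u_N(t)\bigr), \qquad x(0) = x_0, \\
J_i(u_1, \dots, u_N) &= \int_0^T g_i\bigl(t, x(t), u_1(t), \dots, u_N(t)\bigr)\,\mathrm{d}t,
  \qquad i = 1, \dots, N, \\
J_i\bigl(u_i^*, u_{-i}^*\bigr) &\le J_i\bigl(u_i, u_{-i}^*\bigr)
  \quad \text{(local Nash: no unilateral local improvement)}.
\end{aligned}
```

Each player i chooses its own input u_i to minimize its own cost J_i, while all players share the single state trajectory x. Prediction falls out of the solution: other agents' future motion is whatever their equilibrium strategies produce, and it adapts automatically to the robot's own plan.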
Until relatively recently, differential games were widely considered computationally intractable for general nonlinear systems with multiple players and arbitrary objectives. Several approximation techniques have been explored in the literature, but to my knowledge none have been seriously adopted in industry. My own work in this area is a fast second-order solver based on iterative LQR, a standard algorithm for nonlinear MPC used in the autonomous vehicle industry. I am actively working on a real-time, open-source C++ implementation. I've run some preliminary tests in the lab and should soon have a video to post of a sweet demo onboard a Boeing experimental aircraft! Stay tuned!
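At the core of an iLQR-style game solver is a linear-quadratic game solved backward in time around the current trajectory iterate. Below is a simplified sketch of that LQ step for two players, following the classical coupled Riccati recursion for feedback Nash equilibria (time-invariant matrices, no linear cost terms); none of the names or constants come from the actual C++ implementation:

```python
import numpy as np

def lq_game_feedback_nash(A, Bs, Qs, Rs, horizon):
    """Feedback Nash gains for a finite-horizon two-player discrete LQ game.

    Player i minimizes sum_t x' Q_i x + u_i' R_i u_i subject to the shared
    dynamics x_{t+1} = A x + B_1 u_1 + B_2 u_2. Returns gains[t] = (P1, P2)
    with equilibrium strategies u_i = -P_i x.
    """
    m1 = Bs[0].shape[1]
    Zs = [Qs[0].copy(), Qs[1].copy()]  # terminal quadratic value matrices
    gains = []
    for _ in range(horizon):
        B1, B2 = Bs
        Z1, Z2 = Zs
        # Each player's first-order condition couples the two gains;
        # stack both conditions into one linear solve.
        S = np.block([[Rs[0] + B1.T @ Z1 @ B1, B1.T @ Z1 @ B2],
                      [B2.T @ Z2 @ B1, Rs[1] + B2.T @ Z2 @ B2]])
        Y = np.vstack([B1.T @ Z1 @ A, B2.T @ Z2 @ A])
        P = np.linalg.solve(S, Y)
        P1, P2 = P[:m1], P[m1:]
        F = A - B1 @ P1 - B2 @ P2  # closed-loop dynamics under both gains
        # Propagate each player's cost-to-go through the closed loop.
        Zs = [Qs[0] + P1.T @ Rs[0] @ P1 + F.T @ Z1 @ F,
              Qs[1] + P2.T @ Rs[1] @ P2 + F.T @ Z2 @ F]
        gains.append((P1, P2))
    return gains[::-1]  # gains[t] applies at time t
```

The full nonlinear solver would wrap this in an outer loop: linearize the dynamics and quadraticize each player's cost around the current trajectory, solve the LQ game above, roll the resulting strategies forward, and repeat, which is exactly the structure that keeps the per-iteration cost comparable to ordinary iLQR.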