I'm a second-year PhD student in EECS at UC Berkeley, advised by John Canny. I am part of the Berkeley Institute of Design, BAIR, and Berkeley PATH, and I study how we can explain the actions of autonomous agents in a human-understandable way.

CV / Google Scholar / GitHub / LinkedIn

Autonomous Imaging and Mapping of Small Bodies Using Deep Reinforcement Learning
David M. Chan, Ali-akbar Agha-Mohammadi
IEEE Aerospace Conference, 2019.

Mapping and navigation around small unknown bodies remains a challenging and exciting problem in space exploration. Traditionally, the spacecraft trajectory for mapping missions is designed by human experts, who spend hundreds of hours supervising the navigation and orbit-selection process. While this methodology has performed adequately for previous missions (such as Rosetta, Hayabusa, and Deep Space), as the demands on mapping missions expand, additional autonomy during mapping and navigation will become necessary. In this work we provide a framework for formulating the autonomous imaging and mapping problem as a Partially Observable Markov Decision Process (POMDP). In addition, we introduce a new simulation environment for the orbital mapping of small bodies and demonstrate that policies trained with our POMDP formulation maximize map quality while autonomously selecting orbits and supervising imaging tasks. We conclude with a discussion of integrating deep reinforcement learning modules with classic flight software systems, and some of the challenges that could be encountered when using deep RL in flight-ready systems.
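To give a flavor of the POMDP machinery, here is a minimal belief-update sketch. This is not the paper's actual model: the states ("good_orbit", "drifting"), action ("hold"), observations, and all probabilities are made-up placeholders purely for illustration.

```python
# Toy POMDP belief update (Bayes filter), with hypothetical states/probabilities.
def belief_update(belief, action, observation, T, O):
    """b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    new_belief = {}
    for s2 in belief:
        # Predict: probability of landing in s2 after taking `action`.
        prior = sum(T[(s, action)].get(s2, 0.0) * p for s, p in belief.items())
        # Correct: weight by how likely `observation` is from s2.
        new_belief[s2] = O[(s2, action)].get(observation, 0.0) * prior
    total = sum(new_belief.values())
    return {s: p / total for s, p in new_belief.items()}

# Two invented spacecraft states; transition and observation models are toy numbers.
T = {("good_orbit", "hold"): {"good_orbit": 0.9, "drifting": 0.1},
     ("drifting", "hold"): {"good_orbit": 0.2, "drifting": 0.8}}
O = {("good_orbit", "hold"): {"sharp_image": 0.8, "blurry_image": 0.2},
     ("drifting", "hold"): {"sharp_image": 0.3, "blurry_image": 0.7}}

b0 = {"good_orbit": 0.5, "drifting": 0.5}
b1 = belief_update(b0, "hold", "sharp_image", T, O)
```

A sharp image is more likely from a good orbit, so the updated belief shifts toward "good_orbit"; a policy acts on this belief rather than on the (unobservable) true state.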

GPU-Accelerated t-SNE and its Applications to Modern Data
David M. Chan, Roshan Rao, Forrest Huang, John F. Canny
High Performance Machine Learning (HPML), 2018.
Outstanding Paper Award

This paper introduces t-SNE-CUDA, a GPU-accelerated implementation of t-distributed Stochastic Neighbor Embedding (t-SNE) for visualizing datasets and models. t-SNE-CUDA significantly outperforms current implementations, with 50-700x speedups on the CIFAR-10 and MNIST datasets. These speedups enable, for the first time, visualization of neural network activations on the entire ImageNet dataset, a feat that was previously computationally intractable. We also demonstrate visualization performance in the NLP domain by visualizing the GloVe embedding vectors. From these visualizations, we can draw interesting conclusions about using the L2 metric in these embedding spaces.
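For intuition, here is the high-dimensional affinity computation at the heart of t-SNE, in pure Python. This is only a sketch: t-SNE-CUDA accelerates computations like this on the GPU, and real t-SNE calibrates a per-point bandwidth via a perplexity target rather than the fixed sigma used here.

```python
import math

def conditional_affinities(points, sigma=1.0):
    """Return P where P[i][j] = p_{j|i}: the probability that point i
    would pick point j as its neighbor under a Gaussian kernel."""
    n = len(points)
    P = []
    for i in range(n):
        weights = []
        for j in range(n):
            if i == j:
                weights.append(0.0)  # a point is never its own neighbor
            else:
                d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                weights.append(math.exp(-d2 / (2 * sigma ** 2)))
        total = sum(weights)
        P.append([w / total for w in weights])
    return P

# Two nearby points and one distant point (illustrative coordinates).
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
P = conditional_affinities(pts)
```

Each row of P sums to 1, and nearby points receive almost all of the probability mass; the O(n^2) pairwise structure of this computation is exactly what makes GPU acceleration pay off at ImageNet scale.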

Rapid Randomized Restarts for Multi-Agent Path Finding: Preliminary Results
Liron Cohen, Sven Koenig, TK Kumar, Glenn Wagner, Howie Choset, David M. Chan, Nathan Sturtevant
Proceedings of the 17th International Conference on Autonomous Agents and Multi-Agent Systems, 2018

Multi-Agent Path Finding (MAPF) is an NP-hard problem with many real-world applications. However, existing MAPF solvers are deterministic and perform poorly on MAPF instances where many agents interfere with each other in a small region of space. In this paper, we enhance MAPF solvers with randomization and observe that their runtimes can exhibit heavy-tailed distributions. This insight leads us to develop simple Rapid Randomized Restart (RRR) strategies with the intuition that multiple short runs will have a better chance of solving such MAPF instances than one long run with the same runtime limit. Our contribution is to show experimentally that the same RRR strategy indeed boosts the performance of two state-of-the-art MAPF solvers, namely M* and ECBS.
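The intuition behind restarts can be shown with a toy simulation (not the paper's solvers or data): if runtimes are heavy-tailed, many short capped runs succeed more often than one long run with the same total budget. The Pareto runtime model and all numbers below are illustrative assumptions.

```python
import random

random.seed(0)

def run_solver():
    """Hypothetical randomized solver with a heavy-tailed runtime."""
    return random.paretovariate(0.9)  # alpha < 1: very heavy tail

def solved_within(budget, restart_limit=None):
    """True if any run finishes before its cap, within the total budget."""
    spent = 0.0
    while spent < budget:
        t = run_solver()
        cap = restart_limit if restart_limit is not None else budget - spent
        if t <= cap:
            return True
        spent += cap  # give up on this run and restart
    return False

trials = 2000
one_long = sum(solved_within(100.0) for _ in range(trials)) / trials
restarts = sum(solved_within(100.0, restart_limit=5.0) for _ in range(trials)) / trials
```

With the heavy tail, a single run occasionally draws an enormous runtime and wastes the whole budget, while restarting every 5 time units gets twenty independent chances, so the restart success rate matches or beats the single-run rate.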

Using Hierarchical Constraints to Avoid Conflicts in Multi-Agent Pathfinding
Thayne T Walker, David M. Chan, Nathan R Sturtevant
International Conference on Automated Planning and Scheduling (ICAPS), 2017

Recent work in multi-agent path planning has provided a number of optimal and suboptimal solvers that can efficiently find solutions to problems of growing complexity. Solvers based on Conflict-Based Search (CBS) combine single-agent solvers with shared constraints between agents to find feasible solutions. Suboptimal variants of CBS introduce alternate heuristics to avoid conflicts. In this paper we study the multi-agent planning problem in the context of non-holonomic vehicles planning on a lattice. We propose that, in addition to using heuristics, we can plan with a hierarchy of movement constraints to efficiently avoid conflicts. We develop a new extension to the CBS algorithm, CBS with constraint layering (CBS+CL), which iteratively applies different movement constraint models during the CBS planning process. Our results show that this approach allows us to solve for about 2.4 times more agents in the same amount of time when compared to regular CBS without a constraint hierarchy.

Going Deeper in Facial Expression Recognition Using Deep Neural Networks
Ali Mollahosseini, David M. Chan, Mohammad H Mahoor
IEEE Winter Conference on Applications of Computer Vision (WACV), 2016

Automated Facial Expression Recognition (FER) has remained a challenging and interesting problem in computer vision. Despite efforts made in developing various methods for FER, existing approaches lack generalizability when applied to unseen images or those captured in the wild (i.e., the results are not significant). Most of the existing approaches are based on engineered features (e.g., HOG, LBPH, and Gabor) where the classifier's hyper-parameters are tuned to give the best recognition accuracies across a single database, or a small collection of similar databases. This paper proposes a deep neural network architecture to address the FER problem across multiple well-known standard face datasets.

Facial Expression Recognition From The World Wild Web
Ali Mollahosseini, Behzad Hasani, Michelle J Salvador, Hojjat Abdollahi, David M. Chan, Mohammad H Mahoor
IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2016

Recognizing facial expressions in the wild remains a challenging task in computer vision. The World Wide Web is a good source of facial images, most of which are captured in uncontrolled conditions. In fact, the Internet is a World Wild Web of facial images with expressions. This paper presents the results of a new study on collecting, annotating, and analyzing wild facial expressions from the web. Three search engines were queried using 1,250 emotion-related keywords in six different languages, and the retrieved images were mapped by two annotators to six basic expressions and neutral. Deep neural networks and noise modeling were used in three different training scenarios to determine how accurately facial expressions can be recognized when trained on noisy images collected from the web using query terms (e.g., "happy face", "laughing man", etc.). The results of our experiments show that deep neural networks can recognize wild facial expressions with an accuracy of 82.12%.

Here are some of the projects I've worked on! Click the links below the titles to see the code!


Rinokeras

Rinokeras is a set of Keras-based tools developed by the CannyLab to aid efficient research on sequence-based models.


Flux

Flux is a set of data/dataset utilities developed by the CannyLab. It is designed to do the data wrangling so you don't have to.

Real-Time FR

Real-time facial expression recognition using TensorFlow.


ArXiV Notify

ArXiV Notify is a Python 3 bot that sends you daily or weekly updates about papers on arXiv.

Countdown Solver

A solver for the British game show Countdown! Written in Flask/JS.