Current Research

My current research focuses on:
· Using what we’ve learned about parallel and distributed computing to speed up the training of Deep Neural Nets – e.g., FireCaffe
· Using what we’ve learned about embedded computing to design and implement fast, accurate, energy-efficient neural nets for computer vision problems – e.g., SqueezeNet, SqueezeDet, SqueezeSeg, SqueezeNext, etc.
· Using what we’ve learned about mapping Deep Neural Nets to embedded hardware to explore the co-design of DNNs and NN accelerators – e.g., the Squeezelerator
I currently have post-doctoral research positions in each of these areas. Please contact me if you’re interested.
Past Research Projects

· Exploring Design Patterns for Parallel Computing
· Modern Embedded Systems, Compilers, Architectures, and Languages (MESCAL)
· Closing the performance gap between ASIC and custom designs
· Closing the power gap between ASIC and custom designs
· Evaluating the impact of deep submicron process geometries on computer-aided design of integrated circuits, using the Berkeley Advanced Chip Performance Calculator (BACPAC)
· Compilation of software for popular embedded processors – especially DSPs (e.g., SPAM)