I am a postdoc in the AMP Lab at UC Berkeley working with Ben Recht. My research interests include multi-armed bandit problems, active learning, stochastic optimization, and sequential testing in statistics. My work ranges from theory, to practical algorithms with guarantees, to open-source machine learning systems. I received my PhD from the ECE department at the University of Wisconsin - Madison in March 2015 under the advisement of Robert Nowak. Before that, I received my MS in EE from Columbia University under the advisement of Rui Castro and my BS in EE from the University of Washington under the advisement of Maya Gupta. A brief overview of my work can be found in my CV.
Towards a Richer Understanding of Adaptive Sampling in the Moderate-Confidence Regime, Max Simchowitz, Kevin G Jamieson, Benjamin Recht, Preprint, 2016. PDF
Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization, Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, Ameet Talwalkar, Preprint, 2016. PDF
Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations, Michael Laskey, Caleb Chuck, Jonathan Lee, Jeffrey Mahler, Sanjay Krishnan, Kevin Jamieson, Anca Dragan, Ken Goldberg, International Conference on Robotics and Automation (ICRA), 2017. PDF
Finite Sample Prediction and Recovery Bounds for Ordinal Embedding, Lalit Jain, Kevin Jamieson, Robert Nowak, NIPS, 2016. PDF
Best-of-K Bandits, Max Simchowitz, Kevin Jamieson, Benjamin Recht, COLT, 2016. PDF
Non-stochastic Best Arm Identification and Hyperparameter Optimization, Kevin Jamieson, Ameet Talwalkar, AISTATS, 2016. PDF
Top Arm Identification in Multi-Armed Bandits with Batch Arm Pulls, Kwang-Sung Jun, Kevin Jamieson, Robert Nowak, Xiaojin Zhu, AISTATS, 2016. PDF
NEXT: A System for Real-World Development, Evaluation, and Application of Active Learning, Kevin Jamieson, Lalit Jain, Chris Fernandez, Nick Glattard, Robert Nowak, NIPS, 2015. PDF
The Analysis of Adaptive Data Collection Methods for Machine Learning, Kevin Jamieson, PhD Thesis, University of Wisconsin - Madison, March 2015. PDF
Sparse Dueling Bandits, Kevin Jamieson, Sumeet Katariya, Atul Deshpande, and Robert Nowak, AISTATS, 2015. PDF
Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting, Kevin Jamieson and Robert Nowak, CISS, 2014. PDF
lil' UCB : An Optimal Exploration Algorithm for Multi-Armed Bandits, Kevin Jamieson, Matt Malloy, Robert Nowak, and Sebastien Bubeck, COLT, 2014. PDF
On Finding the Largest Mean Among Many, Kevin Jamieson, Matt Malloy, Robert Nowak, and Sebastien Bubeck, Asilomar, 2013. PDF
Query Complexity of Derivative-Free Optimization, Kevin Jamieson, Robert Nowak, and Ben Recht, Neural Information Processing Systems (NIPS), 2012. PDF (Extended version)
Active Ranking using Pairwise Comparisons, Kevin Jamieson and Robert Nowak, Neural Information Processing Systems (NIPS), 2011. PDF (Extended version)
Low-Dimensional Embedding using Adaptively Selected Ordinal Data Kevin Jamieson and Robert Nowak, Allerton Conference on Communication, Control, and Computing, 2011. PDF
Channel-Robust Classifiers, Hyrum S. Anderson, Maya R. Gupta, Eric Swanson, and Kevin Jamieson, IEEE Trans. on Signal Processing, 2010.
Training a support vector machine to classify signals in a real environment given clean training data, Kevin Jamieson, Maya R. Gupta, Eric Swanson and Hyrum S. Anderson, Proc. IEEE ICASSP, 2010.
Sequential Bayesian Estimation of the Probability of Detection for Tracking, Kevin Jamieson, Maya R Gupta, and David Krout, Proc. IEEE Conference on Information Fusion, 2009.
Hyperband: Bandits for hyperparameter tuning
Hyperband is a method for speeding up hyperparameter search. In contrast to Bayesian optimization methods that spend their effort on making smarter configuration selections, Hyperband uses simple random search but exploits the iterative nature of training algorithms, drawing on recent advances in pure-exploration multi-armed bandits to allocate training resources adaptively. Up to orders-of-magnitude improvements over Bayesian optimization are achievable on deep learning tasks.
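The core idea can be sketched in a few lines. This is a minimal, hedged rendition of the Hyperband loop: brackets of successive halving, each starting with a different trade-off between the number of random configurations and the resource (e.g. epochs) allotted to each. The function names `get_config` and `run_then_return_loss` are placeholders the caller supplies, not part of any released library.

```python
import math
import random


def hyperband(get_config, run_then_return_loss, max_resource=81, eta=3):
    """Sketch of Hyperband (Li et al., 2016).

    get_config() -> a fresh random hyperparameter configuration
    run_then_return_loss(config, resource) -> validation loss after
        training `config` with `resource` units (e.g. epochs)
    """
    s_max = int(math.log(max_resource, eta))
    B = (s_max + 1) * max_resource  # total budget per bracket
    best = (float("inf"), None)     # (loss, config)

    # Each bracket s trades off "many configs, little resource"
    # against "few configs, lots of resource".
    for s in reversed(range(s_max + 1)):
        n = int(math.ceil(B / max_resource * eta ** s / (s + 1)))
        r = max_resource * eta ** (-s)

        configs = [get_config() for _ in range(n)]
        # Successive halving within the bracket: train, rank, keep top 1/eta.
        for i in range(s + 1):
            n_i = int(n * eta ** (-i))
            r_i = r * eta ** i
            losses = [run_then_return_loss(c, r_i) for c in configs]
            ranked = sorted(zip(losses, configs), key=lambda t: t[0])
            if ranked and ranked[0][0] < best[0]:
                best = ranked[0]
            configs = [c for _, c in ranked[: max(1, int(n_i / eta))]]
    return best
```

As a toy usage, one could let a configuration be a single scalar and the "loss" a fixed function of it; real use plugs in an actual training routine whose loss improves with more resource.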
The New Yorker Caption Contest
Each week, the New Yorker magazine runs a cartoon contest in which readers are invited to submit a caption for that week's cartoon - thousands are submitted. The NEXT team has partnered with Bob Mankoff, cartoon editor of the New Yorker, to use crowdsourcing and adaptive sampling techniques to help decide the caption contest winner each week. This is an example of state-of-the-art active learning being implemented and evaluated in the real world using the NEXT system and the principles developed in that paper.
NEXT is a computational framework and open-source machine learning system that simplifies the deployment and evaluation of active learning algorithms that use human feedback, e.g. from Mechanical Turk. The system is optimized for the real-time computational demands of active learning algorithms and built to scale to a crowd of workers of any size. The system is for active learning researchers as well as practitioners who want to collect data adaptively.
Beer Mapper began as a practical implementation of my theoretical active ranking work on an iPhone/iPad, intended simply as a proof of concept and a cool prop for presentations of the theory. A brief page on my website describing how it worked collected dust for several months until several blogs discovered it, generating significant traffic and interest in bringing it to the App Store. I teamed up with the Chicago-based tech startup Savvo, which now leads development of the app.