Maxim Rabinovich

About Me

I am a PhD student in Computer Science at UC Berkeley, jointly advised by Mike Jordan and Dan Klein. I am primarily interested in methods that extend and refine human reasoning in settings where partial automation is needed to cope with large-scale or ambiguous information sources. My work in statistics and machine learning focuses on developing the theory and practice of such methods. Recent projects in this direction include minimax theory for multiple testing, code generation from natural language specifications, fine-grained entity typing, and function-specific mixing rates for MCMC.

I am supported by a Hertz Google Fellowship and an NSF Graduate Research Fellowship.

Before coming to Berkeley, I received an AB in Mathematics (with Highest Honors) from Princeton University, where I was advised by David Blei, and an MPhil in Information Engineering from the University of Cambridge, where I was advised by Zoubin Ghahramani. I also spent a summer at Xerox Research Centre Europe working with Cédric Archambeau.

In summer 2016, I interned in the Google Research Semantics group in NYC, working with Emily Pitler.

Further details can be found in my CV.


Publications

Optimal rates and tradeoffs in multiple testing
Maxim Rabinovich, Aaditya Ramdas, Michael I. Jordan, and Martin J. Wainwright.
Statistica Sinica.

Abstract syntax networks for code generation and semantic parsing
Maxim Rabinovich*, Mitchell Stern*, and Dan Klein. ACL 2017.

Fine-grained entity typing with high-multiplicity assignments
Maxim Rabinovich and Dan Klein. ACL 2017.

Quantitative criticism of literary relationships
Joseph P. Dexter, Theodore Katz, Nilesh Tripuraneni, Tathagata Dasgupta, Ajay Kannan, James A. Brofos, Jorge A. Bonilla Lopez, Lea A. Schroeder, Adriana Casarez, Maxim Rabinovich, Ayelet Haimson Lushkov, and Pramit Chaudhuri. PNAS.

Function-specific mixing times and concentration away from equilibrium
Maxim Rabinovich, Aaditya Ramdas, Michael I. Jordan, and Martin J. Wainwright.
Bayesian Analysis (to appear).

Variational consensus Monte Carlo
Maxim Rabinovich, Elaine Angelino, and Michael I. Jordan. NIPS 2015.

On the accuracy of self-normalized log-linear models
Jacob Andreas, Maxim Rabinovich, Dan Klein, and Michael I. Jordan. NIPS 2015.

Efficient Inference for Unsupervised Semantic Parsing
Maxim Rabinovich and Zoubin Ghahramani. NIPS 2014 Workshop on Learning Semantics.

The Inverse Regression Topic Model
Maxim Rabinovich and David M. Blei. ICML 2014.

Technical Reports and Theses

Efficient Inference for Unsupervised Semantic Parsing
Maxim Rabinovich. MPhil Thesis, University of Cambridge.
Supervised by Zoubin Ghahramani.

Online Inference for Relation Extraction with a Reduced Feature Set
Maxim Rabinovich and Cédric Archambeau. arXiv preprint.

Inverse Regression Topic Modeling: Models, Inference, and Applications
Maxim Rabinovich. Undergraduate Thesis, Princeton University.
Supervised by David Blei.
Middleton Miller Prize for Best Senior Thesis

Odds and Ends

When I'm not thinking about statistics or machine learning, I work on overcoming gravity or cooking systematically. I also enjoy reading, especially non-fiction about the way the world around us works, but with some modernist and post-postmodernist fiction thrown in. On occasion, I run long distances.