Maxim Rabinovich

About Me

I am a third-year PhD student in Computer Science at UC Berkeley, jointly advised by Mike Jordan and Dan Klein. I am primarily interested in methods that extend and refine human reasoning in settings where partial automation is needed to cope with large-scale or ambiguous information sources. My work in statistics and machine learning focuses on developing the theory and practice of such methods. Recent work in this direction includes projects on minimax theory for multiple testing, code generation from natural language specifications, fine-grained entity typing, and function-specific mixing rates for MCMC.

I am supported by a Hertz Foundation Google Fellowship and an NSF Graduate Research Fellowship.

Before coming to Berkeley, I received an AB in Mathematics (with Highest Honors) from Princeton University, where I was advised by David Blei, and an MPhil in Information Engineering from the University of Cambridge, where I was advised by Zoubin Ghahramani. I also spent a summer at Xerox Research Centre Europe working with Cédric Archambeau.

In summer 2016, I interned in the Google Research Semantics group in NYC, working with Emily Pitler.

Further details can be found in my CV.

Publications

Optimal rates and tradeoffs in multiple testing
Maxim Rabinovich, Aaditya Ramdas, Michael I. Jordan, and Martin J. Wainwright. In submission.
[PDF][arXiv]

Abstract syntax networks for code generation and semantic parsing
Maxim Rabinovich*, Mitchell Stern*, and Dan Klein. ACL 2017.
[PDF][arXiv]

Fine-grained entity typing with high-multiplicity assignments
Maxim Rabinovich and Dan Klein. ACL 2017.
[PDF][arXiv]

Quantitative criticism of literary relationships
Joseph P. Dexter, Theodore Katz, Nilesh Tripuraneni, Tathagata Dasgupta, Ajay Kannan, James A. Brofos, Jorge A. Bonilla Lopez, Lea A. Schroeder, Adriana Casarez, Maxim Rabinovich, Ayelet Haimson Lushkov, and Pramit Chaudhuri. PNAS.
[PDF][PNAS]

Function-specific mixing times and concentration away from equilibrium
Maxim Rabinovich, Aaditya Ramdas, Michael I. Jordan, and Martin J. Wainwright. In submission.
[PDF][arXiv]

Variational consensus Monte Carlo
Maxim Rabinovich, Elaine Angelino, and Michael I. Jordan. NIPS 2015.
[PDF][arXiv]

On the accuracy of self-normalized log-linear models
Jacob Andreas, Maxim Rabinovich, Dan Klein, and Michael I. Jordan. NIPS 2015.
[PDF][arXiv]

Efficient Inference for Unsupervised Semantic Parsing
Maxim Rabinovich and Zoubin Ghahramani. NIPS 2014 Workshop on Learning Semantics.
[PDF]

The Inverse Regression Topic Model
Maxim Rabinovich and David M. Blei. ICML 2014.
[PDF]

Technical Reports and Theses

Efficient Inference for Unsupervised Semantic Parsing
Maxim Rabinovich. MPhil Thesis, University of Cambridge.
Supervised by Zoubin Ghahramani.
[PDF]

Online Inference for Relation Extraction with a Reduced Feature Set
Maxim Rabinovich and Cédric Archambeau. arXiv report.
[PDF][arXiv]

Inverse Regression Topic Modeling: Models, Inference, and Applications
Maxim Rabinovich. Undergraduate Thesis, Princeton University.
Supervised by David Blei.
Middleton Miller Prize for Best Senior Thesis
[PDF]

Odds and Ends

When I'm not thinking about research, I like to boulder (a lot) and read. I have a particular soft spot for existential philosophy, modernist and post-postmodernist fiction, and culinary experiments. On occasion, I run long distances.