Jacob Andreas

I study language and machine learning. I'm interested in natural language processing problems as a window into reasoning, planning, and perception; these days I'm especially focused on using language as a scaffold for more efficient learning and as a probe for understanding model behavior. I'm also broadly interested in structured neural methods that combine the advantages of deep representations and discrete compositionality.

I'm a fourth-year Ph.D. student in the Berkeley NLP Group and the Berkeley AI Research Lab. Previously I worked with the NLIP Group at Cambridge, and with the Center for Computational Learning Systems and the NLP Group at Columbia.

Curriculum vitæ, Google Scholar, elsewhere


All publications


Research highlights

Language for interpretable AI

By now we're all familiar with the complaint that complex machine learning models—especially deep networks—provide improved performance at the cost of interpretability. How can we help users understand the features and strategies that their models discover? Language provides a rich set of tools for describing beliefs, observations, and plans. We put these tools to work and generate natural language explanations directly from learned representations.
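
As a toy sketch of what "translation" means here (the beliefs below are random stand-ins, not outputs of real models): a learned "neuralese" message gets rendered as whichever candidate description induces the most similar belief about the world in a listener.

    # A toy sketch of translation-by-meaning, not the actual system:
    # a learned message is translated into the candidate description
    # whose induced belief over world states is most similar.
    import numpy as np

    rng = np.random.default_rng(0)
    n_states = 5  # hypothetical number of world states

    def belief(scores):
        # Softmax: raw scores over states -> a belief distribution.
        e = np.exp(scores - scores.max())
        return e / e.sum()

    # Made-up stand-ins: the belief induced by a neuralese message, and
    # the beliefs induced by two candidate natural-language descriptions.
    neuralese = belief(rng.normal(size=n_states))
    candidates = {
        "the square on the left": belief(rng.normal(size=n_states)),
        "the red triangle": belief(rng.normal(size=n_states)),
    }

    def kl(p, q):
        # KL divergence: how badly q approximates p.
        return float(np.sum(p * np.log(p / q)))

    # Translate by picking the description with the closest induced belief.
    print(min(candidates, key=lambda s: kl(neuralese, candidates[s])))

The point of the sketch: translations are grounded in what messages do to a listener's beliefs, not in their surface form.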

Papers:
Translating neuralese (ACL 2017)
Analogs of linguistic structure in deep representations (EMNLP 2017)

Language and reasoning

The dominant paradigm in deep learning is a "one-size-fits-all" approach: we write down a fixed model architecture that we hope captures everything about the relationship between our inputs and outputs. But real-world problem solving doesn't work this way: it involves a variety of different capabilities, combined and synthesized in new ways for every challenge we encounter. This work explores modular deep learning architectures that can dynamically reassemble themselves in response to changing, complex tasks.
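
Here's a minimal sketch of the mechanism (module names, sizes, and layouts are invented for illustration; this is not the implementation from the papers below): a shared inventory of small networks gets assembled into a different pipeline for every task.

    # A minimal sketch of modular composition; all names and sizes are invented.
    import torch
    import torch.nn as nn

    DIM = 16  # hypothetical feature size

    class NeuralModule(nn.Module):
        # A tiny reusable sub-network.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU())
        def forward(self, x):
            return self.net(x)

    # A shared inventory of modules; their parameters are reused across tasks.
    inventory = {name: NeuralModule() for name in ["find", "transform", "describe"]}

    def assemble(layout):
        # Compose modules into a task-specific network from a symbolic layout.
        return nn.Sequential(*(inventory[name] for name in layout))

    x = torch.randn(1, DIM)
    # Two tasks share parameters but differ in how the pieces are wired together.
    y_a = assemble(["find", "describe"])(x)
    y_b = assemble(["find", "transform", "describe"])(x)

In the papers, the layout isn't fixed by hand like this: it comes from the question being answered or from a high-level policy sketch.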

Papers:
Modular multitask reinforcement learning with policy sketches (ICML 2017)
Learning to reason: end-to-end module networks for visual question answering (ICCV 2017)
Learning to compose neural networks for question answering (NAACL 2016)

At various points I have also been interested in semantic parsing, pragmatics, graph automata, self-normalization, and pianos.


I am currently supported by a Facebook fellowship and a Huawei / Berkeley AI fellowship. I was a Churchill scholar from 2012 to 2013 and a National Science Foundation fellow from 2013 to 2016.

Collaboration graph trivia: My Erdős number is at most four (J Andreas to K McKeown to Z Galil to N Alon to P Erdős). My Kevin Bacon number (and consequently my Erdős–Bacon number) remains lamentably undefined, but my Kevin Knight number (since apparently that's a thing) is one. I have never starred in a film with Kevin Knight. Noam Chomsky is my great-great-grand-advisor (J Andreas to D Klein to C Manning to J Bresnan to N Chomsky).