I work on natural language processing, specifically language grounding: interpreting and generating context-dependent language for real-world tasks like instruction following.
When people communicate, they reason about the world and their conversational partners. Can natural language systems do the same?
One effective approach to grounding is pragmatics: modeling people as cooperative agents who reason about each other.
We've found that pragmatic reasoning improves NLP systems for interpreting (NeurIPS 2018) and generating (NAACL 2018, NAACL 2019) language.
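The core idea of pragmatic reasoning can be sketched in the style of the Rational Speech Acts framework: a literal listener interprets utterances by their truth conditions, a pragmatic speaker chooses utterances that steer that listener to a target, and a pragmatic listener inverts the speaker. The toy world, utterances, and meanings below are illustrative assumptions, not drawn from any specific paper.

```python
# Minimal RSA-style sketch of cooperative speaker/listener reasoning.
# World, utterances, and semantics are toy assumptions for illustration.

UTTERANCES = ["hat", "glasses"]
OBJECTS = ["face_hat", "face_glasses", "face_hat_glasses"]

# Literal semantics: which utterances are true of which objects.
MEANING = {
    ("hat", "face_hat"): True,
    ("hat", "face_hat_glasses"): True,
    ("glasses", "face_glasses"): True,
    ("glasses", "face_hat_glasses"): True,
}

def literal_listener(utterance):
    """L0: uniform belief over objects literally consistent with the utterance."""
    consistent = [o for o in OBJECTS if MEANING.get((utterance, o), False)]
    return {o: 1.0 / len(consistent) for o in consistent}

def pragmatic_speaker(target):
    """S1: prefer utterances under which L0 is likely to pick the target."""
    scores = {u: literal_listener(u).get(target, 0.0) for u in UTTERANCES}
    total = sum(scores.values())
    return {u: s / total for u, s in scores.items()}

def pragmatic_listener(utterance):
    """L1: Bayesian inversion of S1 under a uniform prior over objects."""
    scores = {o: pragmatic_speaker(o).get(utterance, 0.0) for o in OBJECTS}
    total = sum(scores.values())
    return {o: s / total for o, s in scores.items()}
```

Here "hat" is literally true of two objects, but the pragmatic listener prefers the plain hat-wearer: a speaker who meant the face with both hat and glasses could have said something more informative.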
Another effective approach is modularity: decomposing a complex task into reusable neural modules (ACL 2019, arXiv 2020).
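The modular decomposition idea can be illustrated with a toy, non-neural sketch: a query is broken into a layout of small reusable modules, each handling one sub-step. The scene encoding, module names, and layout below are illustrative assumptions, not an actual neural module network.

```python
# Toy sketch of modular task decomposition: small modules composed into a
# layout that answers a query. Scene and modules are illustrative assumptions.

SCENE = {"hat": [0, 2], "glasses": [1, 2]}  # object -> positions it occupies

def find(obj):
    """Attention module: the set of positions where `obj` appears."""
    return set(SCENE.get(obj, []))

def intersect(att_a, att_b):
    """Combination module: positions attended to by both inputs."""
    return att_a & att_b

def exists(att):
    """Answer module: is any position attended to?"""
    return len(att) > 0

def run_layout():
    """'Is there a hat where there are glasses?' as a composed module layout."""
    return exists(intersect(find("hat"), find("glasses")))
```

The appeal of this decomposition is that each module stays simple and reusable: new queries are handled by recombining the same modules into new layouts rather than training a monolithic model per task.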
More broadly, I'm interested in NLP, focusing recently on structured prediction (ACL 2020, TACL 2020) and syntactic parsing (ACL 2018, ACL 2019).
- Syntactic Structure Distillation Pretraining For Bidirectional Encoders
Adhiguna Kuncoro*, Lingpeng Kong*, Daniel Fried*, Dani Yogatama, Laura Rimell, Chris Dyer, and Phil Blunsom
- Learning to Segment Actions from Observation and Narration
Daniel Fried, Jean-Baptiste Alayrac, Phil Blunsom, Chris Dyer, Stephen Clark, and Aida Nematzadeh
- Cross-Domain Generalization of Neural Constituency Parsers
Daniel Fried*, Nikita Kitaev*, and Dan Klein
- Are You Looking? Grounding to Multiple Modalities in Vision-and-Language Navigation
Ronghang Hu, Daniel Fried, Anna Rohrbach, Dan Klein, Trevor Darrell, and Kate Saenko
- Pragmatically Informative Text Generation
Sheng Shen, Daniel Fried, Jacob Andreas, and Dan Klein
- Speaker-Follower Models for Vision-and-Language Navigation
Daniel Fried*, Ronghang Hu*, Volkan Cirik*, Anna Rohrbach, Jacob Andreas,
Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein**, Trevor Darrell**
- Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing
Daniel Fried and Dan Klein
- Unified Pragmatic Models for Generating and Following Instructions
Daniel Fried, Jacob Andreas, and Dan Klein
- Effective Inference for Generative Neural Parsing
Mitchell Stern, Daniel Fried, and Dan Klein
- Improving Neural Parsing by Disentangling Model Combination and Reranking Effects
Daniel Fried*, Mitchell Stern*, and Dan Klein
- Towards Using Social Media to Identify Individuals At Risk for Preventable Chronic Illness
Dane Bell, Daniel Fried, Luwen Huangfu, Mihai Surdeanu, and Stephen Kobourov
- Low-Rank Tensors for Verbs in Compositional Distributional Semantics
Daniel Fried, Tamara Polajnar, and Stephen Clark
- Higher-Order Lexical Semantic Models for Non-Factoid Answer Reranking
Daniel Fried, Peter Jansen, Gustave Hahn-Powell, Mihai Surdeanu, and Peter Clark
- Incorporating Both Distributional and Relational Semantics in Word Representations
Daniel Fried and Kevin Duh
ICLR, 2015 (workshop track).
long version [arXiv]
2014 and before
- Analyzing the Language of Food on Social Media
Daniel Fried, Mihai Surdeanu, Stephen Kobourov, Melanie Hingle, and Dane Bell
IEEE BigData, 2014.
long version [arXiv]
- Maps of Computer Science
Daniel Fried and Stephen Kobourov
- Predicting Parallelization of Sequential Programs Using Supervised Learning
Daniel Fried, Zhen Li, Ali Jannesari and Felix Wolf
IEEE ICMLA, 2013.
- A Generative Probabilistic Framework for Learning Spatial Language
Colin R. Dawson, Jeremy Wright, Antons Rebguns, Marco Valenzuela Escárcega, Daniel Fried, and Paul R. Cohen
IEEE ICDL-EpiRob, 2013. Best Paper Award
- Bayesian Geometric Modeling of Indoor Scenes
Luca Del Pero, Joshua Bowdish, Daniel Fried, Bonnie Kermgard, Emily Hartley, and Kobus Barnard
- Co-instructor, Introduction to Artificial Intelligence (CS 188).
UC Berkeley, Summer 2018.
- Teaching assistant, Introduction to Artificial Intelligence (CS 188).
UC Berkeley, Fall 2017.
- Teaching assistant, Introduction to Discrete Structures (CS 245).
University of Arizona, Spring 2012.
- Teaching assistant, Great Ideas of the Information Age (ISTA 100).
University of Arizona, Fall 2011.