Projects


Localizing Moments in Video with Natural Language

ICCV 2017

Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, Bryan Russell

[Project Page] [PDF] [code]





We consider retrieving a specific temporal segment, or moment, from a video given a natural language text description. Methods designed to retrieve whole video clips with natural language determine what occurs in a video but not when. We propose the Moment Context Network (MCN), which localizes natural language queries in video by integrating local and global video features over time. A key obstacle to training our MCN model is that current video datasets do not include pairs of localized video segments and referring expressions, or text descriptions which uniquely identify a corresponding moment. Therefore, we collect the Distinct Describable Moments (DiDeMo) dataset, which consists of over 10,000 unedited, personal videos in diverse visual settings with pairs of localized video segments and referring expressions.
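As a rough sketch of this kind of moment retrieval (illustrative only, not the released MCN code; the feature dimensions and module names below are assumptions), candidate segments and the query sentence can be embedded into a shared space and the segments ranked by distance to the query:

```python
# Illustrative moment-retrieval sketch (PyTorch), not the released MCN code.
import torch
import torch.nn as nn

class MomentTextEmbedding(nn.Module):
    """Maps segment visual features and sentence features into a shared space."""
    def __init__(self, visual_dim=500, text_dim=300, embed_dim=100):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)

    def score(self, segment_feats, query_feat):
        # Lower distance = better match between a candidate moment and the query.
        v = self.visual_proj(segment_feats)          # (num_segments, embed_dim)
        t = self.text_proj(query_feat).unsqueeze(0)  # (1, embed_dim)
        return torch.norm(v - t, dim=1)              # (num_segments,)

model = MomentTextEmbedding()
segments = torch.randn(21, 500)   # all candidate moments in one video (toy features)
query = torch.randn(300)          # pooled sentence representation (toy features)
best_moment = torch.argmin(model.score(segments, query)).item()
print("retrieved segment index:", best_moment)
```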

Captioning Images with Diverse Objects

Oral at CVPR 2017

Subhashini Venugopalan, Lisa Anne Hendricks, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Trevor Darrell

[Project Page] [PDF] [code]







Recent captioning models are limited in their ability to scale and describe concepts unseen in paired image-text corpora. We propose the Novel Object Captioner (NOC), a deep visual semantic captioning model that can describe a large number of object categories not present in existing image-caption datasets. Our model takes advantage of external sources -- labeled images from object recognition datasets, and semantic knowledge extracted from unannotated text. We propose minimizing a joint objective which can learn from these diverse data sources and leverage distributional semantic embeddings, enabling the model to generalize and describe novel objects outside of image-caption datasets.
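A minimal sketch of what a joint objective over these three data sources could look like (illustrative PyTorch, not the NOC release; the shared-embedding layout, loss choices, and all dimensions are assumptions): each source contributes its own loss term, and because the word embedding and output layer are shared, labeled images and unannotated text also shape the caption decoder.

```python
# Illustrative joint objective over heterogeneous sources (a sketch, not the NOC code).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, IMG, HID = 1000, 128, 512, 128

word_embed = nn.Embedding(VOCAB, EMB)      # shared by caption decoder and text LM
word_out = nn.Linear(HID, VOCAB)           # shared softmax over words
caption_rnn = nn.LSTM(EMB + IMG, HID, batch_first=True)
text_rnn = nn.LSTM(EMB, HID, batch_first=True)
img_head = nn.Linear(IMG, VOCAB)           # image labels live in the word vocabulary

def caption_loss(img_feat, words):         # paired image-caption data
    emb = word_embed(words[:, :-1])
    img = img_feat.unsqueeze(1).expand(-1, emb.size(1), -1)
    h, _ = caption_rnn(torch.cat([emb, img], dim=-1))
    return F.cross_entropy(word_out(h).flatten(0, 1), words[:, 1:].flatten())

def text_loss(words):                      # unannotated text corpora
    h, _ = text_rnn(word_embed(words[:, :-1]))
    return F.cross_entropy(word_out(h).flatten(0, 1), words[:, 1:].flatten())

def image_loss(img_feat, labels):          # labeled images without captions
    return F.binary_cross_entropy_with_logits(img_head(img_feat), labels)

# One joint step: every source sends a gradient into the shared parameters.
imgs, caps = torch.randn(4, IMG), torch.randint(0, VOCAB, (4, 12))
sents = torch.randint(0, VOCAB, (8, 15))
labels = torch.zeros(4, VOCAB); labels[:, :5] = 1.0
loss = caption_loss(imgs, caps) + text_loss(sents) + image_loss(imgs, labels)
loss.backward()
print(float(loss))
```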

Generating Visual Explanations

ECCV 2016

Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, Trevor Darrell

[Project Page] [PDF] [code]





Clearly explaining a rationale for a classification decision to an end-user can be as important as the decision itself. We propose a new model that focuses on the discriminating properties of the visible object, jointly predicts a class label, and explains why the predicted label is appropriate for the image. We propose a novel loss function based on sampling and reinforcement learning that learns to generate sentences that realize a global sentence property, such as class specificity.
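A minimal sketch of the sampling-based idea (illustrative, not the paper's implementation): sample a sentence from the decoder, score it with a sentence-level reward such as a class-specificity score, and scale the sampled words' log-probabilities by that reward in the REINFORCE style. The reward value, vocabulary size, and sentence length below are placeholders.

```python
# Illustrative REINFORCE-style sentence loss (a sketch, not the paper's released code).
import torch
import torch.nn.functional as F

def discriminative_loss(word_logits, sentence_reward):
    """
    word_logits: (T, vocab) decoder logits for one generated explanation.
    sentence_reward: scalar reward for the whole sampled sentence, e.g. a
        sentence classifier's probability of the image's true class (higher
        means more class-specific). The reward is treated as a constant.
    """
    probs = F.softmax(word_logits, dim=-1)
    sampled = torch.multinomial(probs, 1).squeeze(1)   # sample one word per step
    log_p = F.log_softmax(word_logits, dim=-1)[torch.arange(len(sampled)), sampled]
    return -(sentence_reward * log_p.sum())            # minimize negative expected reward

logits = torch.randn(8, 1000, requires_grad=True)   # toy decoder outputs, 8 words
reward = 0.9                                         # stand-in class-specificity score
loss = discriminative_loss(logits, reward)
loss.backward()
print(float(loss))
```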

Deep Compositional Captioning: Describing Novel Object Categories without Paired Training Data

Oral at CVPR 2016

Lisa Anne Hendricks, Subhashini Venugopalan, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Trevor Darrell

[Project Page] [PDF] [code]





While recent deep neural network models have achieved promising results on the image captioning task, they rely largely on the availability of corpora with paired image and sentence captions to describe objects in context. In this work, we propose the Deep Compositional Captioner (DCC) to address the task of generating descriptions of novel objects which are not present in paired image-sentence datasets. Our method achieves this by leveraging large object recognition datasets and external text corpora and by transferring knowledge between semantically similar concepts.
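A toy sketch of this kind of knowledge transfer (illustrative, not the DCC release; the word vectors, vocabulary, and weight shapes are made up): a novel word borrows the caption model's output-layer weights from its closest known word in a distributional embedding space, so the decoder can emit a word it never saw in paired image-sentence data.

```python
# Illustrative weight-transfer sketch for a novel word (not the DCC release).
import numpy as np

def transfer_weights(output_weights, word_vectors, vocab, novel_word, known_words):
    """Copy the output-layer row of the most similar known word (cosine similarity
    of distributional word vectors) into the novel word's row."""
    novel_vec = word_vectors[novel_word]
    sims = {w: np.dot(novel_vec, word_vectors[w]) /
               (np.linalg.norm(novel_vec) * np.linalg.norm(word_vectors[w]))
            for w in known_words}
    closest = max(sims, key=sims.get)
    output_weights[vocab[novel_word]] = output_weights[vocab[closest]]
    return closest

# Toy example: "otter" borrows the weights learned for whichever known word is closest.
vocab = {"dog": 0, "cat": 1, "otter": 2}
rng = np.random.default_rng(0)
word_vectors = {w: rng.standard_normal(50) for w in vocab}
W = rng.standard_normal((3, 512))              # output layer: vocab x hidden
print(transfer_weights(W, word_vectors, vocab, "otter", ["dog", "cat"]))
```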


Deep Learning for Tactile Understanding From Visual and Haptic Data

ICRA 2016

Yang Gao, Lisa Anne Hendricks, Katherine J. Kuchenbecker, Trevor Darrell

[Project Page] [PDF]




Robots that need to interact with the physical world will benefit from a fine-grained tactile understanding of objects and surfaces. Additionally, for certain tasks, robots may need to know the haptic properties of an object before touching it. To enable better tactile understanding for robots, we propose a method of classifying surfaces with haptic adjectives (e.g., compressible or smooth) from both visual and physical interaction data.
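A minimal sketch of a visual-haptic fusion classifier along these lines (illustrative only; the feature dimensions, network sizes, and adjective count are assumptions rather than the paper's setup): each modality is encoded separately, the encodings are concatenated, and multiple adjectives are predicted independently.

```python
# Illustrative multimodal fusion classifier for haptic adjectives (a sketch, not the paper's code).
import torch
import torch.nn as nn

class VisualHapticNet(nn.Module):
    """Fuses a visual feature and a haptic (physical interaction) feature, then
    predicts several adjectives (e.g. 'compressible', 'smooth') as a multi-label output."""
    def __init__(self, visual_dim=4096, haptic_dim=160, num_adjectives=24):
        super().__init__()
        self.visual = nn.Sequential(nn.Linear(visual_dim, 256), nn.ReLU())
        self.haptic = nn.Sequential(nn.Linear(haptic_dim, 256), nn.ReLU())
        self.classifier = nn.Linear(512, num_adjectives)   # multi-label logits

    def forward(self, visual_feat, haptic_feat):
        fused = torch.cat([self.visual(visual_feat), self.haptic(haptic_feat)], dim=1)
        return self.classifier(fused)

net = VisualHapticNet()
logits = net(torch.randn(2, 4096), torch.randn(2, 160))
probs = torch.sigmoid(logits)          # independent probability per adjective
print(probs.shape)                     # torch.Size([2, 24])
```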


Long-term Recurrent Convolutional Networks for Visual Recognition and Description

Oral at CVPR 2015

Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell

[Project Page] [PDF (CVPR)] [PDF (TPAMI)]



We present the long-term recurrent convolutional network (LRCN) which combines convolutional neural networks with recurrent neural networks. We instantiate our model for three different vision applications: activity recognition, image description, and video description.
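A minimal LRCN-style sketch for the activity recognition instantiation (illustrative PyTorch; the original work used a different visual encoder and framework, and ResNet-18 here is only a stand-in): per-frame CNN features are fed to an LSTM, and the final hidden state predicts the activity.

```python
# Minimal LRCN-style sketch (CNN per frame feeding an LSTM), not the released code.
import torch
import torch.nn as nn
import torchvision.models as models

class LRCNActivity(nn.Module):
    """Per-frame CNN features go through an LSTM; the last state classifies the clip."""
    def __init__(self, num_classes=101, hidden=256):
        super().__init__()
        cnn = models.resnet18(weights=None)        # stand-in visual encoder
        cnn.fc = nn.Identity()                     # keep 512-d frame features
        self.cnn = cnn
        self.rnn = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frames):                     # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])               # classify from the last time step

model = LRCNActivity()
logits = model(torch.randn(2, 8, 3, 224, 224))     # 2 clips of 8 frames each
print(logits.shape)                                # torch.Size([2, 101])
```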