Deep Compositional Captioning:
Describing Novel Object Categories without Paired Training Data

Authors: Lisa Anne Hendricks, Subhashini Venugopalan, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Trevor Darrell

Oral at CVPR 2016 [PDF]



Abstract: While recent deep neural network models have achieved promising results on the image captioning task, they rely largely on the availability of corpora with paired image and sentence captions to describe objects in context. In this work, we propose the Deep Compositional Captioner (DCC) to address the task of generating descriptions of novel objects which are not present in paired image-sentence datasets. Our method achieves this by leveraging large object recognition datasets and external text corpora and by transferring knowledge between semantically similar concepts. Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet. In contrast, our model can compose sentences that describe novel objects and their interactions with other objects. We demonstrate our model's ability to describe novel concepts by empirically evaluating its performance on MSCOCO and show qualitative results on ImageNet images of objects for which no paired image-caption data exist. Further, we extend our approach to generate descriptions of objects in video clips. Our results show that DCC has distinct advantages over existing image and video captioning approaches for generating descriptions of new objects in context.
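The core transfer idea mentioned above (describing a novel word by borrowing from a semantically similar word that does appear in paired data) can be pictured with a small numpy sketch. This is an illustration only, not the released code: the weight shapes, the `vocab` index mapping, and the `embed` lookup (e.g. word2vec vectors) are assumptions made for the sketch.

```python
import numpy as np

def transfer_weights(W_out, b_out, vocab, seen_words, novel_word, embed):
    """Copy caption-model output weights for `novel_word` from its closest seen word.

    W_out: (vocab_size, hidden_dim) output projection of the caption model.
    b_out: (vocab_size,) output bias.
    vocab: dict mapping words to row indices in W_out / b_out.
    embed: dict mapping words to dense vectors used to measure semantic similarity.
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    v = embed[novel_word]
    # pick the semantically closest word that was seen in paired image-sentence data
    nearest = max((w for w in seen_words if w in embed),
                  key=lambda w: cos(embed[w], v))
    # copy that word's output weights and bias to the novel word's slot
    W_out[vocab[novel_word]] = W_out[vocab[nearest]]
    b_out[vocab[novel_word]] = b_out[vocab[nearest]]
    return nearest
```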

Examples:
We empirically evaluate our model by holding out eight object classes from the MSCOCO dataset. We then show that DCC can describe objects in ImageNet for which no paired image-sentence data exist. Below we show example descriptions for ImageNet objects.



We further demonstrate the efficacy of DCC by describing novel objects in video.



Code: For the initial code release, please see my GitHub: https://github.com/LisaAnne/DCC. Please email me if you have any questions! I have gotten a few questions about training the model, so I want to clarify that in Table 3 of DCC, the model does not see any MSCOCO images of the eight held-out objects and does not see any MSCOCO sentences that mention any of the eight held-out objects. At test time, I use MSCOCO images to observe the domain shift caused by training with out-of-domain images and out-of-domain sentences. I argue that this is important so we can get an idea of how DCC will perform in a real-world setting where it does not have access to in-domain text or images. A minimal sketch of the caption filtering is shown below.
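For reference, the held-out training setup amounts to a simple filter over the MSCOCO caption annotations. The sketch below is illustrative rather than taken from the release: the held-out word list should be checked against the repo, and the annotation path and tokenization are placeholders.

```python
import json

# Eight held-out MSCOCO objects from the paper (double-check against the repo).
HELD_OUT = {"bottle", "bus", "couch", "microwave",
            "pizza", "racket", "suitcase", "zebra"}

def keep_caption(caption):
    """Return True if the caption mentions none of the held-out objects."""
    tokens = set(caption.lower().replace(".", " ").replace(",", " ").split())
    return tokens.isdisjoint(HELD_OUT)

# Placeholder path to the standard MSCOCO caption annotations.
with open("annotations/captions_train2014.json") as f:
    coco = json.load(f)

kept = [ann for ann in coco["annotations"] if keep_caption(ann["caption"])]
print("kept %d of %d training captions" % (len(kept), len(coco["annotations"])))
```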

Reference: If you find this useful in your work please consider citing:

@inproceedings{hendricks16cvpr,
    title = {Deep Compositional Captioning: Describing Novel Object Categories without Paired Training Data},
    author = {Hendricks, Lisa Anne and Venugopalan, Subhashini and Rohrbach, Marcus and Mooney, Raymond and Saenko, Kate and Darrell, Trevor},
    booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2016}
}