Finding Action Tubes

April 2015

Abstract

We address the problem of action detection in videos. Driven by the latest progress in object detection from 2D images, we build action models using rich feature hierarchies derived from shape and kinematic cues. We incorporate appearance and motion in two ways. First, starting from image region proposals, we select those that are motion salient and thus more likely to contain the action. This leads to a significant reduction in the number of regions being processed and allows for faster computation. Second, we extract spatio-temporal feature representations to build strong classifiers using Convolutional Neural Networks. We link our predictions to produce detections that are consistent in time, which we call action tubes. We show that our approach outperforms other techniques on the task of action detection.

paper
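
The tube-linking step described in the abstract lends itself to a Viterbi-style dynamic program. Below is a minimal Python sketch, not the released implementation: it assumes per-frame detection boxes with action scores and an illustrative overlap weight lam, and links one box per frame into a tube by maximizing the summed scores plus the overlap between consecutive frames.

import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def link_action_tube(boxes, scores, lam=1.0):
    """Link per-frame detections into one action tube.

    boxes:  list over frames; boxes[t] is an (N_t, 4) array of detections.
    scores: list over frames; scores[t] is an (N_t,) array of action scores.
    Maximizes the sum of box scores plus lam * IoU between consecutive
    boxes with a Viterbi-style dynamic program; returns one index per frame.
    """
    T = len(boxes)
    best = scores[0].astype(float)  # best path score ending at each box
    back = []                       # backpointers, one array per frame
    for t in range(1, T):
        n_prev, n_cur = len(boxes[t - 1]), len(boxes[t])
        pair = np.zeros((n_prev, n_cur))
        for i in range(n_prev):
            for j in range(n_cur):
                pair[i, j] = (best[i] + scores[t][j]
                              + lam * iou(boxes[t - 1][i], boxes[t][j]))
        back.append(pair.argmax(axis=0))
        best = pair.max(axis=0)
    # Trace the highest-scoring path backwards through the backpointers.
    path = [int(best.argmax())]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return path[::-1]

The pass is linear in the number of frames, with one pairwise term per pair of consecutive frames.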


Models

Reference and pretrained models: models (2.8 GB)

Reference models
The reference models were used to initialize the networks in Finding Action Tubes. For the spatial-CNN, we use a model trained on the VOC 2012 train set for the task of detection. For the motion-CNN, we use a model trained on optical flow from UCF101 (split 1). This model achieves 72.2% accuracy on the split 1 test set.
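
To feed your own videos to a flow-based network, dense optical flow must first be encoded as an image. The sketch below is illustrative only: the flow method (OpenCV's Farneback) and the 3-channel encoding of x-flow, y-flow, and magnitude, along with the scaling constants, are assumptions; consult the paper for the exact preprocessing used in training.

import cv2
import numpy as np

def flow_image(prev_gray, cur_gray):
    """Encode dense optical flow between two grayscale frames as a
    3-channel 8-bit image: x-flow, y-flow, and flow magnitude."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)
    # Center the channels at 128 and clip into [0, 255]; the scale factor
    # of 16 is an arbitrary illustrative choice.
    img = np.stack([flow[..., 0], flow[..., 1], mag], axis=-1)
    return np.clip(img * 16 + 128, 0, 255).astype(np.uint8)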
Pretrained models
Pretrained models for the spatial- and motion-CNN are provided for J-HMDB and UCF Sports.
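
A minimal pycaffe sketch for loading one of the provided models is shown below; the prototxt and caffemodel file names are placeholders, so substitute the files from the downloaded archive.

import caffe

caffe.set_mode_gpu()  # or caffe.set_mode_cpu()

# Placeholder file names; use the deploy prototxt and weights from the
# archive for the dataset and stream (spatial or motion) you need.
net = caffe.Net('deploy.prototxt', 'spatial_jhmdb.caffemodel', caffe.TEST)

# List blob shapes to confirm the network loaded as expected.
for name, blob in net.blobs.items():
    print(name, blob.data.shape)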


Code

Before using the available source code, you need to install Caffe.

Action Tubes github repo: ActionTubes_github

The README contains detailed setup and usage instructions; please read it before using the source code.


UCF Sports Benchmark

UCF Sports evaluation of Action Tubes and other approaches: UCF Sports


How to cite

When citing our system, please cite the work below. The BibTeX entry is provided for your convenience.

@inproceedings{actiontubes,
  Author = {G. Gkioxari and J. Malik},
  Title = {Finding Action Tubes},
  Booktitle = {CVPR},
  Year = {2015}}


Contact

For any questions regarding the work or the implementation, contact the author at gkioxari@eecs.berkeley.edu.