Object Level Visual Reasoning in Videos

Abstract: Human activity recognition is typically addressed by detecting key concepts such as global and local motion, features related to the object classes present in the scene, and features describing the global context. The next open challenges in activity recognition require a level of understanding that goes beyond this and call for models capable of fine distinctions and a detailed comprehension of the interactions between actors and objects in a scene. We propose a model that learns to reason about semantically meaningful spatio-temporal interactions in videos. The key to our approach is performing this reasoning at the object level by integrating state-of-the-art object detection networks. This allows the model to learn detailed spatial interactions that exist at a semantic, object-interaction-relevant level. We evaluate our method on three standard datasets (Twenty-BN Something-Something, VLOG and EPIC Kitchens) and achieve state-of-the-art results on all of them. Finally, we show visualizations of the interactions learned by the model, which illustrate object classes and their interactions corresponding to different activity classes.
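To make the idea of object-level interaction reasoning concrete, the following is a minimal, hypothetical sketch (not the paper's actual architecture): per-object feature vectors, such as those pooled from an object detector's region head, are combined into all ordered pairs and aggregated into a single interaction descriptor for a frame. The shapes, the random linear map `w`, and the helper names are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame object features: N detected objects, each a D-dim
# appearance vector (e.g. pooled from a detection network's region head).
N, D = 4, 8
objects = rng.standard_normal((N, D))

def pairwise_interactions(feats):
    """Concatenate features of every ordered object pair (i != j),
    a simple stand-in for object-level interaction reasoning."""
    n = feats.shape[0]
    pairs = [np.concatenate([feats[i], feats[j]])
             for i in range(n) for j in range(n) if i != j]
    return np.stack(pairs)  # shape: (n * (n - 1), 2 * D)

def aggregate(pair_feats, w):
    """Map each pair through an (untrained) linear layer + ReLU and
    sum-pool over pairs, yielding one interaction descriptor."""
    return np.maximum(pair_feats @ w, 0.0).sum(axis=0)

w = rng.standard_normal((2 * D, 16))  # illustrative random weights
descriptor = aggregate(pairwise_interactions(objects), w)
print(descriptor.shape)  # → (16,)
```

In a trained model the pairwise map would be learned and the descriptor fed to a temporal model over frames; this sketch only shows why detecting objects first lets interactions be expressed over a small, semantically meaningful set of entities rather than raw pixels.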
Document type: Conference papers

Cited literature: 41 references

https://hal.inria.fr/hal-01828872
Contributor: Christian Wolf
Submitted on: Thursday, September 6, 2018 - 10:51:54 AM
Last modification on: Tuesday, July 2, 2019 - 4:02:04 PM
Long-term archiving on: Friday, December 7, 2018 - 3:01:25 PM

File: eccv2018.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-01828872, version 1

Citation

Fabien Baradel, Natalia Neverova, Christian Wolf, Julien Mille, Greg Mori. Object Level Visual Reasoning in Videos. ECCV 2018 - European Conference on Computer Vision, Sep 2018, Munich, Germany. pp. 1-17. ⟨hal-01828872⟩
