VideoLightFormer: Lightweight action recognition using transformers

Abstract

Efficient video action recognition remains a challenging problem. One large model after another replaces the state of the art on the Kinetics dataset, but real-world efficiency evaluations are often lacking. In this work, we fill this gap and investigate the use of transformers for efficient action recognition. We propose VideoLightFormer, a novel, lightweight action recognition architecture. In a factorized fashion, we carefully extend the 2D convolutional Temporal Segment Network with transformers while maintaining spatial and temporal video structure throughout the entire model. Existing methods often resort to one of two extremes, either applying huge transformers to video features or minimal transformers to heavily pooled video features. Our method differs by keeping the transformers small while leveraging the full spatiotemporal feature structure. We evaluate VideoLightFormer in a high-efficiency setting on the temporally demanding EPIC-KITCHENS-100 and Something-Something-V2 (SSV2) datasets and find that it achieves a better mix of efficiency and accuracy than existing state-of-the-art models, except for the Temporal Shift Module on SSV2.
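To make the factorized design concrete, below is a minimal PyTorch sketch of the idea, not the authors' implementation: a 2D CNN backbone (a torchvision ResNet-18 stands in for the TSN backbone) extracts per-frame feature maps, one small transformer attends spatially within each frame, and a second small transformer attends temporally across frames, so the full spatiotemporal feature grid is preserved until the final pooling. All class and parameter names here are hypothetical, and positional embeddings are omitted for brevity.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18


def _small_encoder(dim: int, heads: int, layers: int) -> nn.TransformerEncoder:
    """A deliberately small transformer encoder, in line with the paper's goal."""
    layer = nn.TransformerEncoderLayer(
        d_model=dim, nhead=heads, dim_feedforward=dim * 2, batch_first=True
    )
    return nn.TransformerEncoder(layer, num_layers=layers)


class VideoLightFormerSketch(nn.Module):
    """Hypothetical sketch: per-frame 2D CNN features plus factorized
    spatial/temporal transformers that keep the full (T, h, w) grid intact."""

    def __init__(self, num_classes: int, dim: int = 256,
                 heads: int = 4, layers: int = 2):
        super().__init__()
        backbone = resnet18(weights=None)  # stand-in for the TSN 2D backbone
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # keep spatial maps
        self.proj = nn.Conv2d(512, dim, kernel_size=1)  # channel projection
        self.spatial_tf = _small_encoder(dim, heads, layers)   # over h*w tokens
        self.temporal_tf = _small_encoder(dim, heads, layers)  # over T tokens
        self.head = nn.Linear(dim, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (B, T, 3, H, W) -- T frames sampled TSN-style from the clip
        b, t = video.shape[:2]
        x = self.proj(self.cnn(video.flatten(0, 1)))   # (B*T, D, h, w)
        d, h, w = x.shape[1:]
        x = x.flatten(2).transpose(1, 2)               # (B*T, h*w, D)
        x = self.spatial_tf(x)                         # spatial attention per frame
        x = x.reshape(b, t, h * w, d).transpose(1, 2)  # (B, h*w, T, D)
        x = self.temporal_tf(x.flatten(0, 1))          # temporal attention per location
        x = x.reshape(b, h * w, t, d).mean(dim=(1, 2)) # pool the full grid only now
        return self.head(x)


# Toy forward pass: 8 segments per clip; 97 classes matches the
# EPIC-KITCHENS-100 verb vocabulary.
model = VideoLightFormerSketch(num_classes=97)
logits = model(torch.randn(2, 8, 3, 224, 224))  # -> (2, 97)
```

Note the design point the sketch illustrates: the transformers themselves stay small (few layers, small width), and no spatial or temporal pooling happens before attention, so both attention stages see the complete feature structure.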

Publication
arXiv preprint arXiv:2107.00451
Raivo Koot
BSc Student (now an MLOps Engineer at Apple)
Haiping Lu
Director of the UK Open Multimodal AI Network, Professor of Machine Learning, and Head of AI Research Engineering
