Channel-temporal attention for first-person video domain adaptation

Abstract

Unsupervised Domain Adaptation (UDA) can transfer knowledge from labeled source data to unlabeled target data of the same categories. However, UDA for first-person action recognition is an under-explored problem, with a lack of datasets and limited consideration of first-person video characteristics. This paper focuses on addressing this problem. Firstly, we propose two small-scale first-person video domain adaptation datasets: ADL_small and GTEA-KITCHEN. Secondly, we introduce channel-temporal attention blocks to capture the channel-wise and temporal-wise relationships and model their inter-dependencies, which are important to first-person vision. Finally, we propose a Channel-Temporal Attention Network (CTAN) to integrate these blocks into existing architectures. CTAN outperforms baselines on the two proposed datasets and one existing dataset, EPIC_cvpr20.
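
The abstract describes attention blocks that re-weight video features along the channel and temporal dimensions. As a rough illustration only, the sketch below shows a generic SE-style channel and temporal gating module in PyTorch; the class name, reduction ratio, and gating design are assumptions for illustration and are not the authors' CTAN implementation.

```python
# Hypothetical sketch of channel-temporal attention for video features.
# NOT the paper's CTAN; it only illustrates re-weighting along channels and time.
import torch
import torch.nn as nn


class ChannelTemporalAttention(nn.Module):
    """Re-weights a (N, C, T, H, W) video feature map along channels and time."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel gate: squeeze T, H, W -> bottleneck MLP -> sigmoid weights per channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),                       # (N, C, 1, 1, 1)
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Temporal gate: 1D conv over the frame axis -> sigmoid weight per frame.
        self.temporal_gate = nn.Sequential(
            nn.Conv1d(1, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, t, h, w = x.shape
        # Channel-wise attention: scale each channel by its learned weight.
        x = x * self.channel_gate(x)
        # Temporal-wise attention: average over channels and space for a per-frame descriptor.
        frame_desc = x.mean(dim=(1, 3, 4)).unsqueeze(1)    # (N, 1, T)
        temporal_weights = self.temporal_gate(frame_desc)  # (N, 1, T)
        return x * temporal_weights.view(n, 1, t, 1, 1)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 8, 7, 7)                   # batch of video feature maps
    out = ChannelTemporalAttention(channels=64)(feats)
    print(out.shape)                                       # torch.Size([2, 64, 8, 7, 7])
```

Such a block is lightweight and could, in principle, be inserted after convolutional stages of an existing video backbone, which is the kind of integration the abstract attributes to CTAN.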

Publication
arXiv preprint arXiv:2108.07846
Xianyuan Liu
Visiting PhD Student
Shuo Zhou
Academic Fellow at University of Sheffield (past PhD Student)
Haiping Lu
Professor of Machine Learning, Head of AI Research Engineering, and Turing Academic Lead