Uncorrelated multilinear principal component analysis for unsupervised multilinear subspace learning

Abstract

This paper proposes an uncorrelated multilinear principal component analysis (UMPCA) algorithm for unsupervised subspace learning of tensorial data. UMPCA is a multilinear extension of the classical principal component analysis (PCA) framework. Through successive variance maximization, it seeks a tensor-to-vector projection (TVP) that captures most of the variation in the original tensorial input while producing uncorrelated features. The solution consists of sequential iterative steps based on the alternating projection method. In addition to deriving the UMPCA framework, this work systematically determines the maximum number of uncorrelated multilinear features that the method can extract. UMPCA is compared against the baseline PCA solution and five state-of-the-art multilinear PCA extensions, namely two-dimensional PCA (2DPCA), concurrent subspaces analysis (CSA), tensor rank-one decomposition (TROD), generalized PCA (GPCA), and multilinear PCA (MPCA), on unsupervised face and gait recognition tasks. Experimental results suggest that UMPCA is particularly effective in determining the low-dimensional projection space needed in such recognition tasks.
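To make the combination of a tensor-to-vector projection with successive variance maximization concrete, the following NumPy sketch extracts one scalar feature at a time from centered tensor samples, alternating over the modes of an elementary multilinear projection. It is only an illustration under simplifying assumptions, not the paper's solver: the zero-correlation requirement is handled here by deflating against previously extracted score vectors rather than by the constrained eigen-decomposition derived in the paper, and the function and parameter names (umpca_sketch, num_feats, n_iters) are hypothetical.

import numpy as np

def umpca_sketch(X, num_feats=5, n_iters=10, seed=0):
    """X: (M, I1, ..., IN) array of M tensor samples.
    Returns per-feature mode vectors (the EMPs) and an (M, num_feats) score matrix."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)                 # center so zero correlation == orthogonal score vectors
    M, mode_dims = X.shape[0], X.shape[1:]
    N = len(mode_dims)

    def partial_proj(u, skip):
        """Contract every mode except `skip` with the current mode vectors."""
        Y = X
        for m in range(N - 1, -1, -1):     # contract high modes first so lower axes keep their index
            if m != skip:
                Y = np.tensordot(Y, u[m], axes=([m + 1], [0]))
        return Y                           # shape (M, I_skip)

    emps, scores = [], []
    Q = np.zeros((M, 0))                   # orthonormal basis of previously extracted score vectors

    for p in range(num_feats):
        u = [rng.standard_normal(d) for d in mode_dims]
        u = [v / np.linalg.norm(v) for v in u]

        for _ in range(n_iters):           # alternating projection over the N modes
            for n in range(N):
                Y = partial_proj(u, skip=n)        # (M, I_n) partial projections
                Yt = Y - Q @ (Q.T @ Y)             # deflate: remove directions of earlier scores
                S = Yt.T @ Yt                      # (I_n, I_n) scatter of the deflated projections
                _, V = np.linalg.eigh(S)
                u[n] = V[:, -1]                    # leading eigenvector -> maximum variance

        g = partial_proj(u, skip=N - 1) @ u[N - 1] # scalar feature per sample, shape (M,)
        g = g - Q @ (Q.T @ g)                      # keep training scores exactly uncorrelated
        emps.append(u)
        scores.append(g)
        if np.linalg.norm(g) > 1e-10:
            Q = np.hstack([Q, (g / np.linalg.norm(g))[:, None]])

    return emps, np.column_stack(scores)

# Example: 100 random 20x20 "images" (second-order tensors), 5 uncorrelated features
if __name__ == "__main__":
    X = np.random.default_rng(1).standard_normal((100, 20, 20))
    emps, G = umpca_sketch(X, num_feats=5)
    print(np.round(G.T @ G, 3))            # off-diagonal entries ~0: features are uncorrelated

In this sketch, requesting more features than the data can support simply yields near-zero scores, consistent with the paper's point that only a limited number of uncorrelated multilinear features can be extracted through a TVP.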

Publication
IEEE Transactions on Neural Networks
Haiping Lu
Professor of Machine Learning, Head of AI Research Engineering, and Turing Academic Lead

I am a Professor of Machine Learning. I develop translational AI technologies to better analyse multimodal data in healthcare and beyond.