Learning Robust and Sparse Principal Components with the α-Divergence

Published in IEEE Transactions on Image Processing, 2024

Paper | Code

In this paper, novel robust principal component analysis (RPCA) methods are proposed that exploit the local structure of datasets. The methods are derived by minimizing the α-divergence between the sample distribution and a Gaussian density model. The α-divergence is applied within different frameworks to obtain variants of RPCA, including orthogonal, non-orthogonal, and sparse methods. We show that classical PCA is a special case of the proposed methods in which the α-divergence reduces to the Kullback-Leibler (KL) divergence. Simulations show that the proposed approaches recover the underlying principal components (PCs) by down-weighting structured and unstructured outliers. Using simulated data, we further show that the methods can be applied to fMRI signal recovery and foreground-background (FB) separation in video analysis. Results on real-world FB separation and image reconstruction problems are also provided.
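For reference, one common parametrization of the α-divergence (Amari's convention; the paper may use a different one) between densities $p$ and $q$, together with its KL limit mentioned in the abstract, is

$$
D_\alpha(p \,\|\, q) \;=\; \frac{1}{\alpha(1-\alpha)}\left(1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx\right),
\qquad
\lim_{\alpha \to 1} D_\alpha(p \,\|\, q) \;=\; \mathrm{KL}(p \,\|\, q),
$$

which is consistent with classical (KL-based) PCA emerging as the limiting special case.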
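The down-weighting behavior described above can be illustrated with a minimal sketch. The following is a hypothetical iteratively reweighted scheme written for this page, not the paper's algorithm: each sample is weighted by its fit to the current Gaussian model (weights roughly proportional to the model density raised to the power $1-\alpha$), so outliers contribute exponentially little to the covariance whose leading eigenvectors give the PCs; α → 1 yields uniform weights and hence classical PCA. The function name `alpha_weighted_pca` and all parameter choices are assumptions for illustration.

```python
import numpy as np


def alpha_weighted_pca(X, n_components=2, alpha=0.5, n_iter=20):
    """Illustrative robust PCA via Gaussian-density down-weighting.

    Hypothetical sketch (not the paper's method): samples that fit the
    current Gaussian model poorly receive small weights, so outliers
    have little influence on the recovered PCs. alpha -> 1 makes the
    weights uniform and recovers classical PCA.
    """
    n, _ = X.shape
    w = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        # Weighted mean and covariance under the current sample weights.
        mu = w @ X / w.sum()
        Xc = X - mu
        cov = (Xc * w[:, None]).T @ Xc / w.sum()
        # Squared Mahalanobis distance of each sample to the Gaussian fit.
        maha = np.einsum('ij,jk,ik->i', Xc, np.linalg.pinv(cov), Xc)
        # Density-based weights ~ N(x; mu, cov)^(1 - alpha): poorly
        # fitting samples (outliers) get exponentially small weight.
        w = np.exp(-0.5 * (1.0 - alpha) * maha)
        w /= w.sum()
    # Leading eigenvectors of the robust covariance are the estimated PCs.
    _, evecs = np.linalg.eigh(cov)
    return evecs[:, ::-1][:, :n_components], mu


# Toy usage: low-rank-ish data with a few gross outliers injected.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) * np.array([3.0, 2.0, 1.0, 0.5, 0.5])
X[:10] += 10.0 * rng.normal(size=(10, 5))
pcs, center = alpha_weighted_pca(X, n_components=2, alpha=0.5)
print(pcs.shape)  # (5, 2)
```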