Eigenvectors from eigenvalues sparse principal component analysis
Speaker: Rob Frost (Geisel School of Medicine at Dartmouth)
Date: 2/14/23
Abstract: We present a novel technique for sparse principal component analysis. This method, named eigenvectors from eigenvalues sparse principal component analysis (EESPCA), is based on the formula for computing the squared eigenvector loadings of a Hermitian matrix from the eigenvalues of the full matrix and its associated sub-matrices. We explore two versions of the EESPCA method: one that uses a fixed threshold to induce sparsity and one that selects the threshold via cross-validation. Relative to the state-of-the-art sparse PCA methods of Witten et al., Yuan and Zhang, and Tan et al., the fixed-threshold EESPCA technique offers an order-of-magnitude improvement in computational speed, does not require tuning parameters to be estimated via cross-validation, and more accurately identifies true zero principal component loadings across a range of data matrix sizes and covariance structures. Importantly, EESPCA achieves these benefits while keeping out-of-sample reconstruction error and PC estimation error close to the lowest error achieved by any of the evaluated approaches. EESPCA is thus a practical and effective technique for sparse PCA, with particular relevance to computationally demanding problems such as the analysis of high-dimensional datasets or the use of resampling techniques that require repeated computation of sparse PCs. Paper link: https://doi.org/10.1080/10618600.2021.1987254.
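For reference, the formula the abstract alludes to is the eigenvector-eigenvalue identity for Hermitian matrices: if A is an n x n Hermitian matrix with eigenvalues \lambda_1(A), ..., \lambda_n(A) and unit eigenvectors v_1, ..., v_n, and M_j denotes the (n-1) x (n-1) sub-matrix obtained by deleting the j-th row and column of A, then

\[
|v_{i,j}|^2 \prod_{k=1,\, k \neq i}^{n} \left( \lambda_i(A) - \lambda_k(A) \right) \;=\; \prod_{k=1}^{n-1} \left( \lambda_i(A) - \lambda_k(M_j) \right),
\]

so whenever \lambda_i(A) is a simple eigenvalue, the squared loading |v_{i,j}|^2 can be recovered from eigenvalues alone.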
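The following is a minimal Python/NumPy sketch of that computation, followed by an illustrative fixed-threshold sparsification step. The function name squared_loadings and the cutoff 1/n are hypothetical choices made for this example, not the paper's exact EESPCA rule; see the linked paper for the actual thresholds and the cross-validated variant.

import numpy as np

def squared_loadings(A, i):
    # Squared entries of the i-th unit eigenvector of a symmetric matrix A,
    # computed from eigenvalues of A and its principal sub-matrices.
    # Assumes the i-th eigenvalue is simple (otherwise the denominator vanishes).
    n = A.shape[0]
    lam = np.linalg.eigvalsh(A)                # eigenvalues of the full matrix, ascending
    denom = np.prod([lam[i] - lam[k] for k in range(n) if k != i])
    v2 = np.empty(n)
    for j in range(n):
        Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)   # delete row and column j
        v2[j] = np.prod(lam[i] - np.linalg.eigvalsh(Mj)) / denom
    return v2

# Sanity check against a direct eigendecomposition.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))
A = X.T @ X                                    # symmetric, covariance-like matrix
_, V = np.linalg.eigh(A)
i = A.shape[0] - 1                             # leading eigenvalue (eigh sorts ascending)
v2 = squared_loadings(A, i)
assert np.allclose(v2, V[:, i] ** 2)

# Illustrative fixed-threshold sparsification: zero out loadings whose squared
# value falls below a cutoff, then renormalize (hypothetical cutoff).
cutoff = 1.0 / A.shape[0]
v_sparse = np.where(v2 >= cutoff, V[:, i], 0.0)
v_sparse /= np.linalg.norm(v_sparse)

The final renormalization keeps the sparsified vector unit-length, mirroring the usual convention for PC loadings.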