
Principal component analysis with drop rank covariance matrix

  • Corresponding author: Bingo Wing-Kuen Ling
Abstract
  • This paper considers principal component analysis when the covariance matrix of the input vectors drops rank. This case sometimes happens when the total number of input vectors is very limited. First, it is found that the eigendecomposition of the covariance matrix is not uniquely defined, which implies that different transform matrices could be obtained for performing the principal component analysis (a small numerical sketch of this non-uniqueness is given below). Hence, the generalized form of the eigendecomposition of the covariance matrix is given. Also, it is found that the matrix whose columns are the eigenvectors of the covariance matrix is not necessarily unitary, which implies that the transform for performing the principal component analysis may not be energy preserving. To address this issue, the necessary and sufficient condition for the matrix whose columns are the eigenvectors of the covariance matrix to be unitary is derived. Moreover, since the design of the unitary transform matrix for performing the principal component analysis is usually formulated as an optimization problem, the necessary and sufficient condition for the first order derivative of the Lagrange function to be equal to the zero vector is derived. In fact, the unitary matrix whose columns are the eigenvectors of the covariance matrix is only a particular case of this condition. Furthermore, the necessary and sufficient condition for the second order derivative of the Lagrange function to be positive definite is derived. It is found that the unitary matrix whose columns are the eigenvectors of the covariance matrix does not satisfy this condition. Computer numerical simulation results are given to validate these results.

    Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.

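To make the rank-drop setting concrete, the short NumPy sketch below (an illustration added here, not taken from the paper; the dimensions, data, and eigenvalue tolerance are arbitrary assumptions) demonstrates the non-uniqueness noted in the abstract: when there are fewer input vectors than dimensions, the sample covariance matrix is rank deficient, and the eigenvectors spanning its null space can be rotated by any orthogonal matrix while still yielding a valid orthonormal eigendecomposition.

```python
# A minimal sketch, assuming NumPy; the dimensions, random data, and the
# 1e-10 eigenvalue tolerance are illustrative choices, not values from the
# paper.  It shows that when there are fewer input vectors than dimensions
# the sample covariance matrix drops rank, so its eigenvector matrix (and
# hence the PCA transform matrix) is not uniquely defined.
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_samples = 6, 3                      # more dimensions than input vectors
X = rng.standard_normal((n_samples, n_dim))  # rows are the input vectors

# Sample covariance matrix; its rank is at most n_samples - 1 < n_dim.
C = np.cov(X, rowvar=False)
print("rank of covariance matrix:", np.linalg.matrix_rank(C))

# One valid eigendecomposition: the columns of V1 are orthonormal eigenvectors.
eigvals, V1 = np.linalg.eigh(C)

# Rotate the eigenvectors belonging to the (numerically) zero eigenvalues by
# an arbitrary orthogonal matrix Q.  The result V2 is a different orthonormal
# eigenvector matrix that diagonalizes C equally well.
null_idx = np.where(eigvals < 1e-10)[0]
Q, _ = np.linalg.qr(rng.standard_normal((len(null_idx), len(null_idx))))
V2 = V1.copy()
V2[:, null_idx] = V1[:, null_idx] @ Q

for V in (V1, V2):
    assert np.allclose(V.T @ V, np.eye(n_dim))         # orthonormal (unitary) columns
    assert np.allclose(V.T @ C @ V, np.diag(eigvals))  # both diagonalize C
print("max entry-wise difference between V1 and V2:", np.abs(V1 - V2).max())
```

Only the null-space columns are rotated, so the eigenvectors associated with the non-zero eigenvalues are untouched; the ambiguity therefore appears exactly when the covariance matrix drops rank, which is the situation the paper analyzes.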
  • Figure 1.  Plots of the feature values. (a) Sample I.D.: 1-5. (b) Sample I.D.: 6-10. (c) Sample I.D.: 11-15. (d) Sample I.D.: 16-20
