June  2020, 13(6): 1757-1772. doi: 10.3934/dcdss.2020103

An alternating minimization method for matrix completion problems

1. School of Applied Mathematics, Nanjing University of Finance & Economics, China

2. State Key Laboratory of Scientific and Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, University of Chinese Academy of Sciences, China

* Corresponding author

Received: September 2018. Revised: November 2018. Published: September 2019.

Fund Project: The first author is supported by NSFC grants 11401295 and 11726618, by the Major Program of the National Social Science Foundation of China under Grant 12&ZD114, by the National Social Science Foundation of China under Grants 15BGL158 and 17BTQ063, by the Qinglan Project of Jiangsu Province, and by the Social Science Foundation of Jiangsu Province under Grant 18GLA002. The second author is supported by NSFC grants 11622112, 11471325, 91530204 and 11688101, by the National Center for Mathematics and Interdisciplinary Sciences, CAS, and by the Key Research Program of Frontier Sciences QYZDJ-SSW-SYS010, CAS.

Matrix completion problems have applications in various domains such as information theory, statistics, and engineering. Solving them is not an easy task, however, since the nonconvex and nonsmooth rank operation is involved. Existing approaches can be categorized into two classes. The first replaces the rank operation with the nuclear norm, after which any convex optimization algorithm can be applied to the reformulated problem; its limitation is that handling the nuclear norm requires singular value decompositions (SVDs), which significantly increase the computational cost. The second factorizes the target matrix into the product of two slim matrices. Fast algorithms for solving the resulting nonconvex optimization problem usually lack global convergence guarantees, while algorithms with guaranteed convergence require restrictive stepsizes. In this paper, we consider the matrix factorization model for matrix completion problems and propose an alternating minimization method for solving it. Global convergence to a stationary point or local minimizer is guaranteed under mild conditions. We compare the proposed algorithm with several state-of-the-art algorithms on a collection of test problems. The numerical results illustrate the efficiency and great potential of our algorithm.
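To make the factorization model concrete: the class of methods described above minimizes the misfit between $UV^\top$ and the observed entries of the target matrix over two slim factors $U$ and $V$, updating one factor at a time. The NumPy sketch below illustrates this generic alternating-least-squares scheme; it is a minimal illustration of the model class, not the specific update rule or convergence safeguards proposed in the paper, and the function name, the small regularizer `reg`, and the iteration cap are our own choices.

```python
import numpy as np

def als_complete(M, mask, r, iters=100, reg=1e-8):
    """Generic alternating least squares for matrix completion.

    Approximately minimizes 0.5 * ||P_Omega(U @ V.T - M)||_F^2 over
    U (m x r) and V (n x r), where P_Omega keeps only the observed
    entries indicated by the boolean array `mask` (m x n).
    """
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(iters):
        # U-step: each row of U solves a small least-squares problem
        # over the observed columns of the corresponding row of M.
        for i in range(m):
            idx = mask[i]
            if idx.any():
                Vi = V[idx]  # (k x r) factors of the observed columns
                U[i] = np.linalg.solve(Vi.T @ Vi + reg * np.eye(r),
                                       Vi.T @ M[i, idx])
        # V-step: the symmetric update with U held fixed.
        for j in range(n):
            idx = mask[:, j]
            if idx.any():
                Uj = U[idx]
                V[j] = np.linalg.solve(Uj.T @ Uj + reg * np.eye(r),
                                       Uj.T @ M[idx, j])
    return U, V
```

A typical call would be `U, V = als_complete(M, mask, r=5)`, after which `U @ V.T` serves as the completed matrix; only the entries of `M` where `mask` is true are ever read.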

Citation: Yuan Shen, Xin Liu. An alternating minimization method for matrix completion problems. Discrete and Continuous Dynamical Systems - S, 2020, 13 (6) : 1757-1772. doi: 10.3934/dcdss.2020103


Figure 1. Relative error vs. iteration number, $m = n = 2000$
Figure 2. Computing time vs. sampling ratio, $m = n = 2000$
Figure 3. Computing time vs. dimension
Table 4.1. Results of the speed performance test on synthetic data with $m = n = 2000$. For each solver we report the relative error on the sampled entries, $\hbox{err}_{\Omega}(L)$, the number of iterations, and the computing time.

sampling   rank       |------------ SVT -----------|  |----------- LMaFit ----------|  |------- New Algorithm -------|
ratio      ($L^*$)    err_Ω(L)     iter    time       err_Ω(L)     iter    time       err_Ω(L)     iter    time
20%        5          8.947e-13    126.0   10.288     8.396e-13    67.7    1.746      8.880e-13    42.7    1.218
20%        10         1.183e-7     190.2   22.593     9.005e-13    66.8    1.892      8.525e-13    57.3    1.821
40%        5          8.494e-13    87.0    12.286     7.022e-13    33.3    1.376      6.705e-13    24.2    1.100
40%        10         8.702e-13    101.7   19.319     7.169e-13    39.2    1.722      7.001e-13    29.9    1.468
60%        5          8.470e-13    72.3    14.429     6.895e-13    33.2    1.821      5.106e-13    18.0    1.096
60%        10         8.465e-13    80.4    18.717     7.573e-13    39.1    2.237      5.467e-13    21.0    1.334
80%        5          8.765e-13    62.3    15.112     6.556e-13    23.1    1.547      3.881e-13    15.0    1.100
80%        10         8.180e-13    67.5    19.641     6.421e-13    25.1    1.801      4.308e-13    16.7    1.322
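For reference, $\hbox{err}_{\Omega}(L)$ above measures accuracy only on the sampled index set $\Omega$. The paper gives the exact formula; a common choice, which the short sketch below assumes, is the relative Frobenius-norm residual restricted to $\Omega$.

```python
import numpy as np

def rel_err_omega(L, L_star, mask):
    """Relative error on the sampled entries Omega (one common definition;
    the paper's exact formula may differ):
        ||P_Omega(L - L*)||_F / ||P_Omega(L*)||_F.
    `mask` is a boolean array marking the entries in Omega."""
    num = np.linalg.norm((L - L_star)[mask])
    den = np.linalg.norm(L_star[mask])
    return num / max(den, np.finfo(float).tiny)
```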
