April 2020, 14(2): 233–265. doi: 10.3934/ipi.2020011

Guarantees of Riemannian optimization for low rank matrix completion

Ke Wei 1, Jian-Feng Cai 2,*, Tony F. Chan 3 and Shingyu Leung 2

1. School of Data Science, Fudan University, Shanghai, China
2. Department of Mathematics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong SAR, China
3. Office of the President, King Abdullah University of Science and Technology, Thuwal 23955-6900, Kingdom of Saudi Arabia

* Corresponding author: Jian-Feng Cai

Received: January 2019. Revised: October 2019. Published: February 2020.

Fund Project: The first author is supported by National Natural Science Foundation of China (NSFC) grant 11801088 and Shanghai Sailing Program 18YF1401600. The second author is supported by Hong Kong Research Grants Council (HKRGC) General Research Fund (GRF) grant 16306317.

We establish exact recovery guarantees for a class of Riemannian optimization methods for matrix completion, based on the embedded manifold of low rank matrices. Assume $m$ entries of an $n\times n$ rank $r$ matrix are sampled independently and uniformly with replacement. We first show that, with high probability, the Riemannian gradient descent and conjugate gradient descent algorithms initialized by one step hard thresholding converge linearly to the measured matrix provided
$$ m \geq C_\kappa\, n^{1.5} r \log^{1.5}(n), $$
where $C_\kappa$ is a numerical constant depending on the condition number of the measured matrix. The sampling complexity is then further improved to
$$ m \geq C_\kappa\, n r^2 \log^{2}(n) $$
via the resampled Riemannian gradient descent initialization. The analysis of the new initialization procedure relies on an asymmetric restricted isometry property of the sampling operator and on the curvature of the low rank matrix manifold. Numerical simulations show that the algorithms recover a low rank matrix from nearly the minimum number of measurements.
Citation: Ke Wei, Jian-Feng Cai, Tony F. Chan, Shingyu Leung. Guarantees of riemannian optimization for low rank matrix completion. Inverse Problems and Imaging, 2020, 14 (2) : 233-265. doi: 10.3934/ipi.2020011
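For concreteness, the following is a minimal NumPy sketch of the RGrad iteration described above, with the one step hard thresholding initialization. It is our own rendering of the standard algorithm (cf. [47,50]), not the authors' code: the function names, the Bernoulli sampling model (the paper samples with replacement), and the exact line-search step size are assumptions made for illustration.

```python
import numpy as np

def hard_threshold(M, r):
    """Best rank-r approximation of M via a truncated SVD (the retraction)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def rgrad_complete(X_obs, mask, r, iters=200, tol=1e-12):
    """RGrad sketch: X_obs holds the observed entries (zeros elsewhere),
    mask is the boolean sampling set Omega, r is the target rank."""
    p = mask.mean()                       # empirical sampling rate m / n^2
    M = hard_threshold(X_obs / p, r)      # one step hard thresholding init
    for _ in range(iters):
        G = np.where(mask, X_obs - M, 0.0)   # G = P_Omega(X - M) = -grad f(M)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        U, V = U[:, :r], Vt[:r, :].T
        # Project G onto the tangent space of the rank-r manifold at M:
        #   P_T(G) = U U^T G + G V V^T - U U^T G V V^T
        UtG, GV = U.T @ G, G @ V
        PG = U @ UtG + GV @ V.T - U @ (UtG @ V) @ V.T
        # Exact line search for f(M) = 0.5 * ||P_Omega(M - X)||_F^2
        denom = np.linalg.norm(PG[mask]) ** 2
        if denom < tol:
            break
        alpha = np.linalg.norm(PG, 'fro') ** 2 / denom
        M = hard_threshold(M + alpha * PG, r)   # retract back to rank r
    return M

# Illustrative use on a random rank-5 matrix with 30% of entries observed.
rng = np.random.default_rng(0)
n, r = 200, 5
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.3
X_hat = rgrad_complete(np.where(mask, X, 0.0), mask, r)
print(np.linalg.norm(X_hat - X) / np.linalg.norm(X))  # small relative error
```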
References:
[1] Y. Amit, M. Fink, N. Srebro and S. Ullman, Uncovering shared structures in multiclass classification, in: Proceedings of the 24th International Conference on Machine Learning, 2007, 17–24. doi: 10.1145/1273496.1273499.
[2] A. Argyriou, T. Evgeniou and M. Pontil, Multi-task feature learning, in: Advances in Neural Information Processing Systems, 2007.
[3] P.-A. Absil, R. Mahony and R. Sepulchre, Optimization Algorithms on Matrix Manifolds, Princeton University Press, 2008. doi: 10.1515/9781400830244.
[4] A. Ahmed, B. Recht and J. Romberg, Blind deconvolution using convex programming, IEEE Transactions on Information Theory, 60 (2014), 1711–1732. doi: 10.1109/TIT.2013.2294644.
[5] R. Ahlswede and A. Winter, Strong converse for identification via quantum channels, IEEE Transactions on Information Theory, 48 (2002), 569–579. doi: 10.1109/18.985947.
[6] S. Bhojanapalli, A. Kyrillidis and S. Sanghavi, Dropping convexity for faster semi-definite optimization, JMLR: Workshop and Conference Proceedings, 49 (2016), 1–53.
[7] J. Blanchard, J. Tanner and K. Wei, CGIHT: Conjugate gradient iterative hard thresholding for compressed sensing and matrix completion, Information and Inference, 4 (2015), 289–327. doi: 10.1093/imaiai/iav011.
[8] N. Boumal and P.-A. Absil, RTRMC: A Riemannian trust-region method for low-rank matrix completion, in: Advances in Neural Information Processing Systems, 2011.
[9] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Foundations of Computational Mathematics, 9 (2009), 717–772. doi: 10.1007/s10208-009-9045-5.
[10] E. J. Candès and Y. Plan, Tight oracle bounds for low-rank matrix recovery from a minimal number of random measurements, IEEE Transactions on Information Theory, 57 (2011), 2342–2359. doi: 10.1109/TIT.2011.2111771.
[11] Y. Chen and M. J. Wainwright, Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees, arXiv: 1509.03025, 2015.
[12] E. J. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Transactions on Information Theory, 56 (2010), 2053–2080. doi: 10.1109/TIT.2010.2044061.
[13] Y. Chen, Incoherence-optimal matrix completion, IEEE Transactions on Information Theory, 61 (2015), 2909–2923. doi: 10.1109/TIT.2015.2415195.
[14] J.-F. Cai, E. J. Candès and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM Journal on Optimization, 20 (2010), 1956–1982. doi: 10.1137/080738970.
[15] E. J. Candès, Y. Eldar, T. Strohmer and V. Voroninski, Phase retrieval via matrix completion, SIAM Journal on Imaging Sciences, 6 (2013), 199–225. doi: 10.1137/110848074.
[16] E. J. Candès, X. Li and M. Soltanolkotabi, Phase retrieval via Wirtinger flow: Theory and algorithms, IEEE Transactions on Information Theory, 61 (2015), 1985–2007. doi: 10.1109/TIT.2015.2399924.
[17] Y. Chen and E. Candès, Solving random quadratic systems of equations is nearly as easy as solving linear systems, Communications on Pure and Applied Mathematics, 70 (2017), 822–883. doi: 10.1002/cpa.21638.
[18] L. Eldén, Matrix Methods in Data Mining and Pattern Recognition, SIAM, 2007. doi: 10.1137/1.9780898718867.
[19] M. Fazel, Matrix Rank Minimization with Applications, Ph.D. dissertation, Stanford University, 2002.
[20] D. Gross, Recovering low-rank matrices from few coefficients in any basis, IEEE Transactions on Information Theory, 57 (2011), 1548–1566. doi: 10.1109/TIT.2011.2104999.
[21] D. Goldfarb and S. Ma, Convergence of fixed-point continuation algorithms for matrix rank minimization, Foundations of Computational Mathematics, 11 (2011), 183–210. doi: 10.1007/s10208-011-9084-6.
[22] R. W. Gerchberg and W. O. Saxton, A practical algorithm for the determination of the phase from image and diffraction plane pictures, Optik, 35 (1972), 237–246.
[23] N. J. A. Harvey, D. R. Karger and S. Yekhanin, The complexity of matrix completion, in: Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2006, 1103–1111. doi: 10.1145/1109557.1109679.
[24] J. P. Haldar and D. Hernando, Rank-constrained solutions to linear matrix equations using PowerFactorization, IEEE Signal Processing Letters, 16 (2009), 584–587. doi: 10.1109/LSP.2009.2018223.
[25] P. Jain, R. Meka and I. Dhillon, Guaranteed rank minimization via singular value projection, in: NIPS, 2010.
[26] P. Jain, P. Netrapalli and S. Sanghavi, Low-rank matrix completion using alternating minimization, in: Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, 2013, 665–674. doi: 10.1145/2488608.2488693.
[27] P. Jain and P. Netrapalli, Fast exact matrix completion with finite samples, JMLR: Workshop and Conference Proceedings, 40 (2015), 1–28.
[28] A. Kyrillidis and V. Cevher, Matrix recipes for hard thresholding methods, Journal of Mathematical Imaging and Vision, 48 (2014), 235–265. doi: 10.1007/s10851-013-0434-7.
[29] R. H. Keshavan, Efficient Algorithms for Collaborative Filtering, Ph.D. dissertation, Stanford University, 2012.
[30] R. H. Keshavan, A. Montanari and S. Oh, Matrix completion from a few entries, IEEE Transactions on Information Theory, 56 (2010), 2980–2998. doi: 10.1109/TIT.2010.2046205.
[31] Z. Liu and L. Vandenberghe, Interior-point method for nuclear norm approximation with application to system identification, SIAM Journal on Matrix Analysis and Applications, 31 (2009), 1235–1256. doi: 10.1137/090755436.
[32] K. Lee and Y. Bresler, ADMiRA: Atomic decomposition for minimum rank approximation, IEEE Transactions on Information Theory, 56 (2010), 4402–4416. doi: 10.1109/TIT.2010.2054251.
[33] S. Ling and T. Strohmer, Blind deconvolution meets blind demixing: Algorithms and performance bounds, IEEE Transactions on Information Theory, 63 (2017), 4497–4520. doi: 10.1109/TIT.2017.2701342.
[34] B. Mishra, K. A. Apuroop and R. Sepulchre, A Riemannian geometry for low-rank matrix completion, arXiv: 1306.2672, 2013.
[35] B. Mishra, G. Meyer, S. Bonnabel and R. Sepulchre, Fixed-rank matrix factorizations and Riemannian low-rank optimization, Computational Statistics, 29 (2014), 591–621. doi: 10.1007/s00180-013-0464-z.
[36] T. Ngo and Y. Saad, Scaled gradients on Grassmann manifolds for matrix completion, in: Advances in Neural Information Processing Systems, 2012.
[37] J. Nocedal and S. J. Wright, Numerical Optimization, Springer, 2006.
[38] B. Recht, M. Fazel and P. A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Review, 52 (2010), 471–501. doi: 10.1137/070697835.
[39] B. Recht, A simpler approach to matrix completion, The Journal of Machine Learning Research, 12 (2011), 3413–3430.
[40] C. D. Sa, K. Olukotun and C. Ré, Global convergence of stochastic gradient descent for some nonconvex matrix problems, in: ICML, 2015.
[41] R. Sun and Z. Luo, Guaranteed matrix completion via non-convex factorization, in: 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS 2015), IEEE Computer Soc., Los Alamitos, CA, 2015, 270–289.
[42] J. Sun, Q. Qu and J. Wright, A geometric analysis of phase retrieval, Foundations of Computational Mathematics, 18 (2018), 1131–1198. doi: 10.1007/s10208-017-9365-9.
[43] J. Tanner and K. Wei, Normalized iterative hard thresholding for matrix completion, SIAM Journal on Scientific Computing, 35 (2013), S104–S125. doi: 10.1137/120876459.
[44] S. Tu, R. Boczar, M. Simchowitz, M. Soltanolkotabi and B. Recht, Low-rank solutions of linear matrix equations via Procrustes flow, in: ICML, 2016.
[45] J. Tanner and K. Wei, Low rank matrix completion by alternating steepest descent methods, Applied and Computational Harmonic Analysis, 40 (2016), 417–429. doi: 10.1016/j.acha.2015.08.003.
[46] L. Vandenberghe and S. Boyd, Semidefinite programming, SIAM Review, 38 (1996), 49–95. doi: 10.1137/1038003.
[47] B. Vandereycken, Low rank matrix completion by Riemannian optimization, SIAM Journal on Optimization, 23 (2013), 1214–1236. doi: 10.1137/110845768.
[48] Z. Wen, W. Yin and Y. Zhang, Solving a low-rank factorization model for matrix completion by a non-linear successive over-relaxation algorithm, Mathematical Programming Computation, 4 (2012), 333–361. doi: 10.1007/s12532-012-0044-1.
[49] C. D. White, S. Sanghavi and R. Ward, The local convexity of solving systems of quadratic equations, Results in Mathematics, 71 (2017), 569–608. doi: 10.1007/s00025-016-0564-5.
[50] K. Wei, J.-F. Cai, T. F. Chan and S. Leung, Guarantees of Riemannian optimization for low rank matrix recovery, SIAM Journal on Matrix Analysis and Applications, 37 (2016), 1198–1222. doi: 10.1137/15M1050525.
[51] K. Wei, Solving systems of phaseless equations via Kaczmarz methods: A proof of concept study, Inverse Problems, 31 (2015), 125008, 23pp. doi: 10.1088/0266-5611/31/12/125008.
[52] Q. Zheng and J. Lafferty, A convergent gradient descent algorithm for rank minimization and semidefinite programming from random linear measurements, in: NIPS, 2015.
[53] T. Zhao, Z. Wang and H. Liu, Nonconvex low rank matrix factorization via inexact first order oracle, in: NIPS, 2015.
[54] Q. Zheng and J. Lafferty, Convergence analysis for rectangular matrix completion using Burer-Monteiro factorization and gradient descent, arXiv: 1605.0705, 2016.


Figure 1.  Geometric comparison between NIHT (a) and RGrad (b). RGrad introduces one additional projection: the gradient is projected onto the tangent space of the rank-$r$ manifold before the rank-$r$ truncation
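In update form the difference is that single projection. A hypothetical side-by-side, reusing hard_threshold from the sketch above (the names are ours, not the authors'):

```python
def niht_step(M, G, alpha, r):
    # NIHT (a): step along the P_Omega residual G, then truncate to rank r.
    return hard_threshold(M + alpha * G, r)

def rgrad_step(M, PG, alpha, r):
    # RGrad (b): the same update, except the direction PG has already been
    # projected onto the tangent space of the rank-r manifold at M.
    return hard_threshold(M + alpha * PG, r)
```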
Figure 2.  Empirical phase transition curves for (a) RGrad, (b) RCG, (c) RCG restarted and (d) GD when $ n = 800 $. The horizontal axis is $ p = m/n^2 $ and the vertical axis is $ q = (2n-r)r/m $. White denotes successful recovery in all ten random tests, and black denotes failure in all tests
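The vertical axis has a concrete interpretation: a rank-$r$ $n\times n$ matrix has $(2n-r)r$ degrees of freedom, so $q$ is the ratio of degrees of freedom to measurements and $q \leq 1$ is necessary for recovery. A small sketch of the axis values, with $r$ and $m$ chosen for illustration (only $n = 800$ comes from the caption):

```python
n, r = 800, 40               # n from Figure 2; r is illustrative
m = 128_000                  # number of sampled entries, illustrative
p = m / n**2                 # horizontal axis: fraction of entries observed
q = (2 * n - r) * r / m      # vertical axis: degrees of freedom / measurements
print(p, q)                  # 0.2 0.4875
```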
Figure 3.  Relative residual (mean and standard deviation over ten random tests) as a function of the number of iterations for $ n = 8000 $, $ r = 100 $, $ 1/ q = 2 $ (a) and $ 1/ q = 3 $ (b). The value after each algorithm name is the average computational time (in seconds) to convergence
Figure 4.  Performance of (a) RGrad, (b) RCG and (c) RCG restarted at different SNR levels
Table 1.  Comparison of RGrad/RCG and GD on sampling complexity (SC), required assumptions (RA), per-iteration computational cost (PICC), and local convergence rate (LCR). Remarks: 1) (I) denotes RGrad/RCG with the one step hard thresholding initialization and (II) denotes RGrad/RCG with the resampled Riemannian gradient descent initialization; 2) $v_g$ and $v_{cg}$ are absolute numerical constants, both less than one
Algorithm | SC | RA | PICC | LCR
RGrad, RCG (I) | $O(\kappa n^{3/2}r\log^{3/2}(n))$ | $\textbf{A0}$ | $O(|\Omega|r+|\Omega|+nr^2+nr+r^3)$ | $v_{g},~v_{cg}$
RGrad, RCG (II) | $O(\kappa^6nr^2\log^2(n))$ | $\textbf{A0}$ | $O(|\Omega|r+|\Omega|+nr^2+nr+r^3)$ | $v_{g},~v_{cg}$
GD [54] | $O(\kappa^2nr^2\log(n))$ | $\textbf{A0}$ | $O(|\Omega|r+|\Omega|+nr^2+nr)$ | $(1-\frac{C}{\mu^2\kappa^2r^2})^{1/2}$