Foundations of Data Science, December 2021, 3(4): 769-791. doi: 10.3934/fods.2021028

An adaptation for iterative structured matrix completion

Henry Adams 1, Lara Kassab 1,* and Deanna Needell 2

1. Department of Mathematics, Colorado State University, Fort Collins, CO 80523, USA

2. Department of Mathematics, University of California, Los Angeles, CA 90095, USA

* Corresponding author: Lara Kassab

Received: May 2021. Revised: August 2021. Early access: November 2021. Published: December 2021.

Fund Project: Needell was partially supported by NSF CAREER #1348721 and NSF BIGDATA #1740325.

The task of predicting the missing entries of a matrix from a subset of known entries is known as matrix completion. In today's data-driven world, data completion is essential whether it is the main goal or a pre-processing step. Structured matrix completion includes any setting in which the data is not missing uniformly at random. In recent work, a modification to the standard nuclear norm minimization (NNM) for matrix completion has been developed to take into account sparsity-based structure in the missing entries. This notion of structure is motivated by many settings, including recommender systems, where the probability that an entry is observed depends on the value of the entry. We propose adjusting an Iteratively Reweighted Least Squares (IRLS) algorithm for low-rank matrix completion to take into account sparsity-based structure in the missing entries. We also present an iterative gradient-projection-based implementation of the algorithm that can handle large-scale matrices. Finally, we present a robust array of numerical experiments on matrices of varying sizes, ranks, and levels of structure. We show that our proposed method is comparable to the adjusted NNM on small-sized matrices, and often outperforms the IRLS algorithm in structured settings on matrices up to size $ 1000 \times 1000 $.
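The authors' released code is linked in reference [1] below. Purely as a hedged illustration of the idea, and not the authors' actual algorithm, the following Python sketch combines a smoothed low-rank reweighting step (in the spirit of sIRLS) with an assumed sparsity-promoting shrinkage on the unobserved entries, followed by a projection back onto the observed data. All parameter names and schedules (`gamma`, `step`, `lam`, the decay factor) are illustrative assumptions.

```python
import numpy as np

def structured_sirls_sketch(M_obs, mask, n_iters=500, gamma=1.0,
                            step=1e-2, lam=1e-1):
    """Illustrative IRLS-style completion with an extra sparsity-promoting
    reweighting on the unobserved entries (a sketch, not the paper's code).

    M_obs : float array with observed values (zeros off the mask)
    mask  : boolean array, True where an entry is observed
    """
    m, _ = M_obs.shape
    X = M_obs.astype(float).copy()
    for _ in range(n_iters):
        # Low-rank-promoting reweighted gradient step: the smoothed weight
        # W = (X X^T + gamma I)^{-1/2} pushes X toward small rank.
        vals, vecs = np.linalg.eigh(X @ X.T + gamma * np.eye(m))
        W = (vecs * vals**-0.5) @ vecs.T
        X = X - step * (W @ X)
        # Sparsity-promoting reweighted shrinkage on the *missing* entries,
        # encoding the structured assumption that they are mostly zero.
        miss = ~mask
        X[miss] = X[miss] / (1.0 + lam / (np.abs(X[miss]) + gamma))
        # Data-consistency projection: observed entries stay fixed.
        X[mask] = M_obs[mask]
        gamma *= 0.99  # slowly tighten the smoothing (assumed schedule)
    return X
```

On synthetic data, one could generate a sparse low-rank matrix $ M $, observe a biased subset of its entries, and compare the recovery error of this sketch with and without the shrinkage step to see the effect of exploiting structure in the missing entries.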

Citation: Henry Adams, Lara Kassab, Deanna Needell. An adaptation for iterative structured matrix completion. Foundations of Data Science, 2021, 3 (4) : 769-791. doi: 10.3934/fods.2021028
References:
[1]

H. Adams, L. Kassab and D. Needell, Structured IRLS code, https://github.com/lara-kassab/structured-matrix-completion-IRLS.git.

[2]

R. M. Bell and Y. Koren, Lessons from the Netflix prize challenge, SIGKDD Explorations, 9 (2007), 75-79. doi: 10.1145/1345448.1345465.

[3]

J. Bennett and S. Lanning, The Netflix prize, in Proceedings of KDD Cup and Workshop, vol. 2007, New York, NY, USA, (2007), 35.

[4]

P. Biswas, T.-C. Lian, T.-C. Wang and Y. Ye, Semidefinite programming based algorithms for sensor network localization, ACM Transactions on Sensor Networks (TOSN), 2 (2006), 188-220.

[5]

J.-F. Cai, E. J. Candès and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20 (2010), 1956-1982. doi: 10.1137/080738970.

[6]

E. J. Candès, X. Li, Y. Ma and J. Wright, Robust principal component analysis?, J. ACM, 58 (2011), Art. 11, 37 pp. doi: 10.1145/1970392.1970395.

[7]

E. J. Candès and Y. Plan, Matrix completion with noise, Proceedings of the IEEE, 98 (2010), 925-936. 

[8]

E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math., 9 (2009), 717-772.  doi: 10.1007/s10208-009-9045-5.

[9]

E. J. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory, 51 (2005), 4203-4215.  doi: 10.1109/TIT.2005.858979.

[10]

E. J. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Trans. Inform. Theory, 56 (2010), 2053-2080.  doi: 10.1109/TIT.2010.2044061.

[11]

V. Chandrasekaran, S. Sanghavi, P. A. Parrilo and A. S. Willsky, Rank-sparsity incoherence for matrix decomposition, SIAM J. Optim., 21 (2011), 572-596. doi: 10.1137/090761793.

[12]

Y. Chen, S. Bhojanapalli, S. Sanghavi and R. Ward, Coherent matrix completion, in International Conference on Machine Learning, PMLR, (2014), 674–682.

[13]

Y. Chen, S. Bhojanapalli, S. Sanghavi and R. Ward, Completing any low-rank matrix, provably, J. Mach. Learn. Res., 16 (2015), 2999-3034.

[14]

Y. Chen, A. Jalali, S. Sanghavi and C. Caramanis, Low-rank matrix recovery from errors and erasures, IEEE Transactions on Information Theory, 59 (2013), 4324-4337.

[15]

I. Daubechies, R. DeVore, M. Fornasier and C. S. Güntürk, Iteratively reweighted least squares minimization for sparse recovery, Comm. Pure Appl. Math., 63 (2010), 1-38. doi: 10.1002/cpa.20303.

[16]

D. L. Donoho and P. B. Stark, Uncertainty principles and signal recovery, SIAM J. Appl. Math., 49 (1989), 906-931.  doi: 10.1137/0149053.

[17]

M. Fazel, Matrix Rank Minimization with Applications, PhD thesis, Stanford University, 2002.

[18]

M. Fazel, H. Hindi and S. P. Boyd, Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices, in Proceedings of the 2003 American Control Conference, vol. 3, IEEE, (2003), 2156–2162. doi: 10.1109/ACC.2003.1243393.

[19]

M. Fornasier, H. Rauhut and R. Ward, Low-rank matrix recovery via iteratively reweighted least squares minimization, SIAM J. Optim., 21 (2011), 1614-1640. doi: 10.1137/100811404.

[20]

D. Goldfarb and S. Ma, Convergence of fixed-point continuation algorithms for matrix rank minimization, Found. Comput. Math., 11 (2011), 183-210.  doi: 10.1007/s10208-011-9084-6.

[21]

M. C. Grant and S. P. Boyd, Graph implementations for nonsmooth convex programs, in Recent Advances in Learning and Control (eds. V. Blondel, S. Boyd and H. Kimura), Lecture Notes in Control and Information Sciences, Springer-Verlag Limited, (2008), 95–110. doi: 10.1007/978-1-84800-155-8_7.

[22]

M. Grant and S. Boyd, CVX: Matlab Software for Disciplined Convex Programming, version 2.1, http://cvxr.com/cvx, 2014.

[23]

D. Gross, Recovering low-rank matrices from few coefficients in any basis, IEEE Trans. Inform. Theory, 57 (2011), 1548-1566.  doi: 10.1109/TIT.2011.2104999.

[24]

D. Gross, Y.-K. Liu, S. T. Flammia, S. Becker and J. Eisert, Quantum state tomography via compressed sensing, Phys. Rev. Lett., 105 (2010), 150401. doi: 10.1103/PhysRevLett.105.150401.

[25]

N. Halko, P.-G. Martinsson and J. A. Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, SIAM Rev., 53 (2011), 217-288. doi: 10.1137/090771806.

[26]

P. Jain, R. Meka and I. S. Dhillon, Guaranteed rank minimization via singular value projection, in Advances in Neural Information Processing Systems, (2010), 937–945.

[27]

P. Jain, P. Netrapalli and S. Sanghavi, Low-rank matrix completion using alternating minimization, in Proceedings of the 2013 ACM Symposium on Theory of Computing, (2013), 665–674. doi: 10.1145/2488608.2488693.

[28]

R. H. Keshavan and S. Oh, A gradient descent algorithm on the Grassman manifold for matrix completion, arXiv preprint, arXiv:0910.5260.

[29]

Y. Koren, R. Bell and C. Volinsky, Matrix factorization techniques for recommender systems, Computer, 42 (2009), 30-37.

[30]

C. Kümmerle and J. Sigl, Harmonic mean iteratively reweighted least squares for low-rank matrix recovery, J. Mach. Learn. Res., 19 (2018), 1815-1863. 

[31]

K. Lee and Y. Bresler, ADMiRA: Atomic decomposition for minimum rank approximation, IEEE Trans. Inform. Theory, 56 (2010), 4402-4416.  doi: 10.1109/TIT.2010.2054251.

[32]

N. Linial, E. London and Y. Rabinovich, The geometry of graphs and some of its algorithmic applications, Combinatorica, 15 (1995), 215-245. doi: 10.1007/BF01200757.

[33]

Z. Liu and L. Vandenberghe, Interior-point method for nuclear norm approximation with application to system identification, SIAM J. Matrix Anal. Appl., 31 (2009), 1235-1256.  doi: 10.1137/090755436.

[34]

S. Ma, D. Goldfarb and L. Chen, Fixed point and Bregman iterative methods for matrix rank minimization, Math. Program., 128 (2011), 321-353. doi: 10.1007/s10107-009-0306-5.

[35]

K. Mohan and M. Fazel, IRLS code, https://faculty.washington.edu/mfazel/, [Online; accessed 01-Aug-2019].

[36]

K. Mohan and M. Fazel, Reweighted nuclear norm minimization with application to system identification, in Proceedings of the 2010 American Control Conference, IEEE, (2010), 2953–2959. doi: 10.1109/ACC.2010.5531594.

[37]

K. Mohan and M. Fazel, Iterative reweighted algorithms for matrix rank minimization, J. Mach. Learn. Res., 13 (2012), 3441-3473. 

[38]

D. Molitor and D. Needell, Matrix completion for structured observations, in 2018 Information Theory and Applications Workshop (ITA), IEEE, (2018), 1–5. doi: 10.1109/ITA.2018.8503240.

[39]

I. Razenshteyn, Z. Song and D. P. Woodruff, Weighted low rank approximations with provable guarantees, in Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, (2016), 250–263.

[40]

B. Recht, A simpler approach to matrix completion, J. Mach. Learn. Res., 12 (2011), 3413-3430. 

[41]

B. Recht, M. Fazel and P. A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Review, 52 (2010), 471-501. doi: 10.1137/070697835.

[42]

G. Robin, O. Klopp, J. Josse, É. Moulines and R. Tibshirani, Main effects and interactions in mixed and incomplete data frames, J. Amer. Statist. Assoc., 115 (2020), 1292-1303. doi: 10.1080/01621459.2019.1623041.

[43]

A. A. Rontogiannis, P. V. Giampouras and K. D. Koutroumbas, Online reweighted least squares robust PCA, IEEE Signal Processing Letters, 27 (2020), 1340-1344.

[44]

R. Schmidt, Multiple emitter location and signal parameter estimation, IEEE Transactions on Antennas and Propagation, 34 (1986), 276-280.  doi: 10.1109/TAP.1986.1143830.

[45]

T. Schnabel, A. Swaminathan, A. Singh, N. Chandak and T. Joachims, Recommendations as treatments: Debiasing learning and evaluation, arXiv preprint, arXiv:1602.05352.

[46]

A. Singer, A remark on global positioning from local distances, Proc. Natl. Acad. Sci. USA, 105 (2008), 9507-9511.  doi: 10.1073/pnas.0709842104.

[47]

A. M.-C. So and Y. Ye, Theory of semidefinite programming for sensor network localization, Math. Program., 109 (2007), 367-384.  doi: 10.1007/s10107-006-0040-1.

[48]

A. Sportisse, C. Boyer and J. Josse, Imputation and low-rank estimation with missing not at random data, Stat. Comput., 30 (2020), 1629-1643. doi: 10.1007/s11222-020-09963-5.

[49]

K.-C. Toh and S. Yun, An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems, Pac. J. Optim., 6 (2010), 615-640. 


Figure 1. We consider twenty $ 1000 \times 1000 $ sparse random matrices of rank 10 with average density equal to 0.66. The upper plots display (left) the average relative error for sIRLS, $ \|M - \tilde X\|_F/\|M\|_F $, and (right) the average relative error for Structured sIRLS, $ \|M - \hat X\|_F/\|M\|_F $. The lower plots display (left) the average ratio $ \|M - \hat X\|_F / \|M - \tilde X\|_F $, and (right) the binned average ratio, where white means the average ratio is strictly less than 1, and black otherwise.
Figure 2. We consider twenty $ 500 \times 500 $ sparse random matrices of rank 10 with average density equal to 0.66. The upper plots display (left) the average relative error for sIRLS, $ \|M - \tilde X\|_F/\|M\|_F $, and (right) the average relative error for Structured sIRLS, $ \|M - \hat X\|_F/\|M\|_F $. The lower plots display (left) the average ratio $ \|M - \hat X\|_F / \|M - \tilde X\|_F $, and (right) the binned average ratio, where white means the average ratio is strictly less than 1, and black otherwise.
Figure 3. We consider twenty $ 100 \times 100 $ sparse random matrices of rank 10 with average density equal to 0.66. The upper plots display (left) the average relative error for sIRLS, $ \|M - \tilde X\|_F/\|M\|_F $, and (right) the average relative error for Structured sIRLS, $ \|M - \hat X\|_F/\|M\|_F $. The lower plots display (left) the average ratio $ \|M - \hat X\|_F / \|M - \tilde X\|_F $, and (right) the binned average ratio, where white means the average ratio is strictly less than 1, and black otherwise.
Figure 4. We consider twenty $ 100 \times 100 $ sparse random matrices of rank 8 with density equal to 0.58, but we do not input any rank guess. The upper plots display (left) the average relative error for sIRLS, $ \|M - \tilde X\|_F/\|M\|_F $, and (right) the average relative error for Structured sIRLS, $ \|M - \hat X\|_F/\|M\|_F $. The lower plots display (left) the average ratio $ \|M - \hat X\|_F / \|M - \tilde X\|_F $, and (right) the average ratio when the sampling rate of non-zero entries is at least 0.70 (a zoomed-in version of part of the lower left plot).
Figure 5. We consider twenty $ 100 \times 100 $ sparse random matrices of rank 20 with average density equal to 0.88. The upper plots display (left) the average relative error for sIRLS, $ \|M - \tilde X\|_F/\|M\|_F $, and (right) the average relative error for Structured sIRLS, $ \|M - \hat X\|_F/\|M\|_F $. The lower plots display (left) the average ratio $ \|M - \hat X\|_F / \|M - \tilde X\|_F $, and (right) the binned average ratio, where white means the average ratio is strictly less than 1, and black otherwise. The red line separates the hard cases from those that are not: all the cases below the red line are hard.
Figure 6. We consider twenty $ 30 \times 30 $ sparse random matrices of rank 7 with average density equal to 0.53. We provide Structured sIRLS with the rank of the matrices. For each matrix and each combination of sampling rates, we optimize the regularization parameter $ \alpha \in \{ 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}\} $ for Structured NNM and report the "optimal" results. The upper plots display (left) the average relative error for Structured NNM, $ \|M - \bar X\|_F/\|M\|_F $, and (right) the average relative error for Structured sIRLS, $ \|M - \hat X\|_F/\|M\|_F $. The lower plots display (left) the average ratio $ \|M - \hat X\|_F / \|M - \bar X\|_F $, and (right) the average ratio when the sampling rate of non-zero entries is at most 0.90 (a zoomed-in version of part of the lower left plot).
Figure 7. We consider twenty $ 100 \times 100 $ random matrices of rank 3 with noise parameter $ \epsilon = 10^{-4} $. The upper plots display (left) the average relative error for sIRLS, $ \|B - \tilde X\|_F/\|B\|_F $, and (right) the average relative error for Structured sIRLS, $ \|B - \hat X\|_F/\|B\|_F $. The lower plots display (left) the average ratio $ \|B - \hat X\|_F / \|B - \tilde X\|_F $, and (right) the average ratio when the sampling rate of non-zero entries is at least 0.35 (a zoomed-in version of part of the lower left plot).
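For readers reproducing these plots, the error metrics in the captions are straightforward to compute; the helper below is a minimal Python sketch (the function name and variables are our own, not taken from the paper's released code [1]).

```python
import numpy as np

def relative_error(M, X):
    """Relative Frobenius error ||M - X||_F / ||M||_F used in Figures 1-7."""
    return np.linalg.norm(M - X, 'fro') / np.linalg.norm(M, 'fro')

# A ratio below 1 means Structured sIRLS (X_hat) recovers M more
# accurately than sIRLS (X_tilde) on average:
# ratio = relative_error(M, X_hat) / relative_error(M, X_tilde)
```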
