July 2020, 16(4): 1907-1925. doi: 10.3934/jimo.2019035

Sparse signal reconstruction via the approximations of $ \ell_{0} $ quasinorm

Department of Mathematics, Harbin Institute of Technology, Harbin 150001, Heilongjiang, China

* Corresponding author: Xing Tao Wang

Received: June 2018. Revised: November 2018. Published: May 2019.

In this paper, we propose two classes of approximations to the cardinality function via the Moreau envelope of the $ \ell_{1} $ norm. We show that these approximations are good merit functions for sparsity and are essentially the truncated $ \ell_{1} $ norm and the truncated $ \ell_{2} $ norm. Moreover, we apply the approximations to sparse signal recovery problems and derive new weights for reweighted $ \ell_{1} $ minimization and reweighted least squares to find sparse solutions of underdetermined linear systems of equations. Finally, we present numerical experiments that illustrate our results.
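For orientation, recall the standard definition of the Moreau envelope of the $ \ell_{1} $ norm (a textbook fact stated here for context; the specific constructions used in the paper are given there): for a parameter $ \gamma>0 $,
$$ e_{\gamma}(\mathbf{x}) \;=\; \min_{\mathbf{u}\in\mathbb{R}^{n}}\Big\{ \Vert\mathbf{u}\Vert_{1}+\tfrac{1}{2\gamma}\Vert\mathbf{x}-\mathbf{u}\Vert_{2}^{2}\Big\} \;=\; \sum_{i = 1}^{n}h_{\gamma}(x_{i}), \qquad h_{\gamma}(t) \;=\; \begin{cases}\dfrac{t^{2}}{2\gamma}, & |t|\leq\gamma,\\[1mm] |t|-\dfrac{\gamma}{2}, & |t|>\gamma,\end{cases} $$
i.e. a sum of componentwise Huber functions. The two approximations of $ \Vert\mathbf{x}\Vert_{0} $ studied in the paper are built from this envelope; their exact forms (the truncated $ \ell_{1} $ and truncated $ \ell_{2} $ norms mentioned in the abstract) are specified in the paper itself.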

Citation: Jun Wang, Xing Tao Wang. Sparse signal reconstruction via the approximations of $ \ell_{0} $ quasinorm. Journal of Industrial & Management Optimization, 2020, 16 (4) : 1907-1925. doi: 10.3934/jimo.2019035
References:
[1] Q. Berthet and P. Rigollet, Optimal detection of sparse principal components in high dimension, Ann. Statist., 41 (2013), 1780-1815. doi: 10.1214/13-AOS1127.
[2] Md. Z. A. Bhotto, M. O. Ahmad and M. N. S. Swamy, An improved fast iterative shrinkage thresholding algorithm for image deblurring, SIAM J. Imaging Sci., 8 (2015), 1640-1657. doi: 10.1137/140970537.
[3] A. M. Bruckstein, D. L. Donoho and M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images, SIAM Rev., 51 (2009), 34-81. doi: 10.1137/060657704.
[4] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math., 9 (2009), 717-772. doi: 10.1007/s10208-009-9045-5.
[5] E. J. Candès, J. Romberg and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inf. Theory, 52 (2006), 489-509. doi: 10.1109/TIT.2005.862083.
[6] E. J. Candès, J. Romberg and T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Comm. Pure Appl. Math., 59 (2006), 1207-1223. doi: 10.1002/cpa.20124.
[7] E. J. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inf. Theory, 51 (2005), 4203-4215. doi: 10.1109/TIT.2005.858979.
[8] E. J. Candès and T. Tao, Near-optimal signal recovery from random projections: Universal encoding strategies?, IEEE Trans. Inf. Theory, 52 (2006), 5406-5425. doi: 10.1109/TIT.2006.885507.
[9] E. J. Candès, M. Wakin and S. Boyd, Enhancing sparsity by reweighted $\ell_{1}$ minimization, J. Fourier Anal. Appl., 14 (2008), 877-905. doi: 10.1007/s00041-008-9045-x.
[10] R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization, IEEE Signal Process. Lett., 14 (2007), 707-710. doi: 10.1109/LSP.2007.898300.
[11] R. Chartrand and W. Yin, Iteratively reweighted algorithms for compressive sensing, in Proc. Int. Conf. Acoust., Speech, Signal Process., 2008, 3869-3872. doi: 10.1109/ICASSP.2008.4518498.
[12] S. S. Chen, D. L. Donoho and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM Rev., 43 (2001), 129-159. doi: 10.1137/S003614450037906X.
[13] C. A. Metzler, A. Maleki and R. G. Baraniuk, From denoising to compressed sensing, IEEE Trans. Inf. Theory, 62 (2016), 5117-5144. doi: 10.1109/TIT.2016.2556683.
[14] A. Cohen, W. Dahmen and R. DeVore, Compressed sensing and best $k$-term approximation, J. Amer. Math. Soc., 22 (2009), 211-231. doi: 10.1090/S0894-0347-08-00610-3.
[15] I. Daubechies, M. Defrise and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. Pure Appl. Math., 57 (2004), 1413-1457. doi: 10.1002/cpa.20042.
[16] D. L. Donoho, Compressed sensing, IEEE Trans. Inf. Theory, 52 (2006), 1289-1306. doi: 10.1109/TIT.2006.871582.
[17] E. Esser, Y. F. Lou and J. Xin, A method for finding structured sparse solutions to nonnegative least squares problems with applications, SIAM J. Imaging Sci., 6 (2013), 2010-2046. doi: 10.1137/13090540X.
[18] J. Fan and R. Li, Variable selection via nonconcave penalized likelihood and its oracle properties, J. Amer. Statist. Assoc., 96 (2001), 1348-1360. doi: 10.1198/016214501753382273.
[19] M. Fornasier, S. Peter, H. Rauhut and S. Worm, Conjugate gradient acceleration of iteratively re-weighted least squares methods, Comput. Optim. Appl., 65 (2016), 205-259. doi: 10.1007/s10589-016-9839-8.
[20] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing, Birkhäuser, Basel, 2013. doi: 10.1007/978-0-8176-4948-7.
[21] S. Foucart and M. Lai, Sparsest solutions of underdetermined linear systems via $\ell_{q}$-minimization for $0 < q \leq 1$, Appl. Comput. Harmon. Anal., 26 (2009), 395-407. doi: 10.1016/j.acha.2008.09.001.
[22] I. F. Gorodnitsky and B. D. Rao, Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm, IEEE Trans. Signal Process., 45 (1997), 600-616. doi: 10.1109/78.558475.
[23] J. Huang, Y. Jiao, B. Jin, J. Liu, X. Lu and C. Yang, A unified primal dual active set algorithm for nonconvex sparse recovery, preprint, arXiv:1310.1147.
[24] J. Huang, Y. Jiao, Y. Liu and X. Lu, A constructive approach to $L_{0}$-penalized regression, J. Mach. Learn. Res., 19 (2018), Paper No. 10, 37 pp.
[25] Y. Jiao, B. Jin and X. Lu, A primal dual active set with continuation algorithm for the $\ell^{0}$-regularized optimization problem, Appl. Comput. Harmon. Anal., 39 (2015), 400-426. doi: 10.1016/j.acha.2014.10.001.
[26] R. Kueng, H. Rauhut and U. Terstiege, Low rank matrix recovery from rank one measurements, Appl. Comput. Harmon. Anal., 42 (2017), 88-116. doi: 10.1016/j.acha.2015.07.007.
[27] M. Lai and J. Wang, An unconstrained $\ell_{q}$ minimization with $0 < q \leq 1$ for sparse solution of underdetermined linear systems, SIAM J. Optim., 21 (2011), 82-101. doi: 10.1137/090775397.
[28] Z. S. Lu, Iterative reweighted minimization methods for $\ell_{p}$ regularized unconstrained nonlinear programming, Math. Program., 147 (2014), 277-307. doi: 10.1007/s10107-013-0722-4.
[29] Z. S. Lu and X. Li, Sparse recovery via partial regularization: Models, theory, and algorithms, Math. Oper. Res., 43 (2018), 1290-1316. doi: 10.1287/moor.2017.0905.
[30] J. Lv and Y. Fan, A unified approach to model selection and sparse recovery using regularized least squares, Ann. Statist., 37 (2009), 3498-3528. doi: 10.1214/09-AOS683.
[31] H. Mansour and R. Saab, Recovery analysis for weighted $\ell_{1}$-minimization using the null space property, Appl. Comput. Harmon. Anal., 43 (2017), 23-38. doi: 10.1016/j.acha.2015.10.005.
[32] M. F. Duarte, M. A. Davenport, D. Takhar et al., Single-pixel imaging via compressive sampling, IEEE Signal Process. Mag., 25 (2008), 83-91. doi: 10.1109/MSP.2007.914730.
[33] H. Mohimani, M. Babaie-Zadeh and C. Jutten, A fast approach for overcomplete sparse decomposition based on smoothed $\ell^{0}$ norm, IEEE Trans. Signal Process., 57 (2009), 289-301. doi: 10.1109/TSP.2008.2007606.
[34] B. K. Natarajan, Sparse approximate solutions to linear systems, SIAM J. Comput., 24 (1995), 227-234. doi: 10.1137/S0097539792240406.
[35] D. Needell and J. A. Tropp, CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, Appl. Comput. Harmon. Anal., 26 (2009), 301-321. doi: 10.1016/j.acha.2008.07.002.
[36] H. L. Royden and P. M. Fitzpatrick, Real Analysis, 4$^{th}$ edition, Macmillan Publishing Company, New York, 2010.
[37] I. Selesnick, Sparse regularization via convex analysis, IEEE Trans. Signal Process., 65 (2017), 4481-4494. doi: 10.1109/TSP.2017.2711501.
[38] I. Selesnick and M. Farshchian, Sparse signal approximation via nonseparable regularization, IEEE Trans. Signal Process., 65 (2017), 2561-2575. doi: 10.1109/TSP.2017.2669904.
[39] G. W. Stewart, On scaled projections and pseudoinverses, Linear Algebra Appl., 112 (1989), 189-193. doi: 10.1016/0024-3795(89)90594-6.
[40] B. Tian, X. Yang and K. Meng, An interior-point $\ell_{\frac{1}{2}}$-penalty method for inequality constrained nonlinear optimization, J. Ind. Manag. Optim., 12 (2016), 949-973. doi: 10.3934/jimo.2016.12.949.
[41] R. Tibshirani, Regression shrinkage and selection via the lasso, J. Roy. Statist. Soc. Ser. B, 58 (1996), 267-288. doi: 10.1111/j.2517-6161.1996.tb02080.x.
[42] J. A. Tropp and A. C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Trans. Inf. Theory, 53 (2007), 4655-4666. doi: 10.1109/TIT.2007.909108.
[43] J. Wang and X. T. Wang, Sparse approximate reconstruction decomposed by two optimization problems, Circ. Syst. Signal Process., 37 (2018), 2164-2178. doi: 10.1007/s00034-017-0667-6.
[44] D. Wipf and S. Nagarajan, Iterative reweighted $\ell_1$ and $\ell_2$ methods for finding sparse solutions, IEEE J. Select. Topics Signal Process., 4 (2010), 317-329. doi: 10.1109/JSTSP.2010.2042413.
[45] J. Wright, A. Y. Yang and A. Ganesh et al., Robust face recognition via sparse representation, IEEE Trans. Pattern Analysis and Machine Intelligence, 31 (2008), 210-227. doi: 10.1109/AFGR.2008.4813404.
[46] P. Yin, Y. Lou, Q. He and J. Xin, Minimization of $\ell_{1-2}$ for compressed sensing, SIAM J. Sci. Comput., 37 (2015), A536-A563. doi: 10.1137/140952363.
[47] C.-H. Zhang, Nearly unbiased variable selection under minimax concave penalty, Ann. Statist., 38 (2010), 894-942. doi: 10.1214/09-AOS729.
[48] C.-H. Zhang and T. Zhang, A general theory of concave regularization for high-dimensional sparse estimation problems, Statist. Sci., 27 (2012), 576-593. doi: 10.1214/12-STS399.
[49] T. Zhang, Analysis of multi-stage convex relaxation for sparse regularization, J. Mach. Learn. Res., 11 (2010), 1081-1107.
[50] Y.-B. Zhao, RSP-based analysis for sparsest and least $\ell_{1}$-norm solutions to underdetermined linear systems, IEEE Trans. Signal Process., 61 (2013), 5777-5788. doi: 10.1109/TSP.2013.2281030.
[51] Y.-B. Zhao and M. Kocvara, A new computational method for the sparsest solutions to systems of linear equations, SIAM J. Optim., 25 (2015), 1110-1134. doi: 10.1137/140968240.
[52] Y. Zhao and D. Li, Reweighted $\ell_{1}$-minimization for sparse solutions to underdetermined linear systems, SIAM J. Optim., 22 (2012), 1065-1088. doi: 10.1137/110847445.

Figure 1.  Comparison of success rates of finding the $ s $-sparse solution $ \mathbf{y} $ of $ \mathbf{b = Ay} $, where $ \mathbf{A} \in \mathbb{R}^{50\times 250} $ and $ \mathbf{y} \in \mathbb{R}^{250} $. For each $ s $-sparsity, $ 500 $ attempts were made
Figure 2.  Comparison of $ \Vert \mathbf{y}^{k}\Vert_{1} $ and weighted $ \Vert \mathbf{y}^{k}\Vert_{1} $ at each iteration $ k $ with $ \mathbf{A} \in \mathbb{R}^{50\times 250} $, $ \mathbf{y} \in \mathbb{R}^{250} $ and $ p = 0.618, s = 10 $
Figure 3.  Comparison of success rates of finding the $ s $-sparse solution $ \mathbf{y} $ of $ \mathbf{b = Ay} $, where $ \mathbf{A} \in \mathbb{R}^{50\times 250} $ and $ \mathbf{y} \in \mathbb{R}^{250} $. For each $ s $-sparsity, $ 500 $ attempts were made
Figure 4.  Comparison of $ \Vert \mathbf{y}^{k}\Vert_{2}^{2} $ and weighted $ \Vert \mathbf{y}^{k}\Vert_{2}^{2} $ at each iteration $ k $ with $ \mathbf{A} \in \mathbb{R}^{50\times 250} $, $ \mathbf{y} \in \mathbb{R}^{250} $ and $ p = 0.618, s = 10 $
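To make the experimental setup in the captions concrete, the following is a minimal NumPy sketch of a classical (unweighted-in-the-paper's-sense) iteratively reweighted least squares recovery loop on a random instance of the same size ($ \mathbf{A}\in\mathbb{R}^{50\times 250} $, $ s $-sparse $ \mathbf{y} $). It uses the generic weights $ w_i = (y_i^{2}+\varepsilon)^{p/2-1} $ from the IRLS literature, not the new weights proposed in the paper; the function name, parameter values, and success criterion are illustrative assumptions rather than the authors' code.

import numpy as np

def irls_sparse_recovery(A, b, p=0.618, eps=1.0, n_iter=100, tol=1e-8):
    """Generic IRLS sketch for the underdetermined system b = A y.

    Each iteration solves  min sum_i w_i * y_i^2  subject to  A y = b,
    whose closed form is  y = D A^T (A D A^T)^{-1} b  with  D = diag(1 / w_i),
    using the classical weights w_i = (y_i^2 + eps)^(p/2 - 1); these are NOT
    the new weights introduced in the paper.
    """
    y = np.linalg.lstsq(A, b, rcond=None)[0]           # minimum-norm starting point
    for _ in range(n_iter):
        d = (y ** 2 + eps) ** (1.0 - p / 2.0)          # d_i = 1 / w_i
        y_new = d * (A.T @ np.linalg.solve((A * d) @ A.T, b))
        if np.linalg.norm(y_new - y) <= tol * max(np.linalg.norm(y), 1.0):
            return y_new
        y = y_new
        eps = max(eps / 10.0, 1e-12)                   # gradually tighten the smoothing
    return y

# Random instance matching the sizes in the captions: A is 50 x 250, y is 10-sparse.
rng = np.random.default_rng(0)
m, n, s = 50, 250, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
y_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
y_true[support] = rng.standard_normal(s)
b = A @ y_true

y_hat = irls_sparse_recovery(A, b, p=0.618)
rel_err = np.linalg.norm(y_hat - y_true) / np.linalg.norm(y_true)
print("relative error:", rel_err, "success:", rel_err < 1e-3)

A success-rate curve such as those in Figures 1 and 3 would repeat this over 500 random instances per sparsity level; the reweighted $ \ell_{1} $ variant replaces the weighted least squares subproblem above with a weighted $ \ell_{1} $ minimization (a linear program).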