
# An efficient algorithm for non-convex sparse optimization

* Corresponding author: Wanquan Liu

Finding a solution to the zero-norm minimization problem by solving its non-convex relaxation is a popular research topic in the computer vision community. Many algorithms already exist for solving the non-convex relaxation problem; however, most of them are computationally expensive due to the non-Lipschitz property of this problem, and are therefore unsuitable for large-scale engineering problems.

In this paper, we first develop an efficient algorithm that solves the non-convex relaxation problem through a sequence of non-convex sub-problems, building on our recent work. To this end, we reformulate the minimization problem as another non-convex problem with a non-negative constraint. We can then transform this non-Lipschitz continuous non-convex problem with the non-negative constraint into a Lipschitz continuous problem, which allows us to apply some efficient existing algorithms for its solution. Based on the proposed algorithm, an important relation between the solutions of the relaxation problem and the original zero-norm minimization problem is established from a different point of view. The results in this paper reveal two important facts: (ⅰ) the solution of the non-convex relaxation minimization problem converges to the solution of the original problem; (ⅱ) the general non-convex relaxation problem can be solved efficiently via a reformulated higher-dimensional problem with a non-negative constraint. Finally, numerical results are presented to demonstrate the effectiveness of the proposed algorithm.
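The reformulation idea above can be sketched in code. Assuming the relaxation is an $\ell_p$ quasi-norm penalty with $0 < p < 1$, the sign split $x = u - v$ with $u, v \ge 0$, combined with a small smoothing constant $\varepsilon$, turns the non-Lipschitz term $|x_i|^p$ into $(u_i + \varepsilon)^p + (v_i + \varepsilon)^p$, whose gradient is bounded on the non-negative orthant, so plain projected gradient descent applies. The function name, the smoothing constant, and all parameter values below are illustrative assumptions for this sketch, not the authors' exact scheme.

```python
import numpy as np

def lp_sparse_recover(A, b, p=0.5, lam=0.05, eps=1e-3, iters=3000):
    """Hypothetical sketch (not the paper's exact algorithm):
    minimize  ||A x - b||^2 + lam * sum_i |x_i|^p
    via the non-negative split x = u - v (u, v >= 0) and the smoothed
    penalty (t + eps)^p, whose gradient p*(t + eps)^(p-1) is bounded
    on t >= 0, making the sub-problem Lipschitz-smooth."""
    n = A.shape[1]
    u = np.zeros(n)
    v = np.zeros(n)
    L_data = np.linalg.norm(A, 2) ** 2              # ||A||_2^2, data-term curvature
    L_pen = lam * p * (1.0 - p) * eps ** (p - 2.0)  # curvature bound of smoothed penalty
    step = 1.0 / (2.0 * L_data + L_pen)             # conservative projected-gradient step
    for _ in range(iters):
        r = A @ (u - v) - b
        grad_u = 2.0 * (A.T @ r) + lam * p * (u + eps) ** (p - 1.0)
        grad_v = -2.0 * (A.T @ r) + lam * p * (v + eps) ** (p - 1.0)
        u = np.maximum(u - step * grad_u, 0.0)      # projection onto u >= 0
        v = np.maximum(v - step * grad_v, 0.0)      # projection onto v >= 0
    return u - v
```

Because the smoothed penalty has a bounded gradient on the non-negative orthant, the fixed step size above guarantees monotone descent, which is the practical payoff of trading the non-Lipschitz problem for a Lipschitz one.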

Mathematics Subject Classification: Primary: 65L09, 90C30, 65H10, 68W25, 68W01.

Figure 1.  The original signal and the reconstructed signals with $p = 0.8, 0.6, 0.4, 0.2$, respectively

Figure 2.  The original and recovered images with different values of $p$, where SNR denotes the signal-to-noise ratio

Figure 3.  Comparison of the performance of ITM and SMM on large-scale problems. The first row: $p = 0.9$; the second row: $p = 0.7$; the final row: $p = 0.5$. The first column: sparsity; the second column: relative error; the final column: CPU time
