
# On convergence analysis of dual proximal-gradient methods with approximate gradient for a class of nonsmooth convex minimization problems

In this paper, we consider the problem of minimizing a nonsmooth convex objective which is the sum of a proper, nonsmooth, closed, strongly convex extended real-valued function and a proper, nonsmooth, closed, convex extended real-valued function given as the composition of a proper closed convex function with a nonzero affine map. We first derive its dual problem, which consists of minimizing the sum of a smooth convex function and a proper closed nonsmooth convex function. We then apply first-order proximal-gradient methods to the dual problem, where an error is present in the computation of the gradient of the smooth term, and we present a dual proximal-gradient method with approximate gradient. We show that when the errors are summable, although the sequence of best dual objective values generated by the inexact proximal-gradient method converges to the optimal value at the rate $O(\frac{1}{k})$, the primal sequence converges at the rate $O(\frac{1}{\sqrt{k}})$.
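The setting described above can be sketched on a toy instance. The concrete problem, variable names, and error model below are illustrative assumptions, not taken from the paper: we take the strongly convex term $f(x)=\frac{1}{2}\|x-b\|^2$ and the composite term $g(Ax)=\|Ax\|_1$, so the dual is the smooth function $q(y)=\frac{1}{2}\|A^Ty\|^2-\langle A^Ty,b\rangle$ minimized over the box $\|y\|_\infty\le 1$ (whose proximal operator is a projection), and the gradient $\nabla q(y)=A(A^Ty-b)$ is perturbed by a summable error sequence:

```python
import numpy as np

# Illustrative instance (not from the paper):
#   primal:  min_x  0.5*||x - b||^2 + ||A x||_1      (f strongly convex)
#   dual:    min_y  0.5*||A^T y||^2 - <A^T y, b>  subject to  ||y||_inf <= 1
# grad q(y) = A(A^T y - b); prox of the box indicator is a projection (clip).

rng = np.random.default_rng(0)
m, n = 15, 10
A = rng.standard_normal((m, n))
b = rng.standard_normal(n)

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad q

def primal_obj(x):
    return 0.5 * np.dot(x - b, x - b) + np.abs(A @ x).sum()

def dual_prox_grad(iters, noise_scale):
    """Dual proximal-gradient iteration with an additive gradient error e_k."""
    y = np.zeros(m)
    for k in range(1, iters + 1):
        grad = A @ (A.T @ y - b)
        e_k = noise_scale / k**2 * rng.standard_normal(m)   # summable: sum ||e_k|| < inf
        y = np.clip(y - (grad + e_k) / L, -1.0, 1.0)        # prox step = projection
    return b - A.T @ y                                      # primal recovery: x = grad f*(-A^T y)

x_exact = dual_prox_grad(2000, 0.0)     # exact gradients
x_inexact = dual_prox_grad(2000, 1.0)   # summable gradient errors
print(primal_obj(x_exact), primal_obj(x_inexact))
```

Because $f$ is strongly convex, the primal minimizer is unique and both runs recover nearby primal points; with the $1/k^2$ (summable) error schedule, the inexact run tracks the exact one, consistent with the $O(1/\sqrt{k})$ primal rate discussed in the abstract.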
Mathematics Subject Classification: Primary: 90C25, 65K05; Secondary: 90C59, 49M29.

