In this paper we consider minimizing the sum of local convex component functions distributed over a multi-agent network. We first extend Nesterov's random gradient-free method to the incremental setting. We then propose incremental gradient-free methods that select the component function in either a cyclic or a randomized order. We provide convergence and iteration-complexity analyses of the proposed methods under suitable stepsize rules. To illustrate the proposed methods, extensive numerical results on a distributed $l_1$-regression problem are presented. Compared with existing incremental subgradient-based methods, our methods require only evaluations of function values rather than subgradients, which may be preferable in practical engineering applications.
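The two ingredients named above can be combined in a short sketch: a Nesterov-style two-point random gradient-free estimate (which uses only function values) applied incrementally, cycling through the components of an $l_1$-regression objective $\sum_i |a_i^T x - b_i|$. This is an illustrative sketch of the general technique, not the paper's exact algorithm; the stepsize `alpha`, smoothing parameter `mu`, and iteration counts are placeholder choices.

```python
import numpy as np

def gf_oracle(f, x, mu, rng):
    # Two-point random gradient-free estimate (Nesterov-style):
    # g = (f(x + mu*u) - f(x)) / mu * u, with u ~ N(0, I).
    # E[g] approximates the (sub)gradient of a smoothed version of f.
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def incremental_gf(A, b, cycles=300, alpha=5e-3, mu=1e-6, seed=0):
    # Minimize sum_i |A[i] @ x - b[i]| (l1-regression) using only
    # function values, visiting the components in a cyclic order.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(cycles):
        for i in range(m):  # cyclic selection of component functions
            f_i = lambda z: abs(A[i] @ z - b[i])
            x = x - alpha * gf_oracle(f_i, x, mu, rng)
    return x
```

A randomized-order variant replaces the inner loop index with a uniformly sampled component, `i = rng.integers(m)`; the rest of the update is unchanged.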
[Figure 3(a): Function value error versus number of cycles]
[1] A. M. Bagirov, M. Ghosh and D. Webb, A derivative-free method for linearly constrained nonsmooth optimization, J. Ind. Manag. Optim., 2 (2006), 319-338. doi: 10.3934/jimo.2006.2.319.
[2] D. P. Bertsekas, Stochastic optimization problems with nondifferentiable cost functionals, J. Optim. Theory Appl., 12 (1973), 218-231. doi: 10.1007/BF00934819.
[3] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, Athena Scientific, Belmont, MA, 1989.
[4] D. P. Bertsekas, A. Nedić and E. Ozdaglar, Convex Analysis and Optimization, Athena Scientific, Belmont, MA, 2003.
[5] D. P. Bertsekas, Incremental proximal methods for large scale convex optimization, Math. Program. B, 129 (2011), 163-195. doi: 10.1007/s10107-011-0472-0.
[6] A. R. Conn, K. Scheinberg and L. N. Vicente, Introduction to Derivative-Free Optimization, MPS-SIAM Series on Optimization, SIAM, Philadelphia, 2009. doi: 10.1137/1.9780898718768.
[7] J. C. Duchi, A. Agarwal and M. J. Wainwright, Dual averaging for distributed optimization: Convergence analysis and network scaling, IEEE Trans. Autom. Control, 57 (2012), 592-606. doi: 10.1109/TAC.2011.2161027.
[8] J. C. Duchi, P. L. Bartlett and M. J. Wainwright, Randomized smoothing for stochastic optimization, SIAM J. Optim., 22 (2012), 674-701. doi: 10.1137/110831659.
[9] X. X. Huang, X. Q. Yang and K. L. Teo, A smoothing scheme for optimization problems with max-min constraints, J. Ind. Manag. Optim., 3 (2007), 209-222. doi: 10.3934/jimo.2007.3.209.
[10] J. Hiriart-Urruty and C. Lemaréchal, Convex Analysis and Minimization Algorithms Ⅰ, Springer, Berlin, 1996. doi: 10.1007/978-3-662-02796-7.
[11] X. Zhang, C. Wu, J. Li, X. Wang, Z. Yang, J. M. Lee and K. H. Jung, Binary artificial algae algorithm for multidimensional knapsack problems, Applied Soft Computing, 43 (2016), 583-595. doi: 10.1016/j.asoc.2016.02.027.
[12] B. Johansson, M. Rabi and M. Johansson, A randomized incremental subgradient method for distributed optimization in networked systems, SIAM J. Optim., 20 (2009), 1157-1170. doi: 10.1137/08073038X.
[13] K. C. Kiwiel, Convergence of approximate and incremental subgradient methods for convex optimization, SIAM J. Optim., 14 (2004), 807-840. doi: 10.1137/S1052623400376366.
[14] J. Y. Li, C. Z. Wu, Z. Y. Wu and Q. Long, Gradient-free method for nonsmooth distributed optimization, J. Glob. Optim., 61 (2015), 325-340. doi: 10.1007/s10898-014-0174-2.
[15] J. Y. Li, C. Z. Wu, Z. Y. Wu, Q. Long and X. Y. Wang, Distributed proximal-gradient method for convex optimization with inequality constraints, ANZIAM J., 56 (2014), 160-178. doi: 10.1017/S1446181114000273.
[16] A. Nedić and D. P. Bertsekas, Convergence rate of incremental subgradient algorithms, in Stochastic Optimization: Algorithms and Applications (eds. S. Uryasev and P. M. Pardalos), Applied Optimization, 54, Springer, 2001, 223-264. doi: 10.1007/978-1-4757-6594-6_11.
[17] A. Nedić and D. P. Bertsekas, Incremental subgradient methods for nondifferentiable optimization, SIAM J. Optim., 12 (2001), 109-138. doi: 10.1137/S1052623499362111.
[18] A. Nedić and A. Ozdaglar, Distributed subgradient methods for multi-agent optimization, IEEE Trans. Autom. Control, 54 (2009), 48-61. doi: 10.1109/TAC.2008.2009515.
[19] Y. Nesterov, Random Gradient-Free Minimization of Convex Functions, Technical report, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain, January 2011. Available from: http://www.ecore.be/DPs/dp_1297333890.pdf. doi: 10.1007/s10208-015-9296-2.
[20] B. T. Polyak and J. Tsypkin, Robust identification, Automatica, 16 (1980), 53-63. doi: 10.1016/0005-1098(80)90086-2.
[21] M. G. Rabbat and R. D. Nowak, Quantized incremental algorithms for distributed optimization, IEEE J. Sel. Areas Commun., 23 (2005), 798-808. doi: 10.1109/JSAC.2005.843546.
[22] S. S. Ram, A. Nedić and V. V. Veeravalli, Incremental stochastic subgradient algorithms for convex optimization, SIAM J. Optim., 20 (2009), 691-717. doi: 10.1137/080726380.
[23] Q. J. Shi, C. He and L. G. Jiang, Normalized incremental subgradient algorithm and its application, IEEE Trans. Signal Process., 57 (2009), 3759-3774. doi: 10.1109/TSP.2009.2024901.
[24] R. L. Sheu, M. J. Ting and I. L. Wang, Maximum flow problem in the distribution network, J. Ind. Manag. Optim., 2 (2006), 237-254. doi: 10.3934/jimo.2006.2.237.
[25] M. V. Solodov, Incremental gradient algorithms with stepsizes bounded away from zero, Comput. Optim. Appl., 11 (1998), 28-35. doi: 10.1023/A:1018366000512.
[26] D. M. Yuan, S. Y. Xu and J. W. Lu, Gradient-free method for distributed multi-agent optimization via push-sum algorithms, Int. J. Robust Nonlinear Control, 25 (2015), 1569-1580. doi: 10.1002/rnc.3164.
[27] Q. Long and C. Wu, A hybrid method combining genetic algorithm and Hooke-Jeeves method for constrained global optimization, J. Ind. Manag. Optim., 10 (2014), 1279-1296. doi: 10.3934/jimo.2014.10.1279.
[28] G. H. Yu, A derivative-free method for solving large-scale nonlinear systems of equations, J. Ind. Manag. Optim., 6 (2010), 149-160. doi: 10.3934/jimo.2010.6.149.
[29] C. J. Yu, K. L. Teo, L. S. Zhang and Y. Q. Bai, A new exact penalty function method for continuous inequality constrained optimization problems, J. Ind. Manag. Optim., 6 (2010), 895-910. doi: 10.3934/jimo.2010.6.895.
[30] F. Yousefian, A. Nedić and U. V. Shanbhag, On stochastic gradient and subgradient methods with adaptive steplength sequences, Automatica, 48 (2012), 56-67. doi: 10.1016/j.automatica.2011.09.043.
[31] J. Li, G. Chen, Z. Dong and Z. Wu, A fast dual proximal-gradient method for separable convex optimization with linear coupled constraints, Comput. Optim. Appl., 64 (2016), 671-697. doi: 10.1007/s10589-016-9826-0.