doi: 10.3934/jimo.2022101
Online First


A modified self-adaptive dual ascent method with relaxed stepsize condition for linearly constrained quadratic convex optimization

Yuan Shen, Chang Liu, Yannian Zuo and Xingying Zhang

School of Applied Mathematics, Nanjing University of Finance & Economics, Nanjing 210023, China

*Corresponding author: Yannian Zuo

Received: September 2021. Revised: May 2022. Early access: June 2022.

Fund Project: The first author is supported by the National Natural Science Foundation of China [grant numbers 20BGL028, 19AZD018, 19BGL205 and 17BTQ063] and by the Social Science Foundation of Jiangsu Province [grant number 17ZTB011]

The dual ascent method (DAM) is an effective method for solving linearly constrained convex optimization problems. The classical DAM converges extremely slowly because its stepsize must be kept small; He et al. improved it by relaxing the stepsize condition and introducing a self-adaptive stepsize rule, which speeds up convergence. In this paper, we relax the stepsize condition further while still guaranteeing convergence, provided that the objective function is quadratic. Experiments on both synthetic and real problems show the encouraging performance of the new DAM under the new stepsize condition.
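To make the scheme concrete, the following is a minimal sketch of the classical dual ascent iteration for the quadratic program $ \min_x \frac{1}{2}x^{\top}Qx - c^{\top}x $ subject to $ Ax = b $: with the multiplier $ \lambda $ fixed, the Lagrangian is minimized in closed form, and $ \lambda $ is then updated by an ascent step of length $ \beta $ along the primal residual $ Ax - b $. The adaptive adjustment of $ \beta $ below is a generic residual-based heuristic included only for illustration; it is not the self-adaptive rule of He et al. [12] or the relaxed condition analyzed in this paper, and the function name and parameter defaults are hypothetical.

```python
# Minimal sketch of the classical dual ascent method (DAM) for
#     min_x 0.5*x'Qx - c'x   s.t.   Ax = b,
# with a hypothetical residual-based stepsize heuristic (illustration only;
# not the rule of He et al. or the relaxed condition of this paper).
import numpy as np

def dual_ascent_quadratic(Q, c, A, b, beta=1e-3, tol=1e-4, max_iter=5000):
    m, n = A.shape
    lam = np.zeros(m)                 # Lagrange multiplier for Ax = b
    Q_inv = np.linalg.inv(Q)          # Q is assumed symmetric positive definite
    prev_res = np.inf
    for k in range(max_iter):
        # x-step: argmin_x 0.5*x'Qx - c'x + lam'(Ax - b)  =>  Qx = c - A'lam
        x = Q_inv @ (c - A.T @ lam)
        res = A @ x - b               # primal residual = gradient of the dual function
        res_norm = np.linalg.norm(res)
        if res_norm < tol:
            return x, lam, k
        lam = lam + beta * res        # dual ascent step on the multiplier
        # Heuristic adaptation: grow beta while the residual shrinks, cut it
        # back otherwise. (Classical fixed-step theory requires
        # 0 < beta < 2/lambda_max(A Q^{-1} A'), the Lipschitz bound of the
        # dual gradient for a quadratic objective.)
        beta = beta * 1.1 if res_norm < prev_res else beta * 0.5
        prev_res = res_norm
    return x, lam, max_iter

# Usage on a small synthetic instance:
rng = np.random.default_rng(0)
n, m = 100, 10
M = rng.standard_normal((n, n))
Q = M.T @ M + np.eye(n)               # symmetric positive definite
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x, lam, iters = dual_ascent_quadratic(Q, c, A, b)
```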

Citation: Yuan Shen, Chang Liu, Yannian Zuo, Xingying Zhang. A modified self-adaptive dual ascent method with relaxed stepsize condition for linearly constrained quadratic convex optimization. Journal of Industrial and Management Optimization, doi: 10.3934/jimo.2022101
References:
[1] K. J. Arrow, L. Hurwicz and H. Uzawa, Studies in Linear and Non-Linear Programming, Stanford Mathematical Studies in the Social Sciences, II, Stanford University Press, Stanford, Calif., 1958.

[2] S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn., 3 (2011), 1-122. doi: 10.1561/2200000016.

[3] D. P. Bertsekas, P. A. Hosein and P. Tseng, Relaxation methods for network flow problems with convex arc costs, SIAM J. Control Optim., 25 (1987), 1219-1243. doi: 10.1137/0325067.

[4] L. M. Brègman, The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming, U.S.S.R. Comput. Math. Math. Phys., 7 (1967), 200-217. doi: 10.1016/0041-5553(67)90040-7.

[5] J.-F. Cai, E. J. Candès and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20 (2010), 1956-1982. doi: 10.1137/080738970.

[6] J.-F. Cai, S. Osher and Z. Shen, Linearized Bregman iterations for compressed sensing, Math. Comp., 78 (2009), 1515-1536. doi: 10.1090/S0025-5718-08-02189-3.

[7] J.-F. Cai, S. Osher and Z. Shen, Linearized Bregman iterations for frame-based image deblurring, SIAM J. Imaging Sci., 2 (2009), 226-252. doi: 10.1137/080733371.

[8] R. W. Cottle, S. G. Duvall and K. Zikan, A Lagrangian relaxation algorithm for the constrained matrix problem, Naval Res. Logist. Quart., 33 (1986), 55-76. doi: 10.1002/nav.3800330106.

[9] Y.-H. Dai, D. Han, X. Yuan and W. Zhang, A sequential updating scheme of the Lagrange multiplier for separable convex programming, Math. Comp., 86 (2017), 315-343. doi: 10.1090/mcom/3104.

[10] T. Goldstein and S. Osher, The split Bregman method for $ \text{L}_1$-regularized problems, SIAM J. Imaging Sci., 2 (2009), 323-343. doi: 10.1137/080725891.

[11] A. A. Goldstein, Convex programming in Hilbert space, Bull. Amer. Math. Soc., 70 (1964), 709-710. doi: 10.1090/S0002-9904-1964-11178-2.

[12] B. S. He, Self-adaptive dual ascent method for linearly constrained convex optimization. Available from: http://math.nju.edu.cn/hebma/.

[13] E. S. Levitin and B. T. Polyak, Constrained minimization methods, U.S.S.R. Comput. Math. Math. Phys., 6 (1966), 1-50. doi: 10.1016/0041-5553(66)90114-5.

[14] Z.-Q. Luo and P. Tseng, On the convergence rate of dual ascent methods for linearly constrained convex minimization, Math. Oper. Res., 18 (1993), 846-867. doi: 10.1287/moor.18.4.846.

[15] S. Osher, Y. Mao, B. Dong and W. Yin, Fast linearized Bregman iteration for compressive sensing and sparse denoising, Commun. Math. Sci., 8 (2010), 93-111. doi: 10.4310/CMS.2010.v8.n1.a6.

[16] A. Ohuchi and I. Kaji, Lagrangian dual coordinatewise maximization algorithm for network transportation problems with quadratic costs, Networks, 14 (1984), 515-530. doi: 10.1002/net.3230140404.

[17] M. Tao and X. Yuan, Accelerated Uzawa methods for convex optimization, Math. Comp., 86 (2017), 1821-1845. doi: 10.1090/mcom/3145.

[18] P. Tseng, On accelerated proximal gradient methods for convex-concave optimization, SIAM J. Optim. Available from: https://www.csie.ntu.edu.tw/~b97058/tseng/papers/apgm.pdf.

[19] J. A. Ventura, Computational development of a Lagrangian dual approach for quadratic networks, Networks, 21 (1991), 469-485. doi: 10.1002/net.3230210407.

[20] S. A. Zenios and J. M. Mulvey, Relaxation techniques for strictly convex network problems, Ann. Oper. Res., 5 (1986), 517-538. doi: 10.1007/BF02022089.

[21] X. Y. Zhang, Dual Ascent Method for Some Convex Optimization Problems, Master's Thesis, Nanjing University, 2012.



Figure 1.  Iterations vs. $ \beta $
Figure 2.  Iterations vs. $ n $ for $ n\in[500,3000] $, with $ m = 10 $ (left), $ m = 20 $ (middle) and $ m = 50 $ (right)
Table 1.  Computation time with $ \varepsilon = 10^{-4} $

  $ n $   NDAM     MDAM     IUZAWA   MDAM/NDAM   MDAM/IUZAWA
  100     0.019    0.014    0.025    73.68%      56.00%
  200     0.072    0.042    0.091    58.33%      46.15%
  500     0.647    0.360    0.810    55.64%      44.44%
  800     2.278    1.281    2.916    56.23%      43.93%
  1000    4.847    2.962    6.194    61.11%      47.82%
  1500    17.420   10.215   22.498   58.64%      45.40%
  2000    59.707   35.097   81.089   58.78%      43.28%
Table 2.  Computing time, iteration number and $ \beta_k $ for varying $ (m, n_i) $

  $ (m, n_i) $   NDAM                          MDAM
                 Iter    CPU      $ \beta_k $  Iter   CPU     $ \beta_k $
  (10, 100)      217.0   0.009    1.04e-3      91.0   0.005   1.36e-3
  (10, 200)      204.2   0.018    3.52e-4      94.3   0.008   4.23e-4
  (10, 500)      112.7   0.064    2.05e-4      50.9   0.027   2.64e-4
  (10, 800)      140.8   0.274    8.98e-5      60.0   0.107   1.15e-4
  (10, 1000)     125.1   0.402    5.95e-5      67.7   0.194   8.91e-5
  (10, 1500)     286.9   2.199    3.45e-5      82.3   0.567   4.51e-5
  (10, 2000)     125.4   1.638    2.75e-5      75.0   0.892   3.69e-5
  (10, 2500)     157.4   3.051    3.11e-5      74.7   1.357   3.85e-5
  (10, 3000)     200.3   5.347    2.02e-5      72.4   1.796   2.41e-5
  (10, 4000)     229.3   11.560   1.26e-5      73.6   3.451   1.57e-5
  (10, 5000)     155.8   11.202   1.03e-5      62.4   4.438   1.24e-5

  $ (m, n_i) $   IUZAWA            CPPA              ALM
                 Iter    CPU       Iter    CPU       Iter   CPU
  (10, 100)      238.2   0.034     724.4   0.018     49.5   0.003
  (10, 200)      271.3   0.092     481.7   0.028     31.9   0.003
  (10, 500)      131.9   0.227     331.9   0.141     27.1   0.013
  (10, 800)      160.6   1.254     278.5   0.422     35.4   0.053
  (10, 1000)     178.6   2.252     284.4   0.840     39.3   0.094
  (10, 1500)     300.7   8.410     224.3   3.048     44.2   0.262
  (10, 2000)     160.6   7.979     231.0   6.492     42.9   0.432
  (10, 2500)     213.7   16.895    250.7   11.667    44.6   0.667
  (10, 3000)     193.7   18.669    254.7   7.367     41.7   0.908
  (10, 4000)     271.8   43.236    233.2   14.318    43.7   1.801
  (10, 5000)     194.7   41.244    359.7   25.551    45.1   2.821
