Journal of Industrial & Management Optimization, April 2017, 13(2): 649-658. doi: 10.3934/jimo.2016038

A class of descent four–term extension of the Dai–Liao conjugate gradient method based on the scaled memoryless BFGS update

Saman Babaie–Kafaki 1 and Reza Ghanbari 2

1. Department of Mathematics, Faculty of Mathematics, Statistics and Computer Science, Semnan University, P.O. Box: 35195–363, Semnan, Iran

2. Faculty of Mathematical Sciences, Ferdowsi University of Mashhad, P.O. Box: 9177948953, Mashhad, Iran

* Corresponding author: Saman Babaie–Kafaki

Received: November 2014. Revised: December 2015. Published: May 2016.

Hybridizing the three–term conjugate gradient method proposed by Zhang et al. with the nonlinear conjugate gradient method proposed by Dai and Liao based on the scaled memoryless BFGS update, a one–parameter class of four–term conjugate gradient methods is proposed. It is shown that the suggested class possesses the sufficient descent property without any convexity assumption on the objective function. A brief global convergence analysis is given for uniformly convex objective functions. Results of numerical comparisons are reported; in the sense of the Dolan–Moré performance profile, they demonstrate the efficiency of a method from the proposed class.
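For context, the two building blocks being hybridized can be sketched as follows; this is background recalled from [10] and [34], not the paper's final four–term formula, whose exact form and parameter are defined in the paper itself. With s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k and a parameter t > 0, the Dai–Liao direction [10] is

    d_{k+1} = -g_{k+1} + \frac{g_{k+1}^T (y_k - t s_k)}{d_k^T y_k} \, d_k,

while the three–term direction of Zhang et al. [34] is

    d_{k+1} = -g_{k+1} + \frac{g_{k+1}^T y_k}{d_k^T y_k} \, d_k - \frac{g_{k+1}^T d_k}{d_k^T y_k} \, y_k,

which satisfies g_{k+1}^T d_{k+1} = -\|g_{k+1}\|^2, so the sufficient descent condition holds independently of the line search and of any convexity assumption; the proposed four–term class is designed to retain this property.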

Citation: Saman Babaie–Kafaki, Reza Ghanbari. A class of descent four–term extension of the Dai–Liao conjugate gradient method based on the scaled memoryless BFGS update. Journal of Industrial & Management Optimization, 2017, 13 (2) : 649-658. doi: 10.3934/jimo.2016038
References:
[1] N. Andrei, Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization, European J. Oper. Res., 204 (2010), 410-420. doi: 10.1016/j.ejor.2009.11.030.
[2] S. Babaie–Kafaki, On the sufficient descent condition of the Hager–Zhang conjugate gradient methods, 4OR, 12 (2014), 285-292. doi: 10.1007/s10288-014-0255-6.
[3] S. Babaie–Kafaki and R. Ghanbari, A descent family of Dai–Liao conjugate gradient methods, Optim. Methods Softw., 29 (2014), 583-591. doi: 10.1080/10556788.2013.833199.
[4] S. Babaie–Kafaki and R. Ghanbari, Two modified three–term conjugate gradient methods with sufficient descent property, Optim. Lett., 8 (2014), 2285-2297. doi: 10.1007/s11590-014-0736-8.
[5] S. Babaie–Kafaki and R. Ghanbari, A hybridization of the Hestenes–Stiefel and Dai–Yuan conjugate gradient methods based on a least–squares approach, Optim. Methods Softw., 30 (2015), 673-681. doi: 10.1080/10556788.2014.966825.
[6] S. Babaie–Kafaki, R. Ghanbari and N. Mahdavi–Amiri, Two new conjugate gradient methods based on modified secant equations, J. Comput. Appl. Math., 234 (2010), 1374-1386. doi: 10.1016/j.cam.2010.01.052.
[7] J. Barzilai and J. Borwein, Two–point stepsize gradient methods, IMA J. Numer. Anal., 8 (1988), 141-148. doi: 10.1093/imanum/8.1.141.
[8] C. Broyden, The convergence of a class of double–rank minimization algorithms. II. The new algorithm, J. Inst. Math. Appl., 6 (1970), 222-231. doi: 10.1093/imamat/6.3.222.
[9] Y. Dai and C. Kou, A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search, SIAM J. Optim., 23 (2013), 296-320. doi: 10.1137/100813026.
[10] Y. Dai and L. Liao, New conjugacy conditions and related nonlinear conjugate gradient methods, Appl. Math. Optim., 43 (2001), 87-101. doi: 10.1007/s002450010019.
[11] E. Dolan and J. Moré, Benchmarking optimization software with performance profiles, Math. Program., 91 (2002), 201-213. doi: 10.1007/s101070100263.
[12] R. Fletcher, A new approach to variable metric algorithms, Comput. J., 13 (1970), 317-322. doi: 10.1093/comjnl/13.3.317.
[13] J. Ford and I. Moghrabi, Multi–step quasi–Newton methods for optimization, J. Comput. Appl. Math., 50 (1994), 305-323. doi: 10.1016/0377-0427(94)90309-3.
[14] J. Ford, Y. Narushima and H. Yabe, Multi–step nonlinear conjugate gradient methods for unconstrained minimization, Comput. Optim. Appl., 40 (2008), 191-216. doi: 10.1007/s10589-007-9087-z.
[15] D. Goldfarb, A family of variable metric methods derived by variational means, Math. Comp., 24 (1970), 23-26. doi: 10.1090/S0025-5718-1970-0258249-6.
[16] N. Gould, D. Orban and P. Toint, CUTEr: A constrained and unconstrained testing environment, revisited, ACM Trans. Math. Softw., 29 (2003), 353-372. doi: 10.1145/962437.962438.
[17] W. Hager and H. Zhang, A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM J. Optim., 16 (2005), 170-192. doi: 10.1137/030601880.
[18] W. Hager and H. Zhang, Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent, ACM Trans. Math. Softw., 32 (2006), 113-137. doi: 10.1145/1132973.1132979.
[19] W. Hager and H. Zhang, A survey of nonlinear conjugate gradient methods, Pac. J. Optim., 2 (2006), 35-58.
[20] M. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Research Nat. Bur. Standards, 49 (1952), 409-436. doi: 10.6028/jres.049.044.
[21] D. Li and M. Fukushima, A modified BFGS method and its global convergence in nonconvex minimization, J. Comput. Appl. Math., 129 (2001), 15-35. doi: 10.1016/S0377-0427(00)00540-9.
[22] G. Li, C. Tang and Z. Wei, New conjugacy condition and related new conjugate gradient methods for unconstrained optimization, J. Comput. Appl. Math., 202 (2007), 523-539. doi: 10.1016/j.cam.2006.03.005.
[23] D. Liu and J. Nocedal, On the limited memory BFGS method for large–scale optimization, Math. Program., 45 (1989), 503-528. doi: 10.1007/BF01589116.
[24] S. Oren, Self–scaling variable metric (SSVM) algorithms. II. Implementation and experiments, Management Sci., 20 (1974), 863-874.
[25] S. Oren and D. Luenberger, Self–scaling variable metric (SSVM) algorithms. I. Criteria and sufficient conditions for scaling a class of algorithms, Management Sci., 20 (1973/74), 845-862.
[26] S. Oren and E. Spedicato, Optimal conditioning of self–scaling variable metric algorithms, Math. Program., 10 (1976), 70-90. doi: 10.1007/BF01580654.
[27] D. Shanno, Conditioning of quasi–Newton methods for function minimization, Math. Comp., 24 (1970), 647-656. doi: 10.1090/S0025-5718-1970-0274029-X.
[28] K. Sugiki, Y. Narushima and H. Yabe, Globally convergent three–term conjugate gradient methods that use secant conditions and generate descent search directions for unconstrained optimization, J. Optim. Theory Appl., 153 (2012), 733-757. doi: 10.1007/s10957-011-9960-x.
[29] W. Sun and Y. Yuan, Optimization Theory and Methods: Nonlinear Programming, Springer, New York, 2006.
[30] Z. Wei, G. Li and L. Qi, New quasi–Newton methods for unconstrained optimization problems, Appl. Math. Comput., 175 (2006), 1156-1188. doi: 10.1016/j.amc.2005.08.027.
[31] H. Yabe and M. Takano, Global convergence properties of nonlinear conjugate gradient methods with modified secant condition, Comput. Optim. Appl., 28 (2004), 203-225. doi: 10.1023/B:COAP.0000026885.81997.88.
[32] Y. Yuan, A modified BFGS algorithm for unconstrained optimization, IMA J. Numer. Anal., 11 (1991), 325-332. doi: 10.1093/imanum/11.3.325.
[33] J. Zhang, N. Deng and L. Chen, New quasi–Newton equation and related methods for unconstrained optimization, J. Optim. Theory Appl., 102 (1999), 147-167. doi: 10.1023/A:1021898630001.
[34] L. Zhang, W. Zhou and D. Li, Some descent three–term conjugate gradient methods and their global convergence, Optim. Methods Softw., 22 (2007), 697-711. doi: 10.1080/10556780701223293.
[35] W. Zhou and L. Zhang, A nonlinear conjugate gradient method based on the MBFGS secant condition, Optim. Methods Softw., 21 (2006), 707-714. doi: 10.1080/10556780500137041.

Figure 1.  Performance profiles for NMDL1, NMDL2, NMDL3 and MDL based on the total number of function and gradient evaluations
Figure 2.  Performance profiles for NMDL1, NMDL2, NMDL3 and MDL based on CPU time
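The curves in Figures 1 and 2 are Dolan–Moré performance profiles [11]: for each solver s, rho_s(tau) is the fraction of test problems on which the cost of s (evaluation count or CPU time) is within a factor tau of the best cost achieved by any solver. The following minimal Python sketch (assuming NumPy; the function name and the sample data are hypothetical, not the authors' code) illustrates how such a profile is computed from a problems-by-solvers cost matrix.

import numpy as np

def performance_profile(costs, taus):
    """costs: (n_problems, n_solvers) array of positive costs, with np.inf
    marking a failure; taus: increasing factors at which to evaluate rho.
    Returns rho with shape (len(taus), n_solvers)."""
    best = np.min(costs, axis=1, keepdims=True)        # best cost on each problem
    ratios = costs / best                               # performance ratios r_{p,s}
    n_problems = costs.shape[0]
    # rho_s(tau): fraction of problems solver s solves within factor tau of the best
    rho = np.array([(ratios <= tau).sum(axis=0) / n_problems for tau in taus])
    return rho

# Hypothetical usage: three solvers on four problems; inf marks a failed run.
costs = np.array([[1.0, 1.2, 2.0],
                  [3.0, 2.5, np.inf],
                  [0.5, 0.7, 0.6],
                  [4.0, 4.0, 5.0]])
taus = np.linspace(1.0, 3.0, 50)
rho = performance_profile(costs, taus)   # each column can be plotted against taus

A solver whose curve lies highest, particularly near tau = 1, is the most efficient on the test set in the Dolan–Moré sense.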