April  2018, 14(2): 785-801. doi: 10.3934/jimo.2017075

A modified scaled memoryless BFGS preconditioned conjugate gradient algorithm for nonsmooth convex optimization

Department of Applied Mathematics, Hainan University, Haikou 570228, China

* Corresponding author: Yigui Ou

Received: January 2017. Revised: March 2017. Published: September 2017.

Fund Project: This work was supported by the NNSF of China (No. 11261015) and the NSF of Hainan Province (Nos. 2016CXTD004, 111001 and 117011).

This paper presents a nonmonotone scaled memoryless BFGS preconditioned conjugate gradient algorithm for solving nonsmooth convex optimization problems. The method combines the scaled memoryless BFGS preconditioned conjugate gradient approach with a nonmonotone line search technique and the Moreau-Yosida regularization, and it uses approximate function and gradient values of the Moreau-Yosida regularization rather than the exact ones. Under mild conditions, global convergence of the proposed method is established. Preliminary numerical results and comparisons indicate that the method is suitable for large-scale nonsmooth convex optimization problems.
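The Moreau-Yosida regularization of a convex function $f$ is $F_{\lambda}(x)=\min_z\{f(z)+\|z-x\|^2/(2\lambda)\}$; it is differentiable with $\nabla F_{\lambda}(x)=(x-p(x))/\lambda$, where $p(x)$ is the proximal point. A minimal sketch of these identities, using the $\ell_1$-norm (whose proximal map is soft-thresholding) as a stand-in for $f$; the function names are illustrative, not the paper's notation:

```python
import numpy as np

def prox_l1(x, lam):
    # Proximal map of f(z) = ||z||_1: componentwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_yosida_l1(x, lam):
    """Value and gradient of F_lam(x) = min_z { ||z||_1 + ||z - x||^2 / (2*lam) }.

    By the standard identity, grad F_lam(x) = (x - p) / lam with p = prox_l1(x, lam),
    so the nonsmooth f is replaced by a smooth F_lam with the same minimizers.
    """
    p = prox_l1(x, lam)
    val = np.sum(np.abs(p)) + np.dot(p - x, p - x) / (2.0 * lam)
    grad = (x - p) / lam
    return val, grad
```

The proposed algorithm works with approximate values of this $F_\lambda$ and $\nabla F_\lambda$, since for general $f$ the proximal point can only be computed inexactly.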

Citation: Yigui Ou, Xin Zhou. A modified scaled memoryless BFGS preconditioned conjugate gradient algorithm for nonsmooth convex optimization. Journal of Industrial & Management Optimization, 2018, 14 (2) : 785-801. doi: 10.3934/jimo.2017075
References:
[1]

N. Andrei, Scaled conjugate gradient algorithms for unconstrained optimization, Computational Optimization and Applications, 38 (2007), 401-416. doi: 10.1007/s10589-007-9055-7.

[2]

A. Auslender, Numerical methods for nondifferentiable convex optimization, Mathematical Programming Study, 30 (1987), 102-126.

[3]

S. Babaie-Kafaki, A modified scaled memoryless BFGS preconditioned conjugate gradient method for unconstrained optimization, 4OR: A Quarterly Journal of Operations Research, 11 (2013), 361-374. doi: 10.1007/s10288-013-0233-4.

[4]

S. Babaie-Kafaki and R. Ghanbari, A class of descent four-term extension of the Dai-Liao conjugate gradient method based on the scaled memoryless BFGS update, Journal of Industrial and Management Optimization, 13 (2017), 649-658. doi: 10.3934/jimo.2016038.

[5]

J. Barzilai and J. M. Borwein, Two-point step size gradient methods, IMA Journal of Numerical Analysis, 8 (1988), 141-148.

[6]

E. Birgin and J. M. Martínez, A spectral conjugate gradient method for unconstrained optimization, Applied Mathematics and Optimization, 43 (2001), 117-128. doi: 10.1007/s00245-001-0003-0.

[7]

J. F. Bonnans, J. C. Gilbert, C. Lemaréchal and C. Sagastizábal, A family of variable-metric proximal methods, Mathematical Programming, 68 (1995), 15-47. doi: 10.1007/BF01585756.

[8]

J. V. Burke and M. Qian, On the superlinear convergence of the variable metric proximal point algorithm using Broyden and BFGS matrix secant updating, Mathematical Programming, 88 (2000), 157-181. doi: 10.1007/PL00011373.

[9]

X. Chen and M. Fukushima, Proximal quasi-Newton methods for nondifferentiable convex optimization, Mathematical Programming, 85 (1999), 313-334. doi: 10.1007/s101070050059.

[10]

E. D. Dolan and J. J. Moré, Benchmarking optimization software with performance profiles, Mathematical Programming, Series A, 91 (2002), 201-213. doi: 10.1007/s101070100263.

[11]

M. Fukushima, A descent algorithm for nonsmooth convex optimization, Mathematical Programming, 30 (1984), 163-175. doi: 10.1007/BF02591883.

[12]

M. Fukushima and L. Q. Qi, A globally and superlinearly convergent algorithm for nonsmooth convex minimization, SIAM Journal on Optimization, 6 (1996), 1106-1120. doi: 10.1137/S1052623494278839.

[13]

M. Haarala, K. Miettinen and M. M. Mäkelä, New limited memory bundle method for large-scale nonsmooth optimization, Optimization Methods and Software, 19 (2004), 673-692. doi: 10.1080/10556780410001689225.

[14]

M. Haarala, K. Miettinen and M. M. Mäkelä, Globally convergent limited memory bundle method for large-scale nonsmooth optimization, Mathematical Programming, 109 (2007), 181-205. doi: 10.1007/s10107-006-0728-2.

[15]

W. W. Hager and H. C. Zhang, A survey of nonlinear conjugate gradient methods, Pacific Journal of Optimization, 2 (2006), 35-58.

[16]

J. B. Hiriart-Urruty and C. Lemaréchal, Convex Analysis and Minimization Algorithms, Springer, Berlin, 1993. doi: 10.1007/978-3-662-02796-7.

[17]

C. Lemaréchal and C. Sagastizábal, Practical aspects of the Moreau-Yosida regularization, I: Theoretical preliminaries, SIAM Journal on Optimization, 7 (1997), 367-385.

[18]

Q. Li, Conjugate gradient type methods for the nondifferentiable convex minimization, Optimization Letters, 7 (2013), 533-545.

[19]

D. H. Li and M. Fukushima, A derivative-free line search and global convergence of Broyden-like method for nonlinear equations, Optimization Methods and Software, 13 (2000), 181-201.

[20]

S. Lu, Z. X. Wei and L. Li, A trust region algorithm with adaptive cubic regularization methods for nonsmooth convex minimization, Computational Optimization and Applications, 51 (2012), 551-573.

[21]

L. Lukšan and J. Vlček, Test Problems for Nonsmooth Unconstrained and Linearly Constrained Optimization, Technical Report No. 798, Institute of Computer Science, Academy of Sciences of the Czech Republic, 2000.

[22]

R. Mifflin, A quasi-second-order proximal bundle algorithm, Mathematical Programming, 73 (1996), 51-72.

[23]

Y. G. Ou and H. C. Lin, An ODE-like nonmonotone method for nonsmooth convex optimization, Journal of Applied Mathematics and Computing, 52 (2016), 265-285.

[24]

L. Q. Qi, Convergence analysis of some algorithms for solving nonsmooth equations, Mathematics of Operations Research, 18 (1993), 227-244. doi: 10.1287/moor.18.1.227.

[25]

A. I. Rauf and M. Fukushima, A globally convergent BFGS method for nonsmooth convex optimization, Journal of Optimization Theory and Applications, 104 (2000), 539-558.

[26]

N. Sagara and M. Fukushima, A trust region method for nonsmooth convex optimization, Journal of Industrial and Management Optimization, 1 (2005), 171-180.

[27]

D. F. Shanno, On the convergence of a new conjugate gradient algorithm, SIAM Journal on Numerical Analysis, 15 (1978), 1247-1257.

[28]

J. Shen, L. P. Pang and D. Li, An approximate quasi-Newton bundle-type method for nonsmooth optimization, Abstract and Applied Analysis, 2013, Art. ID 697474, 7 pp. doi: 10.1155/2013/697474.

[29]

W. Y. Sun and Y. X. Yuan, Optimization Theory and Methods: Nonlinear Programming, Springer, New York, 2006.

[30]

G. L. Yuan, Z. H. Meng and Y. Li, A modified Hestenes and Stiefel conjugate gradient algorithm for large scale nonsmooth minimizations and nonlinear equations, Journal of Optimization Theory and Applications, 168 (2016), 129-152.

[31]

G. L. Yuan, Z. Sheng and W. J. Liu, The modified HZ conjugate gradient algorithm for large scale nonsmooth optimization, PLoS ONE, 11 (2016), e0164289, 15 pp. doi: 10.1371/journal.pone.0164289.

[32]

G. L. Yuan and Z. X. Wei, The Barzilai and Borwein gradient method with nonmonotone line search for nonsmooth convex optimization problems, Mathematical Modelling and Analysis, 17 (2012), 203-216.

[33]

G. L. Yuan, Z. X. Wei and G. Y. Li, A modified Polak-Ribière-Polyak conjugate gradient algorithm for nonsmooth convex programs, Journal of Computational and Applied Mathematics, 255 (2014), 86-96.

[34]

G. L. Yuan, Z. X. Wei and Z. X. Wang, Gradient trust region algorithm with limited memory BFGS update for nonsmooth convex minimization, Computational Optimization and Applications, 54 (2013), 45-64.

[35]

H. C. Zhang and W. W. Hager, A nonmonotone line search technique and its application to unconstrained optimization, SIAM Journal on Optimization, 14 (2004), 1043-1056. doi: 10.1137/S1052623403428208.

[36]

L. Zhang, W. J. Zhou and D. H. Li, A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence, IMA Journal of Numerical Analysis, 26 (2006), 629-640. doi: 10.1093/imanum/drl016.


Figure 1.  Performance profiles based on iterations (left) and function evaluations (right)
Figure 2.  Performance profiles based on iterations (left) and function evaluations (right)
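Figures 1 and 2 use the performance-profile methodology of Dolan and Moré [10]: for each solver $s$, $\rho_s(\tau)$ is the fraction of test problems on which that solver's cost is within a factor $\tau$ of the best cost achieved by any solver on the same problem. A minimal sketch of how such a profile is computed (function name and data layout are illustrative):

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profiles.

    T[p, s] = cost (e.g. iterations or function evaluations) of solver s on
    problem p; use np.inf for a failure. Returns rho[s, k], the fraction of
    problems on which solver s's cost is within a factor taus[k] of the best.
    """
    ratios = T / T.min(axis=1, keepdims=True)   # performance ratios r_{p,s}
    rho = np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                    for s in range(T.shape[1])])
    return rho
```

Reading the profile: $\rho_s(1)$ is the fraction of problems on which solver $s$ is the fastest, while the limit for large $\tau$ is the fraction of problems it solves at all.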
Table 1.  Testing functions for small scale problems
No.  Function       $n$  $x_0$             $f_{opt}(x)$
1    Rosenbrock     2    (-1.2, 1)         0
2    Crescent       2    (-1.5, 2)         0
3    CB2            2    (1, -0.1)         1.9522245
4    CB3            2    (2, 2)            2.0
5    DEM            2    (1, 1)            -3
6    QL             2    (-1, 5)           7.2
7    LQ             2    (-0.5, -0.5)      -1.4142136
8    Mifflin 1      2    (0.8, 0.6)        -1.0
9    Mifflin 2      2    (-1, -1)          -1.0
10   Wolfe          2    (3, 2)            -8
11   Rosen-Suzuki   4    (0, 0, 0, 0)      -44
12   Shor           5    (0, 0, 0, 0, 1)   22.600162
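For illustration, problem 3 (CB2) is commonly stated in the nonsmooth testing literature (e.g. [21]) as the pointwise maximum of three smooth pieces; the following sketch uses that assumed definition, so it is nondifferentiable exactly where two pieces tie:

```python
import numpy as np

def cb2(x):
    # CB2 test function, as usually defined for nonsmooth benchmarks
    # (definition assumed from the standard test-set literature):
    # the pointwise maximum of three smooth pieces.
    x1, x2 = x
    return max(x1**2 + x2**4,
               (2.0 - x1)**2 + (2.0 - x2)**2,
               2.0 * np.exp(x2 - x1))
```

At the starting point $(1, -0.1)$ from Table 1 the second piece is active, and the reported optimal value 1.9522245 is attained where the pieces balance.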
Table 2.  Numerical results for five algorithms
(Each cell lists the two counts NI/NF — iterations/function evaluations — followed by the final objective value.)

No.  Algorithm 3.1       LWTR                YWBB                SFTR                RFBFGS
1    3/4; 2.6178e-9      6/13; 0.2976e-7     54/56; 3.4484e-7    48/49; 7.1545e-4    4/5; 6.2072e-10
2    3/4; 3.3026e-6      3/7; 6.5430e-4      14/16; 2.7450e-5    31/32; 1.6000e-3    35/36; 3.0590e-7
3    4/5; 1.9522         5/12; 1.9522        13/15; 1.9522       54/55; 1.9573       5/6; 1.9522
4    2/3; 2.0000         6/13; 2.0000        4/8; 2.0000         55/56; 2.0100       2/3; 2.0076
5    3/4; -3.0000        8/16; -3.0000       4/7; -3.0000        5/6; -3.0000        3/4; -3.0000
6    11/12; 7.2000       4/9; 7.2000         22/25; 7.2000       48/49; 7.2003       10/11; 7.2000
7    3/4; -1.4142        5/10; -1.4142       6/7; -1.4142        3/4; -1.4118        2/3; -1.4033
8    34/35; -1.0000      15/31; -1.0000      3/6; -0.9938        59/60; -1.0000      57/58; -1.0000
9    2/3; -1.0000        8/16; -1.0000       12/13; -0.9999      4/5; -0.9997        2/3; -0.9813
10   3/4; -8.0000        12/24; -8.0000      9/12; -8.0000       43/46; -8.0000      4/5; -8.0000
11   49/106; -43.9999    20/40; -44.0000     8/9; -43.9493       60/61; -39.9924     25/31; -43.9982
12   14/16; 22.6019      14/28; 22.6002      9/10; 22.6004       71/72; 22.6892      66/152; 22.6017
Table 3.  Testing functions for large scale problems
No.  Function                              Initial point $x_0$
1    Generalization of MAXQ                $(1, 2, \cdots, \frac{n}{2}, -\frac{n}{2}-1, \cdots, -n)$
2    Generalization of MXHILB              $(1, 1, \cdots, 1)$
3    Chained LQ                            $(-0.5, -0.5, \cdots, -0.5)$
4    Number of active faces                $(1, 1, \cdots, 1)$
5    Nonsmooth generalization of Brown 2   $(1, 0, 1, 0, \cdots)$
6    Chained Mifflin 2                     $(-1, -1, \cdots, -1)$
7    Chained Crescent Ⅰ                    $(-1.5, 2, -1.5, 2, \cdots)$
8    Chained Crescent Ⅱ                    $(1, 0, 1, 0, \cdots)$
Table 4.  Results for Algorithms 3.1 and CG-YWL (each entry: NI/NF/final objective value)
No.  n       Algorithm 3.1           CG-YWL
1    1000    186/1601/2.6568e-10     225/4710/6.9354e-8
     5000    242/2725/1.2183e-10     250/5235/6.8798e-8
     10000   253/2997/3.4045e-10     261/5466/6.6528e-8
2    1000    94/1301/4.0582e-9       91/1482/8.2738e-9
     5000    119/1499/2.6731e-9      111/1938/9.7206e-9
     10000   129/1901/2.1905e-9      120/2127/5.8524e-9
3    1000    37/110/2.3278e-9        37/114/7.2687e-9
     5000    39/116/5.9987e-9        39/120/9.0932e-9
     10000   40/121/1.6943e-9        40/123/9.0941e-9
4    1000    71/891/3.6735e-11       77/1026/6.8037e-9
     5000    82/937/7.9864e-11       90/1281/7.8405e-9
     10000   86/1208/5.7163e-11      96/1401/9.9366e-9
5    1000    35/114/1.5021e-11       38/117/7.2687e-9
     5000    39/120/5.2352e-11       40/123/9.0932e-9
     10000   41/126/2.1136e-11       41/125/1.8188e-8
6    1000    38/116/-5.3979e+3       37/114/-2.4975e+4
     5000    41/123/-3.2153e+4       39/120/-1.2498e+5
     10000   43/129/-2.0127e+4       40/123/-2.4998e+5
7    1000    34/101/7.0463e-11       37/114/5.4897e-9
     5000    36/113/3.8145e-11       39/120/6.8294e-9
     10000   39/120/4.3654e-11       40/123/6.8253e-9
8    1000    37/112/6.0424e-11       39/120/6.8185e-9
     5000    39/121/1.8205e-11       41/126/8.5258e-9
     10000   42/125/2.6473e-11       42/129/8.5262e-9
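The core of methods in this family is the search direction $d = -Hg$, where $H$ is the BFGS update of a scaled identity $\theta I$ with the most recent step/gradient-difference pair $(s, y)$ — the scaled memoryless BFGS preconditioner of Shanno-type conjugate gradient methods ([1], [3], [27]). A dense-matrix sketch of that update (illustrative only, not the paper's exact Algorithm 3.1, which works matrix-free on approximate Moreau-Yosida gradients):

```python
import numpy as np

def smbfgs_direction(g, s, y, theta):
    """Search direction d = -H g with H the BFGS update of theta*I by (s, y).

    Expanding the BFGS formula with initial matrix theta*I gives
      H = theta*I - theta*(s y^T + y s^T)/(s^T y)
          + (1 + theta*(y^T y)/(s^T y)) * (s s^T)/(s^T y),
    which satisfies the secant equation H y = s. Assumes s^T y > 0.
    """
    sy = s @ y
    H = (theta * np.eye(len(g))
         - theta * (np.outer(s, y) + np.outer(y, s)) / sy
         + (1.0 + theta * (y @ y) / sy) * np.outer(s, s) / sy)
    return -H @ g, H
```

In a memoryless implementation $H$ is never formed; $d$ is accumulated from inner products, which is what makes the approach viable for the large-scale problems in Tables 3 and 4.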