
Partitioned integrators for thermodynamic parameterization of neural networks
School of Mathematics and Maxwell Institute for the Mathematical Sciences, University of Edinburgh, Edinburgh EH9 3FD, United Kingdom
Traditionally, neural networks are parameterized using optimization procedures such as stochastic gradient descent, RMSProp and ADAM. These procedures tend to drive the parameters of the network toward a local minimum. In this article, we employ alternative "sampling" algorithms (referred to here as "thermodynamic parameterization methods"), which rely on discretized stochastic differential equations for a defined target distribution on parameter space. We show that the thermodynamic perspective already improves neural network training. Moreover, by partitioning the parameters based on natural layer structure, we obtain schemes with very rapid convergence for data sets with complicated loss landscapes.
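To make the sampling viewpoint concrete, the simplest method of this kind is an Euler-Maruyama discretization of overdamped Langevin dynamics, whose iterates approximately sample a Gibbs distribution over the parameters rather than converging to a single minimizer. The sketch below is illustrative only and is not one of the paper's tuned schemes; the helper name, step size h, and inverse temperature beta are assumptions.

```python
import numpy as np

def overdamped_langevin_step(theta, grad_loss, h=1e-3, beta=1e4, rng=None):
    """One Euler-Maruyama step of overdamped Langevin dynamics on a loss L.

    For small h the iterates sample approximately from the Gibbs density
    rho(theta) ~ exp(-beta * L(theta)); at large beta (low temperature)
    the update behaves like noisy gradient descent.  The values of h and
    beta here are illustrative, not settings from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(theta.shape)
    return theta - h * grad_loss(theta) + np.sqrt(2.0 * h / beta) * noise
```

For example, with the quadratic loss L(theta) = |theta|^2 one would call `overdamped_langevin_step(theta, lambda t: 2 * t)` repeatedly; the resulting samples concentrate near the minimum but retain temperature-controlled fluctuations.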
We describe easy-to-implement hybrid partitioned numerical algorithms, based on discretized stochastic differential equations, which are adapted to feed-forward neural networks, including a multi-layer Langevin algorithm, AdLaLa (combining the adaptive Langevin and Langevin algorithms) and LOL (combining Langevin and overdamped Langevin); we examine the convergence of these methods in numerical studies and compare their performance with one another and with standard alternatives such as stochastic gradient descent and ADAM. We present evidence that thermodynamic parameterization methods can be (i) faster, (ii) more accurate, and (iii) more robust than standard algorithms used within machine learning frameworks.
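The partitioning idea can be illustrated by a minimal, hypothetical "LOL"-style step in which the parameters of one layer are evolved with underdamped Langevin dynamics (carrying momenta) while the remaining layers take overdamped Langevin steps. The helper name and all coefficients below are assumptions for illustration; the paper's actual integrators use more careful splitting schemes and, in AdLaLa, an additional adaptive thermostat variable.

```python
import numpy as np

def lol_like_step(layers, grads, momenta, h=1e-3, gamma=1.0, beta=1e4, rng=None):
    """Hypothetical sketch of one step of a layer-partitioned 'LOL'-style scheme.

    layers  : list of parameter arrays, one per network layer
    grads   : matching list of loss gradients
    momenta : momentum array attached to layer 0

    Layer 0 is driven by underdamped Langevin dynamics; every other layer
    takes an overdamped Langevin step.  Plain Euler-type updates are shown
    only to illustrate the partitioning; they are not the paper's integrators.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Underdamped Langevin update for layer 0: gradient kick, friction, noise.
    momenta = (momenta - h * grads[0] - h * gamma * momenta
               + np.sqrt(2.0 * gamma * h / beta) * rng.standard_normal(momenta.shape))
    layers[0] = layers[0] + h * momenta
    # Overdamped Langevin update for the remaining layers.
    for k in range(1, len(layers)):
        layers[k] = (layers[k] - h * grads[k]
                     + np.sqrt(2.0 * h / beta) * rng.standard_normal(layers[k].shape))
    return layers, momenta
```

The design point of such partitioned schemes, as described in the abstract, is that different layers can be assigned different dynamics (and effectively different temperatures or friction levels), which the authors report accelerates convergence on data sets with complicated loss landscapes.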