April 2022, 15(4): 877-892. doi: 10.3934/dcdss.2021088

Analytic continuation of noisy data using Adams Bashforth residual neural network

Xuping Xie 1, Feng Bao 2, Thomas Maier 3 and Clayton Webster 4

1. New York University, New York, NY 10012
2. Florida State University, Tallahassee, FL 32304
3. Oak Ridge National Laboratory, Oak Ridge, TN 37830
4. University of Tennessee-Knoxville, Knoxville, TN 37916

* Corresponding author

Received: February 2021. Revised: April 2021. Early access: August 2021. Published: April 2022.

We propose a data-driven learning framework for the analytic continuation problem in numerical quantum many-body physics. Designing an accurate and efficient framework for the analytic continuation of imaginary-time data is a grand challenge that has hindered meaningful links with experimental data. The standard Maximum Entropy (MaxEnt) method is limited by the quality of the computational data and the availability of prior information; in particular, it cannot solve the inversion problem when the data are highly noisy. Here we introduce a novel learning model for the analytic continuation problem based on an Adams-Bashforth residual neural network (AB-ResNet). The advantage of this deep learning network is that it is model independent and therefore does not require prior information about the quantity of interest, the spectral function. More importantly, the ResNet-based model achieves higher accuracy than MaxEnt for data with higher noise levels. Finally, numerical examples show that the developed AB-ResNet recovers the spectral function with accuracy comparable to MaxEnt when the noise level is relatively small.
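To make the architecture concrete: a standard residual block $ x_{n+1} = x_n + h f(x_n) $ is the forward Euler discretization of an ODE, and replacing Euler with a two-step Adams-Bashforth update yields an AB2 block. The following is a minimal PyTorch sketch under that interpretation, not the paper's implementation; the residual map `f`, the step size `h`, and the layer widths are illustrative assumptions.

```python
# Minimal sketch of an Adams-Bashforth residual network (AB2 variant).
# Assumptions (not from the paper): the residual map f is a single fully
# connected layer with tanh activation, the step size h is fixed, and the
# multistep scheme is bootstrapped by duplicating the input state.
import torch
import torch.nn as nn


class AB2Block(nn.Module):
    """One two-step Adams-Bashforth update:
    x_{n+1} = x_n + h * (3/2 * f(x_n) - 1/2 * f(x_{n-1}))."""

    def __init__(self, dim: int, h: float = 1.0):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
        self.h = h

    def forward(self, x_prev, x_curr):
        x_next = x_curr + self.h * (1.5 * self.f(x_curr) - 0.5 * self.f(x_prev))
        return x_curr, x_next


class AB2ResNet(nn.Module):
    """Stack of AB2 blocks mapping Green's-function data to a discretized
    spectral function (all dimensions here are illustrative)."""

    def __init__(self, in_dim: int, hidden: int, out_dim: int, depth: int = 4):
        super().__init__()
        self.lift = nn.Linear(in_dim, hidden)
        self.blocks = nn.ModuleList([AB2Block(hidden) for _ in range(depth)])
        self.project = nn.Linear(hidden, out_dim)

    def forward(self, g):
        x = self.lift(g)
        x_prev, x_curr = x, x  # first step degenerates to forward Euler
        for block in self.blocks:
            x_prev, x_curr = block(x_prev, x_curr)
        return self.project(x_curr)


# Usage sketch: map 64 input coefficients to A(omega) on a 512-point grid.
model = AB2ResNet(in_dim=64, hidden=128, out_dim=512)
A_pred = model(torch.randn(8, 64))  # batch of 8 samples
```

An AB3 block would analogously combine the three most recent residual evaluations with weights $ 23/12 $, $ -16/12 $, and $ 5/12 $.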

Citation: Xuping Xie, Feng Bao, Thomas Maier, Clayton Webster. Analytic continuation of noisy data using Adams Bashforth residual neural network. Discrete and Continuous Dynamical Systems - S, 2022, 15 (4): 877-892. doi: 10.3934/dcdss.2021088
References:
[1] L.-F. Arsenault, R. Neuberg, L. A. Hannah and A. J. Millis, Projected regression methods for inverting Fredholm integrals: Formalism and application to analytical continuation, arXiv preprint, arXiv:1612.04895, 2016.
[2] L.-F. Arsenault, R. Neuberg, L. A. Hannah and A. J. Millis, Projected regression method for solving Fredholm integral equations arising in the analytic continuation problem of quantum physics, Inverse Problems, 33 (2017), 115007. doi: 10.1088/1361-6420/aa8d93.
[3] U. M. Ascher and L. R. Petzold, Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, volume 61, SIAM, Philadelphia, PA, 1998. doi: 10.1137/1.9781611971392.
[4] F. Bao, Y. Tang, M. Summers, G. Zhang, C. Webster, V. Scarola and T. A. Maier, Fast and efficient stochastic optimization for analytic continuation, Physical Review B, 94 (2016), 125149. doi: 10.1103/PhysRevB.94.125149.
[5] K. S. D. Beach, Identifying the maximum entropy method as a special limit of stochastic analytic continuation, arXiv preprint, arXiv:cond-mat/0403055, 2004.
[6] C. Beck, E. Weinan and A. Jentzen, Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations, Journal of Nonlinear Science, 29 (2019), 1563-1619. doi: 10.1007/s00332-018-9525-3.
[7] G. Bertaina, D. E. Galli and E. Vitali, Statistical and computational intelligence approach to analytic continuation in quantum Monte Carlo, Advances in Physics: X, 2 (2017), 302-323. doi: 10.1080/23746149.2017.1288585.
[8] Y. Cao, H. Zhang, R. Archibald and F. Bao, A backward SDE method for uncertainty quantification in deep learning, arXiv preprint, arXiv:2011.14145, 2021.
[9] B. Chang, L. Meng, E. Haber, L. Ruthotto, D. Begert and E. Holtham, Reversible architectures for arbitrarily deep residual neural networks, in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[10] B. Chang, L. Meng, E. Haber, F. Tung and D. Begert, Multi-level residual networks from dynamical systems view, in International Conference on Learning Representations, 2018.
[11] T. Chen, Y. Rubanova, J. Bettencourt and D. K. Duvenaud, Neural ordinary differential equations, in Advances in Neural Information Processing Systems, 2018, 6571-6583.
[12] K. Dahm and A. Keller, Learning light transport the reinforced way, in International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, 241, Springer, 2018, 181-195. doi: 10.1007/978-3-319-91436-7_9.
[13] F. Bao and T. Maier, Stochastic gradient descent algorithm for stochastic optimization in solving analytic continuation problems, Foundations of Data Science, 2 (2020), 1-17. doi: 10.3934/fods.2020001.
[14] W. E and Q. Wang, Exponential convergence of the deep neural network approximation for analytic functions, Sci. China Math., 61 (2018), 1733-1740. doi: 10.1007/s11425-018-9387-x.
[15] R. Fournier, L. Wang, O. V. Yazyev and Q. Wu, Artificial neural network approach to the analytic continuation problem, Phys. Rev. Lett., 124 (2020), 056401. doi: 10.1103/PhysRevLett.124.056401.
[16] S. Fuchs, T. Pruschke and M. Jarrell, Analytic continuation of quantum Monte Carlo data by stochastic analytical inference, Physical Review E, 81 (2010), 056701. doi: 10.1103/PhysRevE.81.056701.
[17] S. F. Gull and J. Skilling, Maximum entropy method in image processing, IEE Proceedings F (Communications, Radar and Signal Processing), 131 (1984), 646-659. doi: 10.1049/ip-f-1.1984.0099.
[18] E. Haber and L. Ruthotto, Stable architectures for deep neural networks, Inverse Problems, 34 (2017), 014004. doi: 10.1088/1361-6420/aa9a90.
[19] K. He, X. Zhang, S. Ren and J. Sun, Deep residual learning for image recognition, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, 770-778. doi: 10.1109/CVPR.2016.90.
[20] R. Hecht-Nielsen, Theory of the backpropagation neural network, in Neural Networks for Perception, Elsevier, 1992, 65-93. doi: 10.1016/B978-0-12-741252-8.50010-8.
[21] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-R. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath and B. Kingsbury, Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups, IEEE Signal Processing Magazine, 29 (2012), 82-97. doi: 10.1109/MSP.2012.2205597.
[22] M. Jarrell and J. E. Gubernatis, Bayesian inference and the analytic continuation of imaginary-time quantum Monte Carlo data, Physics Reports, 269 (1996), 133-195. doi: 10.1016/0370-1573(95)00074-7.
[23] K. H. Jin, M. T. McCann, E. Froustey and M. Unser, Deep convolutional neural network for inverse problems in imaging, IEEE Transactions on Image Processing, 26 (2017), 4509-4522. doi: 10.1109/TIP.2017.2713099.
[24] A. Krizhevsky, I. Sutskever and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, 25 (2012), 1097-1105.
[25] Y. LeCun, Y. Bengio and G. Hinton, Deep learning, Nature, 521 (2015), 436-444. doi: 10.1038/nature14539.
[26] Y. LeCun, L. D. Jackel, L. Bottou, A. Brunot, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. A. Muller, E. Sackinger, et al., Comparison of learning algorithms for handwritten digit recognition, in International Conference on Artificial Neural Networks, volume 60, Perth, Australia, 1995, 53-60.
[27] R. Levy, J. P. F. LeBlanc and E. Gull, Implementation of the maximum entropy method for analytic continuation, Computer Physics Communications, 215 (2017), 149-155. doi: 10.1016/j.cpc.2017.01.018.
[28] H. Li, J. Schwab, S. Antholzer and M. Haltmeier, NETT: Solving inverse problems with deep neural networks, Inverse Problems, 36 (2020), 065005. doi: 10.1088/1361-6420/ab6d57.
[29] Q. Li, C. Tai and W. E, Stochastic modified equations and dynamics of stochastic gradient algorithms I: Mathematical foundations, Journal of Machine Learning Research, 20 (2019), Paper No. 40, 47 pp.
[30] H. Lin and S. Jegelka, ResNet with one-neuron hidden layers is a universal approximator, in Advances in Neural Information Processing Systems, 2018, 6169-6178.
[31] Z. Long, Y. Lu, X. Ma and B. Dong, PDE-Net: Learning PDEs from data, in Proceedings of the 35th International Conference on Machine Learning, 2018, 3208-3216.
[32] Y. Lu, A. Zhong, Q. Li and B. Dong, Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations, in Proceedings of the 35th International Conference on Machine Learning, 2018, 3282-3291.
[33] C. Ma, J. Wang and W. E, Model reduction with memory and the machine learning of dynamical systems, Commun. Comput. Phys., 25 (2019), 947-962. doi: 10.4208/cicp.oa-2018-0269.
[34] A. S. Mishchenko, N. V. Prokof'ev, A. Sakamoto and B. V. Svistunov, Diagrammatic quantum Monte Carlo study of the Fröhlich polaron, Physical Review B, 62 (2000), 6317-6336. doi: 10.1103/PhysRevB.62.6317.
[35] J. Otsuki, M. Ohzeki, H. Shinaoka and K. Yoshimi, Sparse modeling approach to analytical continuation of imaginary-time quantum Monte Carlo data, Physical Review E, 95 (2017), 061302. doi: 10.1103/PhysRevE.95.061302.
[36] E. Pavarini, E. Koch, F. Anders and M. Jarrell, Correlated Electrons: From Models to Materials, Reihe Modeling and Simulation, 2 (2012).
[37] N. V. Prokof'ev and B. V. Svistunov, Spectral analysis by the method of consistent constraints, JETP Lett., 97 (2013), 649-653. doi: 10.1134/S002136401311009X.
[38] A. W. Sandvik, Stochastic method for analytic continuation of quantum Monte Carlo data, Physical Review B, 57 (1998), 10287-10290. doi: 10.1103/PhysRevB.57.10287.
[39] R. N. Silver, J. E. Gubernatis, D. S. Sivia and M. Jarrell, Spectral densities of the symmetric Anderson model, Physical Review Letters, 65 (1990), 496-499. doi: 10.1103/PhysRevLett.65.496.
[40] B. Wang, X. Luo, Z. Li, W. Zhu, Z. Shi and S. Osher, Deep neural nets with interpolating function as output activation, in Advances in Neural Information Processing Systems, 2018, 743-753.
[41] L. Wu, C. Ma and W. E, How SGD selects the global minima in over-parameterized learning: A dynamical stability perspective, in Advances in Neural Information Processing Systems 31 (NeurIPS 2018), Montréal, Canada, 2018, 8289-8298.
[42] X. Xie, C. Webster and T. Iliescu, Closure learning for nonlinear model reduction using deep residual neural network, Fluids, 5 (2020), 39. doi: 10.3390/fluids5010039.
[43] X. Xie, G. Zhang and C. G. Webster, Non-intrusive inference reduced order model for fluids using deep multistep neural network, Mathematics, 7 (2019), 757. doi: 10.3390/math7080757.
[44] H. Yoon, J.-H. Sim and M. J. Han, Analytic continuation via domain knowledge free machine learning, Physical Review B, 98 (2018), 245101. doi: 10.1103/PhysRevB.98.245101.
[45] G. Zhang, B. Eddy Patuwo and M. Y. Hu, Forecasting with artificial neural networks: The state of the art, International Journal of Forecasting, 14 (1998), 35-62. doi: 10.1016/S0169-2070(97)00044-7.


Figure 1.  Illustration of data-driven learning framework for analytic continuation
Figure 2.  Single hidden layer neural network structure
Figure 3.  Residual neural network block
Figure 4.  Multistep neural network architecture
Figure 5.  One data sample from the training set: $ G(\tau) $ (top left), its Legendre representation $ G_l $ (top right), and the target spectral density $ A(\omega) $ (bottom)
Figure 6.  Training performance of the AB1-ResNet, AB2-ResNet, and AB3-ResNet structures with data noise level $ 10^{-2} $
Figure 7.  Three different spectral density functions $ A(\omega) $ generated by AB3-ResNet and MaxEnt (dark line). The left column shows results for the dataset with noise level $ 10^{-2} $; the right column shows results for the dataset with noise level $ 10^{-3} $
Figure 8.  Comparison of the spectral functions predicted by the different AB-ResNets
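For context on Figures 5-7: the forward model underlying the learning task maps a spectral function $ A(\omega) $ to imaginary-time data $ G(\tau) $ through a Fredholm integral of the first kind, $ G(\tau) = \int K(\tau, \omega) A(\omega)\, d\omega $, where the fermionic kernel below is the standard choice in the analytic continuation literature (cf. Jarrell and Gubernatis [22]). The following discretization is a sketch of how synthetic noisy data of this kind can be generated, not the paper's data pipeline; the grids, the inverse temperature $ \beta $, and the Gaussian spectral function are made-up values.

```python
# Sketch of the forward map G(tau) = integral of K(tau, omega) * A(omega) d(omega),
# with the standard fermionic kernel K = exp(-tau*omega) / (1 + exp(-beta*omega)).
# The grids, beta, and the Gaussian A(omega) below are illustrative assumptions.
import numpy as np

beta = 10.0                          # inverse temperature (assumed)
tau = np.linspace(0.0, beta, 64)     # imaginary-time grid
omega = np.linspace(-8.0, 8.0, 512)  # frequency grid
domega = omega[1] - omega[0]

# Discretized kernel; for these grids the exponents stay within float64 range.
K = np.exp(-tau[:, None] * omega[None, :]) / (1.0 + np.exp(-beta * omega[None, :]))

# Example target: a normalized Gaussian spectral peak centered at omega = 1.
A = np.exp(-0.5 * (omega - 1.0) ** 2)
A /= A.sum() * domega

G_clean = K @ A * domega                              # noiseless G(tau)
G_noisy = G_clean + 1e-2 * np.random.randn(tau.size)  # noise level 1e-2 (assumed additive)
```

Because the kernel decays exponentially in $ \omega $, small perturbations in $ G(\tau) $ correspond to large changes in $ A(\omega) $, which is why the inversion is ill-posed and sensitive to noise.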
Related articles:
[1] Feng Bao, Thomas Maier. Stochastic gradient descent algorithm for stochastic optimization in solving analytic continuation problems. Foundations of Data Science, 2020, 2 (1): 1-17. doi: 10.3934/fods.2020001
[2] Weishi Yin, Jiawei Ge, Pinchao Meng, Fuheng Qu. A neural network method for the inverse scattering problem of impenetrable cavities. Electronic Research Archive, 2020, 28 (2): 1123-1142. doi: 10.3934/era.2020062
[3] King Hann Lim, Hong Hui Tan, Hendra G. Harno. Approximate greatest descent in neural network optimization. Numerical Algebra, Control and Optimization, 2018, 8 (3): 327-336. doi: 10.3934/naco.2018021
[4] David W. Pravica, Michael J. Spurr. Analytic continuation into the future. Conference Publications, 2003, 2003 (Special): 709-716. doi: 10.3934/proc.2003.2003.709
[5] Hyeontae Jo, Hwijae Son, Hyung Ju Hwang, Eun Heui Kim. Deep neural network approach to forward-inverse problems. Networks and Heterogeneous Media, 2020, 15 (2): 247-259. doi: 10.3934/nhm.2020011
[6] Hui-Qiang Ma, Nan-Jing Huang. Neural network smoothing approximation method for stochastic variational inequality problems. Journal of Industrial and Management Optimization, 2015, 11 (2): 645-660. doi: 10.3934/jimo.2015.11.645
[7] Hongtruong Pham, Xiwen Lu. The inverse parallel machine scheduling problem with minimum total completion time. Journal of Industrial and Management Optimization, 2014, 10 (2): 613-620. doi: 10.3934/jimo.2014.10.613
[8] Cheng-Dar Liou. Optimization analysis of the machine repair problem with multiple vacations and working breakdowns. Journal of Industrial and Management Optimization, 2015, 11 (1): 83-104. doi: 10.3934/jimo.2015.11.83
[9] Fengqiu Liu, Xiaoping Xue. Subgradient-based neural network for nonconvex optimization problems in support vector machines with indefinite kernels. Journal of Industrial and Management Optimization, 2016, 12 (1): 285-301. doi: 10.3934/jimo.2016.12.285
[10] Lucie Baudouin, Emmanuelle Crépeau, Julie Valein. Global Carleman estimate on a network for the wave equation and application to an inverse problem. Mathematical Control and Related Fields, 2011, 1 (3): 307-330. doi: 10.3934/mcrf.2011.1.307
[11] Xiaoli Feng, Meixia Zhao, Peijun Li, Xu Wang. An inverse source problem for the stochastic wave equation. Inverse Problems and Imaging, 2022, 16 (2): 397-415. doi: 10.3934/ipi.2021055
[12] Yi-Kuei Lin, Cheng-Ta Yeh. Reliability optimization of component assignment problem for a multistate network in terms of minimal cuts. Journal of Industrial and Management Optimization, 2011, 7 (1): 211-227. doi: 10.3934/jimo.2011.7.211
[13] Émilie Chouzenoux, Henri Gérard, Jean-Christophe Pesquet. General risk measures for robust machine learning. Foundations of Data Science, 2019, 1 (3): 249-269. doi: 10.3934/fods.2019011
[14] Ana Rita Nogueira, João Gama, Carlos Abreu Ferreira. Causal discovery in machine learning: Theories and applications. Journal of Dynamics and Games, 2021, 8 (3): 203-231. doi: 10.3934/jdg.2021008
[15] Sriram Nagaraj. Optimization and learning with nonlocal calculus. Foundations of Data Science, 2022. doi: 10.3934/fods.2022009
[16] Jianfeng Feng, Mariya Shcherbina, Brunello Tirozzi. Stability of the dynamics of an asymmetric neural network. Communications on Pure and Applied Analysis, 2009, 8 (2): 655-671. doi: 10.3934/cpaa.2009.8.655
[17] Li-Fang Dai, Mao-Lin Liang, Wei-Yuan Ma. Optimization problems on the rank of the solution to left and right inverse eigenvalue problem. Journal of Industrial and Management Optimization, 2015, 11 (1): 171-183. doi: 10.3934/jimo.2015.11.171
[18] Lekbir Afraites, Chorouk Masnaoui, Mourad Nachaoui. Shape optimization method for an inverse geometric source problem and stability at critical shape. Discrete and Continuous Dynamical Systems - S, 2022, 15 (1): 1-21. doi: 10.3934/dcdss.2021006
[19] Yuk L. Yung, Cameron Taketa, Ross Cheung, Run-Lie Shia. Infinite sum of the product of exponential and logarithmic functions, its analytic continuation, and application. Discrete and Continuous Dynamical Systems - B, 2010, 13 (1): 229-248. doi: 10.3934/dcdsb.2010.13.229
[20] Jan Boman. Unique continuation of microlocally analytic distributions and injectivity theorems for the ray transform. Inverse Problems and Imaging, 2010, 4 (4): 619-630. doi: 10.3934/ipi.2010.4.619

2020 Impact Factor: 2.425

Metrics

  • PDF downloads (334)
  • HTML views (398)
  • Cited by (0)
