
Analytic continuation of noisy data using Adams Bashforth residual neural network

Abstract: We propose a data-driven learning framework for the analytic continuation problem in numerical quantum many-body physics. Designing an accurate and efficient framework for the analytic continuation of imaginary-time computational data is a grand challenge that has hindered meaningful links with experimental data. The standard Maximum Entropy (MaxEnt) method is limited by the quality of the computational data and by the availability of prior information, and it cannot solve the inversion problem when the data carry a high level of noise. Here we introduce a novel learning model for the analytic continuation problem based on an Adams-Bashforth residual neural network (AB-ResNet). The advantage of this deep learning network is that it is model independent and therefore does not require prior information about the quantity of interest, the spectral function. More importantly, the ResNet-based model achieves higher accuracy than MaxEnt on data with a higher level of noise. Finally, numerical examples show that the developed AB-ResNet recovers the spectral function with accuracy comparable to MaxEnt when the noise level is relatively small.
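For orientation, the network's name refers to the classical $ k $-step Adams-Bashforth discretization of an ODE $ \dot{x} = f(x) $. As a reminder (standard coefficients; identifying the learned residual map with $ f $ follows the multistep-network literature and is our reading, not a quotation of the paper's equations):

$$ x_{n+1} = x_n + h \sum_{j=0}^{k-1} b_j \, f(x_{n-j}), \qquad \text{e.g. } k = 2: \; x_{n+1} = x_n + h \left( \tfrac{3}{2} f(x_n) - \tfrac{1}{2} f(x_{n-1}) \right). $$

For $ k = 1 $ the scheme reduces to the forward Euler step $ x_{n+1} = x_n + h f(x_n) $, which matches the standard ResNet skip connection $ x_{l+1} = x_l + F(x_l) $ once $ h f $ is absorbed into a learned residual block $ F $; AB2-ResNet and AB3-ResNet are then its two- and three-step generalizations.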

    Mathematics Subject Classification: Primary: 45B05; Secondary: 32W50, 49N30.

Figure 1. Illustration of the data-driven learning framework for analytic continuation

Figure 2. Single-hidden-layer neural network structure

Figure 3. Residual neural network block

Figure 4. Multistep neural network architecture (a minimal code sketch of this construction follows the figure list)

Figure 5. One data sample from the training set: $ G(\tau) $ (top left), its Legendre representation $ G_l $ (top right), and the target spectral density $ A(\omega) $ (bottom); a sketch of the Legendre projection also follows the list

Figure 6. Training performance of the AB1-ResNet, AB2-ResNet, and AB3-ResNet structures with data noise level $ 10^{-2} $

Figure 7. Three different spectral density functions $ A(\omega) $ predicted by AB3-ResNet and by MaxEnt (dark line). The left column shows results for the dataset with noise level $ 10^{-2} $; the right column shows results for the dataset with noise level $ 10^{-3} $

Figure 8. Comparison of the spectral functions predicted by the different AB-ResNet structures
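To make Figures 3 and 4 concrete, here is a minimal PyTorch sketch of a residual map and an Adams-Bashforth multistep stack. Everything below is illustrative: the class names (`ResidualMap`, `ABResNet`), the hidden width, the step size $ h $, and the fully connected form of $ F $ are our assumptions rather than the authors' architecture, and the final readout from the hidden state to the $ \omega $ grid is omitted.

```python
import torch
import torch.nn as nn

# Classical Adams-Bashforth coefficients b_j, newest evaluation first.
AB_COEFFS = {1: [1.0], 2: [3 / 2, -1 / 2], 3: [23 / 12, -16 / 12, 5 / 12]}

class ResidualMap(nn.Module):
    """Learned right-hand side F(x), shared across all steps (cf. Figure 3)."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        return self.net(x)

class ABResNet(nn.Module):
    """Multistep update x_{n+1} = x_n + h * sum_j b_j F(x_{n-j}) (cf. Figure 4).

    Early steps fall back to the lower-order schemes until enough history
    of F evaluations has accumulated.
    """
    def __init__(self, dim, steps=10, order=3, h=1.0):
        super().__init__()
        self.F = ResidualMap(dim)
        self.order, self.steps, self.h = order, steps, h

    def forward(self, x):
        history = []  # past F evaluations, newest first
        for _ in range(self.steps):
            history.insert(0, self.F(x))
            b = AB_COEFFS[min(len(history), self.order)]
            x = x + self.h * sum(bj * fj for bj, fj in zip(b, history))
            del history[self.order:]  # keep at most `order` evaluations
        return x

# Usage sketch: push a batch of 64 input coefficients through the network.
# net = ABResNet(dim=64)
# out = net(torch.randn(8, 64))  # readout to the omega grid omitted
```

Note that with `order=1` the loop performs plain ResNet updates, so the same class covers AB1-, AB2-, and AB3-ResNet by changing one argument.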
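The Legendre representation in Figure 5 can be obtained by projecting $ G(\tau) $ onto Legendre polynomials on $ [0, \beta] $, i.e. $ G_l = \sqrt{2l+1} \int_0^\beta P_l(x(\tau)) \, G(\tau) \, d\tau $ with $ x(\tau) = 2\tau/\beta - 1 $. Below is a minimal NumPy sketch assuming data on a $ \tau $ grid and trapezoidal quadrature; the $ \sqrt{2l+1} $ normalization follows the common convention in the Legendre-basis literature and may differ from the paper's.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_coeffs(tau, G, beta, n_l):
    """Project G(tau) onto the first n_l Legendre polynomials on [0, beta]."""
    x = 2.0 * tau / beta - 1.0  # map [0, beta] onto [-1, 1]
    Gl = np.empty(n_l)
    for l in range(n_l):
        c = np.zeros(l + 1)
        c[l] = 1.0  # coefficient vector that selects P_l
        Gl[l] = np.sqrt(2 * l + 1) * np.trapz(legval(x, c) * G, tau)
    return Gl

# Usage sketch:
# tau = np.linspace(0.0, beta, 401)
# Gl = legendre_coeffs(tau, G, beta, n_l=64)
```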
