
A Bayesian multiscale deep learning framework for flows in random media
Scientific Computing and Artificial Intelligence (SCAI) Laboratory, 311I Cushing Hall, University of Notre Dame, Notre Dame, IN 46556, USA
Fine-scale simulation of complex systems governed by multiscale partial differential equations (PDEs) is computationally expensive, and various multiscale methods have been developed to address such problems. It remains challenging, however, to construct accurate surrogate and uncertainty quantification models for high-dimensional problems governed by stochastic multiscale PDEs using limited training data. To address these challenges, we introduce a novel hybrid deep-learning and multiscale approach for stochastic multiscale PDEs with limited training data. For demonstration purposes, we focus on a porous media flow problem. An image-to-image supervised deep-learning model learns the mapping from the input permeability field to the multiscale basis functions. We cast this hybrid framework in a Bayesian setting, which enables uncertainty quantification and propagation tasks. The performance of the hybrid approach is evaluated for permeability fields of varying intrinsic dimensionality. Numerical results indicate that the hybrid network predicts efficiently and accurately even for high-dimensional inputs.
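The learned basis functions enter the multiscale solver as columns of a prolongation operator that maps the coarse-scale pressure back to the fine scale. A minimal sketch of that reconstruction step, with random placeholder arrays standing in for the network outputs and the coarse solve (sizes are illustrative, not those of the paper):

```python
import numpy as np

# Hypothetical illustration of multiscale prolongation: the fine-scale
# pressure is approximated as p_f ~ B @ p_c, where each column of B is one
# multiscale basis function. Here the basis functions are random stand-ins,
# not actual network outputs.
rng = np.random.default_rng(0)

n_fine, n_coarse = 64, 4                # fine-scale DOFs, coarse blocks (toy sizes)
B = rng.random((n_fine, n_coarse))      # columns: placeholder basis functions
B /= B.sum(axis=1, keepdims=True)       # normalize rows so the basis sums to one
p_coarse = rng.random(n_coarse)         # placeholder coarse-scale pressure

p_fine = B @ p_coarse                   # prolongate to the fine scale
assert p_fine.shape == (n_fine,)
```

The row normalization mimics the partition-of-unity property that multiscale basis functions (e.g. in MsRSB-type methods) typically satisfy.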
Layers | Resolution | Number of parameters
Input | - |
Convolution k7s2p3 | |
Dense Block (1) K16L4 | |
Encoding Layer | |
Dense Block (2) K16L8 | |
Decoding Layer (1) | |
Dense Block (3) K16L4 | |
Decoding Layer (2) | |
Method | Backend, Hardware | Wall-clock (s) for obtaining the basis functions
Fine-scale | FEniCS, Intel Xeon | -
Multiscale | MATLAB, Intel Xeon | 654.930
HM-DenseED | PyTorch, Intel Xeon + MATLAB |
HM-DenseED | PyTorch, NVIDIA Tesla V100 + MATLAB |
Bayesian HM-DenseED | PyTorch, Intel Xeon + MATLAB |
Bayesian HM-DenseED | PyTorch, NVIDIA Tesla + MATLAB |
Method | Backend, Hardware | Wall-clock (s) for obtaining the pressure
Fine-scale | FEniCS, Intel Xeon | 2300.822
Multiscale | MATLAB, Intel Xeon | 1500.611
HM-DenseED | PyTorch, Intel Xeon + MATLAB |
HM-DenseED | PyTorch, NVIDIA Tesla V100 + MATLAB |
Bayesian HM-DenseED | PyTorch, Intel Xeon + MATLAB |
Bayesian HM-DenseED | PyTorch, NVIDIA Tesla V100 + MATLAB |
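From the wall-clock values reported for the pressure computation, the relative cost of the two classical solvers can be computed directly (the surrogate timings are not reproduced here):

```python
# Speedup of the multiscale solver over the fine-scale solve for the
# pressure computation, using the reported wall-clock times (FEniCS
# fine-scale vs. MATLAB multiscale, both on an Intel Xeon).
fine_scale_s = 2300.822
multiscale_s = 1500.611
speedup = fine_scale_s / multiscale_s
print(f"{speedup:.2f}x")  # prints 1.53x
```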
Configuration | Hybrid DenseED-multiscale | Hybrid fully-connected
Learning rate | |
Weight decay | |
Optimizer | Adam | Adam
Epochs | |