December  2020, 2(4): 487-512. doi: 10.3934/fods.2020022

Certified and fast computations with shallow covariance kernels

1. Daniel Kressner, MATH-ANCHP, École Polytechnique fédérale de Lausanne, 1015 Lausanne, Switzerland

2. Jonas Latz, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB3 0WA, United Kingdom

3. Stefano Massei, Department of Mathematics and Computer Science, Eindhoven University of Technology, 5612 AZ Eindhoven, Netherlands

4. Elisabeth Ullmann, Department of Mathematics, Technical University of Munich, 85748 Garching, Germany

* Corresponding author: Jonas Latz

Received: November 2020. Revised: December 2020. Published: December 2020. Early access: December 2020.

Fund Project: Jonas Latz and Elisabeth Ullmann acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) and the Technische Universität München (TUM) through the TUM International Graduate School of Science and Engineering (IGSSE) within the project 10.02 BAYES. The work of Stefano Massei has been supported by the SNSF research project Fast algorithms from low-rank updates, grant number 200020_178806. The authors thank the anonymous referees for several comments and suggestions that helped to improve this manuscript.

Many techniques for data science and uncertainty quantification demand efficient tools to handle Gaussian random fields, which are defined in terms of their mean functions and covariance operators. Recently, parameterized Gaussian random fields have gained increased attention due to their greater flexibility. However, especially if the random field is parameterized through its covariance operator, classical random field discretization techniques fail or become inefficient. In this work we introduce and analyze a new and certified algorithm for the low-rank approximation of a parameterized family of covariance operators, which extends the adaptive cross approximation method for symmetric positive definite matrices. The algorithm relies on an affine linear expansion of the covariance operator with respect to the parameters, which needs to be computed in a preprocessing step using, e.g., the empirical interpolation method. We discuss and test our new approach for isotropic covariance kernels, such as Matérn kernels. The numerical results demonstrate the advantages of our approach in terms of computational time and confirm that the proposed algorithm provides the basis of a fast sampling procedure for parameter-dependent Gaussian random fields.
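This page does not reproduce the paper's algorithms, but the building block named in the abstract, adaptive cross approximation for symmetric positive definite matrices, is equivalent to a pivoted partial Cholesky factorization. The following sketch (illustrative names, not the authors' code) shows this classical single-matrix version, with the trace of the residual as a computable error certificate; the paper's contribution is the extension of this idea to a whole parameterized family of covariance operators.

```python
import numpy as np

def aca_spd(get_column, diag, tol=1e-10):
    """Adaptive cross approximation of an SPD matrix C (a pivoted partial
    Cholesky factorization). Needs only single columns of C and its diagonal.
    Returns L such that trace(C - L @ L.T) <= tol."""
    d = np.asarray(diag, dtype=float).copy()   # diagonal of the residual
    n, cols = d.size, []
    for _ in range(n):
        if d.sum() <= tol:                     # certificate: trace of residual
            break
        j = int(np.argmax(d))                  # pivot: largest residual diagonal
        c = np.asarray(get_column(j), dtype=float).copy()
        for l in cols:                         # subtract previous rank-1 terms
            c -= l[j] * l
        l_new = c / np.sqrt(d[j])
        cols.append(l_new)
        d = np.maximum(d - l_new**2, 0.0)      # update residual diagonal
    return np.column_stack(cols) if cols else np.zeros((n, 0))

# Usage: Gaussian kernel on 500 uniform grid points in [0, 1].
x = np.linspace(0.0, 1.0, 500)
C = np.exp(-(x[:, None] - x[None, :])**2 / (2 * 0.25**2))
L = aca_spd(lambda j: C[:, j], np.diag(C))
print(L.shape[1], np.trace(C - L @ L.T))
```

Because the residual of a partial Cholesky factorization of an SPD matrix is itself positive semidefinite, its trace equals its nuclear norm, which is what makes this stopping criterion certified rather than heuristic.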

Citation: Daniel Kressner, Jonas Latz, Stefano Massei, Elisabeth Ullmann. Certified and fast computations with shallow covariance kernels. Foundations of Data Science, 2020, 2 (4) : 487-512. doi: 10.3934/fods.2020022
References:
[1]

M. Bachmayr, I. G. Graham, V. K. Nguyen and R. Scheichl, Unified analysis of periodization-based sampling methods for Matérn covariances, SIAM J. Numer. Anal., 58 (2020), 2953-2980. doi: 10.1137/19M1269877.

[2]

M. Barrault, Y. Maday, N. C. Nguyen and A. T. Patera, An 'empirical interpolation' method: Application to efficient reduced-basis discretization of partial differential equations, C. R. Math. Acad. Sci. Paris, 339 (2004), 667-672. doi: 10.1016/j.crma.2004.08.006.

[3]

M. Bebendorf, Approximation of boundary element matrices, Numer. Math., 86 (2000), 565-589.  doi: 10.1007/PL00005410.

[4]

M. Bebendorf, Adaptive cross approximation of multivariate functions, Constr. Approx., 34 (2011), 149-179.  doi: 10.1007/s00365-010-9103-x.

[5]

H.-J. Bungartz and M. Griebel, Sparse grids, Acta Numer., 13 (2004), 147-269.  doi: 10.1017/S0962492904000182.

[6]

A. A. Contreras, P. Mycek, O. P. Le Maître, F. Rizzi, B. Debusschere and O. M. Knio, Parallel domain decomposition strategies for stochastic elliptic equations. Part A: Local Karhunen-Loève representations, SIAM J. Sci. Comput., 40 (2018), C520–C546. doi: 10.1137/17M1132185.

[7]

A. Damianou and N. Lawrence, Deep Gaussian processes, in Proceedings of the Sixteenth International Workshop on Artificial Intelligence and Statistics (AISTATS) (eds. C. Carvalho and P. Ravikumar), AISTATS '13, JMLR W & CP 31, (2013), 207–215.

[8]

C. R. Dietrich and G. N. Newsam, Fast and exact simulation of stationary Gaussian processes through circulant embedding of the covariance matrix, SIAM J. Sci. Comput., 18 (1997), 1088-1107.  doi: 10.1137/S1064827592240555.

[9]

M. M. Dunlop, M. A. Girolami, A. M. Stuart and A. L. Teckentrup, How deep are deep Gaussian processes?, J. Mach. Learn. Res., 19 (2018), 1–46, http://jmlr.org/papers/v19/18-015.html.

[10]

M. M. Dunlop, M. A. Iglesias and A. M. Stuart, Hierarchical Bayesian level set inversion, Stat. Comput., 27 (2017), 1555-1584. doi: 10.1007/s11222-016-9704-8.

[11]

M. F. Emzir, S. J. Lasanen, Z. Purisha, L. Roininen and S. Särkkä, Non-stationary multi-layered Gaussian priors for Bayesian inversion, Inverse Problems, 37 (2021), 015002. doi: 10.1088/1361-6420/abc962.

[12]

M. F. Emzir, S. Lasanen, Z. Purisha and S. Särkkä, Hilbert-space reduced-rank methods for deep Gaussian processes, in 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP), (2019), 1–6.

[13]

M. Feischl, F. Y. Kuo and I. H. Sloan, Fast random field generation with $H$-matrices, Numer. Math., 140 (2018), 639-676. doi: 10.1007/s00211-018-0974-2.

[14]

L. Foster, A. Waagen, N. Aijaz et al., Stable and efficient Gaussian process calculations, J. Mach. Learn. Res., 10 (2009), 857–882, http://www.jmlr.org/papers/v10/foster09a.html.

[15]

A. Garriga-Alonso, C. E. Rasmussen and L. Aitchison, Deep convolutional networks as shallow Gaussian processes, in 7th International Conference on Learning Representations, 2019.

[16]

M. Gelbrich, On a formula for the $L^2$ Wasserstein metric between measures on Euclidean and Hilbert spaces, Math. Nachr., 147 (1990), 185-203.  doi: 10.1002/mana.19901470121.

[17]

A. L. Gibbs and F. E. Su, On choosing and bounding probability metrics, Int. Stat. Rev., 70 (2002), 419-435. 

[18]

G. H. Golub and C. F. Van Loan, Matrix Computations, 4th edition, Johns Hopkins University Press, Baltimore, MD, 2013.

[19]

I. G. Graham, F. Y. Kuo, D. Nuyens, R. Scheichl and I. H. Sloan, Analysis of circulant embedding methods for sampling stationary random fields, SIAM J. Numer. Anal., 56 (2018), 1871-1895. doi: 10.1137/17M1149730.

[20]

N. Halko, P. G. Martinsson and J. A. Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, SIAM Rev., 53 (2011), 217-288. doi: 10.1137/090771806.

[21]

H. Harbrecht, M. Peters and R. Schneider, On the low-rank approximation by the pivoted Cholesky decomposition, Appl. Numer. Math., 62 (2012), 428-440. doi: 10.1016/j.apnum.2011.10.001.

[22]

H. Harbrecht, M. Peters and M. Siebenmorgen, Efficient approximation of random fields for numerical applications, Numer. Linear Algebra Appl., 22 (2015), 596-617. doi: 10.1002/nla.1976.

[23]

O. Haug, T. L. Thorarinsdottir, S. H. Sørbye and C. L. E. Franzke, Spatial trend analysis of gridded temperature data at varying spatial scales, Advances in Statistical Climatology, Meteorology and Oceanography, 6 (2020), 1-12. doi: 10.5194/ascmo-6-1-2020.

[24]

J. S. Hesthaven, G. Rozza and B. Stamm, Certified Reduced Basis Methods for Parametrized Partial Differential Equations, Springer Briefs in Mathematics, Springer, Cham, 2016. doi: 10.1007/978-3-319-22470-1.

[25]

J. S. Hesthaven, B. Stamm and S. Zhang, Efficient greedy algorithms for high-dimensional parameter spaces with applications to empirical interpolation and reduced basis methods, ESAIM Math. Model. Numer. Anal., 48 (2014), 259-283. doi: 10.1051/m2an/2013100.

[26]

N. J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd edition, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2002. doi: 10.1137/1.9780898718027.

[27]

B. N. Khoromskij, A. Litvinenko and H. G. Matthies, Application of hierarchical matrices for computing the Karhunen-Loève expansion, Computing, 84 (2009), 49-67. doi: 10.1007/s00607-008-0018-3.

[28]

U. Khristenko, L. Scarabosio, P. Swierczynski, E. Ullmann and B. Wohlmuth, Analysis of boundary effects on PDE-based sampling of Whittle-Matérn random fields, SIAM/ASA J. Uncertain. Quantif., 7 (2019), 948-974. doi: 10.1137/18M1215700.

[29]

J. Latz, M. Eisenberger and E. Ullmann, Fast sampling of parameterised Gaussian random fields, Comput. Methods Appl. Mech. Engrg., 348 (2019), 978-1012. doi: 10.1016/j.cma.2019.02.003.

[30]

F. Lindgren, H. Rue and J. Lindström, An explicit link between Gaussian fields and Gaussian Markov random fields: The stochastic partial differential equation approach, J. R. Stat. Soc. Ser. B Stat. Methodol., 73 (2011), 423-498. doi: 10.1111/j.1467-9868.2011.00777.x.

[31]

A. Litvinenko, Y. Sun, M. G. Genton and D. E. Keyes, Likelihood approximation with hierarchical matrices for large spatial datasets, Comput. Statist. Data Anal., 137 (2019), 115-132. doi: 10.1016/j.csda.2019.02.002.

[32]

S. L. Lohr, Sampling: Design and Analysis, 2nd edition, Brooks/Cole, Cengage Learning, Boston, MA, 2010.

[33]

V. Minden, A. Damle, K. L. Ho and L. Ying, Fast spatial Gaussian process maximum likelihood estimation via skeletonization factorizations, Multiscale Model. Simul., 15 (2017), 1584-1611. doi: 10.1137/17M1116477.

[34]

A. Nouy, A priori model reduction through proper generalized decomposition for solving time-dependent partial differential equations, Comput. Methods Appl. Mech. Engrg., 199 (2010), 1603-1626.  doi: 10.1016/j.cma.2010.01.009.

[35]

C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning, The MIT Press, 2006.

[36]

A. K. Saibaba, J. Lee and P. K. Kitanidis, Randomized algorithms for generalized Hermitian eigenvalue problems with application to computing Karhunen-Loève expansion, Numer. Linear Algebra Appl., 23 (2016), 314-339. doi: 10.1002/nla.2026.

[37]

F. Schäfer, T. J. Sullivan and H. Owhadi, Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity, arXiv preprint, arXiv:1706.02205.

[38]

C. Schwab and R. A. Todor, Karhunen-Loève approximation of random fields by generalized fast multipole methods, J. Comput. Phys., 217 (2006), 100-122.  doi: 10.1016/j.jcp.2006.01.048.

[39]

I. Sraj, O. P. Le Maître, O. M. Knio and I. Hoteit, Coordinate transformation and polynomial chaos for the Bayesian inference of a Gaussian process with parametrized prior covariance function, Comput. Methods Appl. Mech. Engrg., 298 (2016), 205-228. doi: 10.1016/j.cma.2015.10.002.

[40]

M. L. Stein, Interpolation of Spatial Data, Springer-Verlag, New York, 1999. doi: 10.1007/978-1-4612-1494-6.

[41]

A. M. Stuart and A. L. Teckentrup, Posterior consistency for Gaussian process approximations of Bayesian posterior distributions, Math. Comp., 87 (2018), 721-753.  doi: 10.1090/mcom/3244.

[42]

A. Townsend, Computing with Functions in two Dimensions, ProQuest LLC, Ann Arbor, MI, 2014, Thesis (D.Phil.)–University of Oxford (United Kingdom).

[43]

A. Townsend and L. N. Trefethen, An extension of Chebfun to two dimensions, SIAM J. Sci. Comput., 35 (2013), C495–C518. doi: 10.1137/130908002.

[44]

A. Townsend and L. N. Trefethen, Continuous analogues of matrix factorizations, Proc. A., 471 (2015), 20140585, 21 pp. doi: 10.1098/rspa.2014.0585.

[45]

C. Villani, Optimal Transport, vol. 338 of Grundlehren der Mathematischen Wissenschaften, Springer-Verlag, Berlin, 2009. doi: 10.1007/978-3-540-71050-9.

[46]

C. K. Wikle, Hierarchical models for uncertainty quantification: An overview, in Handbook of Uncertainty Quantification (eds. R. Ghanem, D. Higdon and H. Owhadi), Springer International Publishing, Cham, (2016), 1–26.

[47]

C. K. Williams and M. Seeger, Using the Nyström method to speed up kernel machines, in Advances in Neural Information Processing Systems, (2001), 682–688.

Figure 1.  Samples of Gaussian processes on $ [0, 1] \times [0, 1] $. Each row contains 6 independent samples. The first two rows show mean-zero processes with a Matérn covariance kernel with smoothness $ 2.5 $ and correlation length $ 0.1 $ (first row) and $ 0.25 $ (second row). The third and fourth rows show mean-zero processes with a Gaussian covariance kernel with correlation length $ 0.1 $ (third row) and $ 0.25 $ (fourth row); see Section 4.1 for details. The processes are discretized as discussed at the beginning of Section 5.1 with $ n = 10^4 $ grid points. The sampling is carried out by computing the matrix square root of the respective covariance matrix; a minimal sketch of this sampling procedure is given after the caption list
Figure 2.  Illustration of a parameterized Gaussian process. On the inner layer, a random variable $ \boldsymbol \theta $ with law $ \mu' $ is sampled. On the outer layer, the process $ X \sim M(\cdot|\boldsymbol \theta) $ is sampled. Then, $ X $ follows the distribution of the hierarchical Gaussian process $ \mu $
Figure 3.  Wasserstein distance $ \mathrm{W}_{2}(\kappa, \widehat{\kappa}_i) $ and trace residual $ \mathrm{trace}(C-\widehat{C}^i)^{1/2} $ (vertical axes) vs. rank $ i $ of the covariances (horizontal axes) for the Gaussian measures considered in Example 2.3 (i)-(ii). In the left figure, the Wasserstein distance and the trace residual coincide, showing that the bound in Example 2.3(i) is tight. In the right figure, Example 2.3(ii), the bound is not tight; Gelbrich's formula behind this distance is recalled after the caption list
Figure 4.  Gaussian covariance kernel. Error associated with the approximate covariance kernel $ \tilde c_s $ vs. length of the expansion $ s $, in the case $ \theta \in [0.1, \sqrt{2}] $ (left) and $ \theta \in [0.001, \sqrt{2}] $ (right). Each curve represents the infinity norm $ \lVert{c(\cdot, \theta)-\tilde c_s(\cdot, \theta)}\rVert_\infty $ for a fixed value of $ \theta $, approximated by taking the maximum over 500 equispaced values of $ d $ in $ [0, \sqrt{2}] $; the affine form of $ \tilde c_s $ is recalled after the caption list
Figure 5.  Matérn covariance kernel. Error associated with the approximate covariance kernel $ \tilde c_s $ vs. length of the expansion $ s $, for $ ( \theta_1, \theta_2)\in [0.1, \sqrt{2}]\times[2.5, 7.5] $. Each curve represents the infinity norm $ \lVert{c(\cdot, \boldsymbol \theta)-\tilde c_s(\cdot, \boldsymbol \theta)}\rVert_\infty $ for a fixed pair $ \boldsymbol \theta = ( \theta_1, \theta_2) $, approximated by taking the maximum over 500 equispaced values for $ d $ in $ [0, \sqrt{2}] $
Figure 6.  Schematic of the definitions of the nodes $ \mathbf{x}_{1}, \ldots, \mathbf{x}_{n} $, for $ n_0 \in \{4, 8\} $
Figure 7.  Test 1. Error vs. number of iterations for Algorithm 2 applied to a discretized Gaussian covariance operator
Figure 8.  Test 2. Decimal logarithms of the largest 70 eigenvalues of $ \mathbf{C}(\theta) $ for different values of $ \theta \in [0.1, \sqrt{2}] $ and $ n_0 = 128 $
Figure 9.  Test 3. Timings in seconds (left) and ranks (right) obtained from the ACA algorithm applied to different discretizations of the covariance operator. The x-axes of both figures show the degrees of freedom $ n = n_0^2 $ of the discretized covariance operators. The y-axes show the mean elapsed time over three runs (left) and the quantity $ | \Theta_f| $ (right)
Figure 10.  Test 4. Comparison between the convergence history of Algorithm 1 (blue), the maximum nuclear norm of $ \mathbf{C}( \theta)- \mathbf{C}_{sI}( \theta) $ over $ \Theta_f $ (green), and the best trace error associated with the truncated SVDs, $ \max_{ \theta\in \Theta_f} \mathrm{trace}(\mathbf{C}( \theta)- \mathbf{C}_{(k)}( \theta)) $ (red); the identity behind the red curve is recalled after the caption list
Figure 11.  Test 5. Time consumption vs. number of samples for the sampling strategies based on Algorithm 1 and Algorithm 2, applied to a discretized Gaussian covariance operator
Figure 12.  Error vs. number of iterations for Algorithm 2 applied to a discretized 2D Matérn kernel with $ \theta_1\in[0.1, \sqrt 2] $ and $ \theta_2 = 2.5 $
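Figures 1 and 2 together describe the basic sampling procedure: draw the parameter $ \boldsymbol \theta $ on the inner layer, assemble the covariance matrix on the grid, and multiply a standard normal vector by the matrix square root of that matrix. The sketch below is a minimal illustration under assumptions of ours, not the authors' code: it uses the standard Matérn formula for smoothness 2.5, a uniform prior on $ [0.1, \sqrt{2}] $ for the correlation length (the parameter range used in the tests), and a deliberately coarse grid, because the dense square root costs $ O(n^3) $ and becomes expensive at the $ n = 10^4 $ points of Figure 1. Avoiding that cost is exactly what the paper's low-rank approach is for.

```python
import numpy as np
from scipy.linalg import sqrtm

def matern52(d, ell):
    """Matérn kernel with smoothness 2.5 and correlation length ell."""
    r = np.sqrt(5.0) * d / ell
    return (1.0 + r + r**2 / 3.0) * np.exp(-r)

rng = np.random.default_rng(0)
theta = rng.uniform(0.1, np.sqrt(2.0))     # inner layer: theta ~ mu' (assumed uniform)

n0 = 30                                    # coarse 30 x 30 grid, n = 900
g = (np.arange(n0) + 0.5) / n0
xx, yy = np.meshgrid(g, g)
pts = np.column_stack([xx.ravel(), yy.ravel()])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

C = matern52(D, ell=theta)                 # covariance matrix C(theta)
z = rng.standard_normal(pts.shape[0])
field = (np.real(sqrtm(C)) @ z).reshape(n0, n0)   # outer layer: X | theta
```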
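For Figure 3, the 2-Wasserstein distance between Gaussian measures is available in closed form through Gelbrich's formula [16]. In the mean-zero case relevant here (stated schematically; see the paper for the precise setting), the bound being tested is:

```latex
% Gelbrich's formula for mean-zero Gaussian measures with covariances
% C and \widehat{C}; if \widehat{C} \preceq C, as for the low-rank
% truncations considered here, the trace residual bounds the distance:
\mathrm{W}_2\bigl(N(0,C),\,N(0,\widehat{C})\bigr)^2
  = \operatorname{trace}\Bigl(C + \widehat{C}
      - 2\bigl(C^{1/2}\,\widehat{C}\,C^{1/2}\bigr)^{1/2}\Bigr)
  \le \operatorname{trace}\bigl(C - \widehat{C}\bigr).
```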
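Figures 4 and 5 measure the quality of the affine-in-parameter expansion mentioned in the abstract. Schematically, and in our notation rather than necessarily the paper's, the empirical interpolation method [2] produces a separable length-$s$ approximation of the isotropic kernel, so that each covariance matrix becomes a linear combination of $s$ parameter-independent matrices:

```latex
c(d, \boldsymbol{\theta}) \;\approx\; \tilde c_s(d, \boldsymbol{\theta})
  \;=\; \sum_{j=1}^{s} \beta_j(\boldsymbol{\theta})\, q_j(d),
\qquad
\mathbf{C}(\boldsymbol{\theta}) \;\approx\; \sum_{j=1}^{s}
  \beta_j(\boldsymbol{\theta})\, \mathbf{C}_j .
```

The curves in Figures 4 and 5 then report $ \lVert c(\cdot, \boldsymbol \theta) - \tilde c_s(\cdot, \boldsymbol \theta) \rVert_\infty $ as $ s $ grows, for fixed parameter values.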
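The red reference curve in Figure 10 is the smallest trace error any rank-$k$ approximation can achieve; for a symmetric positive semidefinite matrix this equals the tail sum of its eigenvalues:

```latex
% With \lambda_1(\mathbf{C}(\theta)) \ge \lambda_2(\mathbf{C}(\theta))
% \ge \dots \ge 0 and \mathbf{C}_{(k)}(\theta) the rank-k truncated SVD:
\max_{\theta \in \Theta_f}
  \operatorname{trace}\bigl(\mathbf{C}(\theta) - \mathbf{C}_{(k)}(\theta)\bigr)
  = \max_{\theta \in \Theta_f}
    \sum_{i > k} \lambda_i\bigl(\mathbf{C}(\theta)\bigr).
```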