August 2020, 19(8): 4111-4126. doi: 10.3934/cpaa.2020183

Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems

Abhishake Rastogi

Institute of Mathematics, University of Potsdam, Karl-Liebknecht-Straße 24-25, 14476 Potsdam, Germany

Received August 2019; Revised January 2020; Published May 2020

Funding: This research was partially funded by the Deutsche Forschungsgemeinschaft (DFG) through SFB 1294/1 - 318763901.

In this paper, we consider nonlinear ill-posed inverse problems with noisy data in the statistical learning setting. A Tikhonov regularization scheme in Hilbert scales is used to reconstruct the estimator from the random noisy data. In this setting, we derive rates of convergence for the regularized solution under assumptions on the nonlinear forward operator and prior (smoothness) assumptions on the true solution. We discuss estimates of the reconstruction error using the framework of reproducing kernel Hilbert spaces.
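
To illustrate the scheme described in the abstract, the following is a minimal, self-contained sketch of Tikhonov regularization with a smoothness (oversmoothing-type) penalty for a toy nonlinear forward map. It is not the estimator analyzed in the paper: the forward map F, the finite-difference penalty D, and the parameter alpha below are hypothetical placeholders chosen only for illustration, and the paper's analysis is carried out in Hilbert scales generated by reproducing kernels rather than on a fixed grid.

```python
# Illustrative sketch only (not the paper's estimator): Tikhonov regularization
# with a smoothness penalty for a toy nonlinear forward map on a grid.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Discretized design points and a toy "true" solution.
n = 50
t = np.linspace(0.0, 1.0, n)
x_true = np.sin(2 * np.pi * t)

# Hypothetical nonlinear, smoothing forward operator (discretized integration of exp(x)).
def F(x):
    return np.cumsum(np.exp(x)) / n

# Random noisy observations y_i = F(x_true)(t_i) + noise_i.
y = F(x_true) + 0.05 * rng.standard_normal(n)

# Penalty in a stronger (smoother) norm: squared first differences of x.
D = np.diff(np.eye(n), axis=0)
alpha = 1e-3  # regularization parameter; in practice chosen a priori or adaptively

def tikhonov_functional(x):
    data_misfit = np.mean((y - F(x)) ** 2)   # empirical risk over the sample
    penalty = alpha * np.sum((D @ x) ** 2)    # oversmoothing-type penalty
    return data_misfit + penalty

# Regularized estimator: minimizer of the penalized empirical risk.
res = minimize(tikhonov_functional, np.zeros(n), method="L-BFGS-B")
x_alpha = res.x
print("relative reconstruction error:",
      np.linalg.norm(x_alpha - x_true) / np.linalg.norm(x_true))
```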

Citation: Abhishake Rastogi. Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems. Communications on Pure and Applied Analysis, 2020, 19 (8) : 4111-4126. doi: 10.3934/cpaa.2020183
References:
[1]

Abhishake, G. Blanchard and P. Mathé, Convergence analysis of Tikhonov regularization for non-linear statistical inverse learning problems, preprint, arXiv: 1902.05404.

[2]

N. Aronszajn, Theory of reproducing kernels, Trans. Amer. Math. Soc., 68 (1950), 337-404.  doi: 10.2307/1990404.

[3]

F. Bauer, T. Hohage and A. Munk, Iteratively regularized Gauss–Newton method for nonlinear inverse problems with random noise, SIAM J. Numer. Anal., 47 (2009), 1827-1846. doi: 10.1137/080721789.

[4]

N. Bissantz, T. Hohage and A. Munk, Consistency and rates of convergence of nonlinear Tikhonov regularization with random noise, Inverse Probl., 20 (2004), 1773-1789. doi: 10.1088/0266-5611/20/6/005.

[5]

G. Blanchard and P. Mathé, Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration, Inverse Probl., 28 (2012), Art. 115011. doi: 10.1088/0266-5611/28/11/115011.

[6]

G. Blanchard, P. Mathé and N. Mücke, Lepskii principle in supervised learning, preprint, arXiv: 1905.10764.

[7]

G. Blanchard and N. Mücke, Optimal rates for regularization of statistical inverse learning problems, Found. Comput. Math., 18 (2018), 971-1013.  doi: 10.1007/s10208-017-9359-7.

[8]

G. Blanchard and N. Mücke, Kernel regression, minimax rates and effective dimensionality: beyond the regular case, Anal. Appl., to appear (2020). doi: 10.1142/S0219530519500258.

[9]

A. Böttcher, B. Hofmann, U. Tautenhahn and M. Yamamoto, Convergence rates for Tikhonov regularization from different kinds of smoothness conditions, Appl. Anal., 85 (2006), 555-578. doi: 10.1080/00036810500474838.

[10]

A. Caponnetto and E. De Vito, Optimal rates for the regularized least-squares algorithm, Found. Comput. Math., 7 (2007), 331-368.  doi: 10.1007/s10208-006-0196-8.

[11]

L. Cavalier, Inverse problems in statistics, in Inverse Probl. High-dimensional Estim., vol. 203 of Lect. Notes Stat. Proc., Springer, Heidelberg, (2011), 3–96. doi: 10.1007/978-3-642-19989-9_1.

[12]

H. Egger and B. Hofmann, Tikhonov regularization in Hilbert scales under conditional stability assumptions, Inverse Probl., 34 (2018), Art. 115015. doi: 10.1088/1361-6420/aadef4.

[13]

H. W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems, Math. Appl., vol. 375, Kluwer Academic Publishers Group, Dordrecht, The Netherlands, 1996.

[14]

Z. C. Guo, S. B. Lin and D. X. Zhou, Learning theory of distributed spectral algorithms, Inverse Probl., 33 (2017), Art. 74009. doi: 10.1088/1361-6420/aa72b2.

[15]

B. Hofmann, Regularization for Applied Inverse and Ill-Posed Problems, vol. 85, BSB BG Teubner Verlagsgesellschaft, Leipzig, 1986. doi: 10.1007/978-3-322-93034-7.

[16]

B. Hofmann and P. Mathé, Tikhonov regularization with oversmoothing penalty for non-linear ill-posed problems in Hilbert scales, Inverse Probl., 34 (2018), Art. 15007. doi: 10.1088/1361-6420/aa9b59.

[17]

T. Hohage and M. Pricop, Nonlinear Tikhonov regularization in Hilbert scales for inverse boundary value problems with random noise, Inverse Probl. Imaging, 2 (2008), 271-290.  doi: 10.3934/ipi.2008.2.271.

[18]

J. Krebs, A. K. Louis and H. Wendland, Sobolev error estimates and a priori parameter selection for semi-discrete Tikhonov regularization, J. Inverse Ill-Posed Probl., 17 (2009), 845-869. doi: 10.1515/JIIP.2009.050.

[19]

K. Lin, S. Lu and P. Mathé, Oracle-type posterior contraction rates in Bayesian inverse problems, Inverse Probl. Imaging, 9 (2015), 895-915. doi: 10.3934/ipi.2015.9.895.

[20]

S. B. Lin and D. X. Zhou, Optimal learning rates for kernel partial least squares, J. Fourier Anal. Appl., 24 (2018), 908-933. doi: 10.1007/s00041-017-9544-8.

[21]

J. M. Loubes and C. Ludeña, Penalized estimators for non linear inverse problems, ESAIM Probab. Statist., 14 (2010), 173-191. doi: 10.1051/ps:2008024.

[22]

S. Lu, P. Mathé and S. V. Pereverzev, Balancing principle in supervised learning for a general regularization scheme, Appl. Comput. Harmon. Anal., 48 (2020), 123-148. doi: 10.1016/j.acha.2018.03.001.

[23]

P. Mathé and U. Tautenhahn, Interpolation in variable Hilbert scales with application to inverse problems, Inverse Probl., 22 (2006), 2271-2297.  doi: 10.1088/0266-5611/22/6/022.

[24]

C. A. Micchelli and M. Pontil, On learning vector-valued functions, Neural Comput., 17 (2005), 177-204.  doi: 10.1162/0899766052530802.

[25]

M. T. Nair and S. V. Pereverzev, Regularized collocation method for Fredholm integral equations of the first kind, J. Complexity, 23 (2007), 454-467.  doi: 10.1016/j.jco.2006.09.002.

[26]

M. T. Nair, S. V. Pereverzev and U. Tautenhahn, Regularization in Hilbert scales under general smoothing conditions, Inverse Probl., 21 (2005), 1851-1869. doi: 10.1088/0266-5611/21/6/003.

[27]

F. Natterer, Error bounds for Tikhonov regularization in Hilbert scales, Appl. Anal., 18 (1984), 29-37.  doi: 10.1080/00036818408839508.

[28]

A. Neubauer, Tikhonov regularization of nonlinear ill-posed problems in Hilbert scales, Appl. Anal., 46 (1992), 59-72.  doi: 10.1080/00036819208840111.

[29]

F. O'Sullivan, Convergence characteristics of methods of regularization estimators for nonlinear operator equations, SIAM J. Numer. Anal., 27 (1990), 1635-1649.  doi: 10.1137/0727096.

[30]

A. Rastogi and S. Sampath, Optimal rates for the regularized learning algorithms under general source condition, Front. Appl. Math. Stat., 3 (2017), Art. 3. doi: 10.3389/fams.2017.00003.

[31]

T. Schuster, B. Kaltenbacher, B. Hofmann and K. S. Kazimierski, Regularization Methods in Banach Spaces, Radon Series on Computational and Applied Mathematics, vol. 10, Walter de Gruyter GmbH & Co. KG, Berlin, 2012. doi: 10.1515/9783110255720.

[32]

U. Tautenhahn, Error estimates for regularization methods in Hilbert scales, SIAM J. Numer. Anal., 33 (1996), 2120-2130.  doi: 10.1137/S0036142994269411.

[33]

A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-posed Problems, vol. 14, W. H. Winston, Washington, DC, 1977.

[34]

F. Werner and B. Hofmann, Convergence analysis of (statistical) inverse problems under conditional stability estimates, Inverse Probl., 36 (2020), Art. 015004. doi: 10.1088/1361-6420/ab4cd7.

[35]

T. Zhang, Effective dimension and generalization of kernel learning, in Proc. 15th Int. Conf. Neural Inf. Process. Syst., MIT Press, Cambridge, MA, (2002), 454–461.
