# Sparse regularized learning in the reproducing kernel Banach spaces with the $\ell^1$ norm

• * Corresponding author: Rongrong Lin
We present a sparse representer theorem for regularization networks in a reproducing kernel Banach space with the $\ell^1$ norm, using the theory of convex analysis. The theorem states that the extreme points of the solution set of a regularization network in such a sparsity-promoting space belong to the span of kernel functions centered at no more than $n$ adaptive points of the input space, where $n$ is the number of training data. Under Lebesgue constant assumptions on the reproducing kernel, we recover both the relaxed representer theorem and the exact representer theorem for that space known in the literature. Finally, we perform numerical experiments on synthetic data and real-world benchmark data, comparing the reproducing kernel Banach space with the $\ell^1$ norm against the reproducing kernel Hilbert space, both with Laplacian kernels. The numerical performance demonstrates the advantages of sparse regularized learning.
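The regularization network discussed here amounts to minimizing a squared loss plus an $\ell^1$ penalty on the coefficients of kernel functions centered at the training points. As a minimal, hedged sketch (not the authors' implementation), the problem $\min_c \tfrac12\|Kc-y\|_2^2 + \lambda\|c\|_1$ with a Laplacian kernel matrix $K$ can be solved by the standard ISTA (proximal-gradient) iteration; all function and variable names below are illustrative:

```python
import numpy as np

def laplacian_kernel(X, Z):
    # K[i, j] = exp(-||X[i] - Z[j]||_2), the Laplacian kernel used in the paper
    d = np.linalg.norm(X[:, None, :] - Z[None, :, :], axis=2)
    return np.exp(-d)

def l1_kernel_fit(K, y, lam, n_iter=5000):
    """ISTA for min_c 0.5*||K c - y||_2^2 + lam*||c||_1."""
    L = np.linalg.norm(K, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(K.shape[1])
    for _ in range(n_iter):
        z = c - K.T @ (K @ c - y) / L      # gradient step
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c

# Toy 1-D regression: most coefficients are driven exactly to zero,
# so the learned function uses only a few kernel centers.
rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 50)[:, None]
y = np.sin(np.pi * X[:, 0]) + 0.05 * rng.standard_normal(50)
K = laplacian_kernel(X, X)
c = l1_kernel_fit(K, y, lam=0.5)
print("nonzero coefficients:", np.count_nonzero(c), "of", c.size)
```

The sparsity of `c` is the point of the $\ell^1$ penalty: the fitted function $f(x)=\sum_j c_j K(x, x_j)$ is supported on far fewer than $n$ centers, in line with the representer theorem stated above.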

Mathematics Subject Classification: Primary: 46E22, 62G08; Secondary: 68Q32.

Figure 1.  Numerical results of models (12) and (13) for the Tai Chi data set are illustrated in Figure 1(a) and Figure 1(b), respectively.

Figure 2.  Numerical results of models (12) and (13) for the second data set are illustrated in Figure 2(a) and Figure 2(b), respectively.

Table 1.  Lebesgue constants of the Laplacian kernel $e^{-\|x-x'\|_2}$ on a set of $n$ grid points of $[-1, 1]^2$

| $n$ | 100 | 400 | 900 | 1600 | 2500 |
|---|---|---|---|---|---|
| Lebesgue constant | 1.237204 | 1.244770 | 1.249653 | 1.246516 | 1.246808 |

Table 2.  Lebesgue constants of the Laplacian kernel $e^{-\|x-x'\|_2}$, $x, x'\in{\mathbb R}^3$ on a set of $n$ grid points of $[-1, 1]^3$

| $n$ | 125 | 1000 | 1728 | 2197 | 3375 |
|---|---|---|---|---|---|
| Lebesgue constant | 1.624012 | 1.705158 | 1.709775 | 1.711243 | 1.712867 |