
    Cholesky-factorized sparse Kernel in support vector machines

    The Support Vector Machine (SVM) is one of the most powerful machine learning algorithms, owing to its convex optimization formulation and its ability to handle non-linear classification. However, one of its main drawbacks is the long time it takes to train on large data sets. This limitation arises most often when applying non-linear kernels (e.g. the RBF kernel), which are usually required to obtain better separation on linearly inseparable data sets. In this thesis, we study an approach that aims to speed up training by combining the better accuracy of RBF kernels with the fast training of a linear solver, LIBLINEAR. The approach uses an RBF kernel with a sparse matrix, which is factorized using a Cholesky decomposition. The method is tested on large artificial and real data sets and compared to the standard RBF and linear kernels, with both accuracy and training time reported. For most data sets, the results show a large reduction in training time, over 90%, while maintaining accuracy.
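    As a concrete illustration, a minimal sketch of the pipeline described above might look as follows. This is not the thesis implementation: the data set, kernel width, sparsification cutoff, and the eigenvalue shift used to keep the matrix positive definite are all our assumptions.

```python
# Hedged sketch: sparsify an RBF Gram matrix, Cholesky-factorize it, and
# train a linear SVM (scikit-learn's LinearSVC wraps LIBLINEAR) on the
# rows of the factor, whose inner products reproduce the kernel values.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import LinearSVC

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

K = rbf_kernel(X, X, gamma=2.0)      # dense RBF Gram matrix
K[K < 1e-3] = 0.0                    # crude sparsification (assumed cutoff)

# Thresholding can break positive definiteness; shift the spectrum so the
# Cholesky factorization K = F F^T is guaranteed to exist.
lam_min = np.linalg.eigvalsh(K).min()
K += (max(-lam_min, 0.0) + 1e-8) * np.eye(len(X))

F = np.linalg.cholesky(K)            # K = F @ F.T

clf = LinearSVC().fit(F, y)          # fast linear solver on the factor
print("training accuracy:", clf.score(F, y))
```

    In this sketch each training point is represented by a row of the Cholesky factor, so the linear solver sees features whose inner products equal the (sparsified) RBF kernel entries.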

    On kernel engineering via Paley–Wiener

    A radial basis function approximation takes the form $s(x)=\sum_{k=1}^{n}a_k\phi(x-b_k)$, $x\in\mathbb{R}^d$, where the coefficients $a_1,\dots,a_n$ are real numbers, the centres $b_1,\dots,b_n$ are distinct points in $\mathbb{R}^d$, and the function $\phi:\mathbb{R}^d\to\mathbb{R}$ is radially symmetric. Such functions are highly useful in practice and enjoy many beautiful theoretical properties. In particular, much work has been devoted to the polyharmonic radial basis functions, for which $\phi$ is the fundamental solution of some iterate of the Laplacian. In this note, we consider the construction of a rotation-invariant signed (Borel) measure $\mu$ for which the convolution $\psi=\mu*\phi$ is a function of compact support when $\phi$ is polyharmonic. The novelty of this construction is its use of the Paley–Wiener theorem to identify compact support via analysis of the Fourier transform of the new kernel $\psi$, thereby providing a new form of kernel engineering.
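    To make the Paley–Wiener step concrete, here is a hedged sketch of the standard mechanism in our own notation (the order m, the radius r, and the constants are assumptions, not the paper's):

```latex
% For polyharmonic \phi (a fundamental solution of \Delta^m), the Fourier
% transform behaves like \hat{\phi}(\xi) = c\,|\xi|^{-2m}, so \hat{\mu} must
% vanish to order 2m at the origin for \hat{\psi} = \hat{\mu}\,\hat{\phi} to
% extend to an entire function. The Paley--Wiener theorem then converts a
% growth bound on that extension into compact support:
\[
  |\hat{\psi}(\zeta)| \le C\,(1+|\zeta|)^N e^{r|\operatorname{Im}\zeta|}
  \ \text{for all}\ \zeta \in \mathbb{C}^d
  \quad\Longleftrightarrow\quad
  \operatorname{supp}\psi \subseteq \overline{B(0,r)}.
\]
```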

    Stochastic expansions using continuous dictionaries: Lévy adaptive regression kernels

    This article describes a new class of prior distributions for nonparametric function estimation. The unknown function is modeled as a limit of weighted sums of kernels, or generator functions, indexed by continuous parameters that control local and global features such as translation, dilation, modulation and shape. Lévy random fields and their stochastic integrals are employed to induce prior distributions for the unknown functions or, equivalently, for the number of kernels and for the parameters governing their features. Scaling, shape, and other features of the generating functions are location-specific, allowing quite different function properties in different parts of the space, as with wavelet bases and other methods employing overcomplete dictionaries. We provide conditions under which the stochastic expansions converge in specified Besov or Sobolev norms. Under a Gaussian error model, this may be viewed as a sparse regression problem, with regularization induced via the Lévy random field prior distribution. Posterior inference for the unknown functions is based on a reversible jump Markov chain Monte Carlo algorithm. We compare the Lévy Adaptive Regression Kernel (LARK) method to wavelet-based methods on some of the standard test functions, and illustrate its flexibility and adaptability in nonstationary applications. Comment: Published at http://dx.doi.org/10.1214/11-AOS889 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
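    A hedged sketch of the model class being described, written in our own notation (the generator g, the parameter space Ω, and the symbols below are assumptions, not taken from the paper):

```latex
% The unknown function is a stochastic integral against a Lévy random field
% \mathcal{L}, which (for a finite Lévy measure) is realized as a random
% finite sum of parameterized kernels:
\[
  f(x) \;=\; \int_{\Omega} g(x;\omega)\,\mathcal{L}(d\omega)
        \;=\; \sum_{j=1}^{J} a_j\, g(x;\omega_j),
  \qquad
  y_i \;=\; f(x_i) + \varepsilon_i, \quad \varepsilon_i \sim \mathcal{N}(0,\sigma^2),
\]
% where the number of kernels J, the weights a_j, and the feature parameters
% \omega_j (translation, dilation, modulation, shape) all carry the prior
% induced by the Lévy measure and are sampled by reversible jump MCMC.
```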

    Gagliardo-Nirenberg Inequalities for Differential Forms in Heisenberg Groups

    The $L^1$ Sobolev inequality states that the $L^{n/(n-1)}$-norm of a compactly supported function on Euclidean $n$-space is controlled by the $L^1$-norm of its gradient. The generalization to differential forms (due to Lanzani & Stein and Bourgain & Brezis) is recent, and states that the $L^{n/(n-1)}$-norm of a compactly supported differential $h$-form $u$ is controlled by the $L^1$-norms of its exterior differential $du$ and its exterior codifferential $\delta u$ (in special cases the $L^1$-norm must be replaced by the $H^1$ Hardy norm). We extend this result to Heisenberg groups in the framework of an appropriate complex of differential forms.
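    For concreteness, the Euclidean inequality being generalized can be written as follows (a hedged paraphrase of the statement above; the constant C and the form u are our notation):

```latex
% Lanzani--Stein / Bourgain--Brezis: for a compactly supported smooth
% h-form u on R^n (h in the admissible range),
\[
  \|u\|_{L^{n/(n-1)}(\mathbb{R}^n)}
  \;\le\; C \bigl( \|du\|_{L^1(\mathbb{R}^n)} + \|\delta u\|_{L^1(\mathbb{R}^n)} \bigr),
\]
% with the L^1 norms replaced by H^1 Hardy norms in the special cases
% mentioned above.
```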

    On the spectral distribution of kernel matrices related to radial basis functions

    This paper focuses on the spectral distribution of kernel matrices related to radial basis functions. The asymptotic behaviour of the eigenvalues of kernel matrices associated with radial basis functions of different smoothness is studied. These results are obtained by estimating the coefficients of an orthogonal expansion of the underlying kernel function. Among many other results, we prove that there are exactly $\binom{k+d-1}{d-1}$ eigenvalues of the same order for analytic separable kernel functions, such as the Gaussian in $\mathbb{R}^d$. This gives theoretical support for the choice of the diagonal scaling matrix in the RBF-QR method (Fornberg et al., SIAM J. Sci. Comput. (33), 2011), which can stably compute Gaussian radial basis function interpolants.
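    As a small numerical illustration of the grouping claim, the sketch below computes the spectrum of a Gaussian kernel matrix in R^d and prints the leading eigenvalues in groups of the predicted size; the point set, kernel width, and degree range are our assumptions, not the paper's experiment.

```python
# Hedged sketch: eigenvalues of a Gaussian kernel matrix in R^d are expected
# to come in groups of size binom(k+d-1, d-1) sharing the same order of
# magnitude (k = 0, 1, 2, ...).
import numpy as np
from math import comb
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
d, n = 2, 200
X = rng.uniform(-1.0, 1.0, size=(n, d))

K = rbf_kernel(X, X, gamma=0.5)
eig = np.linalg.eigvalsh(K)[::-1]        # eigenvalues, largest first

i = 0
for k in range(5):
    size = comb(k + d - 1, d - 1)        # predicted multiplicity at degree k
    print(f"degree k={k}: group of {size}:", eig[i:i + size])
    i += size
```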