
    Fast space-variant elliptical filtering using box splines

    The efficient realization of linear space-variant (non-convolution) filters is a challenging computational problem in image processing. In this paper, we demonstrate that it is possible to filter an image with a Gaussian-like elliptic window of varying size, elongation and orientation using a fixed number of computations per pixel. The associated algorithm, which is based on a family of smooth compactly supported piecewise polynomials, the radially-uniform box splines, is realized using pre-integration and local finite-differences. The radially-uniform box splines are constructed through the repeated convolution of a fixed number of box distributions, which have been suitably scaled and distributed radially in a uniform fashion. The attractive features of these box splines are their asymptotic behavior, their simple covariance structure, and their quasi-separability. They converge to Gaussians as their order increases, and are used to approximate anisotropic Gaussians of varying covariance simply by controlling the scales of the constituent box distributions. Based on the second feature, we develop a technique for continuously controlling the size, elongation and orientation of these Gaussian-like functions. Finally, the quasi-separable structure, along with a certain scaling property of box distributions, is used to efficiently realize the associated space-variant elliptical filtering, which requires O(1) computations per pixel irrespective of the shape and size of the filter.
    Comment: 12 figures; IEEE Transactions on Image Processing, vol. 19, 2010
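    As a rough illustration of the pre-integration and local finite-difference idea underlying the algorithm, the following minimal 1-D sketch computes a box average of arbitrary width at a fixed cost per sample. It is not the paper's bivariate radially-uniform box-spline construction; the function name and boundary handling are illustrative assumptions.

```python
import numpy as np

def box_filter_1d(signal, width):
    """Average each sample over a window of `width` samples in O(1) per sample."""
    # Pre-integration: cumulative sum with a leading zero so that
    # cumsum[j] - cumsum[i] equals the sum of signal[i:j].
    cumsum = np.concatenate(([0.0], np.cumsum(signal)))

    n = len(signal)
    out = np.empty(n)
    half = width // 2
    for i in range(n):
        lo = max(i - half, 0)
        hi = min(i + half + 1, n)
        # Local finite difference of the pre-integrated signal:
        # two lookups per output sample, independent of `width`.
        out[i] = (cumsum[hi] - cumsum[lo]) / (hi - lo)
    return out

# Example: the cost per sample is the same for a narrow and a wide window.
x = np.linspace(0, 1, 200) + 0.1 * np.random.randn(200)
smooth_narrow = box_filter_1d(x, 5)
smooth_wide = box_filter_1d(x, 31)
```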

    A representer theorem for deep kernel learning

    In this paper we provide a finite-sample and an infinite-sample representer theorem for the concatenation of (linear combinations of) kernel functions of reproducing kernel Hilbert spaces. These results serve as a mathematical foundation for the analysis of machine learning algorithms based on compositions of functions. As a direct consequence in the finite-sample case, the corresponding infinite-dimensional minimization problems can be recast into (nonlinear) finite-dimensional minimization problems, which can be tackled with nonlinear optimization algorithms. Moreover, we show how concatenated machine learning problems can be reformulated as neural networks and how our representer theorem applies to a broad class of state-of-the-art deep learning methods.
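    For intuition, the classical single-kernel, finite-sample case already shows how a representer theorem turns an infinite-dimensional problem into a finite-dimensional one: the minimizer of kernel ridge regression is a linear combination of kernel functions centered at the training points, so only n coefficients need to be determined. Below is a minimal numpy sketch of that standard case, not of the concatenated deep-kernel setting treated in the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2)
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def kernel_ridge_fit(X, y, lam=1e-2, gamma=1.0):
    # By the representer theorem the minimizer has the form
    # f(x) = sum_i alpha_i k(x, x_i), so the infinite-dimensional problem
    # reduces to a linear system for the n coefficients alpha.
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_test, gamma=1.0):
    return gaussian_kernel(X_test, X_train, gamma) @ alpha

# Toy usage on a 1-D regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = kernel_ridge_fit(X, y)
y_hat = kernel_ridge_predict(X, alpha, X)
```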

    Asymptotic learning curves of kernel methods: empirical data v.s. Teacher-Student paradigm

    How many training data are needed to learn a supervised task? It is often observed that the generalization error decreases as $n^{-\beta}$, where $n$ is the number of training examples and $\beta$ an exponent that depends on both data and algorithm. In this work we measure $\beta$ when applying kernel methods to real datasets. For MNIST we find $\beta \approx 0.4$ and for CIFAR10 $\beta \approx 0.1$, for both regression and classification tasks, and for Gaussian or Laplace kernels. To rationalize the existence of non-trivial exponents that can be independent of the specific kernel used, we study the Teacher-Student framework for kernels. In this scheme, a Teacher generates data according to a Gaussian random field, and a Student learns them via kernel regression. With a simplifying assumption -- namely that the data are sampled from a regular lattice -- we derive $\beta$ analytically for translation-invariant kernels, using previous results from the kriging literature. Provided that the Student is not too sensitive to high frequencies, $\beta$ depends only on the smoothness and dimension of the training data. We confirm numerically that these predictions hold when the training points are sampled at random on a hypersphere. Overall, the test error is found to be controlled by the magnitude of the projection of the true function on the kernel eigenvectors whose rank is larger than $n$. Using this idea we relate the exponent $\beta$ to an exponent $a$ describing how the coefficients of the true function in the eigenbasis of the kernel decay with rank. We extract $a$ from real data by performing kernel PCA, leading to $\beta \approx 0.36$ for MNIST and $\beta \approx 0.07$ for CIFAR10, in good agreement with observations. We argue that these rather large exponents are possible due to the small effective dimension of the data.
    Comment: We added (i) the prediction of the exponent $\beta$ for real data using kernel PCA; (ii) the generalization of our results to non-Gaussian data from reference [11] (Bordelon et al., "Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks")
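    A minimal sketch of how such a learning-curve exponent can be measured in practice, assuming test errors have already been computed at several training-set sizes (the synthetic numbers below merely stand in for real measurements):

```python
import numpy as np

def fit_learning_curve_exponent(n_values, test_errors):
    """Fit error ~ C * n**(-beta) by least squares in log-log coordinates."""
    log_n = np.log(np.asarray(n_values, dtype=float))
    log_err = np.log(np.asarray(test_errors, dtype=float))
    # The slope of log(error) versus log(n) is -beta.
    slope, intercept = np.polyfit(log_n, log_err, deg=1)
    return -slope, np.exp(intercept)

# Synthetic stand-in: errors decaying like n^{-0.4} with mild noise.
ns = np.array([500, 1000, 2000, 4000, 8000])
errs = 2.0 * ns ** (-0.4) * (1 + 0.05 * np.random.randn(len(ns)))
beta, C = fit_learning_curve_exponent(ns, errs)
```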

    Improvements on "Fast space-variant elliptical filtering using box splines"

    It is well-known that box filters can be efficiently computed using pre-integrations and local finite-differences [Crow1984, Heckbert1986, Viola2001]. By generalizing this idea and by combining it with a non-standard variant of the Central Limit Theorem, a constant-time or O(1) algorithm was proposed in [Chaudhury2010] that allowed one to perform space-variant filtering using Gaussian-like kernels. The algorithm was based on the observation that both isotropic and anisotropic Gaussians could be approximated using certain bivariate splines called box splines. The attractive feature of the algorithm was that it allowed one to continuously control the shape and size (covariance) of the filter, and that it had a fixed computational cost per pixel, irrespective of the size of the filter. The algorithm, however, offered only limited control over the covariance and accuracy of the Gaussian approximation. In this work, we propose some improvements by appropriately modifying the algorithm in [Chaudhury2010].
    Comment: 7 figures
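    The Central-Limit-Theorem ingredient is easy to visualize in one dimension: repeatedly convolving a box kernel with itself quickly yields a Gaussian-like profile. The sketch below is only this scalar illustration, not the bivariate radially-uniform box splines of [Chaudhury2010].

```python
import numpy as np

def iterated_box_kernel(width, order):
    """Convolve a discrete box of the given width with itself `order` times.

    By the Central Limit Theorem the result approaches a Gaussian whose
    variance is order * (width**2 - 1) / 12 (the variance of a discrete
    uniform distribution over `width` points).
    """
    box = np.ones(width) / width
    kernel = box.copy()
    for _ in range(order - 1):
        kernel = np.convolve(kernel, box)
    return kernel

k3 = iterated_box_kernel(width=9, order=3)  # already visibly bell-shaped
k6 = iterated_box_kernel(width=9, order=6)  # closer still to a Gaussian profile
```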

    Kernel-based stochastic collocation for the random two-phase Navier-Stokes equations

    In this work, we apply stochastic collocation methods with radial kernel basis functions to the uncertainty quantification of the random incompressible two-phase Navier-Stokes equations. Our approach is non-intrusive and we use the existing fluid dynamics solver NaSt3DGPF to solve the incompressible two-phase Navier-Stokes equations for each given realization. We are able to empirically show that the resulting kernel-based stochastic collocation is highly competitive in this setting and even outperforms some other standard methods.
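    A minimal sketch of the non-intrusive collocation idea in a single random parameter, with a cheap stand-in function in place of an actual NaSt3DGPF solve; the collocation nodes, kernel shape parameter and quantity of interest are illustrative assumptions.

```python
import numpy as np

def rbf_matrix(X, Y, eps=3.0):
    # Gaussian radial basis function phi(r) = exp(-(eps * r)^2).
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-(eps ** 2) * sq_dists)

def expensive_solver(theta):
    """Stand-in for one deterministic forward solve at parameter theta,
    returning a scalar quantity of interest."""
    return np.sin(2 * np.pi * theta[0]) + 0.5 * theta[0] ** 2

# 1) Collocation: run the (expensive) solver at a small set of nodes.
nodes = np.linspace(0.0, 1.0, 9).reshape(-1, 1)
values = np.array([expensive_solver(t) for t in nodes])

# 2) Build the kernel surrogate by solving the interpolation system.
weights = np.linalg.solve(rbf_matrix(nodes, nodes), values)

# 3) Non-intrusive UQ: sample the cheap surrogate instead of the solver.
samples = np.random.rand(100_000, 1)
surrogate_vals = rbf_matrix(samples, nodes) @ weights
mean_estimate, std_estimate = surrogate_vals.mean(), surrogate_vals.std()
```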