13 research outputs found

    On invariance functions in Relativity Theory

    A new result expressing equivariant functions in terms of invariant functions on Minkowski space is given. This generalizes the work of Hall and Wightman in the sense that only equivariance is required. In particular, it implies that physical magnitudes, such as the center of mass of a system of relativistic particles, can be defined independently of the choice of coordinate system.
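
    As an illustration of the distinction at play (a sketch, not taken from the paper), the code below contrasts an invariant function of four-momenta (the invariant mass squared) with an equivariant one (the total four-momentum) under a Lorentz boost; the particle momenta and boost velocity are arbitrary example values.

```python
# Invariant vs. equivariant quantities under a Lorentz boost (illustrative sketch).
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

def boost_x(beta):
    """Lorentz boost along the x-axis with velocity beta (units with c = 1)."""
    g = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * beta
    return L

def total_momentum(*momenta):            # equivariant map: commutes with boosts
    return np.sum(momenta, axis=0)

def invariant_mass_sq(p):                # invariant function: unchanged by boosts
    return p @ eta @ p

p1 = np.array([5.0, 3.0, 0.0, 0.0])      # (E, px, py, pz) of two example particles
p2 = np.array([4.0, -1.0, 2.0, 0.0])
L = boost_x(0.6)

P = total_momentum(p1, p2)
P_boosted = total_momentum(L @ p1, L @ p2)

print(np.allclose(L @ P, P_boosted))                                    # True: equivariance
print(np.isclose(invariant_mass_sq(P), invariant_mass_sq(P_boosted)))   # True: invariance
```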

    Some covariance models based on normal scale mixtures

    Modelling spatio-temporal processes has become an important issue in current research. Since Gaussian processes are essentially determined by their second-order structure, broad classes of covariance functions are of interest. Here, a new class is described that merges and generalizes various models presented in the literature, in particular the models of Gneiting (J. Amer. Statist. Assoc. 97 (2002) 590--600) and Stein (Nonstationary spatial covariance functions (2005), Univ. Chicago). Furthermore, new models and a multivariate extension are introduced. (Published at http://dx.doi.org/10.3150/09-BEJ226 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).)
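
    For concreteness, here is a minimal sketch (not the paper's code) of one member of the Gneiting (2002) class that the abstract refers to, using the common choices phi(t) = exp(-c t^gamma) and psi(t) = (a t^alpha + 1)^beta; all parameter values below are illustrative assumptions.

```python
# One member of the Gneiting (2002) class of space-time covariance functions.
import numpy as np

def gneiting_cov(h, u, sigma2=1.0, a=1.0, c=1.0, alpha=0.5, beta=0.5, gamma=0.5, d=2):
    """C(h, u) = sigma2 / psi(u^2)^(d/2) * phi(||h||^2 / psi(u^2)).

    h : spatial lag (array of length d), u : temporal lag (scalar).
    Valid for 0 < alpha, gamma <= 1 and 0 <= beta <= 1.
    """
    psi = (a * u**(2 * alpha) + 1.0)**beta
    t = np.dot(h, h) / psi
    return sigma2 / psi**(d / 2.0) * np.exp(-c * t**gamma)

# Covariance between two points 1 unit apart in space and 2 units apart in time.
print(gneiting_cov(h=np.array([1.0, 0.0]), u=2.0))
```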

    Interpolation with uncoupled separable matrix-valued kernels

    In this paper we consider the problem of approximating vector-valued functions over a domain Ω. For this purpose, we use matrix-valued reproducing kernels, which can be related to reproducing kernel Hilbert spaces of vector-valued functions and can be viewed as an extension of the scalar-valued case. These spaces are promising for modelling correlations between the components of the target function, since the components are not learned independently of each other. We focus on interpolation with such matrix-valued kernels. We derive bounds on the interpolation error in terms of a generalized power function, and we introduce a subclass of matrix-valued kernels whose power functions can be traced back to the power function of scalar-valued reproducing kernels. Finally, we apply these kinds of kernels to artificial data to illustrate the benefit of interpolation with matrix-valued kernels in comparison to a componentwise approach.
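
    A minimal numerical sketch (an illustration under stated assumptions, not the paper's code) of interpolation with a separable matrix-valued kernel K(x, y) = k(x, y) B, where k is a scalar Gaussian kernel and B a positive-definite matrix coupling the output components; the data, kernel width and B below are arbitrary example choices.

```python
# Vector-valued interpolation with a separable matrix-valued kernel K(x, y) = k(x, y) * B.
import numpy as np

def gaussian_kernel(X, Y, eps=4.0):
    """Scalar Gaussian kernel k(x, y) = exp(-eps * ||x - y||^2)."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-eps * d2)

def fit_separable(X, F, B, eps=4.0):
    """Solve the block system (k(X, X) kron B) c = vec(F) for the coefficients c_j."""
    n, m = F.shape
    A = np.kron(gaussian_kernel(X, X, eps), B)        # (n*m, n*m) interpolation matrix
    return np.linalg.solve(A, F.reshape(n * m)).reshape(n, m)

def evaluate(X_new, X, C, B, eps=4.0):
    """Interpolant f(x) = sum_j k(x, x_j) * B @ c_j, evaluated at each row of X_new."""
    return gaussian_kernel(X_new, X, eps) @ (C @ B.T)

rng = np.random.default_rng(0)
X = rng.random((20, 2))                               # interpolation nodes in [0, 1]^2
F = np.column_stack([np.sin(X.sum(1)),                # two correlated target components
                     0.8 * np.sin(X.sum(1)) + 0.2 * X[:, 0]])
B = np.array([[1.0, 0.8],
              [0.8, 1.0]])                            # coupling between output components

C = fit_separable(X, F, B)
print(np.abs(evaluate(X, X, C, B) - F).max())         # ~0: interpolation conditions hold
```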

    Neural Fourier Transform: A General Approach to Equivariant Representation Learning

    Symmetry learning has proven to be an effective approach for extracting the hidden structure of data, with the concept of equivariance playing the central role. However, most current studies rely on architectural theory and corresponding assumptions about the form of the data. We propose the Neural Fourier Transform (NFT), a general framework for learning the latent linear action of a group without assuming explicit knowledge of how the group acts on the data. We present the theoretical foundations of NFT and show that the existence of a linear equivariant feature, which has been assumed ubiquitously in equivariance learning, is equivalent to the existence of a group-invariant kernel on the data space. We also provide experimental results demonstrating the application of NFT in typical scenarios with varying levels of knowledge about the acting group.
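
    The classical case that NFT generalizes can be made concrete with a short sketch (not from the paper): for cyclic shifts acting on length-n signals, the discrete Fourier transform is an equivariant feature, and the latent linear action of a shift is diagonal (multiplication by phases), i.e. Phi(g . x) = M(g) Phi(x).

```python
# Cyclic shifts act diagonally on Fourier coefficients: Phi(g . x) = M(g) Phi(x).
import numpy as np

n = 8
x = np.random.randn(n)
s = 3                                          # shift by s positions

Phi = np.fft.fft                               # encoder: here, the known Fourier transform
shifted = np.roll(x, s)                        # group action g . x on the data space

k = np.arange(n)
M = np.diag(np.exp(-2j * np.pi * k * s / n))   # latent linear (diagonal) action of the shift

print(np.allclose(Phi(shifted), M @ Phi(x)))   # True: equivariance in the latent space
```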

    A Unifying Framework in Vector-valued Reproducing Kernel Hilbert Spaces for Manifold Regularization and Co-Regularized Multi-view Learning

    This paper presents a general vector-valued reproducing kernel Hilbert space (RKHS) framework for the problem of learning an unknown functional dependency between a structured input space and a structured output space. Our formulation encompasses both Vector-valued Manifold Regularization and Co-regularized Multi-view Learning, providing in particular a unifying framework linking these two important learning approaches. In the case of the least squares loss function, we provide a closed-form solution, obtained by solving a system of linear equations. In the case of Support Vector Machine (SVM) classification, our formulation generalizes both the binary Laplacian SVM to the multi-class, multi-view setting and the multi-class Simplex Cone SVM to the semi-supervised, multi-view setting. The solution is obtained by solving a single quadratic optimization problem, as in standard SVM, via the Sequential Minimal Optimization (SMO) approach. Empirical results obtained on the task of object recognition, using several challenging datasets, demonstrate the competitiveness of our algorithms compared with other state-of-the-art methods.
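
    To make the "closed-form solution via a linear system" concrete, here is a hedged scalar-valued sketch of Laplacian-regularized least squares, the manifold-regularization special case that this framework generalizes to vector-valued and multi-view settings; the kernel, graph construction and regularization parameters below are illustrative assumptions, not the paper's settings.

```python
# Scalar Laplacian-regularized least squares: coefficients from a single linear system.
import numpy as np

def rbf(X, Y, gamma=10.0):
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

def laprls_fit(X, y_labeled, n_labeled, gamma_A=1e-3, gamma_I=1e-2, k_nn=5):
    """alpha = (J K + gamma_A * l * I + gamma_I * l / n^2 * L K)^{-1} J y."""
    n = X.shape[0]
    l = n_labeled
    K = rbf(X, X)

    # Graph Laplacian from a simple k-nearest-neighbour adjacency (an assumption).
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[1:k_nn + 1]
        W[i, nn] = W[nn, i] = 1.0
    L = np.diag(W.sum(1)) - W

    J = np.zeros((n, n)); J[:l, :l] = np.eye(l)          # selects the labeled points
    Y = np.zeros(n); Y[:l] = y_labeled
    A = J @ K + gamma_A * l * np.eye(n) + (gamma_I * l / n**2) * L @ K
    return np.linalg.solve(A, Y)                         # expansion coefficients alpha

# Usage: f(x) = sum_i alpha_i k(x, x_i) over all labeled and unlabeled points.
X = np.random.rand(40, 2)
alpha = laprls_fit(X, y_labeled=np.sign(X[:10, 0] - 0.5), n_labeled=10)
preds = rbf(X, X) @ alpha
```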

    Do ideas have shape? Plato's theory of forms as the continuous limit of artificial neural networks

    We show that ResNets converge, in the infinite-depth limit, to a generalization of image registration algorithms. In this generalization, images are replaced by abstractions (ideas) living in high-dimensional RKHS, and material points are replaced by data points. Whereas computational anatomy aligns images via deformations of the material space, this generalization aligns ideas via transformations of their RKHS. This identification of ResNets as idea registration algorithms has several remarkable consequences. The search for good architectures can be reduced to the search for good kernels, and we show that the composition of idea registration blocks with reduced equivariant multi-channel kernels (introduced here) recovers and generalizes CNNs to arbitrary spaces and groups of transformations. Minimizers of L2-regularized ResNets satisfy a discrete least action principle implying the near-preservation of the norm of weights and biases across layers. The parameters of trained ResNets can be identified as solutions of an autonomous Hamiltonian system defined by the activation function and the architecture of the ANN. Momentum variables provide a sparse representation of the parameters of a ResNet. The registration regularization strategy provides a provably robust alternative to Dropout for ANNs. Pointwise RKHS error estimates lead to deterministic error estimates for ANNs.
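
    A minimal sketch (an illustrative assumption, not the paper's construction) of the viewpoint behind the infinite-depth limit: a ResNet whose blocks share a step size is an explicit Euler discretization of a flow dx/dt = v(x, theta(t)), so increasing the depth while shrinking the step approaches a continuous registration-type flow.

```python
# A residual network as an explicit Euler discretization of a continuous-time flow.
import numpy as np

def resnet_forward(x, weights, biases, step):
    """x_{l+1} = x_l + step * tanh(W_l x_l + b_l): one Euler step per residual block."""
    for W, b in zip(weights, biases):
        x = x + step * np.tanh(W @ x + b)
    return x

dim, depth = 4, 1000
rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.1, size=(dim, dim)) for _ in range(depth)]
biases = [rng.normal(scale=0.1, size=dim) for _ in range(depth)]

x0 = rng.normal(size=dim)
xT = resnet_forward(x0, weights, biases, step=1.0 / depth)   # deep net = fine discretization
print(xT)
```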
