    Geometric deep learning: going beyond Euclidean data

    Many scientific fields study data with an underlying structure that is a non-Euclidean space. Examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems in computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into the networks used to model them. Geometric deep learning is an umbrella term for emerging techniques that attempt to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to give an overview of different examples of geometric deep learning problems and to present available solutions, key difficulties, applications, and future research directions in this nascent field.
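
    As a concrete illustration of the graph setting discussed above, the sketch below implements a single graph-convolution layer, one of the simplest neural models on non-Euclidean domains. It is a minimal NumPy sketch in the spirit of widely used graph convolutional networks, not the survey's own formulation; the function name, toy graph, and weight shapes are illustrative assumptions.

```python
import numpy as np

def graph_conv_layer(A, X, W):
    """One graph-convolution layer: symmetrically normalized
    adjacency (with self-loops) times node features times a
    learned weight matrix, followed by a ReLU nonlinearity."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^{-1/2} A_hat D^{-1/2}
    return np.maximum(A_norm @ X @ W, 0.0)

# Toy usage: 4-node graph, 3 input features, 2 output features.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 1.],
              [0., 1., 0., 1.],
              [0., 1., 1., 0.]])
X = np.random.randn(4, 3)
W = np.random.randn(3, 2)
H = graph_conv_layer(A, X, W)  # shape (4, 2)
```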

    Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations

    The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals. However, a precise study of these properties and of how they affect learning guarantees is still missing. In this paper, we consider deep convolutional representations of signals; we study their invariance to translations and to more general groups of transformations, their stability to the action of diffeomorphisms, and their ability to preserve signal information. This analysis is carried out by introducing a multilayer kernel based on convolutional kernel networks and by studying the geometry induced by the kernel mapping. We then characterize the corresponding reproducing kernel Hilbert space (RKHS), showing that it contains a large class of convolutional neural networks with homogeneous activation functions. This analysis allows us to separate data representation from learning and to provide a canonical measure of model complexity, the RKHS norm, which controls both the stability and the generalization of any learned model. In addition to models in the constructed RKHS, our stability analysis also applies to convolutional networks with generic activations such as rectified linear units, and we discuss its relationship with recent generalization bounds based on spectral norms.
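
    To make the role of a norm-based complexity measure concrete, here is a small hypothetical sketch: it computes the product of per-layer spectral norms of a network's weight matrices, the quantity underlying the spectral-norm generalization bounds the abstract mentions. This is only a crude, illustrative proxy, not the paper's RKHS norm, which is defined through the multilayer convolutional kernel.

```python
import numpy as np

def spectral_complexity(weights):
    """Product of per-layer spectral norms (largest singular values).
    A crude, illustrative proxy for norm-based capacity control; the
    paper's RKHS norm is a related but different quantity."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

# Toy usage: two fully connected layers with scaled Gaussian init.
layers = [np.random.randn(64, 32) / np.sqrt(32),
          np.random.randn(32, 16) / np.sqrt(16)]
print(spectral_complexity(layers))
```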

    Local Kernels and the Geometric Structure of Data

    We introduce a theory of local kernels, which generalize the kernels used in the standard diffusion maps construction for nonparametric modeling. We prove that evaluating a local kernel on a data set gives a discrete representation of the generator of a continuous Markov process, which converges in the limit of large data. We explicitly connect the drift and diffusion coefficients of the process to the moments of the kernel. Moreover, when the kernel is symmetric, the generator is the Laplace-Beltrami operator with respect to a geometry that is influenced by the embedding geometry and the properties of the kernel. In particular, this allows us to generate any Riemannian geometry by an appropriate choice of local kernel. In this way, we continue a program of Belkin, Niyogi, Coifman, and others to reinterpret the current diverse collection of kernel-based data analysis methods and place them in a geometric framework. We show how to use this framework to design local kernels invariant to various features of data. These data-driven local kernels can be used to construct conformally invariant embeddings and to reconstruct global diffeomorphisms.
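
    Since local kernels generalize the diffusion maps construction, a minimal sketch of that baseline construction may help fix ideas: an isotropic Gaussian kernel is evaluated on the data, row-normalized into a Markov matrix, and its leading nontrivial eigenvectors give embedding coordinates. The bandwidth, function names, and toy data are assumptions for illustration; the paper's local kernels replace the Gaussian with more general, possibly anisotropic, choices.

```python
import numpy as np

def diffusion_map(X, epsilon, n_coords=2):
    """Standard diffusion maps with an isotropic Gaussian local kernel:
    kernel matrix -> Markov normalization -> spectral embedding."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    K = np.exp(-D2 / epsilon)
    P = K / K.sum(axis=1, keepdims=True)    # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)[1:n_coords + 1]  # skip the constant eigenvector
    return vecs.real[:, order] * vals.real[order]

# Toy usage: recover the circular structure of noisy data on a circle.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.c_[np.cos(t), np.sin(t)] + 0.01 * np.random.randn(200, 2)
coords = diffusion_map(X, epsilon=0.1)
```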

    Time-causal and time-recursive spatio-temporal receptive fields

    We present an improved model and theory for time-causal and time-recursive spatio-temporal receptive fields, based on a combination of Gaussian receptive fields over the spatial domain and first-order integrators, or equivalently truncated exponential filters, coupled in cascade over the temporal domain. Compared to previous spatio-temporal scale-space formulations in terms of non-enhancement of local extrema or scale invariance, these receptive fields are based on different scale-space axiomatics over time, ensuring non-creation of new local extrema or zero-crossings with increasing temporal scale. Specifically, extensions are presented concerning (i) parameterizing the intermediate temporal scale levels, (ii) analysing the resulting temporal dynamics, (iii) transferring the theory to a discrete implementation, (iv) computing scale-normalized spatio-temporal derivative expressions for spatio-temporal feature detection, and (v) computational modelling of receptive fields in the lateral geniculate nucleus (LGN) and the primary visual cortex (V1) in biological vision. We show that by distributing the intermediate temporal scale levels according to a logarithmic distribution, we obtain much faster temporal response properties (shorter temporal delays) than with a uniform distribution. Specifically, these kernels converge very rapidly to a limit kernel possessing true self-similar scale-invariant properties over temporal scales, thereby allowing for true scale invariance over variations in the temporal scale, even though the underlying temporal scale-space representation is based on a discretized temporal scale parameter. We show how scale-normalized temporal derivatives can be defined for these time-causal scale-space kernels and how the composed theory can be used for computing basic types of scale-normalized spatio-temporal derivative expressions in a computationally efficient manner.
    Comment: 39 pages, 12 figures, 5 tables; in Journal of Mathematical Imaging and Vision, published online Dec 201
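
    The temporal part of the construction lends itself to a short sketch: a cascade of discrete first-order integrators whose cumulative temporal variances follow a logarithmic distribution, as described above. The time constants below use the continuous-case relation (each truncated exponential filter with time constant mu contributes variance mu^2); the paper derives the exact discrete relations, so treat this as an illustrative approximation with assumed parameter names.

```python
import numpy as np

def first_order_integrator(signal, mu):
    """Causal, time-recursive first-order integrator (a discrete
    approximation of a truncated exponential filter)."""
    out = np.zeros_like(signal, dtype=float)
    prev = 0.0
    for t, x in enumerate(signal):
        prev += (x - prev) / (1.0 + mu)   # time-recursive update
        out[t] = prev
    return out

def temporal_scale_space(signal, tau_max, K, c=2.0):
    """Cascade of first-order integrators with logarithmically
    distributed temporal scale levels tau_k = c**(2*(k-K)) * tau_max."""
    taus = [tau_max * c ** (2 * (k - K)) for k in range(1, K + 1)]
    out, tau_prev = np.asarray(signal, dtype=float), 0.0
    for tau in taus:
        mu = np.sqrt(tau - tau_prev)  # continuous-case variance relation (approximate)
        out = first_order_integrator(out, mu)
        tau_prev = tau
    return out

# Toy usage: smooth an impulse to visualize the composed temporal kernel.
impulse = np.zeros(200); impulse[0] = 1.0
kernel = temporal_scale_space(impulse, tau_max=100.0, K=6)
```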

    Kernel Analog Forecasting: Multiscale Test Problems

    Data-driven prediction is becoming increasingly widespread as the volume of available data grows and as algorithmic development matches this growth. The nature of the predictions made, and the manner in which they should be interpreted, depend crucially on the extent to which the variables chosen for prediction are Markovian, or approximately Markovian. Multiscale systems provide a framework in which this issue can be analyzed. In this work, kernel analog forecasting methods are studied from the perspective of data generated by multiscale dynamical systems. The problems chosen exhibit a variety of different Markovian closures, using both averaging and homogenization; furthermore, settings where scale separation is not present and the predicted variables are non-Markovian are also considered. The studies provide guidance for the interpretation of data-driven prediction methods when used in practice.
    Comment: 30 pages, 14 figures; clarified several ambiguous parts, added references, and a comparison with Lorenz' original method (Sec. 4.5)
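
    As a concrete (hypothetical) instance of the class of methods being studied, the sketch below gives a minimal kernel analog forecast: the future of a new state is predicted as a kernel-weighted average of the observed futures of similar past states (analogs). The function names, Gaussian kernel choice, and toy data are assumptions; the paper's actual test problems are multiscale dynamical systems rather than this toy signal.

```python
import numpy as np

def kernel_analog_forecast(train, query, lead, epsilon):
    """Predict the state `lead` steps ahead of `query` as a
    kernel-weighted average of the lead-step futures of past analogs."""
    past, future = train[:-lead], train[lead:]   # analogs and their futures
    d2 = ((past - query) ** 2).sum(axis=1)       # distances to the query state
    w = np.exp(-d2 / epsilon)
    w /= w.sum()                                 # normalized kernel weights
    return w @ future

# Toy usage: forecast 5 steps ahead on a scalar sinusoidal series.
x = np.sin(np.linspace(0, 20, 500))[:, None]
pred = kernel_analog_forecast(x, query=x[100], lead=5, epsilon=0.05)
```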

    Left-invariant evolutions of wavelet transforms on the Similitude Group

    Enhancement of multiple-scale elongated structures in noisy image data is relevant for many biomedical applications, but commonly used PDE-based enhancement techniques often fail at crossings in an image. To obtain an overview of how an image is composed of local multiple-scale elongated structures, we construct a multiple-scale orientation score, which is a continuous wavelet transform on the similitude group SIM(2). Our unitary transform maps the space of images onto a reproducing kernel space defined on SIM(2), allowing us to robustly relate Euclidean (and scaling) invariant operators on images to left-invariant operators on the corresponding continuous wavelet transform. Rather than the often-used wavelet (soft-)thresholding techniques, we employ the group structure in the wavelet domain to arrive at left-invariant evolutions and flows (diffusion) for contextual, crossing-preserving enhancement of multiple-scale elongated structures in noisy images. We present experiments that display the benefits of our work compared to recent PDE techniques acting directly on the images and to our previous work on left-invariant diffusions on orientation scores defined on the Euclidean motion group.
    Comment: 40 pages
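
    A rough sketch of the transform underlying the paper may help: a multiple-scale orientation score is obtained by correlating the image with rotated and dilated copies of an anisotropic wavelet, i.e. by sampling a continuous wavelet transform over the rotations and scalings of SIM(2). The Gabor wavelet below is a stand-in assumption for the paper's admissible wavelet, and the FFT-based correlation, grid sizes, and parameter names are illustrative choices.

```python
import numpy as np

def gabor_wavelet(half, theta, scale):
    """Anisotropic Gabor wavelet at orientation theta and the given
    scale; elongated along its oscillation direction."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (2 * yr) ** 2) / (2 * scale ** 2))
    return envelope * np.cos(2 * np.pi * xr / (2 * scale))

def orientation_score(image, n_theta=8, scales=(2.0, 4.0, 8.0)):
    """Sampled wavelet transform on SIM(2): U[theta, scale] is the
    (circular, FFT-based) correlation of the image with a rotated,
    dilated wavelet. Assumes the image is larger than the wavelet."""
    F = np.fft.fft2(image)
    U = np.zeros((n_theta, len(scales)) + image.shape)
    for i, theta in enumerate(np.linspace(0, np.pi, n_theta, endpoint=False)):
        for j, s in enumerate(scales):
            w = gabor_wavelet(2 * int(max(scales)), theta, s)
            Wf = np.fft.fft2(w, s=image.shape)
            U[i, j] = np.real(np.fft.ifft2(F * np.conj(Wf)))  # up to a circular shift
    return U

# Toy usage: score of a random 64x64 image, shape (8, 3, 64, 64).
U = orientation_score(np.random.rand(64, 64))
```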