
    Nonlinear Analysis of Phase Retrieval and Deep Learning

    Nonlinearity causes information loss. The phase retrieval problem, or phaseless reconstruction problem, seeks to reconstruct a signal from the magnitudes of linear measurements. With a more complicated design, convolutional neural networks use nonlinearity to extract useful features. We can model both problems in a frame-theoretic setting. In the presence of noise, it is important to study the stability of both the phaseless reconstruction and the feature-extraction part of convolutional neural networks. We prove Lipschitz properties in both cases. In the phaseless reconstruction problem, we show that phase retrievability implies a bi-Lipschitz reconstruction map, which can be extended to the Euclidean space to accommodate noise while remaining stable. In the deep learning problem, we set up a general framework for convolutional neural networks and provide an approach for computing their Lipschitz constants.
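
    To make the measurement model concrete, here is a minimal numpy sketch of the phaseless (magnitude-only) measurement map described above; the random frame A, the signal x, and the dimensions are illustrative placeholders, not the paper's construction.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative real frame: m random measurement vectors in R^n
        # (m >= 2n - 1 is a standard sufficient condition for generic
        # real phase retrievability).
        n = 8
        m = 2 * n - 1
        A = rng.standard_normal((m, n))

        def phaseless_map(x, A):
            # Map a signal x to the magnitudes |<a_i, x>| of its frame coefficients.
            return np.abs(A @ x)

        x = rng.standard_normal(n)
        y = phaseless_map(x, A)

        # The map is blind to a global sign (the "phase" in the real case):
        # x and -x yield identical measurements, so any reconstruction map is
        # defined only up to that symmetry.
        assert np.allclose(y, phaseless_map(-x, A))

    Modulo this sign ambiguity, the bi-Lipschitz property studied in the paper says that distances between signals are comparable, above and below, to distances between their measurement magnitudes.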

    Phase Harmonic Correlations and Convolutional Neural Networks

    A major issue in harmonic analysis is to capture the phase dependence of frequency representations, which carries important signal properties. It seems that convolutional neural networks have found a way. Over time-series and images, convolutional networks often learn a first layer of filters which are well localized in the frequency domain, with different phases. We show that a rectifier then acts as a filter on the phase of the resulting coefficients. It computes signal descriptors which are local in space, frequency and phase. The non-linear phase filter becomes a multiplicative operator over phase harmonics computed with a Fourier transform along the phase. We prove that it defines a bi-Lipschitz and invertible representation. The correlations of phase harmonic coefficients characterise coherent structures from their phase dependence across frequencies. For wavelet filters, we show numerically that signals having sparse wavelet coefficients can be recovered from few phase harmonic correlations, which provide a compressive representation. Comment: 26 pages, 8 figures
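
    A minimal numpy sketch of the phase harmonic nonlinearity discussed above: the k-th harmonic keeps the modulus of a complex coefficient and multiplies its phase by k. The toy coefficients z are placeholder values, not wavelet coefficients of a real signal.

        import numpy as np

        def phase_harmonic(z, k):
            # k-th phase harmonic: keep the modulus |z| and multiply the
            # phase by k, i.e. |z| * exp(i * k * arg(z)).
            return np.abs(z) * np.exp(1j * k * np.angle(z))

        # Toy complex coefficients standing in for first-layer filter outputs.
        z = np.array([1.0 + 1.0j, -2.0j, 0.5 - 0.5j])

        # Harmonics for several k; correlations between such harmonics across
        # frequencies are the descriptors the abstract refers to.
        for k in range(4):
            print(k, np.round(phase_harmonic(z, k), 3))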

    On the Inductive Bias of Neural Tangent Kernels

    State-of-the-art neural networks are heavily over-parameterized, making the optimization algorithm a crucial ingredient for learning predictive models with good generalization properties. A recent line of work has shown that in a certain over-parameterized regime, the learning dynamics of gradient descent are governed by a certain kernel obtained at initialization, called the neural tangent kernel. We study the inductive bias of learning in such a regime by analyzing this kernel and the corresponding function space (RKHS). In particular, we study smoothness, approximation, and stability properties of functions with finite norm, including stability to image deformations in the case of convolutional networks, and compare to other known kernels for similar architectures. Comment: NeurIPS 2019
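
    The kernel in question can be probed numerically. Below is a small numpy sketch that computes the empirical neural tangent kernel of a two-layer ReLU network at initialization as the inner product of parameter gradients; the architecture, width, and scaling are illustrative assumptions rather than the paper's exact setting.

        import numpy as np

        rng = np.random.default_rng(0)

        # Minimal two-layer ReLU network f(x) = v . relu(W x) / sqrt(width),
        # with parameters drawn at initialization (illustrative scaling).
        d = 5
        width = 2048
        W = rng.standard_normal((width, d))
        v = rng.standard_normal(width)

        def param_gradient(x):
            # Gradient of f(x) with respect to all parameters (W, v), flattened.
            pre = W @ x                     # pre-activations
            act = np.maximum(pre, 0.0)      # ReLU activations
            grad_v = act / np.sqrt(width)
            grad_W = np.outer(v * (pre > 0), x) / np.sqrt(width)
            return np.concatenate([grad_W.ravel(), grad_v])

        def empirical_ntk(x1, x2):
            # Empirical neural tangent kernel at initialization:
            # k(x1, x2) = <grad_theta f(x1), grad_theta f(x2)>.
            return param_gradient(x1) @ param_gradient(x2)

        x1 = rng.standard_normal(d)
        x2 = rng.standard_normal(d)
        print(empirical_ntk(x1, x2), empirical_ntk(x1, x1))

    At large width this empirical kernel concentrates around its infinite-width limit, which is the object whose RKHS the paper analyzes.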

    NAIS-Net: Stable Deep Networks from Non-Autonomous Differential Equations

    This paper introduces the Non-Autonomous Input-Output Stable Network (NAIS-Net), a very deep architecture where each stacked processing block is derived from a time-invariant non-autonomous dynamical system. Non-autonomy is implemented by skip connections from the block input to each of the unrolled processing stages and allows stability to be enforced so that blocks can be unrolled adaptively to a pattern-dependent processing depth. NAIS-Net induces non-trivial, Lipschitz input-output maps, even for an infinite unroll length. We prove that the network is globally asymptotically stable, so that for every initial condition there is exactly one input-dependent equilibrium assuming tanh units, and multiple stable equilibria for ReLU units. An efficient implementation that enforces the stability under derived conditions for both fully-connected and convolutional layers is also presented. Experimental results show how NAIS-Net exhibits stability in practice, yielding a significant reduction in generalization gap compared to ResNets. Comment: NIPS 2018
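
    A minimal numpy sketch of the unrolled block structure described above, assuming the tanh formulation x_{k+1} = x_k + h * tanh(A x_k + B u + b) with the block input u fed to every stage; the parameter values are illustrative, and the stability-enforcing constraint on A derived in the paper is omitted.

        import numpy as np

        rng = np.random.default_rng(0)

        def nais_block(u, A, B, b, depth=50, h=0.1):
            # Unroll one non-autonomous residual block:
            #     x_{k+1} = x_k + h * tanh(A x_k + B u + b)
            # The block input u re-enters at every stage (the skip connection
            # that makes the system non-autonomous).
            x = np.zeros_like(b)
            for _ in range(depth):
                x = x + h * np.tanh(A @ x + B @ u + b)
            return x

        n_state, n_in = 4, 3
        # Illustrative parameters; the conditions the paper derives on A
        # (and the corresponding projection step) are not implemented here.
        A = -np.eye(n_state) + 0.1 * rng.standard_normal((n_state, n_state))
        B = rng.standard_normal((n_state, n_in))
        b = rng.standard_normal(n_state)

        u = rng.standard_normal(n_in)
        print(nais_block(u, A, B, b))

    When the block is stable, increasing depth drives the state toward a single input-dependent equilibrium, which is what allows the pattern-dependent processing depth mentioned in the abstract.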