175 research outputs found

    The fast Fourier Transform and fast Wavelet Transform for Patterns on the Torus

    We introduce a fast Fourier transform on regular d-dimensional lattices. We investigate properties of congruence class representatives, i.e. their ordering, to classify directions and derive a Cooley-Tukey algorithm. Beyond the speed of the Fourier technique itself, this transform has the advantage of being efficiently parallelizable, yielding versions faster than the one-dimensional Fourier transform. These properties of the lattice can further be used to perform a fast multivariate wavelet decomposition, where the wavelets are given as trigonometric polynomials. Furthermore, the preferred directions of the decomposition itself can be characterised. Comment: 23 pages, 10 figures, revised version
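
    A minimal sketch of the idea in its simplest case (not the paper's algorithm): when the pattern matrix is diagonal, the pattern on the torus is a tensor-product grid and the lattice Fourier transform factorizes into ordinary one-dimensional FFTs, here via NumPy. The paper's contribution is the analogous Cooley-Tukey factorization for general integer lattice matrices.

    import numpy as np

    # Diagonal pattern matrix M = diag(m1, m2): the pattern is a tensor grid.
    m1, m2 = 8, 16
    x = 2 * np.pi * np.arange(m1) / m1          # sample points on the torus
    y = 2 * np.pi * np.arange(m2) / m2
    f = np.cos(3 * x)[:, None] + np.sin(5 * y)[None, :]  # pattern samples

    fhat = np.fft.fft2(f) / (m1 * m2)           # normalized lattice FFT

    # The dominant coefficients sit at frequencies (+-3, 0) and (0, +-5),
    # i.e. flat grid indices (3, 0), (5, 0), (0, 5), (0, 11).
    idx = np.unravel_index(np.argsort(np.abs(fhat), axis=None)[::-1][:4],
                           fhat.shape)
    print(list(zip(*idx)))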

    A geometric analysis of subspace clustering with outliers

    This paper considers the problem of clustering a collection of unlabeled data points assumed to lie near a union of lower-dimensional planes. As is common in computer vision or unsupervised learning applications, we do not know in advance how many subspaces there are nor do we have any information about their dimensions. We develop a novel geometric analysis of an algorithm named sparse subspace clustering (SSC) [In IEEE Conference on Computer Vision and Pattern Recognition, 2009. CVPR 2009 (2009) 2790-2797. IEEE], which significantly broadens the range of problems where it is provably effective. For instance, we show that SSC can recover multiple subspaces, each of dimension comparable to the ambient dimension. We also prove that SSC can correctly cluster data points even when the subspaces of interest intersect. Further, we develop an extension of SSC that succeeds when the data set is corrupted with possibly overwhelmingly many outliers. Underlying our analysis are clear geometric insights, which may bear on other sparse recovery problems. A numerical study complements our theoretical analysis and demonstrates the effectiveness of these methods. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) at http://dx.doi.org/10.1214/12-AOS1034 by the Institute of Mathematical Statistics (http://www.imstat.org)
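
    The core of SSC can be sketched in a few lines: each data point is expressed as a sparse combination of the other points, and the resulting coefficients define an affinity for spectral clustering. The sketch below uses scikit-learn's Lasso as a stand-in for the paper's l1 program; all parameter values are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.cluster import SpectralClustering

    def ssc(X, n_clusters, lam=0.05):
        """Minimal SSC sketch: columns of X are data points; each one is
        regressed sparsely on the others, and |C| + |C|^T is the affinity."""
        n = X.shape[1]
        C = np.zeros((n, n))
        for i in range(n):
            others = np.delete(np.arange(n), i)
            lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
            lasso.fit(X[:, others], X[:, i])
            C[others, i] = lasso.coef_
        W = np.abs(C) + np.abs(C).T            # symmetrized affinity
        return SpectralClustering(n_clusters=n_clusters,
                                  affinity='precomputed').fit_predict(W)

    # Two random 2-dimensional subspaces of R^20, 40 points each.
    rng = np.random.default_rng(0)
    X = np.hstack([rng.standard_normal((20, 2)) @ rng.standard_normal((2, 40))
                   for _ in range(2)])
    print(ssc(X, n_clusters=2))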

    Overparameterized ReLU Neural Networks Learn the Simplest Models: Neural Isometry and Exact Recovery

    The practice of deep learning has shown that neural networks generalize remarkably well even with an extreme number of learned parameters. This appears to contradict traditional statistical wisdom, in which a trade-off between model complexity and fit to the data is essential. We set out to resolve this discrepancy from a convex optimization and sparse recovery perspective. We consider the training and generalization properties of two-layer ReLU networks with standard weight decay regularization. Under certain regularity assumptions on the data, we show that ReLU networks with an arbitrary number of parameters learn only simple models that explain the data. This is analogous to the recovery of the sparsest linear model in compressed sensing. For ReLU networks and their variants with skip connections or normalization layers, we present isometry conditions that ensure the exact recovery of planted neurons. For randomly generated data, we show the existence of a phase transition in recovering planted neural network models. The situation is simple: whenever the ratio between the number of samples and the dimension exceeds a numerical threshold, the recovery succeeds with high probability; otherwise, it fails with high probability. Surprisingly, ReLU networks learn simple and sparse models even when the labels are noisy. The phase transition phenomenon is confirmed through numerical experiments.
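
    A hedged numerical illustration of the setup only (not the paper's convex-optimization analysis): gradient descent with weight decay on a two-layer ReLU network fitted to labels from a single planted neuron, followed by a count of the neurons that remain effectively active. All sizes, step sizes, and thresholds below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d, m = 400, 20, 100                     # samples, dimension, hidden width
    X = rng.standard_normal((n, d))
    w_star = rng.standard_normal(d)
    y = np.maximum(X @ w_star, 0.0)            # planted single-neuron labels

    W = 0.1 * rng.standard_normal((d, m))      # first-layer weights
    a = 0.1 * rng.standard_normal(m)           # second-layer weights
    lr, wd = 1e-2, 1e-3                        # step size, weight decay

    for _ in range(3000):
        H = np.maximum(X @ W, 0.0)             # hidden activations
        r = H @ a - y                          # residual
        grad_a = H.T @ r / n + wd * a
        grad_W = X.T @ ((r[:, None] * a) * (H > 0)) / n + wd * W
        a -= lr * grad_a
        W -= lr * grad_W

    # Heuristic effective sparsity: neurons with non-negligible weight norm.
    norms = np.linalg.norm(W, axis=0) * np.abs(a)
    print("active neurons:", np.sum(norms > 1e-2 * norms.max()))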

    Efficient estimation of nearly sparse many-body quantum Hamiltonians

    We develop an efficient and robust approach to Hamiltonian identification for multipartite quantum systems based on the method of compressed sensing. This work demonstrates that with only O(s log(d)) experimental configurations, consisting of random local preparations and measurements, one can estimate the Hamiltonian of a d-dimensional system, provided that the Hamiltonian is nearly s-sparse in a known basis. We numerically simulate the performance of this algorithm for three- and four-body interactions in spin-coupled quantum dots and atoms in optical lattices. Furthermore, we apply the algorithm to characterize Hamiltonian fine structure and unknown system-bath interactions. Comment: 8 pages, 2 figures. Title is changed. Detailed error analysis is added. Figures are updated with additional clarifying discussion.
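
    The recovery step can be illustrated with a generic compressed-sensing sketch: an s-sparse coefficient vector in dimension d is recovered from on the order of s log(d) random linear measurements by l1 minimization. The ISTA solver and all constants below are stand-ins, not the paper's experimental configurations.

    import numpy as np

    def ista(A, y, lam=1e-3, n_iter=2000):
        """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        return x

    rng = np.random.default_rng(2)
    d, s = 256, 5                              # coefficient dimension, sparsity
    m = int(4 * s * np.log(d))                 # O(s log d) random measurements
    coeffs = np.zeros(d)
    coeffs[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
    A = rng.standard_normal((m, d)) / np.sqrt(m)
    x_hat = ista(A, A @ coeffs)
    print("relative error:",
          np.linalg.norm(x_hat - coeffs) / np.linalg.norm(coeffs))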

    Shearlets and Optimally Sparse Approximations

    Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal, both for the purpose of compression and for efficient analysis, is the provision of optimally sparse approximations of such functions. Recently, cartoon-like images were introduced in 2D and 3D as a suitable model class, and approximation properties were measured by the decay rate of the L^2 error of the best N-term approximation. Shearlet systems are to date the only representation system that provides optimally sparse approximations of this model class in both 2D and 3D. Moreover, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames has been derived, and these frames also satisfy this optimality benchmark. This chapter serves as an introduction to and a survey of sparse approximations of cartoon-like images by band-limited and compactly supported shearlet frames, as well as a reference for the state of the art of this research field. Comment: in "Shearlets: Multiscale Analysis for Multivariate Data", Birkhäuser-Springer
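
    The benchmark itself, the decay of the L^2 error of the best N-term approximation, is easy to probe numerically. The sketch below keeps the N largest coefficients of a separable DCT applied to a cartoon-like disc image; the DCT is only a stand-in, since a shearlet frame (which this snippet does not implement) is what achieves the optimal decay rate for this model class.

    import numpy as np
    from scipy.fft import dctn, idctn

    n = 128
    xx, yy = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    img = (xx ** 2 + yy ** 2 < 0.5).astype(float)   # cartoon-like disc image

    c = dctn(img, norm='ortho')
    order = np.argsort(np.abs(c), axis=None)[::-1]  # coefficients by magnitude
    for N in (50, 200, 1000, 5000):
        mask = np.zeros(c.size, dtype=bool)
        mask[order[:N]] = True                      # best N-term approximation
        approx = idctn(np.where(mask.reshape(c.shape), c, 0.0), norm='ortho')
        print(N, np.linalg.norm(img - approx))      # L2 error vs. N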

    High-dimensional wave atoms and compression of seismic datasets

    Wave atoms are a low-redundancy alternative to curvelets, suitable for high-dimensional seismic data processing. This abstract extends the wave atom orthobasis construction to 3D, 4D, and 5D Cartesian arrays and parallelizes it in a shared-memory environment. An implementation of the algorithm for NVIDIA CUDA-capable graphics processing units (GPUs) is also developed to accelerate computation for 2D and 3D data. The new transforms are benchmarked against the Fourier transform for compression of data generated from synthetic 2D and 3D acoustic models. National Science Foundation (U.S.); Alfred P. Sloan Foundation
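
    The benchmarking protocol can be sketched for the Fourier baseline the abstract compares against (wave atoms themselves are not implemented here): threshold the Fourier coefficients of a synthetic volume at several compression ratios and record the relative reconstruction error.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 64
    z = np.linspace(0, 4 * np.pi, n)
    # Synthetic oscillatory 3D volume plus a little noise.
    vol = (np.sin(z)[:, None, None] * np.cos(2 * z)[None, :, None]
           * np.ones(n)[None, None, :]) + 0.01 * rng.standard_normal((n, n, n))

    F = np.fft.fftn(vol)
    order = np.argsort(np.abs(F), axis=None)[::-1]
    for ratio in (100, 300, 1000):
        keep = F.size // ratio                      # coefficients retained
        mask = np.zeros(F.size, dtype=bool)
        mask[order[:keep]] = True
        rec = np.fft.ifftn(np.where(mask.reshape(F.shape), F, 0)).real
        err = np.linalg.norm(rec - vol) / np.linalg.norm(vol)
        print(f"1:{ratio} compression, relative error {err:.3f}")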

    Unmixing multitemporal hyperspectral images accounting for smooth and abrupt variations

    A classical problem in hyperspectral imaging, referred to as hyperspectral unmixing, consists in estimating the spectra associated with each material present in an image and their proportions in each pixel. In practice, illumination variations (e.g., due to declivity or complex interactions with the observed materials) and the possible presence of outliers can result in significant changes in both the shape and the amplitude of the measurements, thus modifying the extracted signatures. In this context, sequences of hyperspectral images acquired over the same area at different time instants are expected to be simultaneously affected by such phenomena. Thus, we propose a hierarchical Bayesian model to simultaneously account for smooth and abrupt spectral variations affecting a set of multitemporal hyperspectral images to be jointly unmixed. This model assumes that smooth variations can be interpreted as the result of endmember variability, whereas abrupt variations are due to significant changes in the imaged scene (e.g., presence of outliers, additional endmembers, etc.). The parameters of this Bayesian model are estimated using samples generated by a Gibbs sampler from its posterior distribution. Performance assessment is conducted on synthetic data in comparison with state-of-the-art unmixing methods.
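
    For orientation, a heavily simplified sketch of the underlying linear mixing model only (observations = endmembers x abundances + noise), with abundances estimated per pixel by nonnegative least squares and a soft sum-to-one constraint. The paper's hierarchical Bayesian model, Gibbs sampler, and smooth/abrupt variability terms are not reproduced here.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(4)
    L, R, P = 50, 3, 100                    # bands, endmembers, pixels
    M = np.abs(rng.standard_normal((L, R)))           # endmember spectra
    A = rng.dirichlet(np.ones(R), size=P).T           # true abundances
    Y = M @ A + 0.01 * rng.standard_normal((L, P))    # observed pixels

    # Append a weighted all-ones row so nnls also enforces sum-to-one softly.
    rho = 10.0
    M_aug = np.vstack([M, rho * np.ones((1, R))])
    A_hat = np.column_stack([nnls(M_aug, np.append(Y[:, p], rho))[0]
                             for p in range(P)])
    print("abundance RMSE:", np.sqrt(np.mean((A_hat - A) ** 2)))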

    Beyond convergence rates: Exact recovery with Tikhonov regularization with sparsity constraints

    The Tikhonov regularization of linear ill-posed problems with an ℓ^1 penalty is considered. We recall results on linear convergence rates and on exact recovery of the support. Moreover, we derive conditions for exact support recovery which are especially applicable to ill-posed problems, where other conditions, e.g. those based on the so-called coherence or the restricted isometry property, are usually not applicable. The obtained results also show that the regularized solutions converge not only in the ℓ^1 norm but also in the vector space ℓ^0 (when considered as the strict inductive limit of the spaces R^n as n tends to infinity). Additionally, the relations between different conditions for exact support recovery and linear convergence rates are investigated. The applicability of the obtained results is illustrated with an imaging example from digital holography, i.e. one may check a priori whether the experimental setup guarantees exact recovery with Tikhonov regularization with sparsity constraints.
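
    The exact-support-recovery question can be probed numerically with a generic sketch: solve the ℓ^1-penalized least-squares problem for a synthetic sparse vector and compare the support of the regularized solution with the true support. scikit-learn's Lasso serves as the solver here and all problem sizes are illustrative; the paper's ill-posed setting is not reproduced.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(5)
    m, n, s = 80, 200, 4
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    supp = rng.choice(n, s, replace=False)
    x_true[supp] = rng.choice([-1.0, 1.0], s)
    b = A @ x_true + 0.001 * rng.standard_normal(m)

    # sklearn's Lasso minimizes (1/(2m))*||b - Ax||^2 + alpha*||x||_1.
    lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
    lasso.fit(A, b)
    supp_hat = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
    print("true support:", np.sort(supp))
    print("recovered   :", supp_hat)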