
    Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

    Suppose we are given a vector $f$ in $\mathbb{R}^N$. How many linear measurements do we need to make about $f$ to be able to recover $f$ to within precision $\epsilon$ in the Euclidean ($\ell_2$) metric? Or more exactly, suppose we are interested in a class ${\cal F}$ of such objects--discrete digital signals, images, etc; how many linear measurements do we need to recover objects from this class to within accuracy $\epsilon$? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal $f \in {\cal F}$ decay like a power-law (or if the coefficient sequence of $f$ in a fixed basis decays like a power-law), then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements.
    Comment: 39 pages; no figures; to appear. Bernoulli ensemble proof has been corrected; other expository and bibliographical changes made, incorporating referee's suggestion
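    The reconstruction idea can be illustrated with a small numerical sketch (not the paper's exact procedure or parameters): draw a random Gaussian measurement ensemble, observe $y = Af$ for a sparse $f$, and recover it by $\ell_1$ minimization (basis pursuit) posed as a linear program. Dimensions, sparsity level, and the HiGHS LP solver are illustrative choices.

```python
# Minimal sketch: sparse recovery from random projections via l1 minimization.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, m, k = 128, 48, 5              # ambient dimension, measurements, sparsity (assumed values)

# k-sparse ground-truth signal
f = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
f[support] = rng.standard_normal(k)

# random Gaussian measurement ensemble and observed data y = A f
A = rng.standard_normal((m, N)) / np.sqrt(m)
y = A @ f

# basis pursuit: min ||x||_1  s.t.  A x = y,
# written as an LP over split variables x = x_plus - x_minus with x_plus, x_minus >= 0
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]

print("relative l2 recovery error:", np.linalg.norm(x_hat - f) / np.linalg.norm(f))
```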

    Computation and Learning in High Dimensions (hybrid meeting)

    The most challenging problems in science often involve the learning and accurate computation of high-dimensional functions. High dimensionality is a typical feature of a multitude of problems in various areas of science. The so-called curse of dimensionality typically negates the use of traditional numerical techniques for the solution of high-dimensional problems. Instead, novel theoretical and computational approaches need to be developed to make such problems tractable and to capture fine resolutions and relevant features. Paradoxically, increasing computational power may even serve to heighten this demand, since the wealth of new computational data itself becomes a major obstruction. Extracting essential information from complex problem-inherent structures and developing rigorous models to quantify the quality of information in a high-dimensional setting pose challenging tasks from both theoretical and numerical perspectives. This has led to the emergence of several new computational methodologies, accounting for the fact that by now well-understood methods drawing on spatial localization and mesh refinement are, in their original form, no longer viable. Common to these approaches is the nonlinearity of the solution method. For certain problem classes, these methods have drastically advanced the frontiers of computability. The most visible of these new methods is deep learning. Although the use of deep neural networks has been extremely successful in certain application areas, their mathematical understanding is far from complete. This workshop proposed to deepen the understanding of the underlying mathematical concepts that drive this new evolution of computational methods and to promote the exchange of ideas emerging in various disciplines about how to treat multiscale and high-dimensional problems.
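    The curse of dimensionality mentioned above can be made concrete with a tiny sketch (an assumed textbook setup, not taken from the workshop report): a tensor-product grid with n points per coordinate requires n**d points in d dimensions, so grid-based discretizations become intractable long before d is "large".

```python
# Illustration of the curse of dimensionality for a uniform tensor-product grid.
n = 100  # grid points per dimension (e.g., mesh width 1/100 on [0, 1])
for d in (1, 2, 3, 6, 10):
    points = n ** d
    # memory needed just to store one double-precision value per grid point
    print(f"d = {d:2d}: grid points = {points:.3e}, storage ~ {8 * points:.3e} bytes")
```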

    Feature-based time-series analysis

    This work presents an introduction to feature-based time-series analysis. The time series as a data type is first described, along with an overview of the interdisciplinary time-series analysis literature. I then summarize the range of feature-based representations for time series that have been developed to aid interpretable insights into time-series structure. Particular emphasis is given to emerging research that facilitates wide comparison of feature-based representations that allow us to understand the properties of a time-series dataset that make it suited to a particular feature-based representation or analysis algorithm. The future of time-series analysis is likely to embrace approaches that exploit machine learning methods to partially automate human learning to aid understanding of the complex dynamical patterns in the time series we measure from the world.
    Comment: 28 pages, 9 figures
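    A minimal sketch of the feature-based idea (illustrative hand-picked features, not a specific toolbox such as hctsa or catch22): map a time series to a short vector of interpretable summary statistics that downstream comparison or learning methods can work with.

```python
# Map a 1-D time series to a small, interpretable feature vector.
import numpy as np

def ts_features(x):
    """Return a few simple, interpretable features of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    # lag-1 autocorrelation: linear memory in the series
    ac1 = np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc)
    # normalized spectral entropy: how spread out the power spectrum is
    psd = np.abs(np.fft.rfft(xc)) ** 2
    p = psd / psd.sum()
    spec_entropy = -(p * np.log(p + 1e-12)).sum() / np.log(len(p))
    return {
        "mean": x.mean(),
        "std": x.std(),
        "lag1_autocorr": ac1,
        "spectral_entropy": spec_entropy,
    }

# Example: a noisy sine wave shows high lag-1 autocorrelation and low spectral entropy.
rng = np.random.default_rng(1)
t = np.arange(500)
x = np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(t.size)
print(ts_features(x))
```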

    Data compression and harmonic analysis

    In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon’