
    Learning Sparse Wavelet Representations

    In this work we propose a method for learning wavelet filters directly from data. We accomplish this by framing the discrete wavelet transform as a modified convolutional neural network. We introduce an autoencoder wavelet transform network that is trained using gradient descent. We show that the model is capable of learning structured wavelet filters from synthetic and real data. The learned wavelets are shown to be similar to traditional wavelets that are derived using Fourier methods. Our method is simple to implement and easily incorporated into neural network architectures. A major advantage of our model is that we can learn from raw audio data. Comment: 7 pages, 5 figures.
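    As a rough illustration of training wavelet-like filters by gradient descent, the sketch below (my own, not the paper's exact architecture) builds a one-level analysis/synthesis pair from learned strided convolutions and trains it with a reconstruction-plus-sparsity loss; PyTorch, the filter length, and the loss weights are illustrative assumptions.

```python
# A minimal sketch, assuming PyTorch; the one-level structure, filter length
# and loss weights are illustrative choices, not the paper's exact model.
import torch
import torch.nn as nn

class WaveletAE(nn.Module):
    def __init__(self, filt_len=8):
        super().__init__()
        # two learned analysis filters (lowpass/highpass analogues), decimation by 2
        self.analysis = nn.Conv1d(1, 2, filt_len, stride=2,
                                  padding=filt_len // 2, bias=False)
        # learned synthesis filters reconstruct the signal from the subbands
        self.synthesis = nn.ConvTranspose1d(2, 1, filt_len, stride=2,
                                            padding=filt_len // 2, bias=False)

    def forward(self, x):
        coeffs = self.analysis(x)              # decimated subband coefficients
        recon = self.synthesis(coeffs)         # reconstruction from the coefficients
        return recon[..., : x.shape[-1]], coeffs

model = WaveletAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 1, 1024)                   # stand-in for raw audio frames
opt.zero_grad()
recon, coeffs = model(x)
# reconstruction error plus an L1 penalty encouraging sparse coefficients
loss = nn.functional.mse_loss(recon, x) + 1e-3 * coeffs.abs().mean()
loss.backward()
opt.step()
```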

    On learning with shift-invariant structures

    We describe new results and algorithms for two different, but related, problems that deal with circulant matrices: learning shift-invariant components from training data and calculating the shift (or alignment) between two given signals. The first is the shift-invariant dictionary learning problem, while the latter bears the name of (compressive) shift retrieval. We formulate these problems using circulant and convolutional matrices (including unions of such matrices), define optimization problems that describe our goals, and propose efficient ways to solve them. Based on these findings, we also show how to learn a wavelet-like dictionary from training data. We connect our work with various previous results from the literature and show the effectiveness of our proposed algorithms on synthetic data, ECG signals, and images.
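    The alignment half of the problem can be illustrated with the classical FFT trick for circulant structure; the snippet below is an illustration of that idea, not the paper's compressive shift retrieval algorithm, and recovers the circular shift between two signals from their cross-correlation.

```python
# Illustration of shift estimation via circulant/FFT structure; numpy assumed.
import numpy as np

def estimate_shift(x, y):
    """Return the circular shift s such that y is approximately roll(x, s)."""
    # cross-correlation computed in the Fourier domain (circulant diagonalization)
    corr = np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(y)).real
    return int(np.argmax(corr))

x = np.random.randn(256)
y = np.roll(x, 37)
print(estimate_shift(x, y))   # prints 37
```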

    Cross-scale predictive dictionaries

    Sparse representations using data dictionaries provide an efficient model, particularly for signals that do not enjoy alternate analytic sparsifying transformations. However, solving inverse problems with sparsifying dictionaries can be computationally expensive, especially when the dictionary under consideration has a large number of atoms. In this paper, we incorporate additional structure into dictionary-based sparse representations for visual signals to enable speedups when solving sparse approximation problems. The specific structure that we impose on sparse models is a multi-scale model in which the sparse representation at each scale is constrained by the sparse representation at coarser scales. We show that this cross-scale predictive model delivers significant speedups, often in the range of 10-60×, with little loss in accuracy for linear inverse problems associated with images, videos, and light fields. Comment: 12 pages.
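    A minimal sketch of the cross-scale idea, as I read it: solve a sparse approximation at the coarse scale, then restrict the fine-scale atoms to those predicted by the coarse support. The `children` map from coarse atoms to fine atoms and the use of scikit-learn's OMP are assumptions made only for illustration.

```python
# Hypothetical helper showing the cross-scale constraint; `children` maps each
# coarse atom index to a list of fine atom indices (assumed structure).
import numpy as np
from sklearn.linear_model import orthogonal_mp

def cross_scale_code(y_coarse, y_fine, D_coarse, D_fine, children, k=5):
    # sparse code at the coarse scale selects a small active support
    a_coarse = orthogonal_mp(D_coarse, y_coarse, n_nonzero_coefs=k)
    active = np.flatnonzero(a_coarse)
    # fine-scale atoms allowed: children of the active coarse atoms
    allowed = sorted({j for i in active for j in children[i]})
    a_fine = np.zeros(D_fine.shape[1])
    a_fine[allowed] = orthogonal_mp(D_fine[:, allowed], y_fine, n_nonzero_coefs=k)
    return a_coarse, a_fine
```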

    An Adaptive Markov Random Field for Structured Compressive Sensing

    Exploiting intrinsic structures in sparse signals underpins the recent progress in compressive sensing (CS). The key to exploiting such structures is to achieve two desirable properties: generality (i.e., the ability to fit a wide range of signals with diverse structures) and adaptability (i.e., being adaptive to a specific signal). Most existing approaches, however, achieve only one of these two properties. In this study, we propose a novel adaptive Markov random field sparsity prior for CS that not only captures a broad range of sparsity structures, but also adapts to each sparse signal by refining the parameters of the sparsity prior with respect to the compressed measurements. To maximize adaptability, we also propose a new sparse signal estimation procedure in which estimation of the sparse signal, its support, the noise, and the signal parameters is unified into a single variational optimization problem, which can be solved effectively with an alternating minimization scheme. Extensive experiments on three real-world datasets demonstrate the effectiveness of the proposed method in terms of recovery accuracy, noise tolerance, and runtime. Comment: 13 pages, submitted to IEEE Transactions on Image Processing.
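    The alternating structure of such estimators can be illustrated with a much simpler stand-in (not the paper's MRF prior or variational inference): alternate between picking a support and re-estimating the signal restricted to that support.

```python
# Generic alternating support/signal estimation, for illustration only; the
# paper refines an MRF sparsity prior rather than a fixed-cardinality support.
import numpy as np

def alternating_recovery(A, y, k, iters=20):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # proxy "support update": keep the k largest gradient-corrected entries
        g = x + A.T @ (y - A @ x)
        support = np.argsort(np.abs(g))[-k:]
        # signal update: least squares restricted to the current support
        x = np.zeros(A.shape[1])
        x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x
```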

    Signal Representations on Graphs: Tools and Applications

    We present a framework for representing and modeling data on graphs. Based on this framework, we study three typical classes of graph signals: smooth graph signals, piecewise-constant graph signals, and piecewise-smooth graph signals. For each class, we provide an explicit definition of the graph signals and construct a corresponding graph dictionary with desirable properties. We then study how these graph dictionaries perform in two standard tasks: approximation, and sampling followed by recovery, from both theoretical and algorithmic perspectives. Finally, for each class, we present a case study of a real-world problem using the proposed methodology.
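    For the smooth-signal class, a natural dictionary is the low-frequency end of the graph Fourier basis; the sketch below is a generic illustration of that idea (not necessarily the dictionary constructed in the paper), projecting a signal onto the first few Laplacian eigenvectors.

```python
# Smooth-signal approximation with the first k Laplacian eigenvectors; numpy assumed.
import numpy as np

def smooth_approximation(W, x, k=10):
    """W: symmetric adjacency matrix, x: graph signal, k: number of atoms kept."""
    L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
    _, evecs = np.linalg.eigh(L)            # eigenvectors ordered by graph frequency
    U_k = evecs[:, :k]                      # low-frequency graph Fourier atoms
    return U_k @ (U_k.T @ x)                # orthogonal projection onto their span
```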

    Graph Wavelets via Sparse Cuts: Extended Version

    Modeling information that resides on the vertices of large graphs is a key problem in several real-life applications, ranging from social networks to the Internet of Things. Signal processing on graphs and, in particular, graph wavelets can exploit the intrinsic smoothness of these datasets in order to represent them in a manner that is both compact and accurate. However, how to discover wavelet bases that capture the geometry of the data with respect to both the signal and the graph structure remains an open question. In this paper, we study the problem of computing graph wavelet bases via sparse cuts in order to produce low-dimensional encodings of data-driven bases. This problem is connected to known hard problems in graph theory (e.g., multiway cuts) and thus requires an efficient heuristic. We formulate the basis discovery task as a relaxation of a vector optimization problem, which leads to an elegant solution as a regularized eigenvalue computation. Moreover, we propose several strategies to scale our algorithm to large graphs. Experimental results show that the proposed algorithm can effectively encode both the graph structure and the signal, producing compressed and accurate representations for vertex values in a wide range of datasets (e.g., sensor and gene networks) and significantly outperforming the best baseline.
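    A loose sketch of the spectral-relaxation step (not the paper's regularized formulation): compute an eigenvector of the graph Laplacian, threshold it to obtain a cut, and turn the two sides into a crude wavelet-like atom.

```python
# Spectral cut sketch: threshold the Fiedler vector to split the vertices and
# form a signed indicator as a rough wavelet-like atom (illustrative only).
import numpy as np

def spectral_cut_atom(W):
    L = np.diag(W.sum(axis=1)) - W
    _, evecs = np.linalg.eigh(L)
    fiedler = evecs[:, 1]                     # second-smallest eigenvector
    side = fiedler >= np.median(fiedler)      # balanced thresholding into two sets
    atom = np.where(side, 1.0, -1.0)          # signed indicator of the cut
    return side, atom / np.linalg.norm(atom)
```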

    Understanding Deep Convolutional Networks

    Deep convolutional networks provide state-of-the-art classification and regression results over many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and non-linearities. A mathematical framework is introduced to analyze their properties. Computations of invariants involve multiscale contractions, the linearization of hierarchical symmetries, and sparse separations. Applications are discussed. Comment: 17 pages, 4 figures.
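    The filter/modulus/averaging cascade can be illustrated in a few lines; the toy layer below is my own simplification, assumes a precomputed band-pass filter bank, and shows one scattering-style stage in 1-D.

```python
# One scattering-style stage: band-pass filtering, modulus, local averaging.
# `filters` is an assumed, precomputed list of 1-D band-pass kernels.
import numpy as np

def scattering_layer(x, filters, pool=8):
    outputs = []
    for h in filters:                                # multiscale filter bank
        u = np.abs(np.convolve(x, h, mode="same"))   # linear filter, then modulus
        # local averaging: the contraction step that builds invariance
        s = u[: len(u) // pool * pool].reshape(-1, pool).mean(axis=1)
        outputs.append(s)
    return np.stack(outputs)
```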

    Multi-Focus Image Fusion Using Sparse Representation and Coupled Dictionary Learning

    We address the multi-focus image fusion problem, where multiple images captured with different focal settings are to be fused into an all-in-focus image of higher quality. Algorithms for this problem must account for the source image characteristics along with focused and blurred features. However, most sparsity-based approaches use a single dictionary in the focused feature space to describe multi-focus images and ignore representations in the blurred feature space. We propose a multi-focus image fusion approach based on sparse representation using a coupled dictionary. It exploits the observations that patches from a given training set can be sparsely represented by a pair of overcomplete dictionaries related to the focused and blurred categories of images, and that a sparse approximation based on such a coupled dictionary leads to a more flexible, and therefore better, fusion strategy than one based on simply selecting the sparsest representation in the original image estimate. In addition, to improve fusion performance, we employ a coupled dictionary learning approach that enforces pairwise correlation between the atoms of the dictionaries learned to represent the focused and blurred feature spaces. We also discuss the advantages of the fusion approach based on coupled dictionary learning and present efficient algorithms for it. Extensive experimental comparisons with state-of-the-art multi-focus image fusion algorithms validate the effectiveness of the proposed approach. Comment: 25 pages, 15 figures, 2 tables.
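    One common way to couple two dictionaries, sketched below for illustration (my paraphrase, not the authors' learning algorithm), is to stack the focused and blurred patch spaces and fit a single dictionary, so that corresponding atoms in the two sub-dictionaries share one sparse code.

```python
# Coupled dictionaries via stacked patch spaces (an illustrative shortcut);
# scikit-learn and the random stand-in data are assumptions.
import numpy as np
from sklearn.decomposition import DictionaryLearning

P_f = np.random.randn(200, 64)   # stand-in focused patches (200 patches, 8x8 pixels)
P_b = np.random.randn(200, 64)   # stand-in blurred patches, paired row by row

stacked = np.hstack([P_f, P_b])                      # pair the two feature spaces
learner = DictionaryLearning(n_components=64, transform_algorithm="omp",
                             transform_n_nonzero_coefs=5)
learner.fit(stacked)
D = learner.components_                              # shape (64, 128)
D_focused, D_blurred = D[:, :64], D[:, 64:]          # coupled sub-dictionaries
```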

    Superresolution of Noisy Remotely Sensed Images Through Directional Representations

    We develop an algorithm for single-image superresolution of remotely sensed data, based on the discrete shearlet transform. The shearlet transform extracts directional features of signals and is known to provide near-optimally sparse representations for a broad class of images. This often leads to superior performance in edge detection and image representation when compared to isotropic frames. We justify the use of shearlets mathematically before presenting a denoising single-image superresolution algorithm that combines the shearlet transform with sparse mixing estimators (SME). Our algorithm is compared with a variety of single-image superresolution methods, including wavelet SME superresolution. Our numerical results demonstrate competitive performance in terms of PSNR and SSIM. Comment: 5 pages (double column); IEEE copyright added.
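    The transform-threshold-invert pattern underlying the denoising step can be written generically; `forward` and `inverse` below are placeholders for whatever shearlet implementation is used (not a real library API), so only the thresholding logic is concrete.

```python
# Schematic transform-threshold-invert step; `forward`/`inverse` stand in for a
# shearlet (or other directional) transform and are hypothetical callables.
import numpy as np

def denoise_by_thresholding(img, forward, inverse, thresh):
    coeffs = forward(img)                                      # directional coefficients
    coeffs = np.where(np.abs(coeffs) > thresh, coeffs, 0.0)    # hard thresholding
    return inverse(coeffs)                                     # back to the image domain
```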

    Overcomplete Frame Thresholding for Acoustic Scene Analysis

    In this work, we derive a generic overcomplete frame thresholding scheme based on risk minimization. Because overcomplete frames are favored for analysis tasks such as classification, regression, and anomaly detection, we provide a way to leverage these optimal representations in real-world applications through thresholding. We validate the method on a large-scale bird activity detection task using a scattering network built from continuous wavelets, which are known to form an adequate dictionary for audio environments.
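    A much-simplified stand-in for a risk-driven threshold choice (not the paper's estimator): soft-threshold the overcomplete analysis coefficients and pick the threshold that minimizes empirical reconstruction error on held-out clean/noisy pairs.

```python
# Simplified threshold selection by empirical risk; the frame F, the grid and
# the held-out pairs are all assumed inputs, not the paper's construction.
import numpy as np

def soft(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def pick_threshold(F, noisy, clean, grid):
    """F: frame analysis matrix (rows are frame vectors); signals as columns."""
    dual = np.linalg.pinv(F)                      # canonical dual frame for synthesis
    coeffs = F @ noisy                            # overcomplete analysis coefficients
    errs = [np.mean((dual @ soft(coeffs, t) - clean) ** 2) for t in grid]
    return grid[int(np.argmin(errs))]
```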