    A recursive way for sparse reconstruction of parametric spaces

    A novel recursive framework for sparse reconstruction of continuous parameter spaces is proposed, based on adaptive partitioning and discretization of the parameter space combined with expectation-maximization-type iterations. Any sparse solver or reconstruction technique can be used within the proposed recursive framework. Experimental results show that the proposed technique improves the parameter estimation performance of classical sparse solvers and attains the Cramér-Rao lower bound on the tested frequency estimation problem. © 2014 IEEE
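
    The coarse-to-fine idea is easy to illustrate. Below is a minimal Python sketch (our own construction, not the authors' exact EM formulation): a plain orthogonal matching pursuit stands in for "any sparse solver", and the frequency grid is recursively re-discretized around each detected atom. The function names (`atoms`, `omp`, `recursive_estimate`) and all parameter values are hypothetical, illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def atoms(freqs, t):
    """One unit-norm complex-sinusoid atom per candidate frequency."""
    A = np.exp(2j * np.pi * np.outer(t, freqs))
    return A / np.linalg.norm(A, axis=0)

def omp(A, y, k):
    """Plain orthogonal matching pursuit (stand-in for any sparse solver)."""
    residual, support = y.astype(complex), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return support

def recursive_estimate(y, t, k, levels=4, grid=64):
    """Coarse-to-fine: solve on a grid, then re-discretize around each hit."""
    freqs = np.linspace(0.0, 0.5, grid)          # level-0 global grid
    spacing = freqs[1] - freqs[0]
    for _ in range(levels):
        est = freqs[omp(atoms(freqs, t), y, k)]  # grid-limited estimate
        # refine: finer grid covering +/- one old grid cell around each hit
        freqs = np.concatenate(
            [np.linspace(f - spacing, f + spacing, grid // k) for f in est])
        spacing = 2 * spacing / (grid // k - 1)
    return np.sort(est)

# toy problem: two complex sinusoids in noise
t = np.arange(128)
f_true = np.array([0.1042, 0.3317])
y = sum(np.exp(2j * np.pi * f * t) for f in f_true)
y = y + 0.05 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))
print(recursive_estimate(y, t, k=2))   # -> close to f_true
```

    Each level shrinks the search band around every detection by a constant factor, so the effective grid resolution, and with it the attainable estimation accuracy, improves geometrically with the number of levels.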

    Deep Learning for Single Image Super-Resolution: A Brief Review

    Single image super-resolution (SISR) is a notoriously challenging ill-posed problem that aims to obtain a high-resolution (HR) output from one of its low-resolution (LR) versions. Powerful deep learning algorithms have recently been employed to solve the SISR problem and have achieved state-of-the-art performance. In this survey, we review representative deep learning-based SISR methods and group them into two categories according to their major contributions to two essential aspects of SISR: the exploration of efficient neural network architectures for SISR, and the development of effective optimization objectives for deep SISR learning. For each category, a baseline is first established, and several critical limitations of the baseline are summarized. Representative works on overcoming these limitations are then presented based on their original content as well as our critical understanding and analysis, and relevant comparisons are conducted from a variety of perspectives. Finally, we conclude this review with some vital current challenges and future trends in SISR leveraging deep learning algorithms. Comment: Accepted by IEEE Transactions on Multimedia (TMM)
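
    As a concrete reference point for the architecture-design category, here is a minimal SRCNN-style sketch in Python/PyTorch of the classic three-stage baseline (feature extraction, non-linear mapping, reconstruction) trained with a pixel-wise L2 objective. Layer sizes follow the common 9-1-5 configuration; the random tensors stand in for real LR/HR patch pairs and are purely illustrative.

```python
import torch
import torch.nn as nn

class SRCNNLike(nn.Module):
    """Minimal SRCNN-style baseline: patch extraction -> non-linear
    mapping -> reconstruction, applied to a bicubically upscaled input."""
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # HR reconstruction
        )

    def forward(self, x):   # x: bicubic-upscaled LR image, same size as HR
        return self.body(x)

# one optimization step against the pixel-wise L2 objective
model = SRCNNLike()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
lr_up = torch.rand(8, 1, 33, 33)   # stand-in batch of upscaled LR patches
hr = torch.rand(8, 1, 33, 33)      # matching HR patches
loss = nn.functional.mse_loss(model(lr_up), hr)
loss.backward()
opt.step()
```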

    Simultaneous Codeword Optimization (SimCO) for Dictionary Update and Learning

    We consider the data-driven dictionary learning problem. The goal is to seek an over-complete dictionary from which every training signal can be best approximated by a linear combination of only a few codewords. This task is often achieved by iteratively executing two operations: sparse coding and dictionary update. In the literature, there are two benchmark mechanisms to update a dictionary. The first approach, such as the MOD algorithm, is characterized by searching for the optimal codewords while fixing the sparse coefficients. In the second approach, represented by the K-SVD method, one codeword and the related sparse coefficients are simultaneously updated while all other codewords and coefficients remain unchanged. We propose a novel framework that generalizes these two methods. The unique feature of our approach is that one can update an arbitrary set of codewords and the corresponding sparse coefficients simultaneously: when the sparse coefficients are fixed, the underlying optimization problem is similar to that in the MOD algorithm; when only one codeword is selected for update, the proposed algorithm can be proved equivalent to the K-SVD method; and, most importantly, our method allows us to update all codewords and all sparse coefficients simultaneously, hence the term simultaneous codeword optimization (SimCO). Under the proposed framework, we design two algorithms, namely primitive and regularized SimCO, and implement both using a simple gradient descent mechanism. Simulations demonstrate the performance of the proposed algorithms compared with the two baseline algorithms, MOD and K-SVD. Results show that regularized SimCO is particularly appealing in terms of both learning performance and running speed. Comment: 13 pages
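
    The simultaneous update is straightforward to sketch. The following Python fragment (our illustration, with a fixed step size instead of the paper's line search over unit-norm codewords) performs one gradient step on the objective ||Y - D X||_F^2 over a chosen set of codewords `idx` and their coefficient rows, with the sparsity pattern of X held fixed; the function name and all sizes are hypothetical.

```python
import numpy as np

def primitive_simco_step(Y, D, X, idx, lr=0.1):
    """One illustrative gradient step on f(D, X) = 0.5 * ||Y - D X||_F^2,
    moving only the codewords D[:, idx] and their rows X[idx, :], with the
    sparse support of X frozen."""
    R = Y - D @ X                          # residual
    G_D = -R @ X.T                         # gradient w.r.t. the dictionary
    G_X = -D.T @ R                         # gradient w.r.t. the coefficients
    mask = (X != 0)                        # frozen sparsity pattern
    D[:, idx] -= lr * G_D[:, idx]
    X[idx, :] -= lr * np.where(mask, G_X, 0.0)[idx, :]
    # project the updated codewords back to the unit sphere (the paper
    # instead searches along the sphere using the tangent-space gradient)
    D[:, idx] /= np.linalg.norm(D[:, idx], axis=0, keepdims=True)
    return D, X

# toy usage: update codewords 0 and 2 and their coefficients simultaneously
rng = np.random.default_rng(1)
Y, D = rng.standard_normal((16, 100)), rng.standard_normal((16, 4))
D /= np.linalg.norm(D, axis=0)
X = rng.standard_normal((4, 100)) * (rng.random((4, 100)) < 0.3)
D, X = primitive_simco_step(Y, D, X, idx=[0, 2])
```

    Choosing `idx` as a single codeword mirrors the K-SVD-like regime described above, while passing all column indices gives the fully simultaneous update; fixing X and updating only D recovers a MOD-like step.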

    Representation Learning: A Review and New Perspectives

    The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide, to varying degrees, the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, about computing representations (i.e., inference), and about the geometrical connections between representation learning, density estimation, and manifold learning.
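
    As one concrete instance of the representation learners surveyed here, the sketch below shows a minimal undercomplete autoencoder in Python/PyTorch: the bottleneck activations are the learned representation, and training minimizes reconstruction error. All sizes and the random stand-in batch are our illustrative choices, not anything prescribed by the review.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Minimal undercomplete autoencoder: the bottleneck code h = f(x)
    is the learned representation."""
    def __init__(self, n_in=784, n_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = self.encoder(x)            # representation / code
        return self.decoder(h), h      # reconstruction and code

# one training step on the reconstruction objective
model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                # stand-in batch (e.g., flattened images)
x_hat, h = model(x)
loss = nn.functional.mse_loss(x_hat, x)
loss.backward()
opt.step()
```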

    Sampling and Super-resolution of Sparse Signals Beyond the Fourier Domain

    Recovering a sparse signal from its low-pass projections in the Fourier domain is a problem of broad interest in science and engineering, commonly referred to as super-resolution. In many cases, however, the Fourier domain may not be the natural choice. For example, in holography, low-pass projections of sparse signals are obtained in the Fresnel domain. Similarly, time-varying system identification relies on low-pass projections onto the space of linear frequency-modulated signals. In this paper, we study the recovery of sparse signals from low-pass projections in the Special Affine Fourier Transform (SAFT) domain. The SAFT parametrically generalizes a number of well-known unitary transformations used in signal processing and optics. In analogy with Shannon's sampling framework, we specify sampling theorems for the recovery of sparse signals in three specific cases: (1) sampling with arbitrary, bandlimited kernels; (2) sampling with smooth, time-limited kernels; and (3) recovery from Gabor transform measurements linked with the SAFT domain. Our work offers a unifying perspective on the sparse sampling problem that is compatible with Fourier, Fresnel, and fractional Fourier domain based results. In deriving our results, we introduce the SAFT series (analogous to the Fourier series) and the short-time SAFT, and study convolution theorems that establish a convolution-multiplication property in the SAFT domain. Comment: 42 pages, 3 figures, manuscript under review
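
    For orientation, one common parameterization of the SAFT is reproduced below; conventions differ across references and an omega-independent constant phase factor is omitted, so treat this as a sketch rather than the paper's exact definition. The special-case parameter choices indicate how the Fourier, fractional Fourier, and Fresnel domains mentioned above are recovered.

```latex
% One common SAFT parameterization, with \Lambda = (a,b,c,d \mid p,q) and
% ad - bc = 1; an \omega-independent constant phase factor, whose
% convention varies across references, is omitted here.
\[
  \mathcal{F}_{\Lambda}\{f\}(\omega)
    = \frac{1}{\sqrt{2\pi\lvert b\rvert}}
      \int_{-\infty}^{\infty} f(t)\,
      \exp\!\left(\frac{j}{2b}
        \left[\, a t^{2} + 2t\,(p-\omega) + d\,\omega^{2}
              - 2\omega\,(dp - bq) \,\right]\right) dt,
  \qquad b \neq 0 .
\]
% Recovered special cases (up to constant factors):
%   (0, 1, -1, 0 \mid 0, 0)  -> Fourier transform
%   (\cos\theta, \sin\theta, -\sin\theta, \cos\theta \mid 0, 0)
%                            -> fractional Fourier transform
%   (1, b, 0, 1 \mid 0, 0)   -> Fresnel transform
```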