
    High resolution sparse estimation of exponentially decaying two-dimensional signals

    In this work, we consider the problem of high-resolution estimation of the parameters detailing a two-dimensional (2-D) signal consisting of an unknown number of exponentially decaying sinusoidal components. Interpreting the estimation problem as a block (or group) sparse representation problem allows the decoupling of the 2-D data structure into a sum of outer products of 1-D damped sinusoidal signals with unknown damping and frequency. The resulting non-zero blocks represent each of the 1-D damped sinusoids and may be used as non-parametric estimates of the corresponding 1-D signals; this implies that the sought 2-D modes may be estimated using a sequence of 1-D optimization problems. The resulting sparse representation problem is solved using an iterative ADMM-based algorithm, after which the damping and frequency parameters can be estimated by a sequence of simple 1-D optimization problems.
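
    As a concrete illustration of the block-sparse interpretation, the sketch below builds a dictionary of 1-D damped sinusoid atoms over a (frequency, damping) grid and solves the resulting group-sparse problem with a generic ADMM for the group lasso. The grid, lam, rho, and detection threshold are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def damped_atom(n, freq, damp):
        """Unit-norm 1-D damped sinusoid exp(-damp*t) * exp(2j*pi*freq*t)."""
        t = np.arange(n)
        a = np.exp((-damp + 2j * np.pi * freq) * t)
        return a / np.linalg.norm(a)

    def admm_group_lasso(D, Y, lam, rho=1.0, iters=300):
        """min_X 0.5*||Y - D X||_F^2 + lam * sum over rows ||X_row||_2, via ADMM (X = Z split)."""
        P = D.shape[1]
        Z = np.zeros((P, Y.shape[1]), dtype=complex)
        U = np.zeros_like(Z)
        G = np.linalg.inv(D.conj().T @ D + rho * np.eye(P))   # cached for the X-update
        DtY = D.conj().T @ Y
        for _ in range(iters):
            X = G @ (DtY + rho * (Z - U))                     # quadratic X-update
            V = X + U
            norms = np.maximum(np.linalg.norm(V, axis=1, keepdims=True), 1e-12)
            Z = np.maximum(1.0 - (lam / rho) / norms, 0.0) * V  # row-wise group shrinkage
            U += X - Z
        return Z

    # Assumption: a modest (frequency, damping) grid for the first dimension.
    n1, n2 = 32, 32
    grid = [(f, d) for f in np.linspace(0, 0.5, 24, endpoint=False)
                   for d in (0.01, 0.05, 0.1)]
    D = np.stack([damped_atom(n1, f, d) for f, d in grid], axis=1)

    # Toy 2-D signal: a single outer product of damped sinusoids, plus noise.
    rng = np.random.default_rng(0)
    Y = np.outer(damped_atom(n1, 0.125, 0.05), damped_atom(n2, 0.3, 0.02))
    Y = Y + 0.01 * (rng.standard_normal(Y.shape) + 1j * rng.standard_normal(Y.shape))

    Z = admm_group_lasso(D, Y, lam=0.1)
    active = np.linalg.norm(Z, axis=1) > 1e-3   # non-zero rows = detected 1-D modes
    # Each active row of Z is a non-parametric estimate of the second-dimension
    # damped sinusoid; its (freq, damp) pair can then be fit by a simple 1-D search.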

    Consistent Basis Pursuit for Signal and Matrix Estimates in Quantized Compressed Sensing

    This paper focuses on the estimation of low-complexity signals when they are observed through M uniformly quantized compressive observations. Among such signals, we consider 1-D sparse vectors, low-rank matrices, or compressible signals that are well approximated by one of these two models. In this context, we prove the estimation efficiency of a variant of Basis Pursuit Denoise, called Consistent Basis Pursuit (CoBP), enforcing consistency between the observations and the re-observed estimate, while promoting its low-complexity nature. We show that the reconstruction error of CoBP decays like M^{-1/4} when all parameters but M are fixed. Our proof is connected to recent bounds on the proximity of vectors or matrices when (i) those belong to a set of small intrinsic "dimension", as measured by the Gaussian mean width, and (ii) they share the same quantized (dithered) random projections. By solving CoBP with a proximal algorithm, we provide extensive numerical observations that confirm the theoretical bound as M is increased, displaying even faster error decay than predicted. The same phenomenon is observed in the special, yet important, case of 1-bit CS.
    Comment: Keywords: Quantized compressed sensing, quantization, consistency, error decay, low-rank, sparsity. 10 pages, 3 figures. Note about this version: title change, typo corrections, clarification of the context, adding a comparison with BPD
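
    A minimal numerical sketch of the consistency idea, assuming a uniform dithered quantizer: penalize the squared distance of A x to the observed quantization cells together with an l1 term, and minimize by proximal gradient. This is not the paper's exact CoBP program or proximal algorithm; the step size, Delta, and lam are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, M, s, delta = 128, 512, 5, 0.5          # assumption: problem sizes and bin width
    A = rng.standard_normal((M, n)) / np.sqrt(M)
    x0 = np.zeros(n)
    x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

    # Uniform quantizer with dither: the observations are the cell indices of A x0 + tau.
    tau = rng.uniform(0, delta, M)
    cells = np.floor((A @ x0 + tau) / delta)
    lo = delta * cells - tau                   # consistency set: lo <= A x_hat <= hi
    hi = lo + delta

    # Proximal gradient on 0.5*dist(Ax, cell)^2 + lam*||x||_1 (consistency + sparsity).
    x, lam, step = np.zeros(n), 1e-3, 0.25
    for _ in range(1000):
        z = A @ x
        r = z - np.clip(z, lo, hi)             # gradient of the squared-distance term
        x = x - step * (A.T @ r)
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # soft threshold

    print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))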

    Deep Signal Recovery with One-Bit Quantization

    Machine learning, and more specifically deep learning, has shown remarkable performance in sensing, communications, and inference. In this paper, we consider the application of the deep unfolding technique to the problem of signal reconstruction from its one-bit noisy measurements. Namely, we propose a model-based machine learning method and unfold the iterations of an inference optimization algorithm into the layers of a deep neural network for one-bit signal recovery. The resulting network, which we refer to as DeepRec, can efficiently handle the recovery of high-dimensional signals from acquired one-bit noisy measurements. The proposed method results in an improvement in accuracy and computational efficiency with respect to the original framework, as shown through numerical analysis.
    Comment: This paper has been submitted to the 44th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2019).
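
    The unfolding idea can be sketched as follows: each iteration of a shrinkage-type solver becomes a network layer with its own learnable weights, trained end-to-end against ground-truth signals. This is a generic LISTA-style sketch in PyTorch, not DeepRec's actual architecture or loss; the dimensions, initialization, and training loop are assumptions.

    import torch
    import torch.nn as nn

    class UnfoldedRecovery(nn.Module):
        """Unfolds K iterations of x <- soft(S x + W y, theta) into K learnable layers."""
        def __init__(self, m, n, K=10):
            super().__init__()
            self.W = nn.ParameterList([nn.Parameter(0.01 * torch.randn(n, m)) for _ in range(K)])
            self.S = nn.ParameterList([nn.Parameter(torch.eye(n)) for _ in range(K)])
            self.theta = nn.Parameter(0.1 * torch.ones(K))

        def forward(self, y):          # y: (batch, m) one-bit measurements, e.g. sign(Ax + w)
            x = torch.zeros(y.shape[0], self.S[0].shape[0], device=y.device)
            for k in range(len(self.W)):
                v = x @ self.S[k].T + y @ self.W[k].T
                x = torch.sign(v) * torch.relu(v.abs() - self.theta[k])   # soft threshold
            return x

    # Training sketch on a synthetic sparse model (all sizes are assumptions).
    m, n = 256, 64
    A = torch.randn(m, n) / m ** 0.5
    net = UnfoldedRecovery(m, n)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(200):
        x_true = torch.randn(32, n) * (torch.rand(32, n) < 0.1).float()  # sparse batch
        y = torch.sign(x_true @ A.T + 0.01 * torch.randn(32, m))         # one-bit data
        loss = ((net(y) - x_true) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()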

    Super-Resolution of Mutually Interfering Signals

    We consider simultaneously identifying the membership and locations of point sources that are convolved with different low-pass point spread functions, from the observation of their superpositions. This problem arises in three-dimensional super-resolution single-molecule imaging, neural spike sorting, and multi-user channel identification, among others. We propose a novel algorithm, based on convex programming, and establish its near-optimal performance guarantee for exact recovery by exploiting the sparsity of the point source model as well as the incoherence between the point spread functions. Numerical examples are provided to demonstrate the effectiveness of the proposed approach.
    Comment: ISIT 201
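
    The paper's convex program works over continuous source locations; the grid-based l1 sketch below only illustrates how stacking one shift dictionary per point spread function lets a single sparse solve recover both the locations and the PSF membership. The Gaussian PSFs, circular convolutions, and lam are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 128
    t = np.arange(n)

    def lowpass_psf(width):
        # Assumption: Gaussian low-pass PSF, unit norm, centered at index 0.
        g = np.exp(-0.5 * ((t - n // 2) / width) ** 2)
        return np.roll(g / np.linalg.norm(g), -(n // 2))

    # Stack one circular-shift dictionary per point spread function.
    D = np.hstack([np.stack([np.roll(p, j) for j in range(n)], axis=1)
                   for p in (lowpass_psf(3.0), lowpass_psf(6.0))])

    # Ground truth: sources at a few locations, each tagged to one of the two PSFs.
    c0 = np.zeros(2 * n)
    c0[[20, 75, n + 40, n + 100]] = [1.2, -0.8, 1.0, 0.9]
    y = D @ c0 + 0.01 * rng.standard_normal(n)

    # ISTA for min_c 0.5*||y - D c||^2 + lam*||c||_1.
    c = np.zeros(2 * n)
    L = np.linalg.norm(D, 2) ** 2
    lam = 0.05
    for _ in range(500):
        c = c - D.T @ (D @ c - y) / L
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)

    # The support of c[:n] vs c[n:] recovers both locations and PSF membership.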

    Bayesian Estimation for Continuous-Time Sparse Stochastic Processes

    We consider continuous-time sparse stochastic processes from which we have only a finite number of noisy/noiseless samples. Our goal is to estimate the noiseless samples (denoising) and the signal in between (the interpolation problem). By relying on tools from the theory of splines, we derive the joint a priori distribution of the samples and show how this probability density function can be factorized. The factorization enables us to tractably implement the maximum a posteriori (MAP) and minimum mean-square error (MMSE) criteria as two statistical approaches for estimating the unknowns. We compare the derived statistical methods with well-known techniques for the recovery of sparse signals, such as the ℓ_1 norm and Log (ℓ_1-ℓ_0 relaxation) regularization methods. The simulation results show that, under certain conditions, the performance of the regularization techniques can be very close to that of the MMSE estimator.
    Comment: To appear in IEEE TS
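
    The MMSE-versus-l1 comparison can be illustrated in the simplest setting of scalar denoising under a Bernoulli-Gaussian prior, where the posterior mean is available in closed form. The paper itself treats continuous-time processes via splines, so this is only an analogy, and the prior parameters below are assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    p, s2, v, N = 0.1, 1.0, 0.05, 100_000    # assumption: sparsity, signal var, noise var
    x = (rng.random(N) < p) * rng.normal(0.0, np.sqrt(s2), N)   # Bernoulli-Gaussian
    y = x + rng.normal(0.0, np.sqrt(v), N)

    def gauss(z, var):
        return np.exp(-z ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    # Posterior mean (MMSE): responsibility-weighted Wiener shrinkage.
    w = p * gauss(y, s2 + v) / (p * gauss(y, s2 + v) + (1 - p) * gauss(y, v))
    x_mmse = w * (s2 / (s2 + v)) * y

    # l1-style soft thresholding, with an oracle-tuned threshold for fairness.
    lams = np.linspace(0.01, 1.0, 100)
    mse_l1 = min(np.mean((np.sign(y) * np.maximum(np.abs(y) - l, 0) - x) ** 2)
                 for l in lams)
    print("MMSE:", np.mean((x_mmse - x) ** 2), " best soft-threshold:", mse_l1)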

    Stable soft extrapolation of entire functions

    Soft extrapolation refers to the problem of recovering a function from its samples, multiplied by a fast-decaying window and perturbed by an additive noise, over an interval which is potentially larger than the essential support of the window. A core theoretical question is to provide bounds on the possible amount of extrapolation, depending on the sample perturbation level and the function prior. In this paper we consider soft extrapolation of entire functions of finite order and type (containing the class of bandlimited functions as a special case), multiplied by a super-exponentially decaying window (such as a Gaussian). We consider a weighted least-squares polynomial approximation with a judiciously chosen number of terms and a number of samples which scales linearly with the degree of approximation. It is shown that this simple procedure provides stable recovery with an extrapolation factor which scales logarithmically with the perturbation level and is inversely proportional to the characteristic lengthscale of the function. The pointwise extrapolation error exhibits a Hölder-type continuity with an exponent derived from weighted potential theory, which changes from 1 near the available samples to 0 when the extrapolation distance reaches the characteristic smoothness length scale of the function. The algorithm is asymptotically minimax, in the sense that there is essentially no better algorithm yielding meaningfully lower error over the same smoothness class. When viewed in the dual domain, the above problem corresponds to (stable) simultaneous deconvolution and super-resolution for objects of small space/time extent. Our results then show that the amount of achievable super-resolution is inversely proportional to the object size, and can therefore be significant for small objects.
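
    A numerical sketch of the recipe, under illustrative assumptions (a Gaussian window, a particular entire test function, and a degree set to roughly log(1/eps)): fit a polynomial by weighted least squares to the windowed noisy samples, then evaluate it beyond the window's essential support.

    import numpy as np

    rng = np.random.default_rng(3)
    sigma, eps = 1.0, 1e-4                    # assumption: window scale and noise level
    t = np.linspace(-3.0, 3.0, 241)

    f = lambda u: np.cos(2.4 * u) + 0.5 * np.sin(1.1 * u)   # entire, of finite type
    w = np.exp(-t ** 2 / (2 * sigma ** 2))                  # Gaussian window
    y = w * f(t) + eps * rng.standard_normal(t.size)        # windowed noisy samples

    # Weighted least squares: choose p with ~log(1/eps) terms so that w(t)*p(t) fits y.
    deg = int(np.log(1.0 / eps))              # ~9 terms for eps = 1e-4
    V = np.polynomial.chebyshev.chebvander(t / 3.0, deg)
    coef, *_ = np.linalg.lstsq(w[:, None] * V, y, rcond=None)

    # Evaluate the fit well outside the window's essential support (|t| >~ sigma).
    te = np.array([0.0, 1.0, 2.0, 2.5, 3.0])
    err = np.abs(np.polynomial.chebyshev.chebval(te / 3.0, coef) - f(te))
    print(err)   # error grows with extrapolation distance, as the Hölder bound predicts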