
    How well can we estimate a sparse vector?

    The estimation of a sparse vector in the linear model is a fundamental problem in signal processing, statistics, and compressive sensing. This paper establishes a lower bound on the mean-squared error, which holds regardless of the sensing/design matrix being used and regardless of the estimation procedure. This lower bound very nearly matches the known upper bound one gets by taking a random projection of the sparse vector followed by an $\ell_1$ estimation procedure such as the Dantzig selector. In this sense, compressive sensing techniques cannot essentially be improved.
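    The pipeline the abstract refers to, a random projection followed by $\ell_1$ estimation, can be sketched in a few lines. The Dantzig selector needs a linear-programming solver, so the sketch below substitutes ISTA, a simple iterative soft-thresholding scheme for the LASSO, as the $\ell_1$ step; the dimensions, sparsity level, and regularization weight are illustrative choices, not values from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Entrywise soft-thresholding: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iters=500):
    """LASSO via iterative soft-thresholding (ISTA):
    minimize 0.5 * ||A x - y||_2^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

# Random projection of a sparse vector, then l1 recovery.
rng = np.random.default_rng(0)
n, m, k = 400, 100, 5                        # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista(A, y, lam=0.02)
print("mean-squared error:", np.mean((x_hat - x_true) ** 2))
```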

    Bayesian Estimation for Continuous-Time Sparse Stochastic Processes

    We consider continuous-time sparse stochastic processes from which we have only a finite number of noisy/noiseless samples. Our goal is to estimate the noiseless samples (denoising) and the signal in between (interpolation). By relying on tools from the theory of splines, we derive the joint a priori distribution of the samples and show how this probability density function can be factorized. The factorization enables us to tractably implement the maximum a posteriori (MAP) and minimum mean-square error (MMSE) criteria as two statistical approaches for estimating the unknowns. We compare the derived statistical methods with well-known techniques for the recovery of sparse signals, such as the $\ell_1$-norm and Log ($\ell_1$-$\ell_0$ relaxation) regularization methods. The simulation results show that, under certain conditions, the performance of the regularization techniques can be very close to that of the MMSE estimator. Comment: To appear in IEEE TS
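    The paper's spline-based estimators do not fit in a short sketch, but the MAP-versus-MMSE comparison it runs can be illustrated in the simplest sparse setting: scalar denoising under a Bernoulli-Gaussian prior, where the MMSE estimator is a closed-form posterior mean and the $\ell_1$ (Laplace-prior MAP) estimator reduces to soft-thresholding. The prior parameters and threshold below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def mmse_bernoulli_gaussian(y, p, sx2, sw2):
    """Posterior-mean (MMSE) denoiser for x = b * g with b ~ Bern(p) and
    g ~ N(0, sx2), observed as y = x + w with w ~ N(0, sw2)."""
    def gauss(v, var):
        return np.exp(-v ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    post_on = p * gauss(y, sx2 + sw2)        # evidence that the entry is active
    post_off = (1 - p) * gauss(y, sw2)       # evidence that the entry is zero
    w_on = post_on / (post_on + post_off)    # posterior P(b = 1 | y)
    return w_on * (sx2 / (sx2 + sw2)) * y    # gated Wiener shrinkage

def l1_denoise(y, lam):
    """MAP denoiser under a Laplace prior: soft-thresholding."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

rng = np.random.default_rng(1)
p, sx2, sw2 = 0.1, 1.0, 0.05
x = (rng.random(2000) < p) * rng.normal(0.0, np.sqrt(sx2), 2000)
y = x + rng.normal(0.0, np.sqrt(sw2), 2000)
for name, xh in [("MMSE", mmse_bernoulli_gaussian(y, p, sx2, sw2)),
                 ("l1  ", l1_denoise(y, 0.3))]:
    print(name, "MSE:", np.mean((xh - x) ** 2))
```

    As in the paper's comparison, a well-tuned threshold can bring the $\ell_1$ denoiser close to the MMSE estimator under certain parameter regimes.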

    Sparse Recovery via Differential Inclusions

    In this paper, we recover sparse signals from their noisy linear measurements by solving nonlinear differential inclusions, based on the notion of inverse scale space (ISS) developed in applied mathematics. Our goal here is to bring this idea to a challenging problem in statistics, \emph{i.e.} finding the oracle estimator, which is unbiased and sign-consistent, using dynamics. We call our dynamics \emph{Bregman ISS} and \emph{Linearized Bregman ISS}. A well-known shortcoming of LASSO and of any convex regularization approach lies in the bias of its estimators. However, we show that under proper conditions there exists a bias-free and sign-consistent point on the solution paths of such dynamics, which corresponds to an unbiased estimate of the true signal whose entries have the same signs as those of the true signal, \emph{i.e.} the oracle estimator. Therefore, their solution paths are regularization paths better than the LASSO regularization path, since the points on the latter path are biased once sign-consistency is reached. We also show how to efficiently compute the solution paths in both continuous and discretized settings: the full solution paths can be computed exactly, piece by piece, and a discretization leads to the \emph{Linearized Bregman iteration}, which is a simple iterative thresholding rule and easy to parallelize. Theoretical guarantees such as sign-consistency and minimax-optimal $\ell_2$-error bounds are established in both continuous and discrete settings for specific points on the paths. Early-stopping rules for identifying these points are given. The key treatment relies on the development of differential inequalities for differential inclusions and their discretizations, which extends previous results and leads to exponentially fast recovery of sparse signals before wrong ones are selected. Comment: In Applied and Computational Harmonic Analysis, 201
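    Since the abstract singles out the Linearized Bregman iteration as "a simple iterative thresholding rule", here is a minimal sketch of that iteration in its commonly published form: a gradient step on the residual in a dual variable, followed by scaled soft-thresholding. The step sizes and threshold are illustrative heuristics, not the paper's tuned settings or early-stopping rules.

```python
import numpy as np

def shrink(v, lam):
    """Soft-thresholding: sign(v) * max(|v| - lam, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def linearized_bregman(A, y, lam=1.0, delta=5.0, n_iters=3000):
    """Linearized Bregman iteration for sparse recovery from y = A @ x:
    a dual gradient step on the residual, then scaled soft-thresholding."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2     # squared spectral norm of A
    kappa = 1.0 / (delta * L)         # heuristic: kappa*delta*L = 1 for stability
    z = np.zeros(n)
    x = np.zeros(n)
    for _ in range(n_iters):
        z = z + kappa * (A.T @ (y - A @ x))   # dual update on the residual
        x = delta * shrink(z, lam)            # primal update by thresholding
    return x

# Illustrative run on a synthetic sparse recovery problem.
rng = np.random.default_rng(2)
n, m, k = 200, 80, 6
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = linearized_bregman(A, A @ x_true)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

    In the ISS picture, running this iteration traces out a regularization path; the paper's early-stopping rules are what pick out the bias-free, sign-consistent point on that path.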

    Sparse Coding and Automatic Relevance Determination for Multi-way models

    Multi-way modeling has become an important tool in the analysis of large-scale multi-modal data. An important class of multi-way models is given by the Tucker model, which decomposes the data into components pertaining to each modality as well as a core array indicating how the components of the various modalities interact. Unfortunately, the Tucker model is not unique. Furthermore, establishing the adequate model order is difficult, as the number of components is specified for each mode separately. Previously, rotation criteria such as VARIMAX have been used to resolve the non-uniqueness of the Tucker representation [7], and all potential models have been exhaustively evaluated to estimate the adequate number of components for each mode. We demonstrate how sparse coding can prune excess components and resolve the non-uniqueness of the Tucker model, while Automatic Relevance Determination in Bayesian learning forms a framework for learning the adequate degree of sparsity imposed. On a wide range of multi-way data sets, the proposed method is demonstrated to successfully prune excess components, thereby establishing the model order. Furthermore, the non-uniqueness of the Tucker model is resolved, since among the potential models the one giving the sparsest representation, as measured by the sparse-coding regularization, is attained. The approach readily generalizes to regular sparse coding as well as to the CandeComp/PARAFAC model, as both are special cases of the Tucker model.
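    A Tucker-specific implementation is too long to sketch here, but the core mechanism, Automatic Relevance Determination pruning excess components, can be shown on the regular sparse-coding special case the abstract mentions. The sketch below uses MacKay-style fixed-point updates for per-coefficient precisions (as in the relevance vector machine); components whose precision diverges are pruned. The noise variance, precision cap, and iteration count are illustrative assumptions.

```python
import numpy as np

def ard_prune(Phi, y, noise_var=0.01, n_iters=100, prune_at=1e6):
    """ARD for a linear model y = Phi @ w + noise with an independent
    Gaussian prior N(0, 1/alpha_j) on each coefficient. Precisions are
    updated by MacKay fixed points; a diverging alpha_j prunes w_j."""
    d = Phi.shape[1]
    alpha = np.ones(d)
    mu = np.zeros(d)
    for _ in range(n_iters):
        S = np.linalg.inv(np.diag(alpha) + Phi.T @ Phi / noise_var)  # posterior covariance
        mu = S @ Phi.T @ y / noise_var                               # posterior mean
        gamma = np.clip(1.0 - alpha * np.diag(S), 1e-12, 1.0)  # well-determinedness
        alpha = np.minimum(gamma / (mu ** 2 + 1e-12), prune_at)
    active = alpha < prune_at
    return np.where(active, mu, 0.0), active

# Illustrative run: 3 of 20 atoms carry signal; ARD should prune the rest.
rng = np.random.default_rng(3)
Phi = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [1.5, -2.0, 0.8]
y = Phi @ w_true + 0.1 * rng.standard_normal(100)
w_hat, active = ard_prune(Phi, y)
print("active components:", np.flatnonzero(active))
```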

    Optical Flow From 1D Correlation: Application to a Simple Time-To-Crash Detector

    In the first part of this paper we show that a new technique exploiting 1D correlation of 2D or even 1D patches between successive frames may be sufficient to compute a satisfactory estimate of the optical flow field. The algorithm is well-suited to VLSI implementations. The sparse measurements provided by the technique can be used to compute qualitative properties of the flow for a number of different visual tasks. In particular, the second part of the paper shows how to combine our 1D correlation technique with a scheme for detecting expansion or rotation ([5]) in a simple algorithm which also suggests interesting biological implications. The algorithm provides a rough estimate of time-to-crash. It was tested on real image sequences; we show its performance and compare the results to previous approaches.
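    The core of the technique, matching a 1D patch along a scanline between successive frames, reduces to a sliding-window search. The sketch below uses a sum-of-squared-differences cost as a correlation surrogate on synthetic scanlines; the patch size, search range, and data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def flow_1d(prev_row, next_row, x, half=8, max_disp=6):
    """Estimate horizontal displacement at column x by sliding a 1D patch
    from the previous frame along the same scanline of the next frame and
    keeping the shift with the lowest SSD (highest correlation)."""
    patch = prev_row[x - half: x + half + 1]
    best_d, best_cost = 0, np.inf
    for d in range(-max_disp, max_disp + 1):
        cand = next_row[x - half + d: x + half + 1 + d]
        cost = np.sum((patch - cand) ** 2)   # SSD matching cost
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Two synthetic scanlines, the second shifted right by 3 pixels.
rng = np.random.default_rng(4)
row0 = rng.standard_normal(128)
row1 = np.roll(row0, 3)
print(flow_1d(row0, row1, x=64))   # expected: 3
```

    Collecting such 1D displacements at sparse image locations yields the qualitative flow measurements that the second part of the paper feeds into the expansion/rotation detector.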

    Estimating and Explaining Model Performance When Both Covariates and Labels Shift

    Deployed machine learning (ML) models often encounter new user data that differs from their training data. Therefore, estimating how well a given model might perform on the new data is an important step toward reliable ML applications. This is very challenging, however, as the data distribution can change in flexible ways, and we may not have any labels on the new data, which is often the case in monitoring settings. In this paper, we propose a new distribution shift model, Sparse Joint Shift (SJS), which considers the joint shift of both labels and a few features. This unifies and generalizes several existing shift models including label shift and sparse covariate shift, where only marginal feature or label distribution shifts are considered. We describe mathematical conditions under which SJS is identifiable. We further propose SEES, an algorithmic framework to characterize the distribution shift under SJS and to estimate a model's performance on new data without any labels. We conduct extensive experiments on several real-world datasets with various ML models. Across different datasets and distribution shifts, SEES achieves significant (up to an order of magnitude) shift estimation error improvements over existing approaches. Comment: Accepted to NeurIPS 202
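    SEES itself handles the joint shift of labels and features; as a minimal illustration, the pure label-shift special case that SJS generalizes can already be handled with black-box shift estimation: solve for per-class importance weights from the model's confusion matrix on source data and its predicted label distribution on target data, then reweight source accuracy. The function and setup below are a sketch of that simpler baseline, not the paper's algorithm.

```python
import numpy as np

def estimate_accuracy_under_label_shift(y_src, yhat_src, yhat_tgt, n_classes):
    """Estimate target accuracy without target labels under pure label shift.
    Solves C w = mu for importance weights, where C[i, j] = P_src(yhat=i, y=j)
    and mu[i] = P_tgt(yhat=i); then reweights per-example source correctness."""
    C = np.zeros((n_classes, n_classes))
    for yt, yp in zip(y_src, yhat_src):
        C[yp, yt] += 1.0
    C /= len(y_src)
    mu = np.bincount(yhat_tgt, minlength=n_classes) / len(yhat_tgt)
    w = np.linalg.solve(C, mu)               # w[j] = P_tgt(y=j) / P_src(y=j)
    correct = (y_src == yhat_src).astype(float)
    return np.mean(correct * w[y_src])       # importance-weighted source accuracy

# Tiny synthetic check: an 80%-accurate "classifier" on a 2-class problem.
rng = np.random.default_rng(5)
y_src = rng.integers(0, 2, 5000)                                  # balanced source
yhat_src = np.where(rng.random(5000) < 0.8, y_src, 1 - y_src)
y_tgt = (rng.random(5000) < 0.9).astype(int)                      # shifted: 90% class 1
yhat_tgt = np.where(rng.random(5000) < 0.8, y_tgt, 1 - y_tgt)
print(estimate_accuracy_under_label_shift(y_src, yhat_src, yhat_tgt, 2))  # ~0.8
```

    The reweighting is valid because, under pure label shift, p(x | y) is unchanged, so the expected reweighted source correctness equals the target accuracy; SJS relaxes exactly this assumption by letting a few features shift jointly with the labels.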

    Signal Space CoSaMP for Sparse Recovery with Redundant Dictionaries

    Compressive sensing (CS) has recently emerged as a powerful framework for acquiring sparse signals. The bulk of the CS literature has focused on the case where the acquired signal has a sparse or compressible representation in an orthonormal basis. In practice, however, there are many signals that cannot be sparsely represented or approximated using an orthonormal basis, but that do have sparse representations in a redundant dictionary. Standard results in CS can sometimes be extended to handle this case provided that the dictionary is sufficiently incoherent or well-conditioned, but these approaches fail to address the case of a truly redundant or overcomplete dictionary. In this paper we describe a variant of the iterative recovery algorithm CoSaMP for this more challenging setting. We utilize the D-RIP, a condition on the sensing matrix analogous to the well-known restricted isometry property. In contrast to prior work, the method and analysis are "signal-focused"; that is, they are oriented around recovering the signal rather than its dictionary coefficients. Under the assumption that we have a near-optimal scheme for projecting vectors in signal space onto the model family of candidate sparse signals, we provide provable recovery guarantees. Developing a practical algorithm that can provably compute the required near-optimal projections remains a significant open problem, but we include simulation results using various heuristics that empirically exhibit superior performance to traditional recovery algorithms.
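    A minimal sketch of a CoSaMP-style iteration adapted to a redundant dictionary D follows. Where the paper's analysis assumes near-optimal projections onto the set of dictionary-sparse signals, this sketch substitutes naive correlation heuristics (top-magnitude atom selection) of the kind the paper's simulations explore; the dimensions and iteration counts are illustrative.

```python
import numpy as np

def top_support(v, k):
    """Indices of the k largest-magnitude entries (a simple, possibly
    suboptimal stand-in for a near-optimal sparse projection)."""
    return np.argsort(np.abs(v))[-k:]

def signal_space_cosamp(A, D, y, k, n_iters=20):
    """CoSaMP-style recovery of a signal x = D @ a with a k-sparse, from
    measurements y = A @ x: identify candidate atoms from residual
    correlations, fit by least squares on the merged support, prune to k."""
    x = np.zeros(A.shape[1])
    support = np.array([], dtype=int)
    for _ in range(n_iters):
        r = y - A @ x
        omega = top_support(D.T @ (A.T @ r), 2 * k)   # candidate atoms
        T = np.union1d(support, omega)
        AD = A @ D[:, T]
        a, *_ = np.linalg.lstsq(AD, y, rcond=None)    # fit on merged support
        keep = top_support(a, k)                      # prune to k atoms
        support = T[keep]
        coef = np.zeros(len(T))
        coef[keep] = a[keep]
        x = D[:, T] @ coef                            # back to signal space
    return x, support

# Illustrative run: x is 4-sparse in an overcomplete dictionary.
rng = np.random.default_rng(6)
n, d, m, k = 64, 128, 40, 4
D = rng.standard_normal((n, d))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(d)
a_true[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
x_true = D @ a_true
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat, support = signal_space_cosamp(A, D, A @ x_true, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

    Note the "signal-focused" final step: the iterate is mapped back to signal space via D, so success is measured on x rather than on the (possibly non-unique) coefficient vector a.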