
    Shearlets and Optimally Sparse Approximations

    Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal, both for the purpose of compression and for efficient analysis, is the provision of optimally sparse approximations of such functions. Recently, cartoon-like images were introduced in 2D and 3D as a suitable model class, and approximation properties were measured by the decay rate of the $L^2$ error of the best $N$-term approximation. Shearlet systems are to date the only representation systems which provide optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived, and these frames also satisfy this optimality benchmark. This chapter serves as an introduction to and survey of sparse approximations of cartoon-like images by band-limited as well as compactly supported shearlet frames, and as a reference for the state of the art of this research field.
    Comment: in "Shearlets: Multiscale Analysis for Multivariate Data", Birkhäuser-Springer
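    For concreteness, the optimality benchmark referred to above is usually stated through decay rates of the following form; the exponents below are recalled from the shearlet literature rather than from this abstract, so treat them as a hedged reminder of the 2D case only.

```latex
% For 2D cartoon-like images f (C^2 away from a C^2 discontinuity curve),
% the benchmark for the best N-term approximation f_N reads
\[
  \|f - f_N\|_{L^2}^2 \;\le\; C\, N^{-2} \qquad \text{(optimal rate for this model class)},
\]
% while band-limited and compactly supported shearlet frames achieve it up to a log factor:
\[
  \|f - f_N\|_{L^2}^2 \;\le\; C\, N^{-2} (\log N)^{3}.
\]
```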

    Yet another breakdown point notion: EFSBP - illustrated at scale-shape models

    The breakdown point, in its different variants, is one of the central notions for quantifying the global robustness of a procedure. We propose a simple supplementary variant which is useful in situations where we have no obvious or only partial equivariance: extending the Donoho and Huber (1983) Finite Sample Breakdown Point, we propose the Expected Finite Sample Breakdown Point to produce less configuration-dependent values while still preserving the finite sample aspect of the former definition. We apply this notion to joint estimation of scale and shape (with only scale-equivariance available), exemplified for generalized Pareto, generalized extreme value, Weibull, and Gamma distributions. In these settings, we are interested in highly robust, easy-to-compute initial estimators; to this end we study Pickands-type and Location-Dispersion-type estimators and compute their respective breakdown points.
    Comment: 21 pages, 4 figures
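    As a loose illustration of the idea of averaging a configuration-dependent breakdown point over samples from the ideal model, here is a Monte Carlo toy; it is not the authors' formal EFSBP definition, and `fsbp` below is only a crude numerical stand-in for the Donoho-Huber notion.

```python
import numpy as np

def fsbp(estimator, sample, blowup=1e12, bound=1e6):
    """Smallest fraction of replaced observations driving |estimator| beyond `bound`
    -- a crude numerical stand-in for the Donoho-Huber finite sample breakdown point."""
    x = np.asarray(sample, dtype=float)
    n = len(x)
    for m in range(1, n + 1):
        y = x.copy()
        y[:m] = blowup                       # replace m observations by a huge value
        est = estimator(y)
        if not np.isfinite(est) or abs(est) > bound:
            return m / n
    return 1.0

def efsbp(estimator, sampler, n=20, reps=200, seed=0):
    """Expected FSBP: average the configuration-dependent breakdown fraction over
    samples drawn from the ideal model (plain Monte Carlo approximation)."""
    rng = np.random.default_rng(seed)
    return float(np.mean([fsbp(estimator, sampler(n, rng)) for _ in range(reps)]))

# Example under a Weibull model: a non-robust versus a robust scale estimator.
weibull = lambda n, rng: rng.weibull(1.5, size=n)
mad = lambda x: np.median(np.abs(x - np.median(x)))
print("EFSBP, sample std:", efsbp(np.std, weibull))   # breaks at the first replaced point
print("EFSBP, MAD      :", efsbp(mad, weibull))       # holds out until about half the sample
```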

    On the Doubly Sparse Compressed Sensing Problem

    A new variant of the Compressed Sensing problem is investigated, in which the number of measurements corrupted by errors is upper bounded by some value l, with no further restrictions on the errors. We prove that in this case it is enough to make 2(t+l) measurements, where t is the sparsity of the original data. Moreover, a rather simple recovery algorithm is proposed for this case. An analog of the Singleton bound from coding theory is derived, which proves the optimality of the corresponding measurement matrices.
    Comment: 6 pages, IMACC2015 (accepted)
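    Read as a small worked instance (the parameters here are chosen purely for illustration): with sparsity t = 2 and at most l = 1 corrupted measurement, the count stated above gives

```latex
\[
  m \;=\; 2(t+l) \;=\; 2(2+1) \;=\; 6 \quad \text{measurements},
\]
% and the Singleton-type bound mentioned in the abstract is what rules out
% guaranteeing exact recovery of 2-sparse data under one arbitrary error with fewer rows.
```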

    Necessary and sufficient conditions of solution uniqueness in $\ell_1$ minimization

    This paper shows that the solutions to various convex $\ell_1$ minimization problems are \emph{unique} if and only if a common set of conditions is satisfied. This result applies broadly to the basis pursuit model, basis pursuit denoising model, Lasso model, as well as other $\ell_1$ models that either minimize $f(Ax-b)$ or impose the constraint $f(Ax-b)\leq\sigma$, where $f$ is a strictly convex function. For these models, this paper proves that, given a solution $x^*$ and defining $I=\mathrm{supp}(x^*)$ and $s=\mathrm{sign}(x^*_I)$, $x^*$ is the unique solution if and only if $A_I$ has full column rank and there exists $y$ such that $A_I^Ty=s$ and $|a_i^Ty|_\infty<1$ for $i\not\in I$. This condition was previously known to be sufficient for the basis pursuit model to have a unique solution supported on $I$. Indeed, it is also necessary, and it applies to a variety of other $\ell_1$ models. The paper also discusses ways to recognize unique solutions and verify the uniqueness conditions numerically.
    Comment: 6 pages; revised version; submitted
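    Since the abstract mentions verifying the uniqueness conditions numerically, here is a minimal numpy sketch of such a check; the choice of the minimum-norm certificate y is an assumption made for simplicity (it yields a sufficient test only, the complete test being a small feasibility program), and the function name and interface are hypothetical.

```python
import numpy as np

def uniqueness_certificate(A, x_star, tol=1e-10):
    """Hedged numerical check of the uniqueness condition described above.

    Tests (i) that A_I has full column rank and (ii) that the minimum-norm
    solution y of A_I^T y = sign(x*_I) satisfies |a_i^T y| < 1 for i not in I.
    Part (ii) only tries one particular y, so a False answer is inconclusive."""
    x_star = np.asarray(x_star, dtype=float)
    I = np.flatnonzero(np.abs(x_star) > tol)          # support of x*
    A_I = A[:, I]
    s = np.sign(x_star[I])

    if np.linalg.matrix_rank(A_I) < len(I):           # condition (i): full column rank
        return False

    # Minimum-norm y solving A_I^T y = s (one convenient candidate certificate).
    y, *_ = np.linalg.lstsq(A_I.T, s, rcond=None)

    off_support = np.setdiff1d(np.arange(A.shape[1]), I)
    return bool(np.all(np.abs(A[:, off_support].T @ y) < 1 - tol))   # condition (ii)
```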

    lp-Recovery of the Most Significant Subspace among Multiple Subspaces with Outliers

    We assume data sampled from a mixture of d-dimensional linear subspaces with spherically symmetric distributions within each subspace and an additional outlier component with spherically symmetric distribution within the ambient space (for simplicity we may assume that all distributions are uniform on their corresponding unit spheres). We also assume mixture weights for the different components. We say that one of the underlying subspaces of the model is most significant if its mixture weight is higher than the sum of the mixture weights of all other subspaces. We study the recovery of the most significant subspace by minimizing the lp-averaged distances of data points from d-dimensional subspaces, where p>0. Unlike other lp minimization problems, this minimization is non-convex for all p>0 and thus requires different methods for its analysis. We show that if 0<p<=1, then for any fraction of outliers the most significant subspace can be recovered by lp minimization with overwhelming probability (which depends on the generating distribution and its parameters). We show that when adding small noise around the underlying subspaces, the most significant subspace can be nearly recovered by lp minimization for any 0<p<=1 with an error proportional to the noise level. On the other hand, if p>1 and there is more than one underlying subspace, then with overwhelming probability the most significant subspace cannot be recovered or nearly recovered. This last result does not require spherically symmetric outliers.
    Comment: This is a revised version of the part of 1002.1994 that deals with single subspace recovery. V3: Improved estimates (in particular for Lemma 3.1 and for estimates relying on it), asymptotic dependence of probabilities and constants on D and d, and further clarifications; for simplicity it assumes uniform distributions on spheres. V4: minor revision for the published version
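    A minimal numpy sketch of the objective being minimized (evaluation only; the non-convex minimization over subspaces, which is the hard part analyzed in the paper, is not attempted here, and all names and parameters below are illustrative):

```python
import numpy as np

def lp_subspace_energy(X, B, p=1.0):
    """l_p-averaged distance of the rows of X (N x D) to the subspace spanned by the
    orthonormal columns of B (D x d): sum_i dist(x_i, span B)^p. Minimizing this over
    d-dimensional subspaces is the (non-convex for all p > 0) objective discussed above;
    this sketch only evaluates it for a candidate B."""
    residuals = X - (X @ B) @ B.T            # component of each point orthogonal to span(B)
    return np.sum(np.linalg.norm(residuals, axis=1) ** p)

# Toy usage: points near a 1D line in R^3 plus outliers; compare p = 1 and p = 2.
rng = np.random.default_rng(0)
line = np.array([[1.0], [0.0], [0.0]])                      # orthonormal basis of the true subspace
inliers = rng.normal(size=(80, 1)) @ line.T + 0.01 * rng.normal(size=(80, 3))
outliers = rng.normal(size=(20, 3))
X = np.vstack([inliers, outliers])
print(lp_subspace_energy(X, line, p=1.0), lp_subspace_energy(X, line, p=2.0))
```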

    Super-resolution far-field ghost imaging via compressive sampling

    More image detail can be resolved by improving a system's imaging resolution, and enhancing the resolution beyond the system's Rayleigh diffraction limit is generally called super-resolution. By combining the sparse prior property of images with the ghost imaging method, we demonstrate experimentally that super-resolution imaging can be achieved nonlocally in the far field, even without looking at the object. A physical explanation of super-resolution ghost imaging via compressive sampling and its potential applications are also discussed.
    Comment: 4 pages, 4 figures

    Non-Redundant Spectral Dimensionality Reduction

    Spectral dimensionality reduction algorithms are widely used in numerous domains, including recognition, segmentation, tracking and visualization. However, despite their popularity, these algorithms suffer from a major limitation known as the "repeated eigen-directions" phenomenon: many of the embedding coordinates they produce typically capture the same direction along the data manifold. This leads to redundant and inefficient representations that do not reveal the true intrinsic dimensionality of the data. In this paper, we propose a general method for avoiding redundancy in spectral algorithms. Our approach relies on replacing the orthogonality constraints underlying those methods by unpredictability constraints. Specifically, we require that each embedding coordinate be unpredictable (in the statistical sense) from all previous ones. We prove that these constraints necessarily prevent redundancy, and provide a simple technique to incorporate them into existing methods. As we illustrate on challenging high-dimensional scenarios, our approach produces significantly more informative and compact representations, which improve visualization and classification tasks.
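    As a rough illustration of what "unpredictable in the statistical sense" could mean operationally, the sketch below scores how well a candidate embedding coordinate can be predicted from the previous ones with leave-one-out k-NN regression; this is an illustrative proxy, not the construction used in the paper.

```python
import numpy as np

def predictability(prev_coords, new_coord, k=10):
    """Leave-one-out k-NN regression R^2 of a candidate embedding coordinate on the
    previously extracted coordinates. Values near 0 mean 'unpredictable' (the desired
    behaviour sketched above); values near 1 flag a repeated eigen-direction."""
    new_coord = np.asarray(new_coord, dtype=float)
    Z = np.atleast_2d(np.asarray(prev_coords, dtype=float))
    if Z.shape[0] != len(new_coord):
        Z = Z.T                                              # make rows correspond to samples
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)   # pairwise distances
    np.fill_diagonal(D, np.inf)                              # leave each point out of its own fit
    nn = np.argsort(D, axis=1)[:, :k]                        # indices of the k nearest neighbours
    pred = new_coord[nn].mean(axis=1)                        # k-NN regression estimate
    ss_res = np.sum((new_coord - pred) ** 2)
    ss_tot = np.sum((new_coord - new_coord.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```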

    Optimization viewpoint on Kalman smoothing, with applications to robust and sparse estimation

    In this paper, we present the optimization formulation of the Kalman filtering and smoothing problems, and use this perspective to develop a variety of extensions and applications. We first formulate classic Kalman smoothing as a least squares problem, highlight its special structure, and show that the classic filtering and smoothing algorithms are equivalent to a particular algorithm for solving this problem. Once this equivalence is established, we present extensions of Kalman smoothing to systems with nonlinear process and measurement models, systems with linear and nonlinear inequality constraints, systems with outliers in the measurements or sudden changes in the state, and systems where the sparsity of the state sequence must be accounted for. All extensions preserve the computational efficiency of the classic algorithms, and most of the extensions are illustrated with numerical examples, which are part of an open source Kalman smoothing Matlab/Octave package.
    Comment: 46 pages, 11 figures
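    A minimal dense sketch of the least-squares formulation mentioned above for the linear Gaussian case; the notation x_{k+1} = G x_k + w_k, z_k = H x_k + v_k is assumed here, and the function is illustrative only, ignoring the block-tridiagonal structure that the efficient classical recursions exploit.

```python
import numpy as np

def ls_kalman_smoother(z, G, H, Q, R, x0, P0):
    """Kalman smoothing written as one stacked least-squares problem over the whole
    state sequence x_0..x_{N-1}: whitened measurement residuals plus whitened process
    residuals. Dense and cubic in N*n, so purely didactic."""
    N, m = z.shape
    n = G.shape[0]
    # Whitening factors: if Q^{-1} = L L^T then ||v||^2_{Q^{-1}} = ||L^T v||^2.
    Qw = np.linalg.cholesky(np.linalg.inv(Q)).T
    Rw = np.linalg.cholesky(np.linalg.inv(R)).T
    Pw = np.linalg.cholesky(np.linalg.inv(P0)).T

    rows, rhs = [], []
    for k in range(N):
        # Measurement residual  R^{-1/2} (z_k - H x_k).
        r = np.zeros((m, N * n))
        r[:, k * n:(k + 1) * n] = Rw @ H
        rows.append(r)
        rhs.append(Rw @ z[k])
        # Process residual  Q^{-1/2} (x_k - G x_{k-1}); at k = 0 it is the prior term.
        r = np.zeros((n, N * n))
        r[:, k * n:(k + 1) * n] = Qw if k else Pw
        if k:
            r[:, (k - 1) * n:k * n] = -Qw @ G
        rows.append(r)
        rhs.append(np.zeros(n) if k else Pw @ x0)

    A = np.vstack(rows)
    b = np.concatenate(rhs)
    xhat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xhat.reshape(N, n)   # smoothed state estimates, one row per time step
```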

    Towards a comprehensive evaluation of ultrasound speckle reduction

    Over the last three decades, several despeckling filters have been developed to reduce the speckle noise inherently present in ultrasound images without losing diagnostic information. In this paper, a new intensity- and feature-preservation evaluation metric for full speckle reduction evaluation is proposed, based on contrast and feature similarities. A comparison of the despeckling methods is carried out, using quality metrics and visual interpretation of image profiles to evaluate their performance and show the benefits each one can contribute to noise reduction and feature preservation. To test the methods, noise-free images and simulated B-mode ultrasound images are used. This way, the despeckling techniques can be compared using numeric metrics, taking the noise-free image as a reference. In this study, a total of seventeen different speckle reduction algorithms are documented, based on adaptive filtering, diffusion filtering and wavelet filtering, with sixteen quality metrics estimated.
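    The abstract describes a contrast- and feature-similarity comparison against a noise-free reference without spelling out the formula, so the following is only a hedged sketch of what such a reference-based score could look like; the function and its weighting are invented for illustration and are not the paper's metric.

```python
import numpy as np
from scipy import ndimage

def contrast_feature_score(reference, despeckled, eps=1e-8):
    """Illustrative reference-based quality score: combine a local-contrast similarity
    with an edge-feature similarity between a noise-free reference image and a
    despeckled image. NOT the metric proposed in the paper, only a sketch of a
    contrast-and-feature comparison against a noise-free reference."""
    def local_contrast(img):
        mean = ndimage.uniform_filter(img, size=7)
        sq_mean = ndimage.uniform_filter(img * img, size=7)
        return np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))   # local standard deviation

    def edges(img):
        return np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))

    c_r, c_d = local_contrast(reference), local_contrast(despeckled)
    e_r, e_d = edges(reference), edges(despeckled)
    contrast_sim = (2 * c_r * c_d + eps) / (c_r ** 2 + c_d ** 2 + eps)   # SSIM-style ratio
    feature_sim = (2 * e_r * e_d + eps) / (e_r ** 2 + e_d ** 2 + eps)
    return float(np.mean(contrast_sim * feature_sim))
```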

    Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction

    It is difficult to find the optimal sparse solution of a manifold learning based dimensionality reduction algorithm. The lasso or elastic net penalized manifold learning based dimensionality reduction is not directly a lasso penalized least squares problem, and thus least angle regression (LARS) (Efron et al. \cite{LARS}), one of the most popular algorithms in sparse learning, cannot be applied directly. Therefore, most current approaches take indirect routes or impose strict settings, which can be inconvenient for applications. In this paper, we propose the manifold elastic net (MEN for short). MEN incorporates the merits of both manifold learning based and sparse learning based dimensionality reduction. By using a series of equivalent transformations, we show that MEN is equivalent to a lasso penalized least squares problem, and thus LARS can be adopted to obtain the optimal sparse solution of MEN. In particular, MEN has the following advantages for subsequent classification: 1) the local geometry of samples is well preserved for low-dimensional data representation; 2) both margin maximization and classification error minimization are considered for sparse projection calculation; 3) the projection matrix of MEN improves parsimony in computation; 4) the elastic net penalty reduces the over-fitting problem; and 5) the projection matrix of MEN can be interpreted psychologically and physiologically. Experimental evidence on face recognition over various popular datasets suggests that MEN is superior to top-level dimensionality reduction algorithms.
    Comment: 33 pages, 12 figures
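    The key computational consequence claimed above (once MEN is rewritten as a lasso-penalized least-squares problem, LARS traces its full sparse path) can be sketched as follows; the transformed design matrix and response below are random stand-ins, since the actual equivalent transformation is the paper's contribution and is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import lars_path

# Hypothetical stand-ins for the design matrix and response that the paper's
# equivalent reformulation of MEN would produce; here they are simply synthetic.
rng = np.random.default_rng(0)
X_transformed = rng.normal(size=(100, 30))
y_transformed = X_transformed[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=100)

# Once the problem is in lasso form, the entire sparse solution path can be traced
# with LARS (Efron et al.), as the abstract notes.
alphas, active, coefs = lars_path(X_transformed, y_transformed, method="lasso")
print("number of breakpoints on the path:", len(alphas))
print("support at the densest solution:", np.flatnonzero(coefs[:, -1]))
```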