
    Verifiable conditions of $\ell_1$-recovery of sparse signals with sign restrictions

    We propose necessary and sufficient conditions for a sensing matrix to be "$s$-semigood" -- to allow for exact $\ell_1$-recovery of sparse signals with at most $s$ nonzero entries under sign restrictions on part of the entries. We express the error bounds for imperfect $\ell_1$-recovery in terms of the characteristics underlying these conditions. Furthermore, we demonstrate that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse $\ell_1$-recovery and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is $s$-semigood. We concentrate on the properties of the proposed verifiable sufficient conditions of $s$-semigoodness and describe their limits of performance.
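
    As a concrete, purely illustrative companion to the recovery problem studied here, the sketch below solves the sign-restricted $\ell_1$-recovery program (minimize $\|x\|_1$ subject to $Ax = b$ with $x_i \geq 0$ on a chosen index set) via the standard linear-programming split. The function name, parameters, and demo data are hypothetical; the paper's verifiable conditions themselves are not implemented here.

        # Illustrative sketch (not code from the paper): sign-restricted l1-recovery as an LP.
        import numpy as np
        from scipy.optimize import linprog

        def l1_recover(A, b, nonneg_idx=()):
            # Minimize ||x||_1 subject to A x = b, with x_i >= 0 for i in nonneg_idx.
            # Standard split x = u - v with u, v >= 0; sign-restricted entries drop v_i.
            m, n = A.shape
            c = np.ones(2 * n)                       # sum(u) + sum(v) = ||x||_1
            A_eq = np.hstack([A, -A])                # A u - A v = b
            bounds = [(0, None)] * (2 * n)
            for i in nonneg_idx:                     # forcing v_i = 0 enforces x_i >= 0
                bounds[n + i] = (0, 0)
            res = linprog(c, A_eq=A_eq, b_eq=b, bounds=bounds, method="highs")
            u, v = res.x[:n], res.x[n:]
            return u - v

        # Tiny demo: recover a 2-sparse nonnegative signal from 15 random measurements.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((15, 40))
        x_true = np.zeros(40)
        x_true[[3, 17]] = [1.5, 0.7]
        x_hat = l1_recover(A, A @ x_true, nonneg_idx=range(40))
        print(np.allclose(x_hat, x_true, atol=1e-6))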

    The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices

    This paper proposes scalable and fast algorithms for solving the Robust PCA problem, namely recovering a low-rank matrix with an unknown fraction of its entries being arbitrarily corrupted. This problem arises in many applications, such as image processing, web data ranking, and bioinformatic data analysis. It was recently shown that under surprisingly broad conditions, the Robust PCA problem can be exactly solved via convex optimization that minimizes a combination of the nuclear norm and the $\ell_1$-norm. In this paper, we apply the method of augmented Lagrange multipliers (ALM) to solve this convex program. As the objective function is non-smooth, we show how to extend the classical analysis of ALM to such objective functions, prove the optimality of the proposed algorithms, and characterize their convergence rate. Empirically, the proposed new algorithms can be more than five times faster than the previous state-of-the-art algorithms for Robust PCA, such as the accelerated proximal gradient (APG) algorithm. Moreover, the new algorithms achieve higher precision while demanding less storage/memory. We also show that the ALM technique can be used to solve the (related but somewhat simpler) matrix completion problem, with rather promising results. We further prove the necessary and sufficient condition for the inexact ALM to converge globally. Matlab code for all the algorithms discussed is available at http://perception.csl.illinois.edu/matrix-rank/home.html. Comment: Please cite Zhouchen Lin, Risheng Liu, and Zhixun Su, "Linearized Alternating Direction Method with Adaptive Penalty for Low Rank Representation," NIPS 2011 (available at http://arxiv.org/abs/1109.0367) instead; it describes a more general method called the Linearized Alternating Direction Method. This manuscript first appeared as University of Illinois at Urbana-Champaign technical report #UILU-ENG-09-2215 in October 2009.
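
    For orientation, here is a minimal numpy sketch of an inexact-ALM-style iteration for the Robust PCA program, minimize $\|L\|_* + \lambda\|S\|_1$ subject to $L + S = M$. The default $\lambda = 1/\sqrt{\max(m,n)}$ and the $\mu$, $Y$ initializations are common choices from the literature, not necessarily the exact variants, stopping rules, or parameter settings of this paper.

        # Illustrative sketch (not the paper's released Matlab code): inexact-ALM-style RPCA.
        import numpy as np

        def soft(X, tau):
            # Entrywise soft-thresholding: proximal operator of the l1 norm.
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def svt(X, tau):
            # Singular value thresholding: proximal operator of the nuclear norm.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        def rpca_ialm(M, lam=None, mu=None, rho=1.5, tol=1e-7, max_iter=500):
            # min ||L||_* + lam * ||S||_1  subject to  L + S = M
            m, n = M.shape
            lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
            mu = mu if mu is not None else 1.25 / np.linalg.norm(M, 2)
            Y = M / max(np.linalg.norm(M, 2), np.abs(M).max() / lam)   # common dual init
            S = np.zeros_like(M)
            for _ in range(max_iter):
                L = svt(M - S + Y / mu, 1.0 / mu)          # nuclear-norm prox step
                S = soft(M - L + Y / mu, lam / mu)         # l1 prox step
                Z = M - L - S
                Y = Y + mu * Z                             # dual (multiplier) update
                mu = rho * mu                              # penalty growth
                if np.linalg.norm(Z, "fro") <= tol * np.linalg.norm(M, "fro"):
                    break
            return L, S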

    Necessary and sufficient conditions of solution uniqueness in $\ell_1$ minimization

    This paper shows that the solutions to various convex $\ell_1$ minimization problems are \emph{unique} if and only if a common set of conditions is satisfied. This result applies broadly to the basis pursuit model, basis pursuit denoising model, Lasso model, as well as other $\ell_1$ models that either minimize $f(Ax-b)$ or impose the constraint $f(Ax-b)\leq\sigma$, where $f$ is a strictly convex function. For these models, this paper proves that, given a solution $x^*$ and defining $I=\operatorname{supp}(x^*)$ and $s=\operatorname{sign}(x^*_I)$, $x^*$ is the unique solution if and only if $A_I$ has full column rank and there exists $y$ such that $A_I^T y = s$ and $|a_i^T y| < 1$ for $i\not\in I$. This condition was previously known to be sufficient for the basis pursuit model to have a unique solution supported on $I$; this paper shows that it is also necessary and that it applies to a variety of other $\ell_1$ models. The paper also discusses ways to recognize unique solutions and verify the uniqueness conditions numerically. Comment: 6 pages; revised version; submitted.
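
    The stated certificate can be tested numerically along the lines the abstract suggests. The sketch below is one hypothetical way to do so: check the rank condition on $A_I$, then search for a dual vector $y$ by minimizing $\max_{i\not\in I} |a_i^T y|$ subject to $A_I^T y = s$ with a small linear program. Function name and tolerances are illustrative, not the paper's.

        # Illustrative sketch (not code from the paper): numerical check of the uniqueness certificate.
        import numpy as np
        from scipy.optimize import linprog

        def is_unique_solution(A, x_star, tol=1e-9):
            # Certificate: A_I has full column rank and some y satisfies
            # A_I^T y = s with |a_i^T y| < 1 for every column a_i outside the support.
            n = A.shape[1]
            I = np.flatnonzero(np.abs(x_star) > tol)
            Ic = np.setdiff1d(np.arange(n), I)
            A_I, s = A[:, I], np.sign(x_star[I])
            if np.linalg.matrix_rank(A_I) < len(I):
                return False
            if len(Ic) == 0:                          # no off-support columns to certify
                return True
            m = A.shape[0]
            # LP in (y, t): minimize t subject to A_I^T y = s and |a_i^T y| <= t for i not in I.
            c = np.r_[np.zeros(m), 1.0]
            A_eq = np.hstack([A_I.T, np.zeros((len(I), 1))])
            G = A[:, Ic].T
            A_ub = np.vstack([np.hstack([G, -np.ones((len(Ic), 1))]),
                              np.hstack([-G, -np.ones((len(Ic), 1))])])
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * len(Ic)), A_eq=A_eq, b_eq=s,
                          bounds=[(None, None)] * (m + 1), method="highs")
            # Optimal t < 1 means a strictly feasible dual certificate exists.
            return bool(res.success and res.x[-1] < 1.0 - 1e-8)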

    Optimization viewpoint on Kalman smoothing, with applications to robust and sparse estimation

    In this paper, we present the optimization formulation of the Kalman filtering and smoothing problems, and use this perspective to develop a variety of extensions and applications. We first formulate classic Kalman smoothing as a least squares problem, highlight its special structure, and show that the classic filtering and smoothing algorithms are equivalent to a particular algorithm for solving this problem. Once this equivalence is established, we present extensions of Kalman smoothing to systems with nonlinear process and measurement models, systems with linear and nonlinear inequality constraints, systems with outliers in the measurements or sudden changes in the state, and systems where the sparsity of the state sequence must be accounted for. All extensions preserve the computational efficiency of the classic algorithms, and most of the extensions are illustrated with numerical examples, which are part of an open source Kalman smoothing Matlab/Octave package. Comment: 46 pages, 11 figures.
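
    To make the least-squares viewpoint concrete, here is a hypothetical sketch that assembles the block-tridiagonal normal equations of the smoothing objective $\sum_k \|z_k - H x_k\|^2_{R^{-1}} + \|x_k - G x_{k-1}\|^2_{Q^{-1}}$ and solves them with a sparse solver. It assumes time-invariant $G$, $H$, $Q$, $R$ and a known initial state $x_0$, covers only the Gaussian/least-squares case, and is not the paper's Matlab/Octave package.

        # Illustrative sketch (not the paper's package): Kalman smoothing as one sparse least-squares solve.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve

        def smooth_ls(z, G, H, Q, R, x0):
            # z: (N, p) measurements; G, Q: (n, n); H: (p, n); R: (p, p); x0: (n,) known initial state.
            N, n = len(z), G.shape[0]
            Qi, Ri = np.linalg.inv(Q), np.linalg.inv(R)
            A = sp.lil_matrix((N * n, N * n))          # block-tridiagonal normal-equations matrix
            rhs = np.zeros(N * n)
            for k in range(N):
                sl = slice(k * n, (k + 1) * n)
                D = H.T @ Ri @ H + Qi                  # measurement + process information
                if k < N - 1:
                    D = D + G.T @ Qi @ G
                    nxt = slice((k + 1) * n, (k + 2) * n)
                    A[sl, nxt] = -G.T @ Qi             # coupling to the next state
                    A[nxt, sl] = -Qi @ G
                A[sl, sl] = D
                rhs[sl] = H.T @ Ri @ z[k] + (Qi @ G @ x0 if k == 0 else 0)
            # Solving this sparse system recovers the same smoothed states as the classic RTS recursion.
            return spsolve(A.tocsc(), rhs).reshape(N, n)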

    Revisiting Synthesis Model of Sparse Audio Declipper

    The state of the art in audio declipping has currently been achieved by the SPADE (SParse Audio DEclipper) algorithm of Kitić et al. Until now, the synthesis/sparse variant, S-SPADE, has been considered significantly slower than its analysis/cosparse counterpart, A-SPADE. It turns out that the opposite is true: by exploiting a recent projection lemma, individual iterations of both algorithms can be made equally computationally expensive, while S-SPADE tends to require considerably fewer iterations to converge. In this paper, the two algorithms are compared across a range of parameters such as the window length, window overlap, and redundancy of the transform. The experiments show that although S-SPADE typically converges faster, its average restoration quality is not superior to that of A-SPADE.
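
    Both SPADE variants keep the estimate consistent with the clipped observation at every iteration. The sketch below shows only that shared ingredient, a projection onto the clipping-consistency set (reliable samples fixed, clipped samples pushed beyond the clipping level); it is an illustrative fragment under a hard symmetric clipping model, not either full algorithm.

        # Illustrative sketch (not SPADE itself): projection onto the clipping-consistency set.
        import numpy as np

        def clip_masks(y, theta):
            # Index sets derived from a clipped observation y with clipping level theta.
            reliable = np.abs(y) < theta
            high, low = y >= theta, y <= -theta
            return reliable, high, low

        def project_consistent(x, y, theta):
            # Make x consistent with the observation: equal to y on reliable samples,
            # >= theta where the signal was clipped high, <= -theta where clipped low.
            reliable, high, low = clip_masks(y, theta)
            z = x.copy()
            z[reliable] = y[reliable]
            z[high] = np.maximum(x[high], theta)
            z[low] = np.minimum(x[low], -theta)
            return z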

    Non-identical smoothing operators for estimating time-frequency interdependence in electrophysiological recordings

    Synchronization of neural activity from distant parts of the brain is crucial for the coordination of cognitive activities. Because neural synchronization varies both in time and frequency, time–frequency (T-F) coherence is commonly employed to assess interdependences in electrophysiological recordings. T-F coherence entails smoothing the cross and power spectra to ensure statistical consistency of the estimate, which reduces its T-F resolution. This trade-off has been described in detail for the case where the cross and power spectra are smoothed using identical smoothing operators, which may yield spurious coherent frequencies. In this article, we examine the use of non-identical smoothing operators for the estimation of T-F interdependence: because phase synchronization is characterized by phase locking between signals, which is captured by the cross spectrum, we may improve the trade-off by selectively smoothing the auto spectra. We first show that the frequency marginal density of the present estimate is bounded within [0, 1] when non-identical smoothing operators are used. An analytic calculation of the bias and variance of the present estimators is performed and compared with the bias and variance of standard T-F coherence using Monte Carlo simulations. We then test the use of non-identical smoothing operators on simulated data, whose T-F properties are known through construction. Finally, we analyze empirical data from eyes-closed surface electroencephalography recorded in human subjects to investigate alpha-band synchronization. These analyses show that selectively smoothing the auto spectra reduces the bias of the estimator and may improve the detection of T-F interdependence in electrophysiological data at high temporal resolution.
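
    A hypothetical sketch of the construction: estimate short-time spectra, then smooth the cross spectrum and the auto spectra with different kernels before forming the coherence. Box kernels, the STFT settings, and the function name are placeholders rather than the estimators analyzed in the article.

        # Illustrative sketch (not the article's estimator): T-F coherence with non-identical smoothing.
        import numpy as np
        from scipy.signal import stft
        from scipy.ndimage import uniform_filter

        def tf_coherence(x, y, fs, kernel_cross=(3, 3), kernel_auto=(3, 9), nperseg=256):
            # kernel_* are (frequency, time) box sizes; a wider time kernel for the auto
            # spectra mimics "selectively smoothing the auto spectra".
            _, _, X = stft(x, fs, nperseg=nperseg)
            _, _, Y = stft(y, fs, nperseg=nperseg)
            Sxy = X * np.conj(Y)
            Sxx, Syy = np.abs(X) ** 2, np.abs(Y) ** 2
            # Smooth real and imaginary parts separately (smoothing is linear).
            num = (uniform_filter(Sxy.real, kernel_cross) ** 2
                   + uniform_filter(Sxy.imag, kernel_cross) ** 2)
            den = uniform_filter(Sxx, kernel_auto) * uniform_filter(Syy, kernel_auto)
            return num / np.maximum(den, 1e-12)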

    Evidence for sparse synergies in grasping actions

    Converging evidence shows that hand actions are controlled at the level of synergies and not single muscles. One intriguing aspect of synergy-based action representation is that it may be intrinsically sparse, and the same synergies can be shared across several distinct types of hand actions. Here, adopting a normative angle, we consider three hypotheses for hand-action optimal control: the sparse-combination hypothesis (SC) – sparsity in the mapping between synergies and actions, i.e., actions are implemented using a sparse combination of synergies; the sparse-elements hypothesis (SE) – sparsity in the synergy representation, i.e., the mapping between degrees of freedom (DoF) and synergies is sparse; and the double-sparsity hypothesis (DS) – a novel view combining SC and SE, i.e., both the mapping between DoF and synergies and the mapping between synergies and actions are sparse, so that each action is implemented by a sparse combination of synergies (as in SC), each of which uses a limited set of DoFs (as in SE). We evaluate these hypotheses using hand kinematic data from six human subjects performing nine different types of reach-to-grasp actions. Our results support DS, suggesting that the best action representation is based on a relatively large set of synergies, each involving a reduced number of degrees of freedom, and that distinct sets of synergies may be involved in distinct tasks.
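
    One way to make the double-sparsity idea concrete is a factorization $X \approx W C$ with $\ell_1$ penalties on both the synergy matrix $W$ (DoF x synergies, the SE side) and the combination matrix $C$ (synergies x actions, the SC side). The alternating proximal-gradient sketch below is an illustrative stand-in with made-up parameter names, not the estimation procedure used in the study.

        # Illustrative sketch (not the study's method): double-sparse factorization X ~ W @ C.
        import numpy as np

        def soft(Z, t):
            # Entrywise soft-thresholding: proximal operator of the l1 penalty.
            return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

        def double_sparse_factorization(X, k, lam_W=0.1, lam_C=0.1, n_iter=200):
            # Alternating proximal-gradient steps on 0.5*||X - W C||_F^2 + lam_W*||W||_1 + lam_C*||C||_1.
            rng = np.random.default_rng(0)
            d, a = X.shape
            W = 0.1 * rng.standard_normal((d, k))
            C = 0.1 * rng.standard_normal((k, a))
            for _ in range(n_iter):
                L_C = max(np.linalg.norm(W, 2) ** 2, 1e-8)       # Lipschitz constant for the C-step
                C = soft(C - (W.T @ (W @ C - X)) / L_C, lam_C / L_C)
                L_W = max(np.linalg.norm(C, 2) ** 2, 1e-8)       # Lipschitz constant for the W-step
                W = soft(W - ((W @ C - X) @ C.T) / L_W, lam_W / L_W)
            return W, C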

    Shape description and matching using integral invariants on eccentricity transformed images

    Matching occluded and noisy shapes is a problem frequently encountered in medical image analysis and, more generally, in computer vision. To keep track of changes inside the breast, for example, it is important for a computer-aided detection system to establish correspondences between regions of interest. Shape transformations, computed both with integral invariants (II) and with the geodesic distance, yield signatures that are invariant to isometric deformations, such as bending and articulation. Integral invariants describe the boundaries of planar shapes. However, they provide no information about where a particular feature lies on the boundary with regard to the overall shape structure. Conversely, eccentricity transforms (Ecc) can match shapes by signatures of geodesic distance histograms based on information from inside the shape, but they ignore the boundary information. We describe a method that combines the boundary signature of a shape obtained from II with structural information from the Ecc to yield results that improve on either descriptor used separately.
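
    As an illustration of one common integral invariant, the local-area invariant, the sketch below convolves a binary shape mask with a disk and samples the result along boundary points, giving the fraction of each disk that lies inside the shape. This is a generic construction under assumed inputs (mask, boundary point list, radius), not necessarily the specific II or the Ecc signature used in the paper.

        # Illustrative sketch (not the paper's descriptor): local-area integral invariant on a binary mask.
        import numpy as np
        from scipy.signal import fftconvolve

        def integral_invariant(mask, boundary_pts, radius):
            # mask: 2D binary image of the shape; boundary_pts: (N, 2) array of (row, col) boundary pixels.
            yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            disk = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
            # For every pixel, area of the disk that overlaps the shape interior.
            inside_area = fftconvolve(mask.astype(float), disk, mode="same")
            r, c = boundary_pts[:, 0], boundary_pts[:, 1]
            return inside_area[r, c] / disk.sum()     # fraction in [0, 1] per boundary point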

    Affine differential geometry analysis of human arm movements

    Humans interact with their environment through sensory information and motor actions. These interactions may be understood via the underlying geometry of both perception and action. While the motor space is typically considered by default to be Euclidean, persistent behavioral observations point to a different underlying geometric structure. These observed regularities include the “two-thirds power law”, which connects path curvature with velocity, and “local isochrony”, which prescribes the relation between movement time and its extent. Starting with these empirical observations, we have developed a mathematical framework based on differential geometry, Lie group theory, and Cartan’s moving frame method for the analysis of human hand trajectories. We also use this method to identify possible motion primitives, i.e., elementary building blocks from which more complicated movements are constructed. We show that a natural geometric description of continuous repetitive hand trajectories is not Euclidean but equi-affine. Specifically, equi-affine velocity is piecewise constant along movement segments, and movement execution time for a given segment is proportional to its equi-affine arc-length. Using this mathematical framework, we then analyze experimentally recorded drawing movements. To examine movement segmentation and classification, the two fundamental equi-affine differential invariants, equi-affine arc-length and equi-affine curvature, are calculated for the recorded movements. We also discuss the possible role of conic sections, i.e., curves with constant equi-affine curvature, as motor primitives, and focus in more detail on parabolas, the equi-affine geodesics. Finally, we explore possible schemes for the internal neural coding of motor commands by showing that the equi-affine framework is compatible with the common model of population coding of the hand velocity vector when combined with a simple assumption on its dynamics. We then discuss several alternative explanations for the role that the equi-affine metric may play in internal representations of motion perception and production.
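
    The equi-affine quantities mentioned here are straightforward to compute from sampled data: for a planar trajectory $(x(t), y(t))$, the equi-affine velocity is $(\dot{x}\ddot{y} - \dot{y}\ddot{x})^{1/3}$ and the equi-affine arc-length is its time integral. The sketch below is an illustrative numerical implementation of just these two quantities (finite-difference derivatives, simple cumulative integration), not the paper's full moving-frame analysis.

        # Illustrative sketch (not the paper's analysis code): equi-affine velocity and arc-length.
        import numpy as np

        def equi_affine_profile(x, y, t):
            # x, y, t: 1D arrays sampling a planar hand trajectory.
            dx, dy = np.gradient(x, t), np.gradient(y, t)
            ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
            det = dx * ddy - dy * ddx                  # equals (Euclidean speed)^3 * curvature
            v_ea = np.cbrt(det)                        # equi-affine velocity
            # Cumulative equi-affine arc-length: integral of |v_ea| dt.
            sigma = np.concatenate(([0.0], np.cumsum(np.abs(v_ea[1:]) * np.diff(t))))
            return v_ea, sigma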