
    Frequency-Selective Vandermonde Decomposition of Toeplitz Matrices with Applications

    The classical result on the Vandermonde decomposition of positive semidefinite Toeplitz matrices, which dates back to the early twentieth century, forms the basis of modern subspace methods and of recent atomic norm methods for frequency estimation. In this paper, we study the Vandermonde decomposition in which the frequencies are restricted to lie in a given interval, referred to as the frequency-selective Vandermonde decomposition. The existence and uniqueness of the decomposition are studied under explicit conditions on the Toeplitz matrix. The new result is connected by duality to the positive real lemma for trigonometric polynomials nonnegative on the same frequency interval. Its applications in the theory of moments and in line spectral estimation are illustrated. In particular, it provides a solution to the truncated trigonometric K-moment problem. It is used to derive a primal semidefinite program formulation of the frequency-selective atomic norm, in which the frequencies are known a priori to lie in certain frequency bands. Numerical examples are also provided. Comment: 23 pages; accepted by Signal Processing.
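As a point of reference for the classical (unrestricted) decomposition, the sketch below recovers T = V diag(p) V^H from a rank-r PSD Toeplitz matrix via a standard shift-invariance (ESPRIT-style) step. It is a generic numerical illustration under our own naming, not the paper's frequency-selective construction.

```python
import numpy as np

def vandermonde_decompose(T, r):
    """Classical decomposition T = V diag(p) V^H of an n x n PSD
    Toeplitz matrix of rank r < n (Caratheodory-Fejer)."""
    _, eigvec = np.linalg.eigh(T)
    U = eigvec[:, -r:]                      # signal subspace: top-r eigenvectors
    # Shift invariance: U[1:] ~ U[:-1] @ Phi, with eig(Phi) = exp(j*2*pi*f)
    Phi = np.linalg.lstsq(U[:-1], U[1:], rcond=None)[0]
    freqs = np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi)
    n = T.shape[0]
    V = np.exp(2j * np.pi * np.outer(np.arange(n), freqs))  # Vandermonde matrix
    # The first row of V is all ones, so T[:, 0] = V @ p gives the powers
    p = np.real(np.linalg.lstsq(V, T[:, 0], rcond=None)[0])
    return freqs, p, V
```

On a matrix built as V diag(p) V^H with well-separated frequencies, the recovered pair (freqs, p) matches the ground truth up to ordering.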

    Atomic norm denoising with applications to line spectral estimation

    Motivated by recent work on atomic norms in inverse problems, we propose a new approach to line spectral estimation that provides theoretical guarantees for the mean-squared-error (MSE) performance in the presence of noise and without knowledge of the model order. We propose an abstract theory of denoising with atomic norms and specialize this theory to provide a convex optimization problem for estimating the frequencies and phases of a mixture of complex exponentials. We show that the associated convex optimization problem can be solved in polynomial time via semidefinite programming (SDP). We also show that the SDP can be approximated by an l1-regularized least-squares problem that achieves nearly the same error rate as the SDP but can scale to much larger problems. We compare both the SDP and l1-based approaches with classical line spectral analysis methods and demonstrate that the SDP outperforms the l1 optimization, which in turn outperforms the MUSIC, Cadzow's, and Matrix Pencil approaches in terms of MSE over a wide range of signal-to-noise ratios. Comment: 27 pages, 10 figures. A preliminary version of this work appeared in the Proceedings of the 49th Annual Allerton Conference in September 2011. Numerous numerical experiments added to this version in accordance with suggestions by anonymous reviewers.
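The SDP characterization of the atomic norm makes the denoiser a few lines of cvxpy. Below is a minimal sketch under our own naming and scaling conventions (the paper's regularization constant and solver settings differ); it enforces the Toeplitz structure of the top-left block with equality constraints.

```python
import numpy as np
import cvxpy as cp

def ast_denoise(y, tau):
    """Atomic norm soft thresholding:
    min_x 0.5*||y - x||_2^2 + tau*||x||_A, via the SDP form of ||x||_A."""
    n = len(y)
    x = cp.Variable(n, complex=True)
    t = cp.Variable(nonneg=True)
    M = cp.Variable((n + 1, n + 1), hermitian=True)  # [[Toep(u), x], [x^H, t]]
    cons = [M >> 0, M[:n, n] == x, M[n, n] == t]
    # Make the top-left n x n block Toeplitz: constant along each diagonal
    cons += [M[i, j] == M[i + 1, j + 1]
             for i in range(n - 1) for j in range(n - 1)]
    atomic = 0.5 * (cp.real(cp.trace(M[:n, :n])) / n + t)  # SDP value of ||x||_A
    prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - x) + tau * atomic),
                      cons)
    prob.solve(solver=cp.SCS)
    return x.value
```

The frequencies can then be read off from the dual polynomial or from a Vandermonde decomposition of the recovered Toeplitz block.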

    Positive semi-definite embedding for dimensionality reduction and out-of-sample extensions

    In machine learning or statistics, it is often desirable to reduce the dimensionality of a sample of data points in a high-dimensional space $\mathbb{R}^d$. This paper introduces a dimensionality reduction method where the embedding coordinates are the eigenvectors of a positive semi-definite kernel obtained as the solution of an infinite-dimensional analogue of a semi-definite program. This embedding is adaptive and non-linear. A main feature of our approach is the existence of a non-linear out-of-sample extension formula for the embedding coordinates, called a projected Nyström approximation. This extrapolation formula yields an extension of the kernel matrix to a data-dependent Mercer kernel function. Our empirical results indicate that this embedding method is more robust to the influence of outliers than a spectral embedding method. Comment: 16 pages, 5 figures. Improved presentation.
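For context, the plain Nyström out-of-sample formula (of which the paper's projected variant is a refinement) extends a spectral embedding to a new point from its kernel evaluations against the sample. A minimal sketch, with our own function names:

```python
import numpy as np

def spectral_embedding(K, k):
    """Top-k embedding from an n x n PSD kernel matrix K:
    coordinates are the scaled eigenvectors sqrt(lam_j) * u_j."""
    w, U = np.linalg.eigh(K)
    order = np.argsort(w)[::-1][:k]          # top-k eigenpairs
    lam, U = w[order], U[:, order]
    return U * np.sqrt(lam), lam, U

def nystrom_extend(k_new, lam, U):
    """Embed an out-of-sample point x from k_new[i] = k(x, x_i)."""
    return (k_new @ U) / np.sqrt(lam)
```

Evaluating nystrom_extend at a training point reproduces that point's row of the embedding, which is the consistency property an out-of-sample formula should satisfy.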

    PRISMA: PRoximal Iterative SMoothing Algorithm

    Motivated by learning problems including max-norm regularized matrix completion and clustering, robust PCA, and sparse inverse covariance selection, we propose a novel optimization algorithm for minimizing a convex objective that decomposes into three parts: a smooth part, a simple non-smooth Lipschitz part, and a simple non-smooth non-Lipschitz part. We use a time-variant smoothing strategy that allows us to obtain a guarantee that depends neither on knowing the total number of iterations in advance nor on a bound on the domain.
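In generic form, one iteration smooths the Lipschitz part g through its Moreau envelope (whose gradient is computable from the prox of g) and takes a proximal gradient step on the remaining simple part h. The sketch below uses our own schedule beta_k = 1/k as a stand-in for the paper's time-variant smoothing strategy.

```python
def prisma_sketch(x0, grad_f, prox_g, prox_h, L_f, n_iter):
    """Minimize f + g + h: f smooth with L_f-Lipschitz gradient, g Lipschitz
    with an easy prox, h simple non-smooth. prox_g(x, b) = prox_{b*g}(x)."""
    x = x0
    for k in range(1, n_iter + 1):
        beta = 1.0 / k                          # decreasing smoothing parameter
        # Gradient of the Moreau envelope g_beta at x
        grad_g = (x - prox_g(x, beta)) / beta
        L = L_f + 1.0 / beta                    # smoothness constant of f + g_beta
        x = prox_h(x - (grad_f(x) + grad_g) / L, 1.0 / L)
    return x
```

For robust PCA, for instance, f would be the data-fit term, g the Lipschitz penalty, and h an indicator or norm with a cheap prox.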

    A Coordinate Descent Approach to Atomic Norm Minimization

    Atomic norm minimization is of great interest in various applications of sparse signal processing, including super-resolution line-spectral estimation and signal denoising. In practice, atomic norm minimization (ANM) is formulated as a semi-definite program (SDP), which is generally hard to solve. This work introduces a low-complexity, matrix-free method for solving ANM. The method uses the framework of coordinate descent and exploits the sparsity-inducing nature of atomic-norm regularization. Specifically, an equivalent, non-convex formulation of ANM is first proposed. It is then proved that applying the coordinate descent framework to the non-convex formulation leads to convergence to the globally optimal point. For the case of a single measurement vector of length N in the discrete Fourier transform (DFT) basis, the complexity of each iteration in the coordinate descent procedure is O(N log N), rendering the proposed method efficient even for large-scale problems. The proposed coordinate descent framework can be readily modified to solve a variety of ANM problems, including multi-dimensional ANM with multiple measurement vectors. It is easy to implement and can essentially be applied to any atomic set as long as the corresponding rank-1 problem can be solved. Extensive numerical simulations verify that, for sparse problems, the proposed method is much faster than the alternating direction method of multipliers (ADMM) or a customized interior-point SDP solver.
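To convey the flavor of such coordinate-descent updates, the sketch below cycles through atoms (f_i, c_i), re-fits each against the current residual on a fine frequency grid, and soft-thresholds the coefficient. The grid search stands in for the paper's exact rank-1 subproblem, and the dense correlation shown here would be an FFT (the source of the O(N log N) per-iteration cost) in a real implementation.

```python
import numpy as np

def cd_anm_sketch(y, tau, n_atoms, n_sweeps=20, grid_size=4096):
    """Coordinate descent on 0.5*||y - sum_i c_i a(f_i)||^2 + tau*sum_i |c_i|
    with unit-norm complex-exponential atoms a(f) on a frequency grid."""
    n = len(y)
    A = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(grid_size))
               / grid_size) / np.sqrt(n)        # unit-norm atoms, one per column
    idx = np.zeros(n_atoms, dtype=int)          # grid index of each atom
    coefs = np.zeros(n_atoms, dtype=complex)
    for _ in range(n_sweeps):
        for i in range(n_atoms):
            # Residual with atom i removed
            r = y - sum(coefs[j] * A[:, idx[j]]
                        for j in range(n_atoms) if j != i)
            corr = A.conj().T @ r               # an FFT in a fast implementation
            idx[i] = int(np.argmax(np.abs(corr)))
            mag = max(np.abs(corr[idx[i]]) - tau, 0.0)   # soft-threshold |c_i|
            coefs[i] = mag * np.exp(1j * np.angle(corr[idx[i]]))
    return idx / grid_size, coefs               # frequencies in [0, 1)
```

With a unit-norm atom, the one-dimensional subproblem min_c 0.5*||r - c*a||^2 + tau*|c| is solved exactly by soft-thresholding the correlation a^H r, which is what each inner update does.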

    Rank-Sparsity Incoherence for Matrix Decomposition

    Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification and is NP-hard in general. In this paper we consider a convex optimization formulation for splitting the given matrix into its components, by minimizing a linear combination of the $\ell_1$ norm and the nuclear norm of the components. We develop a notion of rank-sparsity incoherence, expressed as an uncertainty principle between the sparsity pattern of a matrix and its row and column spaces, and use it to characterize both fundamental identifiability and (deterministic) sufficient conditions for exact recovery. Our analysis is geometric in nature, with the tangent spaces to the algebraic varieties of sparse and low-rank matrices playing a prominent role. When the sparse and low-rank matrices are drawn from certain natural random ensembles, we show that the sufficient conditions for exact recovery are satisfied with high probability. We conclude with simulation results on synthetic matrix decomposition problems.
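The convex program itself is a few lines of cvxpy; a minimal sketch, where the function name and the choice of trade-off parameter gamma are ours:

```python
import cvxpy as cp

def split_sparse_lowrank(M, gamma):
    """Decompose M into sparse S plus low-rank L by minimizing
    gamma*||S||_1 (entrywise) + ||L||_* subject to S + L = M."""
    S = cp.Variable(M.shape)
    L = cp.Variable(M.shape)
    prob = cp.Problem(
        cp.Minimize(gamma * cp.sum(cp.abs(S)) + cp.normNuc(L)),
        [S + L == M],
    )
    prob.solve()
    return S.value, L.value
```

On a synthetic M = S0 + L0 with S0 sparse and L0 low-rank, a suitable gamma recovers the pair exactly whenever the rank-sparsity incoherence condition of the paper holds.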