
    Recovering Jointly Sparse Signals via Joint Basis Pursuit

    This work considers recovery of signals that are sparse over two bases. For instance, a signal might be sparse in both time and frequency, or a matrix can be simultaneously low rank and sparse. To facilitate recovery, we consider minimizing the sum of the ℓ_1 norms corresponding to each basis, which is a tractable convex approach. We find novel optimality conditions which indicate a gain over traditional approaches, where ℓ_1 minimization is done over only one basis. Next, we analyze these optimality conditions for the particular case of time-frequency bases. Denoting sparsity in the first and second bases by k_1 and k_2 respectively, we show that, for a general class of signals, this approach requires as few as O(max{k_1, k_2} log log n) measurements for successful recovery, hence overcoming the classical requirement of Θ(min{k_1, k_2} log(n / min{k_1, k_2})) for ℓ_1 minimization when k_1 ≈ k_2. Extensive simulations show that our analysis is approximately tight.
    Comment: 8 pages, 1 figure, submitted to ISIT 201
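    As an illustration of the convex program described above (not the authors' code), the following sketch sets up the sum-of-ℓ_1-norms minimization in Python with cvxpy, using an orthonormal DCT as a real-valued stand-in for the frequency basis; the dimensions, the Gaussian measurement matrix, and the toy signal are arbitrary choices made for the example.

```python
import numpy as np
import cvxpy as cp
from scipy.fft import dct

rng = np.random.default_rng(0)
n, m = 128, 48                                 # ambient dimension, number of measurements
F = dct(np.eye(n), axis=0, norm="ortho")       # F @ x = orthonormal DCT of x

# Toy signal: a few spikes plus a few low-frequency atoms (illustrative only).
x0 = np.zeros(n)
x0[rng.choice(n, 3, replace=False)] = rng.standard_normal(3)
x0 += F.T[:, :3] @ rng.standard_normal(3)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
y = A @ x0

# Joint basis pursuit: minimize the sum of the two l1 norms under the measurement constraint.
x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm(x, 1) + cp.norm(F @ x, 1)), [A @ x == y])
problem.solve()
print("relative error:", np.linalg.norm(x.value - x0) / np.linalg.norm(x0))
```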

    Necessary and sufficient conditions of solution uniqueness in ℓ_1 minimization

    This paper shows that the solutions to various convex ℓ_1 minimization problems are unique if and only if a common set of conditions is satisfied. This result applies broadly to the basis pursuit model, basis pursuit denoising model, Lasso model, as well as other ℓ_1 models that either minimize f(Ax − b) or impose the constraint f(Ax − b) ≤ σ, where f is a strictly convex function. For these models, this paper proves that, given a solution x* and defining I = supp(x*) and s = sign(x*_I), x* is the unique solution if and only if A_I has full column rank and there exists y such that A_I^T y = s and |a_i^T y| < 1 for i ∉ I. This condition was previously known to be sufficient for the basis pursuit model to have a unique solution supported on I. Indeed, it is also necessary, and it applies to a variety of other ℓ_1 models. The paper also discusses ways to recognize unique solutions and verify the uniqueness conditions numerically.
    Comment: 6 pages; revised version; submitted
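    The uniqueness condition above can be checked numerically: verify that A_I has full column rank, then minimize the largest off-support correlation |a_i^T y| over all y satisfying A_I^T y = s and test whether the optimum is strictly below 1. The sketch below (not from the paper) does this with cvxpy; the helper name certify_unique, the support tolerance, and the small random example are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

def certify_unique(A, x_star, tol=1e-9):
    """Check the stated uniqueness condition for a candidate solution x_star."""
    I = np.flatnonzero(np.abs(x_star) > tol)       # support of x_star
    Ic = np.setdiff1d(np.arange(A.shape[1]), I)    # complement of the support
    A_I, A_Ic = A[:, I], A[:, Ic]
    s = np.sign(x_star[I])

    full_rank = np.linalg.matrix_rank(A_I) == len(I)

    # Search for a dual certificate: A_I^T y = s with max_{i not in I} |a_i^T y| < 1.
    y = cp.Variable(A.shape[0])
    problem = cp.Problem(cp.Minimize(cp.norm(A_Ic.T @ y, "inf")), [A_I.T @ y == s])
    problem.solve()
    return full_rank and problem.status == "optimal" and problem.value < 1

# Small example: a sparse vector and a random Gaussian measurement matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_star = np.zeros(60)
x_star[:3] = 1.0
print(certify_unique(A, x_star))
```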

    Self-Calibration and Biconvex Compressive Sensing

    The design of high-precision sensing devices becomes ever more difficult and expensive. At the same time, the need for precise calibration of these devices (ranging from tiny sensors to space telescopes) manifests itself as a major roadblock in many scientific and technological endeavors. To achieve optimal performance of advanced high-performance sensors one must carefully calibrate them, which is often difficult or even impossible to do in practice. In this work we bring together three seemingly unrelated concepts, namely Self-Calibration, Compressive Sensing, and Biconvex Optimization. The idea behind self-calibration is to equip a hardware device with a smart algorithm that can compensate automatically for the lack of calibration. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where both x and the diagonal matrix D (which models the calibration error) are unknown. By "lifting" this biconvex inverse problem we arrive at a convex optimization problem. By exploiting sparsity in the signal model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently via linear programming. Applications in array calibration and wireless communications are discussed and numerical simulations are presented, confirming and complementing our theoretical analysis.
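    A rough sketch of the lifting idea follows (an assumption-laden illustration, not the SparseLift implementation from the paper): writing D = diag(B h) for a known subspace matrix B, each measurement becomes y_i = b_i^T (h x^T) a_i, which is linear in the lifted matrix X = h x^T, so one can minimize the entrywise ℓ_1 norm of X subject to those constraints and then factor the result back into h and x. The dimensions, the random A and B, and the SVD-based recovery step are choices made for the example.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, n, p, k = 64, 128, 3, 4      # measurements, signal dimension, calibration subspace dim, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)
B = rng.standard_normal((m, p)) / np.sqrt(m)   # known subspace for the calibration parameters
h0 = rng.standard_normal(p)                    # unknown calibration parameters, D = diag(B @ h0)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = (B @ h0) * (A @ x0)                        # y_i = (b_i^T h0)(a_i^T x0)

# Lift: X = h x^T, so y_i = b_i^T X a_i is linear in X; minimize the entrywise l1 norm of X.
X = cp.Variable((p, n))
constraints = [cp.sum(cp.multiply(np.outer(B[i], A[i]), X)) == y[i] for i in range(m)]
problem = cp.Problem(cp.Minimize(cp.sum(cp.abs(X))), constraints)
problem.solve()

# A rank-one factorization of the lifted solution recovers h and x up to a scaling ambiguity.
U, S, Vt = np.linalg.svd(X.value)
h_hat, x_hat = np.sqrt(S[0]) * U[:, 0], np.sqrt(S[0]) * Vt[0]
```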

    Low rank representations of matrices using nuclear norm heuristics

    2014 Summer. The pursuit of low-dimensional structure from high-dimensional data leads in many instances to finding the lowest-rank matrix among a parameterized family of matrices. In its most general setting, this problem is NP-hard. Different heuristics have been introduced for approaching the problem. Among them is the nuclear norm heuristic for rank minimization. One aspect of this thesis is the application of the nuclear norm heuristic to the Euclidean distance matrix completion problem. As a special case, the approach is applied to the graph embedding problem. More generally, semidefinite programming, convex optimization, and the nuclear norm heuristic are applied to the graph embedding problem in order to extract invariants such as the chromatic number, R^n embeddability, and Borsuk embeddability. In addition, we apply related techniques to decompose a matrix into components which simultaneously minimize a linear combination of the nuclear norm and the spectral norm. In the case when the Euclidean distance matrix is the distance matrix of a complete k-partite graph, it is shown that the nuclear norm of the associated positive semidefinite matrix can be evaluated in terms of the second elementary symmetric polynomial evaluated at the partition. We prove that for k-partite graphs the maximum value of the nuclear norm of the associated positive semidefinite matrix is attained when there is an equal number of vertices in each set of the partition. We use this result to determine a lower bound on the chromatic number of the graph. Finally, we describe a convex optimization approach to the decomposition of a matrix into two components using the nuclear norm and spectral norm.
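    The nuclear-norm/spectral-norm decomposition mentioned above can be written as a small convex program. The sketch below (not from the thesis) splits a matrix M into X + Y while penalizing ‖X‖_* + γ‖Y‖_2 with cvxpy; the example matrix, the weight γ, and the use of an SDP-capable solver such as SCS are assumptions made for the illustration.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
M = rng.standard_normal((20, 20))   # matrix to be decomposed (arbitrary example data)
gamma = 2.0                         # trade-off weight between the two norms

X = cp.Variable(M.shape)            # component penalized by the nuclear norm
Y = cp.Variable(M.shape)            # component penalized by the spectral norm
problem = cp.Problem(cp.Minimize(cp.norm(X, "nuc") + gamma * cp.sigma_max(Y)),
                     [X + Y == M])
problem.solve(solver=cp.SCS)        # any SDP-capable solver works here
print("numerical rank of the nuclear-norm component:",
      np.linalg.matrix_rank(X.value, tol=1e-6))
```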

    Sharp MSE Bounds for Proximal Denoising

    Denoising has to do with estimating a signal x_0 from its noisy observations y = x_0 + z. In this paper, we focus on the “structured denoising problem,” where the signal x_0 possesses a certain structure and z has independent normally distributed entries with mean zero and variance σ^2. We employ a structure-inducing convex function f(⋅) and solve min_x {1/2 ∥y−x∥^2_2 + σλf(x)} to estimate x_0, for some λ > 0. Common choices for f(⋅) include the ℓ_1 norm for sparse vectors, the ℓ_1−ℓ_2 norm for block-sparse signals, and the nuclear norm for low-rank matrices. The metric we use to evaluate the performance of an estimate x∗ is the normalized mean-squared error NMSE(σ) = E∥x∗ − x_0∥^2_2/σ^2. We show that NMSE is maximized as σ→0 and we find the exact worst-case NMSE, which has a simple geometric interpretation: the mean-squared distance of a standard normal vector to the λ-scaled subdifferential λ∂f(x_0). When λ is optimally tuned to minimize the worst-case NMSE, our results can be related to the constrained denoising problem min_{f(x)≤f(x_0)} ∥y−x∥_2. The paper also connects these results to the generalized LASSO problem, in which one solves min_{f(x)≤f(x_0)} ∥y−Ax∥_2 to estimate x_0 from noisy linear observations y = Ax_0 + z. We show that certain properties of the LASSO problem are closely related to the denoising problem. In particular, we characterize the normalized LASSO cost and show that it exhibits a “phase transition” as a function of the number of observations. We also provide an order-optimal bound for the LASSO error in terms of the mean-squared distance. Our results are significant in two ways. First, we find a simple formula for the performance of a general convex estimator. Second, we establish a connection between the denoising and linear inverse problems.
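    For the special case f(⋅) = ℓ_1 norm, the proximal denoiser above has a closed-form solution, entrywise soft-thresholding at level σλ, which makes the NMSE easy to estimate by Monte Carlo. The sketch below is an illustration, not the paper's code; the dimensions, sparsity level, trial count, and the choice λ = sqrt(2 log(n/k)) are standard but arbitrary choices for the example.

```python
import numpy as np

def soft_threshold(y, t):
    """Closed-form minimizer of 0.5*||y - x||_2^2 + t*||x||_1, applied entrywise."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

rng = np.random.default_rng(3)
n, k, sigma = 1000, 50, 1e-3
lam = np.sqrt(2 * np.log(n / k))    # a standard (not necessarily optimal) tuning of lambda
x0 = np.zeros(n)
x0[:k] = 1.0                        # k-sparse signal

# Monte Carlo estimate of the normalized mean-squared error at small sigma.
trials = 200
nmse = 0.0
for _ in range(trials):
    y = x0 + sigma * rng.standard_normal(n)
    x_hat = soft_threshold(y, sigma * lam)
    nmse += np.sum((x_hat - x0) ** 2) / (sigma**2 * trials)
print("empirical NMSE at small sigma:", nmse)
```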