
    Isotropically Random Orthogonal Matrices: Performance of LASSO and Minimum Conic Singular Values

    Recently, the precise performance of the Generalized LASSO algorithm for recovering structured signals from compressed noisy measurements, obtained via i.i.d. Gaussian matrices, has been characterized. The analysis is based on a framework introduced by Stojnic and relies heavily on Gordon's Gaussian min-max theorem (GMT), a comparison principle for Gaussian processes. As a result, corresponding characterizations for other ensembles of measurement matrices have not been developed. In this work, we analyze the corresponding performance of the ensemble of isotropically random orthogonal (i.r.o.) measurements. We consider the constrained version of the Generalized LASSO and derive a sharp characterization of its normalized squared error in the large-system limit. Compared to its Gaussian counterpart, our result analytically confirms the superior performance of the i.r.o. ensemble. Our second result derives an asymptotic lower bound on the minimum conic singular values of i.r.o. matrices; this bound is larger than the corresponding bound for Gaussian matrices. To prove our results, we express i.r.o. matrices in terms of Gaussian ones and show that, with some modifications, the GMT framework is still applicable.
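    To make the ensemble concrete, the sketch below samples an i.r.o. measurement matrix in numpy: take the Q factor of the QR decomposition of a square Gaussian matrix, apply a sign correction so the distribution is Haar-uniform, and keep m of its rows. The sizes and the helper name iro_matrix are illustrative, not from the paper.

    ```python
    import numpy as np

    def iro_matrix(m, n, rng=np.random.default_rng(0)):
        """Sample an m x n isotropically random (partial) orthogonal matrix."""
        g = rng.standard_normal((n, n))
        q, r = np.linalg.qr(g)
        q = q * np.sign(np.diag(r))  # sign fix makes Q exactly Haar-distributed
        return q[:m, :]              # m orthonormal rows, isotropically oriented
    ```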

    Optimal Placement of Distributed Energy Storage in Power Networks

    We formulate the problem of optimal placement, sizing and control of storage devices in a power network to minimize generation costs, with the intent of load shifting. We assume deterministic demand, a linearized DC-approximate power flow model and a fixed available storage budget. Our main result proves that when the generation costs are convex and nondecreasing, there always exists an optimal storage capacity allocation that places zero storage at generation-only buses connecting to the rest of the network via single links. This holds regardless of the demand profiles, generation capacities, line-flow limits and characteristics of the storage technologies. Through a counterexample, we illustrate that this result does not hold in general for generation buses with multiple connections. For specific network topologies, we also characterize the dependence of the optimal generation cost on the available storage budget, generation capacities and flow constraints.
    Comment: 15 pages, 9 figures; generalized result to include line losses in Section 4
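    To illustrate the kind of convex program involved, here is a toy storage sizing-and-control problem in cvxpy, with lossless storage at each bus, a quadratic generation cost (convex and nondecreasing on nonnegative generation) and a shared capacity budget. All data are made up, and the paper's DC network-flow constraints are omitted for brevity.

    ```python
    import cvxpy as cp
    import numpy as np

    B, T = 3, 24                              # toy instance: 3 buses, 24 periods
    demand = np.abs(np.random.default_rng(1).normal(1.0, 0.3, (B, T)))
    budget = 2.0                              # fixed total storage budget

    g = cp.Variable((B, T), nonneg=True)      # generation at each bus and time
    s = cp.Variable((B, T + 1), nonneg=True)  # stored energy (state of charge)
    c = cp.Variable(B, nonneg=True)           # storage capacity placed per bus

    constraints = [s[:, 0] == 0, cp.sum(c) <= budget]
    for t in range(T):
        constraints += [
            # lossless per-bus balance: excess generation charges the storage
            s[:, t + 1] == s[:, t] + g[:, t] - demand[:, t],
            s[:, t + 1] <= c,                 # respect the placed capacity
        ]

    prob = cp.Problem(cp.Minimize(cp.sum_squares(g)), constraints)
    prob.solve()
    print(c.value)                            # optimal capacity allocation
    ```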

    The Squared-Error of Generalized LASSO: A Precise Analysis

    We consider the problem of estimating an unknown signal $x_0$ from noisy linear observations $y = Ax_0 + z \in \mathbb{R}^m$. In many practical instances, $x_0$ has a certain structure that can be captured by a structure-inducing convex function $f(\cdot)$. For example, the $\ell_1$ norm can be used to encourage a sparse solution. To estimate $x_0$ with the aid of $f(\cdot)$, we consider the well-known LASSO method and provide a sharp characterization of its performance. We assume the entries of the measurement matrix $A$ and the noise vector $z$ have zero-mean normal distributions with variances $1$ and $\sigma^2$, respectively. For the LASSO estimator $x^*$, we attempt to calculate the Normalized Square Error (NSE), defined as $\frac{\|x^*-x_0\|_2^2}{\sigma^2}$, as a function of the noise level $\sigma$, the number of observations $m$ and the structure of the signal. We show that the structure of the signal $x_0$ and the choice of the function $f(\cdot)$ enter the error formulae through the summary parameters $D(\mathrm{cone})$ and $D(\lambda)$, which are defined as the Gaussian squared distances to the subdifferential cone and to the $\lambda$-scaled subdifferential, respectively. The first LASSO estimator assumes a priori knowledge of $f(x_0)$ and is given by $\arg\min_{x}\{\|y-Ax\|_2 \ \text{subject to} \ f(x)\leq f(x_0)\}$. We prove that its worst-case NSE is achieved as $\sigma\rightarrow 0$ and concentrates around $\frac{D(\mathrm{cone})}{m-D(\mathrm{cone})}$. Secondly, we consider $\arg\min_{x}\{\|y-Ax\|_2+\lambda f(x)\}$ for some $\lambda\geq 0$; this time the NSE formula depends on the choice of $\lambda$ and is given by $\frac{D(\lambda)}{m-D(\lambda)}$. We then establish a mapping between this and the third estimator, $\arg\min_{x}\{\frac{1}{2}\|y-Ax\|_2^2+\lambda f(x)\}$. Finally, for a number of important structured signal classes, we translate our abstract formulae to closed-form upper bounds on the NSE.
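    When $f$ is the $\ell_1$ norm, the summary parameter $D(\lambda)$ admits a simple Monte Carlo estimate: the $\lambda$-scaled subdifferential at $x_0$ is the singleton $\lambda\,\mathrm{sign}(x_{0,i})$ on the support and the interval $[-\lambda,\lambda]$ off it, so the squared distance of a standard Gaussian vector decomposes coordinate-wise. A minimal sketch under those assumptions (function name illustrative):

    ```python
    import numpy as np

    def D_lambda_l1(x0, lam, trials=10_000, rng=np.random.default_rng(0)):
        """Monte Carlo estimate of D(lambda): expected squared distance of a
        standard Gaussian vector to the lambda-scaled subdifferential of the
        l1-norm at x0."""
        supp = x0 != 0
        g = rng.standard_normal((trials, x0.size))
        d2 = np.where(
            supp,
            (g - lam * np.sign(x0)) ** 2,           # singleton on the support
            np.maximum(np.abs(g) - lam, 0.0) ** 2,  # interval [-lam, lam] off it
        )
        return d2.sum(axis=1).mean()

    # predicted asymptotic NSE of the second estimator: D(lam) / (m - D(lam))
    ```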

    Simple Error Bounds for Regularized Noisy Linear Inverse Problems

    Consider estimating a structured signal $\mathbf{x}_0$ from linear, underdetermined and noisy measurements $\mathbf{y}=\mathbf{A}\mathbf{x}_0+\mathbf{z}$, via solving a variant of the lasso algorithm: $\hat{\mathbf{x}}=\arg\min_\mathbf{x}\{\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2+\lambda f(\mathbf{x})\}$. Here, $f$ is a convex function aiming to promote the structure of $\mathbf{x}_0$, say the $\ell_1$-norm to promote sparsity or the nuclear norm to promote low-rankness. We assume that the entries of $\mathbf{A}$ are independent and normally distributed and make no assumptions on the noise vector $\mathbf{z}$, other than it being independent of $\mathbf{A}$. Under this generic setup, we derive a general, non-asymptotic and rather tight upper bound on the $\ell_2$-norm of the estimation error $\|\hat{\mathbf{x}}-\mathbf{x}_0\|_2$. Our bound is geometric in nature and obeys a simple formula; the roles of $\lambda$, $f$ and $\mathbf{x}_0$ are all captured by a single summary parameter $\delta(\lambda\partial f(\mathbf{x}_0))$, termed the Gaussian squared distance to the scaled subdifferential. We connect our result to the literature and verify its validity through simulations.
    Comment: 6 pages, 2 figures
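    For concreteness, the estimator analyzed here (note the unsquared $\ell_2$ residual) is straightforward to set up in cvxpy with $f = \ell_1$; the problem sizes and noise level below are an illustrative toy, not the paper's experiments.

    ```python
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k, lam = 200, 100, 10, 3.0            # hypothetical dimensions
    x0 = np.zeros(n); x0[:k] = rng.standard_normal(k)
    A = rng.standard_normal((m, n))             # i.i.d. Gaussian measurements
    z = 0.1 * rng.standard_normal(m)            # any noise independent of A
    y = A @ x0 + z

    x = cp.Variable(n)
    # lasso variant with unsquared l2 residual plus lambda * f(x)
    cp.Problem(cp.Minimize(cp.norm(y - A @ x, 2) + lam * cp.norm(x, 1))).solve()
    err = np.linalg.norm(x.value - x0)          # the error the bound controls
    ```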

    Simple Bounds for Noisy Linear Inverse Problems with Exact Side Information

    This paper considers the linear inverse problem where we wish to estimate a structured signal $x$ from its corrupted observations. When the problem is ill-posed, it is natural to make use of a convex function $f(\cdot)$ that exploits the structure of the signal. For example, the $\ell_1$ norm can be used for sparse signals. To carry out the estimation, we consider two well-known convex programs: 1) the second-order cone program (SOCP), and 2) the Lasso. Assuming Gaussian measurements, we show that, if precise information about the value $f(x)$ or about the $\ell_2$-norm of the noise is available, one can do a particularly good job at estimation. In particular, the reconstruction error becomes proportional to the "sparsity" of the signal rather than to the ambient dimension of the noise vector. We connect our results to existing work and discuss the relation of our results to the standard least-squares problem. Our error bounds are non-asymptotic and sharp; they apply to arbitrary convex functions and do not assume any distribution on the noise.
    Comment: 13 pages
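    A minimal sketch of the SOCP variant, assuming the exact $\ell_2$-norm of the noise is known as side information (toy data, arbitrary sizes):

    ```python
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 128, 64, 8                        # hypothetical dimensions
    x0 = np.zeros(n); x0[:k] = 1.0              # k-sparse ground truth
    A = rng.standard_normal((m, n))             # Gaussian measurements
    z = 0.05 * rng.standard_normal(m)
    y = A @ x0 + z

    x = cp.Variable(n)
    # SOCP: minimize the structure-inducing function subject to a residual
    # ball whose radius is the (exactly known) noise norm
    cp.Problem(cp.Minimize(cp.norm(x, 1)),
               [cp.norm(y - A @ x, 2) <= np.linalg.norm(z)]).solve()
    err = np.linalg.norm(x.value - x0)
    ```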

    Analysis and Optimization of Aperture Design in Computational Imaging

    There is growing interest in the use of coded aperture imaging systems for a variety of applications. Using an analysis framework based on mutual information, we examine the fundamental limits of such systems, and the associated optimum aperture coding, under simple but meaningful propagation and sensor models. Among other results, we show that when thermal noise dominates, spectrally flat masks, which have 50% transmissivity, are optimal, but that when shot noise dominates, randomly generated masks with lower transmissivity offer greater performance. We also provide comparisons to classical pinhole cameras.
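    As one concrete example of a spectrally flat mask with roughly 50% transmissivity, a quadratic-residue (Legendre) sequence of prime length (here p = 31) has a nearly constant DFT magnitude at all nonzero frequencies; the construction below, which underlies classical URA patterns, is shown only as an illustration and is not claimed to be the paper's design.

    ```python
    import numpy as np

    def qr_mask(p):
        """Binary aperture of prime length p built from quadratic residues;
        roughly half the cells are open."""
        residues = {(i * i) % p for i in range(1, p)}
        return np.array([1 if i in residues else 0 for i in range(p)])

    mask = qr_mask(31)
    spectrum = np.abs(np.fft.fft(mask))
    print(mask.mean())       # ~0.48: near-50% transmissivity
    print(spectrum[1:].std())  # ~0: flat nonzero-frequency spectrum
    ```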