
    Power-Constrained Sparse Gaussian Linear Dimensionality Reduction over Noisy Channels

    In this paper, we investigate power-constrained sensing matrix design in a sparse Gaussian linear dimensionality reduction framework. Our study covers a single-terminal setup as well as a multi-terminal setup consisting of orthogonal or coherent multiple access channels (MAC). We adopt the mean square error (MSE) performance criterion for sparse source reconstruction in a system where both the source-to-sensor channel(s) and the sensor-to-decoder communication channel(s) are noisy. Our proposed sensing matrix design procedure relies upon minimizing a lower bound on the MSE in the single- and multi-terminal setups. We propose a three-stage sensing matrix optimization scheme that combines semidefinite relaxation (SDR) programming, a low-rank approximation problem, and power rescaling. Under certain conditions, we derive closed-form solutions to the proposed optimization procedure. Through numerical experiments with practical sparse reconstruction algorithms, we show the superiority of the proposed scheme over other relevant methods. This performance improvement is achieved at the price of higher computational complexity. Hence, to address the complexity burden, we present an equivalent stochastic optimization formulation of the problem of interest that can be solved approximately, while still providing superior performance over the popular methods.
    Comment: Accepted for publication in IEEE Transactions on Signal Processing (16 pages)
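    The last two stages of the three-stage scheme described above can be sketched numerically. This is only an illustrative sketch: the SDR stage is omitted, the matrix `Z` merely stands in for its output, and all names and dimensions are our own illustrative choices, not the paper's.

```python
import numpy as np

# Hedged sketch of the low-rank approximation and power-rescaling stages.
# We assume the (omitted) SDR stage has produced a positive semidefinite
# matrix Z that the sensing matrix A should approximate via A^T A ~ Z.
rng = np.random.default_rng(0)
n, m = 10, 4          # source dimension, number of measurements
P = 1.0               # total power budget on the sensing matrix

# Stand-in for the SDR output: a random PSD matrix of the right size.
B = rng.standard_normal((n, n))
Z = B @ B.T

# Low-rank approximation: keep the m largest eigenpairs of Z, so that
# A = diag(sqrt(lam)) @ U^T is a best rank-m factor in Frobenius norm.
lam, U = np.linalg.eigh(Z)            # eigenvalues in ascending order
idx = np.argsort(lam)[::-1][:m]       # indices of the m largest
A = np.sqrt(np.maximum(lam[idx], 0))[:, None] * U[:, idx].T   # m x n

# Power rescaling: enforce the power constraint trace(A A^T) = P.
A *= np.sqrt(P / np.trace(A @ A.T))

print(np.isclose(np.trace(A @ A.T), P))
```

    The eigendecomposition step gives the optimal rank-`m` factorization in Frobenius norm, and the final rescaling restores feasibility with respect to the power budget without changing the row space of `A`.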

    Improved rank bounds for design matrices and a new proof of Kelly's theorem

    We study the rank of complex sparse matrices in which the supports of different columns have small intersections. The rank of these matrices, called design matrices, was the focus of a recent work by Barak et al. (BDWY11), in which they were used to answer questions regarding point configurations. In this work we derive near-optimal rank bounds for these matrices and use them to obtain asymptotically tight bounds in many of the geometric applications. As a consequence of our improved analysis, we also obtain a new, linear algebraic, proof of Kelly's theorem, which is the complex analog of the Sylvester-Gallai theorem.
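    A toy numerical instance of such a design matrix can make the setting concrete. The construction below is our own illustration, not the paper's: the seven lines of the Fano plane give column supports of size 3 whose pairwise intersections have size at most 1, and generic entries on that support pattern yield a full-rank matrix, consistent with the kind of rank bound discussed above.

```python
import numpy as np
from itertools import combinations

n, k, t = 7, 3, 1   # rows, support size, max pairwise support overlap

# Column supports: the lines of the Fano plane on points {0,...,6};
# any two lines meet in exactly one point, so overlaps are at most t = 1.
supports = [(0,1,2), (0,3,4), (0,5,6), (1,3,5), (1,4,6), (2,3,6), (2,4,5)]
for s1, s2 in combinations(supports, 2):
    assert len(set(s1) & set(s2)) <= t

# Fill each support with generic (random) complex-representable entries.
rng = np.random.default_rng(1)
A = np.zeros((n, len(supports)))
for j, s in enumerate(supports):
    A[list(s), j] = rng.standard_normal(k)

# For generic entries on this support pattern the matrix has full rank.
print(np.linalg.matrix_rank(A))
```

    By Hall's theorem the point-line incidence structure admits a system of distinct representatives, so a generic choice of entries makes the determinant nonzero; the random fill is just one convenient generic choice.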

    Minimax risks for sparse regressions: Ultra-high-dimensional phenomenons

    Consider the standard Gaussian linear regression model $Y = X\theta + \epsilon$, where $Y \in \mathbb{R}^n$ is a response vector and $X \in \mathbb{R}^{n \times p}$ is a design matrix. Numerous works have been devoted to building efficient estimators of $\theta$ when $p$ is much larger than $n$. In such a situation, a classical approach amounts to assuming that $\theta_0$ is approximately sparse. This paper studies the minimax risks of estimation and testing over classes of $k$-sparse vectors $\theta$. These bounds shed light on the limitations due to high dimensionality. The results encompass the problem of prediction (estimation of $X\theta$), the inverse problem (estimation of $\theta_0$), and linear testing (testing $X\theta = 0$). Interestingly, an elbow effect occurs when the quantity $k\log(p/k)$ becomes large compared to $n$. Indeed, the minimax risks and hypothesis separation distances blow up in this ultra-high-dimensional setting. We also prove that even dimension-reduction techniques cannot provide satisfying results in an ultra-high-dimensional setting. Moreover, we compute the minimax risks when the variance of the noise is unknown. The knowledge of this variance is shown to play a significant role in the optimal rates of estimation and testing. All these minimax bounds provide a characterization of statistical problems that are so difficult that no procedure can provide satisfying results.
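    The elbow described above can be located numerically by comparing the effective dimension $k\log(p/k)$ with the sample size $n$. The specific numbers below are illustrative choices, not values from the paper.

```python
import math

n, p = 100, 10_000   # sample size and number of variables (illustrative)

def sparsity_barrier(k: int, p: int) -> float:
    """Effective dimension k*log(p/k) that controls the minimax rate."""
    return k * math.log(p / k)

# The regime flips from classical to ultra-high-dimensional once
# k*log(p/k) exceeds n, which is where the minimax risks blow up.
for k in (1, 5, 20, 50):
    d = sparsity_barrier(k, p)
    regime = "ultra-high-dimensional" if d > n else "classical"
    print(f"k={k:3d}  k*log(p/k)={d:8.1f}  ({regime})")
```

    With these illustrative values, small sparsity levels sit comfortably in the classical regime, while already at $k = 20$ the effective dimension exceeds $n = 100$ and the ultra-high-dimensional phenomenon kicks in.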

    Covering of Subspaces by Subspaces

    Lower and upper bounds on the size of a covering of the subspaces in the Grassmann graph $\mathcal{G}_q(n,r)$ by subspaces from the Grassmann graph $\mathcal{G}_q(n,k)$, $k \geq r$, are discussed. The problem is of interest from four points of view: coding theory, combinatorial designs, $q$-analogs, and projective geometry. In particular, we examine coverings based on lifted maximum rank distance codes, combined with spreads and a recursive construction. New constructions are given for $q = 2$ with $r = 2$ or $r = 3$. We discuss the density for some of these coverings. Tables of the best known coverings, for $q = 2$ and $5 \leq n \leq 10$, are presented. We present some questions concerning possible constructions of new coverings of smaller size.
    Comment: arXiv admin note: text overlap with arXiv:0805.352
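    The naive counting lower bound for such coverings is easy to compute: every $k$-subspace of $\mathbb{F}_q^n$ contains $\binom{k}{r}_q$ $r$-subspaces, so covering all $\binom{n}{r}_q$ of them requires at least $\lceil \binom{n}{r}_q / \binom{k}{r}_q \rceil$ $k$-subspaces. The sketch below implements only this trivial bound; the constructions and sharper bounds above are the paper's own.

```python
def gaussian_binomial(n: int, k: int, q: int) -> int:
    """Number of k-dimensional subspaces of F_q^n (q-binomial coefficient)."""
    if k < 0 or k > n:
        return 0
    num = den = 1
    for i in range(k):
        num *= q**(n - i) - 1
        den *= q**(i + 1) - 1
    return num // den

def covering_lower_bound(n: int, k: int, r: int, q: int) -> int:
    """Naive counting bound: ceil([n,r]_q / [k,r]_q) k-subspaces are needed."""
    a = gaussian_binomial(n, r, q)
    b = gaussian_binomial(k, r, q)
    return -(-a // b)   # ceiling division on exact integers

# Example: covering the 2-subspaces of F_2^6 by 3-subspaces.
print(covering_lower_bound(6, 3, 2, 2))   # 651 planes / 7 per solid = 93
```

    The product formula keeps everything in exact integer arithmetic, so the division is exact and the ceiling is reliable even for the larger parameters in the tables.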