    Channel Covariance Matrix Estimation via Dimension Reduction for Hybrid MIMO MmWave Communication Systems

    Hybrid massive MIMO structures with lower hardware complexity and power consumption have been considered a potential candidate for millimeter wave (mmWave) communications. Channel covariance information can be used for designing transmitter precoders, receiver combiners, channel estimators, etc. However, hybrid structures allow only a lower-dimensional signal to be observed, which complicates channel covariance matrix estimation. In this paper, we formulate channel covariance estimation as a structured low-rank matrix sensing problem via a Kronecker product expansion and use a low-complexity algorithm to solve it. Numerical results with uniform linear arrays (ULA) and uniform square planar arrays (USPA) are provided to demonstrate the effectiveness of the proposed method.
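    The dimension-reduction issue and the Kronecker-product reformulation can be illustrated with a minimal numpy sketch (not the paper's algorithm; the array sizes, the random combiner W, and the rank-2 toy covariance are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 16, 4          # receive antennas, RF chains (M << N)

# A toy low-rank channel covariance R_h (rank 2, for illustration only).
A = rng.standard_normal((N, 2)) + 1j * rng.standard_normal((N, 2))
R_h = A @ A.conj().T

# Hybrid combiner: only the M-dimensional signal y = W^H h is observed,
# so only the reduced covariance R_y = W^H R_h W is directly measurable.
W = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
R_y = W.conj().T @ R_h @ W

# Kronecker-product expansion: vec(W^H R_h W) = (W^T kron W^H) vec(R_h),
# i.e. recovering R_h is a linear inverse problem with a structured,
# low-rank unknown (M^2 equations, N^2 unknowns).
lhs = R_y.flatten(order="F")
rhs = np.kron(W.T, W.conj().T) @ R_h.flatten(order="F")
print(np.allclose(lhs, rhs))   # True
```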

    Off-the-Grid Line Spectrum Denoising and Estimation with Multiple Measurement Vectors

    Compressed sensing suggests that the number of samples required to reconstruct a signal can be greatly reduced if the signal is sparse in a known discrete basis, yet many real-world signals are sparse in a continuous dictionary. One example is the spectrally sparse signal, which is composed of a small number of spectral atoms with arbitrary frequencies on the unit interval. In this paper we study line spectrum denoising and estimation for an ensemble of spectrally sparse signals composed of the same set of continuous-valued frequencies, given their partial and noisy observations. Two approaches are developed based on atomic norm minimization and structured covariance estimation, both of which can be solved efficiently via semidefinite programming. The first approach estimates and denoises the set of signals from their partial and noisy observations via atomic norm minimization, and recovers the frequencies by examining the dual polynomial of the convex program. We characterize the optimality condition of the proposed algorithm and derive the expected convergence rate for denoising, demonstrating the benefit of including multiple measurement vectors. The second approach recovers the population covariance matrix from the partially observed sample covariance matrix by exploiting its low-rank Toeplitz structure, without recovering the signal ensemble. A performance guarantee is derived for a finite number of measurement vectors. The frequencies can then be recovered from the estimated covariance matrix via conventional spectral estimation methods such as MUSIC. Finally, numerical examples are provided to validate the favorable performance of the proposed algorithms, with comparisons against several existing approaches. (14 pages, 10 figures)
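    As a rough illustration of the final step of the second approach, the sketch below runs MUSIC on a (here, exactly known and noiseless) low-rank Toeplitz covariance; the signal model, grid resolution, and on-grid frequencies are simplifying assumptions rather than the paper's setup:

```python
import numpy as np

n, k = 32, 3                               # samples per vector, number of spectral atoms
freqs = np.array([0.120, 0.310, 0.470])    # true frequencies, placed on the search grid

# Toeplitz, rank-k covariance of a spectrally sparse ensemble: R = A diag(p) A^H.
A = np.exp(2j * np.pi * np.outer(np.arange(n), freqs))
R = A @ np.diag([1.0, 0.8, 0.5]) @ A.conj().T

# MUSIC: eigendecompose, keep the noise subspace, scan a fine frequency grid.
eigvals, eigvecs = np.linalg.eigh(R)
noise_space = eigvecs[:, :-k]              # eigenvectors of the n - k smallest eigenvalues
grid = np.linspace(0.0, 1.0, 1000, endpoint=False)
steering = np.exp(2j * np.pi * np.outer(np.arange(n), grid))
pseudospectrum = 1.0 / np.linalg.norm(noise_space.conj().T @ steering, axis=0) ** 2

# Picking the k largest values works here because the true frequencies lie
# exactly on the grid and the covariance is exact (no noise).
estimates = np.sort(grid[np.argsort(pseudospectrum)[-k:]])
print(estimates)                           # [0.12 0.31 0.47]
```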

    Alternating projections gridless covariance-based estimation for DOA

    We present a gridless sparse iterative covariance-based estimation method based on alternating projections for direction-of-arrival (DOA) estimation. Gridless DOA estimation is formulated as the reconstruction of a Toeplitz-structured low-rank matrix and is solved efficiently with alternating projections. The method improves resolution by promoting sparsity, handles single-snapshot data and coherent arrivals, and, with co-prime arrays, estimates more DOAs than the number of sensors. We evaluate the proposed method using simulations focusing on co-prime arrays. (5 pages; accepted at the 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021)
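    The generic alternating-projections idea, moving back and forth between rank-constrained positive semidefinite matrices and Hermitian Toeplitz matrices, can be sketched as below; this is a simplified illustration under assumed toy data and a fixed iteration count, not the authors' exact algorithm:

```python
import numpy as np
from scipy.linalg import toeplitz

def project_rank_psd(M, k):
    """Project onto positive semidefinite matrices of rank at most k."""
    w, V = np.linalg.eigh((M + M.conj().T) / 2)   # symmetrize, then eigendecompose
    w[np.argsort(w)[:-k]] = 0.0                   # keep only the k largest eigenvalues
    w = np.maximum(w, 0.0)                        # clip negatives to enforce PSD
    return (V * w) @ V.conj().T

def project_toeplitz(M):
    """Project a Hermitian matrix onto Hermitian Toeplitz matrices by averaging diagonals."""
    n = M.shape[0]
    col = np.array([np.mean(np.diag(M, -d)) for d in range(n)])
    return toeplitz(col, col.conj())

def alternating_projections(R_sample, k, iters=200):
    """Alternate between the rank-k PSD set and the Toeplitz set."""
    T = R_sample.copy()
    for _ in range(iters):
        T = project_toeplitz(project_rank_psd(T, k))
    return T

# Toy usage: denoise a noisy rank-2 Toeplitz covariance built from two arrivals.
rng = np.random.default_rng(0)
A = np.exp(2j * np.pi * np.outer(np.arange(12), [0.10, 0.33]))
R_noisy = A @ A.conj().T + 0.1 * rng.standard_normal((12, 12))
R_hat = alternating_projections(R_noisy, k=2)
```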

    User-Friendly Covariance Estimation for Heavy-Tailed Distributions

    We offer a survey of recent results on covariance estimation for heavy-tailed distributions. By unifying ideas scattered in the literature, we propose user-friendly methods that facilitate practical implementation. Specifically, we introduce element-wise and spectrum-wise truncation operators, as well as their M-estimator counterparts, to robustify the sample covariance matrix. Different from the classical notion of robustness characterized by the breakdown point, we focus on tail robustness, which is evidenced by the connection between nonasymptotic deviation bounds and the confidence level. The key observation is that the estimators need to adapt to the sample size, the dimensionality of the data, and the noise level to achieve an optimal tradeoff between bias and robustness. Furthermore, to facilitate practical use, we propose data-driven procedures that automatically calibrate the tuning parameters. We demonstrate applications to a series of structured models in high dimensions, including bandable and low-rank covariance matrices and sparse precision matrices. Numerical studies lend strong support to the proposed methods. (56 pages, 2 figures)
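    To make the element-wise truncation concrete, the following numpy sketch clips each entry-wise product at a threshold before averaging; the heavy-tailed toy data and the particular choice of the truncation level are assumptions, and the paper's data-driven calibration is not reproduced:

```python
import numpy as np

def truncated_covariance(X, tau):
    """Element-wise truncated covariance estimator.

    Each product X[i, j] * X[i, k] (for centered data X, shape n x p) is clipped
    to [-tau, tau] before averaging, which bounds the influence of heavy-tailed
    outliers on every entry of the estimate.
    """
    prods = X[:, :, None] * X[:, None, :]      # n x p x p outer products
    clipped = np.clip(prods, -tau, tau)        # truncation applied element-wise
    return clipped.mean(axis=0)

# Toy usage with heavy-tailed (Student-t) data.
rng = np.random.default_rng(3)
X = rng.standard_t(df=2.5, size=(500, 10))
X -= X.mean(axis=0)
# A rule-of-thumb scaling of tau like sqrt(n / log p), treated here as an assumption.
tau = np.sqrt(X.shape[0] / np.log(X.shape[1]))
S_robust = truncated_covariance(X, tau)
```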

    High-dimensional Statistical Inference: from Vector to Matrix

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications and has attracted much recent attention in many fields, including statistics, applied mathematics, and electrical engineering. In this thesis, we consider several problems, including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both the noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool that represents points in a polytope by convex combinations of sparse vectors. The technique is elementary but leads to sharp results. It is shown that, in compressed sensing, $\delta_k^A < 1/3$, $\delta_k^A + \theta_{k,k}^A < 1$, or $\delta_{tk}^A < \sqrt{(t-1)/t}$ for any given constant $t \ge 4/3$ guarantees the exact recovery of all $k$-sparse signals in the noiseless case through constrained $\ell_1$ minimization; similarly, in affine rank minimization, $\delta_r^{\mathcal{M}} < 1/3$, $\delta_r^{\mathcal{M}} + \theta_{r,r}^{\mathcal{M}} < 1$, or $\delta_{tr}^{\mathcal{M}} < \sqrt{(t-1)/t}$ ensures the exact reconstruction of all matrices with rank at most $r$ in the noiseless case via constrained nuclear norm minimization. Moreover, for any $\epsilon > 0$, $\delta_k^A < 1/3 + \epsilon$, $\delta_k^A + \theta_{k,k}^A < 1 + \epsilon$, or $\delta_{tk}^A < \sqrt{(t-1)/t} + \epsilon$ is not sufficient to guarantee the exact recovery of all $k$-sparse signals for large $k$; a similar result also holds for matrix recovery. In addition, the conditions $\delta_k^A < 1/3$, $\delta_k^A + \theta_{k,k}^A < 1$, $\delta_{tk}^A < \sqrt{(t-1)/t}$ and $\delta_r^{\mathcal{M}} < 1/3$, $\delta_r^{\mathcal{M}} + \theta_{r,r}^{\mathcal{M}} < 1$, $\delta_{tr}^{\mathcal{M}} < \sqrt{(t-1)/t}$ are also shown to be sufficient for stable recovery of approximately sparse signals and low-rank matrices, respectively, in the noisy case. In the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained, and the proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically. The techniques and main results developed in this part also have implications for other related statistical problems. An application to the estimation of spiked covariance matrices from one-dimensional random projections is considered; the results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections. In the third part of the thesis, we consider another setting of low-rank matrix completion.
Current literature on matrix completion focuses primarily on independent sampling models, under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, the proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix is observed. We provide theoretical justification for the proposed SMC method and derive a lower bound for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite samples under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extents of genomic measurements, which enables us to construct more accurate prediction rules for ovarian cancer survival.
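    The block structure exploited by SMC can be illustrated in the exact low-rank case, where the missing block is determined by the observed blocks; the sketch below is a simplified numpy illustration under assumed dimensions, exact rank, and a plain pseudoinverse, not the paper's procedure for noisy, approximately low-rank matrices:

```python
import numpy as np

rng = np.random.default_rng(4)
p1, p2, r = 40, 30, 3
m1, m2 = 12, 10                       # observed rows and observed columns

# A rank-r matrix; the first m1 rows and first m2 columns are observed.
M = rng.standard_normal((p1, r)) @ rng.standard_normal((r, p2))
M11, M12 = M[:m1, :m2], M[:m1, m2:]
M21 = M[m1:, :m2]                     # M22 = M[m1:, m2:] is missing by design

# If rank(M11) equals rank(M), the missing block is determined exactly:
# M22 = M21 @ pinv(M11) @ M12.  (A noise-robust method would replace the
# pseudoinverse with a carefully truncated inverse.)
M22_hat = M21 @ np.linalg.pinv(M11) @ M12
print(np.allclose(M22_hat, M[m1:, m2:]))   # True
```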

    Joint Covariance Estimation with Mutual Linear Structure

    We consider the problem of joint estimation of structured covariance matrices. Assuming the structure is unknown, estimation is achieved using heterogeneous training sets: given groups of measurements coming from centered populations with different covariances, our aim is to determine the mutual structure of these covariance matrices and to estimate them. Supposing that the covariances span a low-dimensional affine subspace in the space of symmetric matrices, we develop a new efficient algorithm that discovers this structure and uses it to improve the estimation. Our technique is based on the application of principal component analysis in the matrix space. We also derive an upper performance bound for the proposed algorithm in the Gaussian scenario and compare it with the Cramér-Rao lower bound. Numerical simulations are presented to illustrate the performance benefits of the proposed method.
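    A minimal numpy sketch of the underlying idea, PCA on the vectorized sample covariances followed by projection onto the fitted affine subspace, is given below; the subspace dimension and toy data are assumptions, and this is not the authors' exact estimator:

```python
import numpy as np

def joint_structure_estimate(sample_covs, dim):
    """Fit a dim-dimensional affine subspace to the vectorized sample covariances
    (PCA in matrix space) and project each sample covariance onto it."""
    K, p, _ = sample_covs.shape
    X = sample_covs.reshape(K, p * p)            # vectorize each covariance matrix
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:dim]                             # principal directions in matrix space
    proj = mean + (X - mean) @ basis.T @ basis   # project onto the affine subspace
    return proj.reshape(K, p, p)

# Toy usage: three groups whose true covariances lie on a line in matrix space.
rng = np.random.default_rng(0)
p = 6
B0, B1 = np.eye(p), np.diag(np.arange(1.0, p + 1))
true_covs = [B0 + a * B1 for a in (0.5, 1.0, 2.0)]
data = [rng.multivariate_normal(np.zeros(p), C, size=200) for C in true_covs]
sample_covs = np.stack([Y.T @ Y / len(Y) for Y in data])
improved = joint_structure_estimate(sample_covs, dim=1)
```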