3 research outputs found

    Vector Approximate Message Passing Algorithm for Structured Perturbed Sensing Matrix

    In this paper, we consider a general form of noisy compressive sensing (CS) in which the sensing matrix is not precisely known. Such cases arise when there are imperfections or unknown calibration parameters in the measurement process. In particular, the sensing matrix may have some structure, which makes the perturbation follow a fixed pattern. While previous work has focused on extending the approximate message passing (AMP) and LASSO algorithms to handle independent and identically distributed (i.i.d.) perturbations, we propose a robust variant of the vector approximate message passing (VAMP) algorithm, built on the recent VAMP algorithm, that handles structured perturbations. The performance of the robust version of VAMP is demonstrated numerically. Comment: 6 pages, 3 figures
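    The measurement model described here can be illustrated with a small sketch. This is not the paper's VAMP algorithm; it only sets up a structured perturbation (a fixed pattern B scaled by an unknown calibration parameter theta, both hypothetical) and runs a naive ISTA baseline that ignores the perturbation, which is the mismatch a robust algorithm would address.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5          # signal length, measurements, sparsity

# Sparse ground-truth signal
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Nominal sensing matrix plus a structured perturbation: the error
# follows a fixed known pattern B scaled by an unknown calibration
# parameter theta (illustrative choice, not the paper's model).
A = rng.standard_normal((m, n)) / np.sqrt(m)
B = rng.standard_normal((m, n)) / np.sqrt(m)
theta = 0.1
y = (A + theta * B) @ x + 0.01 * rng.standard_normal(m)

# Naive baseline: ISTA using only the nominal A, ignoring the perturbation.
def ista(A, y, lam=0.02, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    xh = np.zeros(A.shape[1])
    for _ in range(iters):
        z = xh - (A.T @ (A @ xh - y)) / L  # gradient step
        xh = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return xh

xh = ista(A, y)
print("relative error:", np.linalg.norm(xh - x) / np.linalg.norm(x))
```

    The residual error of this mismatched baseline is what motivates algorithms that model the perturbation explicitly.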

    Asymptotically Optimal One-Bit Quantizer Design for Weak-signal Detection in Generalized Gaussian Noise and Lossy Binary Communication Channel

    In this paper, quantizer design for weak-signal detection under an arbitrary binary channel in generalized Gaussian noise is studied. Since the performances of the generalized likelihood ratio test (GLRT) and Rao test are asymptotically characterized by the noncentral chi-squared probability density function (PDF), the threshold design problem can be formulated as a noncentrality parameter maximization problem. The theoretical properties of the noncentrality parameter with respect to the threshold are investigated, and the optimal threshold is shown to be found in polynomial time with an appropriate numerical algorithm and proper initializations. In certain cases, the optimal threshold is proved to be zero. Finally, numerical experiments are conducted to substantiate the theoretical analysis.
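    The shape of this optimization can be sketched with a simpler, related objective: for a one-bit quantizer sign(w - tau), the Fisher information about a small location shift is f(tau)^2 / (F(tau)(1 - F(tau))), and maximizing it over the threshold is a one-dimensional search. This is an illustrative stand-in (shown for Gaussian noise, the beta = 2 member of the generalized Gaussian family), not the paper's exact noncentrality objective or channel model.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Fisher information about a small location shift delivered by the
# one-bit quantizer sign(w - tau): f(tau)^2 / (F(tau) * (1 - F(tau))).
def one_bit_info(tau):
    f = norm.pdf(tau)
    F = norm.cdf(tau)
    return f**2 / (F * (1.0 - F))

# Maximize over the threshold by minimizing the negative objective.
res = minimize_scalar(lambda t: -one_bit_info(t), bounds=(-3.0, 3.0),
                      method="bounded")
print("optimal threshold:", res.x)   # for symmetric Gaussian noise this is ~0
```

    The zero-threshold optimum recovered here mirrors the "in certain cases, the optimal threshold is proved to be zero" result; for asymmetric channels or other noise shapes the maximizer generally moves away from zero.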

    Robust Least Squares for Quantized Data Matrices

    In this paper, we formulate and solve a robust least squares problem for a system of linear equations subject to quantization error in the data matrix. Ordinary least squares fails to consider uncertainty in the operator, modeling all noise in the observed signal. Total least squares accounts for uncertainty in the data matrix, but necessarily increases the condition number of the operator compared to ordinary least squares. Tikhonov regularization, or ridge regression, is frequently employed to combat ill-conditioning, but requires parameter tuning, which presents a host of challenges and places strong assumptions on parameter prior distributions. The proposed method also requires selection of a parameter, but it can be chosen in a natural way; e.g., a matrix rounded to the 4th digit uses an uncertainty bounding parameter of 0.5e-4. We show here that our robust method is theoretically appropriate, tractable, and performs favorably against ordinary and total least squares. Comment: 10 pages, 5 figures
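    The "natural parameter" idea can be sketched as follows. This is not necessarily the paper's formulation: it uses one classical worst-case reduction for Frobenius-norm-bounded matrix uncertainty (min over x of ||Ax - b|| + delta ||x||, in the style of El Ghaoui and Lebret), with delta derived from the 0.5e-4 per-entry rounding bound mentioned in the abstract.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
m, n = 50, 10
A_true = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A_true @ x_true + 0.01 * rng.standard_normal(m)

# Quantize the data matrix to 4 digits: each entry is off by at most 0.5e-4,
# so the Frobenius norm of the quantization error is at most 0.5e-4 * sqrt(mn).
A_q = np.round(A_true, 4)
delta = 0.5e-4 * np.sqrt(m * n)

# Ordinary least squares on the quantized matrix, for comparison.
x_ols = np.linalg.lstsq(A_q, b, rcond=None)[0]

# Worst-case (robust) objective for Frobenius-bounded uncertainty:
#   min_x  ||A_q x - b||_2 + delta * ||x||_2
f = lambda x: np.linalg.norm(A_q @ x - b) + delta * np.linalg.norm(x)
x_rob = minimize(f, x_ols, method="BFGS").x

print("robust err:", np.linalg.norm(x_rob - x_true))
print("OLS err:   ", np.linalg.norm(x_ols - x_true))
```

    With a well-conditioned matrix the two solutions nearly coincide; the robust term matters when A_q is ill-conditioned, where the worst-case penalty curbs the noise amplification that plagues ordinary and total least squares.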