
    Quantized Estimation of Gaussian Sequence Models in Euclidean Balls

    A central result in statistical theory is Pinsker's theorem, which characterizes the minimax rate in the normal means model of nonparametric estimation. In this paper, we present an extension of Pinsker's theorem in which estimation is carried out under storage or communication constraints. In particular, we place limits on the number of bits used to encode an estimator, and analyze the excess risk in terms of this constraint, the signal size, and the noise level. We give sharp upper and lower bounds for the case of a Euclidean ball, which establishes the Pareto-optimal minimax tradeoff between storage and risk in this setting. Comment: Appearing at NIPS 2014
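
    To make the storage-constrained setting concrete, here is a minimal numerical sketch: a linear shrinkage estimate in the Gaussian sequence model is encoded with a crude uniform scalar quantizer, and the excess risk is tracked against the per-coordinate bit budget. The quantizer, the parameter values, and the helper names are illustrative assumptions, not the paper's optimal encoding scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 512                      # dimension of the normal means model
c = 2.0                      # radius of the Euclidean ball ||theta|| <= c
sigma = 1.0 / np.sqrt(n)     # noise level

# True signal placed inside the Euclidean ball.
theta = rng.normal(size=n)
theta *= c / np.linalg.norm(theta)

# Observe y = theta + sigma * z  (Gaussian sequence model).
y = theta + sigma * rng.normal(size=n)

# Unconstrained baseline: linear shrinkage toward zero.
shrink = c**2 / (c**2 + n * sigma**2)
est = shrink * y

def quantize(v, bits_per_coord, v_max):
    """Uniform scalar quantizer on [-v_max, v_max] -- a crude stand-in
    for an optimal rate-constrained encoding."""
    levels = 2 ** bits_per_coord
    step = 2 * v_max / levels
    idx = np.clip(np.floor((v + v_max) / step), 0, levels - 1)
    return -v_max + (idx + 0.5) * step

for bits in [1, 2, 4, 8]:
    est_q = quantize(est, bits, v_max=c)
    risk = np.sum((est - theta) ** 2)
    risk_q = np.sum((est_q - theta) ** 2)
    print(f"{bits} bits/coord: risk {risk:.4f} -> quantized {risk_q:.4f} "
          f"(excess {risk_q - risk:+.4f})")
```

    As the bit budget grows, the excess risk of the quantized estimator shrinks toward zero, which is the tradeoff the paper characterizes sharply.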

    Robust one-bit compressed sensing with partial circulant matrices

    We present optimal sample complexity estimates for one-bit compressed sensing problems in a realistic scenario: the procedure uses a structured matrix (a randomly sub-sampled circulant matrix) and is robust to analog pre-quantization noise as well as to adversarial bit corruptions in the quantization process. Our results imply that quantization is not a statistically expensive procedure in the presence of nontrivial analog noise: recovery requires the same sample size one would have needed had the measurement matrix been Gaussian and the noisy analog measurements been given as data.
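
    As a rough illustration of the measurement model, the sketch below builds a randomly sub-sampled circulant matrix via the FFT, takes noisy one-bit measurements with a few adversarial bit flips, and recovers the support with a simple correlation-based estimate (back-projection plus hard thresholding). All parameter choices and the recovery rule are illustrative assumptions, not the paper's exact procedure or guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, s = 256, 128, 5          # ambient dim, measurements, sparsity

# s-sparse unit-norm signal.
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.normal(size=s)
x /= np.linalg.norm(x)

# Partial circulant matrix: circulant matrix generated by a random
# vector g, with m rows sampled at random; products go through the FFT.
g = rng.normal(size=n)
rows = rng.choice(n, m, replace=False)
G_hat = np.fft.fft(g)

def circulant_mul(v):
    """Sub-sampled circulant product: (C v) restricted to `rows`."""
    full = np.real(np.fft.ifft(G_hat * np.fft.fft(v)))
    return full[rows]

def circulant_adjoint(y_sub):
    """Adjoint of the sub-sampled circulant operator."""
    z = np.zeros(n)
    z[rows] = y_sub
    return np.real(np.fft.ifft(np.conj(G_hat) * np.fft.fft(z)))

# One-bit measurements with pre-quantization (analog) noise and
# adversarial bit flips on 5% of the signs.
noise = 0.05 * rng.normal(size=m)
y = np.sign(circulant_mul(x) + noise)
flips = rng.choice(m, m // 20, replace=False)
y[flips] *= -1

# Correlation-based estimate: back-project, keep the s largest
# entries, renormalize.
bp = circulant_adjoint(y)
x_hat = np.zeros(n)
top = np.argsort(np.abs(bp))[-s:]
x_hat[top] = bp[top]
x_hat /= np.linalg.norm(x_hat)

print("correlation with truth:", float(x_hat @ x))
```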

    High-Rate Vector Quantization for the Neyman-Pearson Detection of Correlated Processes

    This paper investigates the effect of quantization on the performance of the Neyman-Pearson test. It is assumed that a sensing unit observes samples of a correlated stationary ergodic multivariate process. Each sample is passed through an N-point quantizer and transmitted to a decision device which performs a binary hypothesis test. For any false alarm level, it is shown that the miss probability of the Neyman-Pearson test converges to zero exponentially as the number of samples tends to infinity, assuming that the observed process satisfies certain mixing conditions. The main contribution of this paper is to provide a compact closed-form expression for the error exponent in the high-rate regime, i.e., when the number N of quantization levels tends to infinity, generalizing previous results of Gupta and Hero to the case of non-independent observations. If d represents the dimension of one sample, it is proved that the error exponent converges at rate N^{2/d} to the one obtained in the absence of quantization. As an application, relevant high-rate quantization strategies which lead to a large error exponent are determined. Numerical results indicate that the proposed quantization rule can yield better performance than existing ones in terms of detection error. Comment: 47 pages, 7 figures, 1 table. To appear in the IEEE Transactions on Information Theory
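
    The setup can be mimicked in a few lines: quantize each sample with an N-point quantizer, run the Neyman-Pearson (log-likelihood ratio) test on the quantizer cells, and estimate the miss probability at a fixed false-alarm level by Monte Carlo. The sketch below uses i.i.d. scalar Gaussian samples as a stand-in for the paper's correlated multivariate processes, and the uniform quantizer and all parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# H0: X ~ N(0, 1) vs. H1: X ~ N(mu, 1).  (An i.i.d. stand-in; the
# paper treats correlated stationary ergodic processes.)
mu, n_samples, n_trials = 0.5, 200, 2000

def cell_probs(edges, loc):
    """Probability of each quantizer cell under N(loc, 1)."""
    cdf = norm.cdf(edges, loc=loc)
    return np.diff(np.concatenate(([0.0], cdf, [1.0])))

for N in [2, 4, 8, 16]:
    # Uniform N-point quantizer: N - 1 interior thresholds on [-3, 3],
    # with unbounded outer cells.
    edges = np.linspace(-3.0, 3.0, N + 1)[1:-1]
    p0 = cell_probs(edges, 0.0)   # cell probabilities under H0
    p1 = cell_probs(edges, mu)    # cell probabilities under H1
    llr = np.log(p1 / p0)         # per-cell log-likelihood ratio

    # Monte Carlo: sum the per-cell LLRs of quantized samples and
    # compare to a threshold calibrated on H0 for a 5% false alarm.
    x0 = rng.normal(0.0, 1.0, size=(n_trials, n_samples))
    x1 = rng.normal(mu, 1.0, size=(n_trials, n_samples))
    s0 = llr[np.digitize(x0, edges)].sum(axis=1)
    s1 = llr[np.digitize(x1, edges)].sum(axis=1)
    thresh = np.quantile(s0, 0.95)
    print(f"N={N:2d} levels: miss probability ~ {np.mean(s1 <= thresh):.4f}")
```

    Increasing N drives the miss probability down toward the unquantized baseline, consistent with the N^{2/d} convergence of the error exponent established in the paper (here d = 1).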