
    Likelihood-based statistical estimation from quantized data

    Most standard statistical methods treat numerical data as if they were real (infinite-number-of-decimal-places) observations. The issue of quantization or digital resolution is recognized by engineers and metrologists, but is largely ignored by statisticians, and it can render standard statistical methods inappropriate and misleading. This article discusses some of the difficulties of interpretation, and the corresponding difficulties of inference, that arise in even very simple measurement contexts once the presence of quantization is admitted. It then argues (using the simple case of confidence interval estimation from a quantized random sample from a normal distribution as a vehicle) for statistical methods based on rounded-data likelihood functions as an effective way of dealing with the issue.
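    The rounded-data likelihood idea is straightforward to implement. Below is a minimal sketch in Python (the function name, the resolution d, and the simulated data are illustrative, not from the article): a value recorded as y at resolution d is treated as the interval [y - d/2, y + d/2], and the normal likelihood multiplies the probabilities of those intervals rather than the density values at the rounded points.

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        def rounded_normal_loglik(params, y, d):
            """Log-likelihood of N(mu, sigma) data recorded at resolution d.
            Each rounded value y[i] stands for the interval
            [y[i] - d/2, y[i] + d/2]; its contribution is the normal
            probability of that interval, not the density at y[i]."""
            mu, sigma = params
            if sigma <= 0:
                return -np.inf
            p = norm.cdf(y + d / 2, mu, sigma) - norm.cdf(y - d / 2, mu, sigma)
            return np.sum(np.log(np.clip(p, 1e-300, None)))

        # Data quantized to d = 0.1; maximize the rounded-data likelihood.
        rng = np.random.default_rng(0)
        y = np.round(rng.normal(10.0, 0.05, size=30), 1)
        res = minimize(lambda p: -rounded_normal_loglik(p, y, d=0.1),
                       x0=[y.mean(), max(y.std(), 0.05)], method="Nelder-Mead")
        mu_hat, sigma_hat = res.x

    Note that the simulation deliberately makes the resolution larger than the true standard deviation; that is exactly the regime where the interval likelihood and the naive density-based likelihood give materially different answers.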

    High Dimensional Statistical Estimation under Uniformly Dithered One-bit Quantization

    In this paper, we propose a uniformly dithered 1-bit quantization scheme for high-dimensional statistical estimation. The scheme comprises truncation, dithering, and quantization as its typical steps. As canonical examples, the scheme is applied to sparse covariance matrix estimation, sparse linear regression (i.e., compressed sensing), and matrix completion. We study both sub-Gaussian and heavy-tailed regimes, where the underlying distribution of the heavy-tailed data is assumed to have bounded moments of some order. We propose new estimators based on the 1-bit quantized data. In the sub-Gaussian regime, our estimators achieve near-minimax rates, indicating that our quantization scheme costs very little. In the heavy-tailed regime, the rates of our estimators become essentially slower, but these results are either the first in a 1-bit quantized, heavy-tailed setting or improve on existing comparable results in some respects. Given the observations available in our setting, the rates are almost tight for compressed sensing and matrix completion. Our 1-bit compressed sensing results allow general sensing vectors that are sub-Gaussian or even heavy-tailed. We are also the first to investigate a novel setting where both the covariate and the response are quantized. In addition, our approach to 1-bit matrix completion does not rely on a likelihood and is the first method robust to pre-quantization noise with an unknown distribution. Experimental results on synthetic data are presented to support our theoretical analysis.
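    The truncate-dither-quantize pipeline rests on a simple identity: if |z| <= delta and tau is uniform on (-delta, delta), then E[delta * sign(z + tau)] = z, so the scaled sign bits are unbiased for the truncated value. A minimal sketch for mean estimation from heavy-tailed data follows (the truncation level T and dither range delta are illustrative choices, not the paper's tuned parameters):

        import numpy as np

        rng = np.random.default_rng(1)

        def dithered_one_bit_mean(x, T, delta):
            """Estimate E[x] from 1-bit measurements.
            Steps: truncate to [-T, T] (tames heavy tails), add uniform
            dither tau ~ U(-delta, delta) with delta >= T, keep only the
            sign bit. Since E[delta * sign(z + tau)] = z for |z| <= delta,
            averaging the scaled bits is unbiased for the truncated mean."""
            z = np.clip(x, -T, T)
            tau = rng.uniform(-delta, delta, size=x.shape)
            bits = np.sign(z + tau)  # the only data the estimator sees
            return delta * bits.mean()

        # Heavy-tailed sample: shifted Student-t with 3 degrees of freedom.
        x = 2.0 + rng.standard_t(df=3, size=100_000)
        est = dithered_one_bit_mean(x, T=10.0, delta=10.0)

    Truncation introduces a controllable bias in exchange for taming the tails; per the abstract, the same pattern underlies the paper's estimators for covariance estimation, compressed sensing, and matrix completion.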

    Estimation from quantized Gaussian measurements: when and how to use dither

    Subtractive dither is a powerful method for removing the signal dependence of quantization noise for coarsely quantized signals. However, estimation from dithered measurements often naively applies the sample mean or midrange, even when the total noise is not well described by a Gaussian or uniform distribution. We show that the generalized Gaussian distribution approximately describes subtractively dithered, quantized samples of a Gaussian signal. Furthermore, a generalized Gaussian fit leads to simple estimators based on order statistics that match the performance of more complicated maximum likelihood estimators requiring iterative solvers. The order-statistics-based estimators outperform both the sample mean and the midrange for nontrivial sums of Gaussian and uniform noise. Additional analysis of the generalized Gaussian approximation yields rules of thumb for determining when and how to apply dither to quantized measurements. Specifically, we find subtractive dither to be beneficial when the ratio between the Gaussian standard deviation and the quantization interval length is roughly less than one-third. When that ratio is also greater than 0.822/K^0.930 for the number of measurements K > 20, the estimators we present are more efficient than the midrange. (https://arxiv.org/abs/1811.06856)
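    A minimal sketch of subtractive dithering together with the abstract's two rules of thumb (the function names and simulation parameters are illustrative, and the mid-tread uniform quantizer is an assumed form, not a detail stated in the abstract):

        import numpy as np

        rng = np.random.default_rng(2)

        def subtractive_dither_measure(x, step):
            """Quantize x + d with a uniform mid-tread quantizer of the given
            step, then subtract the known dither d ~ U(-step/2, step/2), which
            makes the residual quantization error uniform and signal-independent."""
            d = rng.uniform(-step / 2, step / 2, size=x.shape)
            return step * np.round((x + d) / step) - d

        def dither_rules(sigma, step, K):
            """Abstract's rules of thumb: subtractive dither helps when
            sigma/step < ~1/3; for K > 20 measurements, the generalized
            Gaussian order-statistics estimators beat the midrange when
            sigma/step > 0.822 / K**0.930."""
            ratio = sigma / step
            dither_helps = ratio < 1 / 3
            beats_midrange = K > 20 and ratio > 0.822 / K**0.930
            return dither_helps, beats_midrange

        # Compare the two naive estimators on dithered measurements.
        sigma, step, K = 0.1, 1.0, 50
        x = rng.normal(0.3, sigma, size=K)
        y = subtractive_dither_measure(x, step)
        mean_est = y.mean()
        midrange_est = 0.5 * (y.min() + y.max())

    The sketch only sets up the measurement model and the regime check; the paper's generalized Gaussian fit is what interpolates between the mean-friendly and midrange-friendly regimes.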

    Bayes-Optimal Joint Channel-and-Data Estimation for Massive MIMO with Low-Precision ADCs

    This paper considers a multiple-input multiple-output (MIMO) receiver with very low-precision analog-to-digital converters (ADCs), with the goal of developing massive MIMO antenna systems that require minimal cost and power. Previous studies demonstrated that the training duration must be relatively long to obtain acceptable channel state information. To address this requirement, we adopt a joint channel-and-data (JCD) estimation method based on Bayes-optimal inference, which yields the minimal mean square errors with respect to both the channels and the payload data. We develop a Bayes-optimal JCD estimator using a recent technique based on approximate message passing, and we present an analytical framework to study the theoretical performance of the estimator in the large-system limit. Simulation results confirm our analytical results, which allow efficient evaluation of the performance of quantized massive MIMO systems and provide insights into effective system design.
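    The measurement model behind such receivers is y = Q(Hx + n), with a few-bit uniform ADC applied separately to the real and imaginary parts of each antenna output. Below is a minimal sketch of that model only, not of the authors' message-passing estimator; the quantizer form, bit depth, and clipping level are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(3)

        def adc(u, bits, vmax):
            """bits-bit uniform ADC on [-vmax, vmax]: assign each sample to one
            of 2**bits cells and output the cell midpoint (out-of-range values
            clip to the end cells)."""
            levels = 2 ** bits
            step = 2 * vmax / levels
            idx = np.clip(np.floor((u + vmax) / step), 0, levels - 1)
            return -vmax + step * (idx + 0.5)

        # Quantized MIMO observation y = Q(Hx + n), with the ADC pair acting
        # on the real and imaginary parts of the received baseband signal.
        N_rx, N_tx, B = 64, 8, 2
        H = (rng.normal(size=(N_rx, N_tx)) + 1j * rng.normal(size=(N_rx, N_tx))) / np.sqrt(2)
        x = (rng.choice([-1, 1], size=N_tx) + 1j * rng.choice([-1, 1], size=N_tx)) / np.sqrt(2)  # QPSK
        r = H @ x + 0.1 * (rng.normal(size=N_rx) + 1j * rng.normal(size=N_rx))
        y = adc(r.real, B, vmax=3.0) + 1j * adc(r.imag, B, vmax=3.0)

    At B = 1 or 2 bits per dimension, y retains little amplitude information, which is why the channel and data must be inferred jointly rather than estimated from y by conventional linear processing.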

    Fusing Censored Dependent Data for Distributed Detection

    In this paper, we consider a distributed detection problem for a censoring sensor network, where each sensor's communication rate is significantly reduced by transmitting only "informative" observations to the Fusion Center (FC) and censoring those deemed "uninformative". Whereas previous research often assumes independence across the censoring sensors, we explore spatial dependence among observations. Our focus is on designing the fusion rule under the Neyman-Pearson (NP) framework so that it takes this spatial dependence into account. Two transmission scenarios are considered: one in which uncensored observations are transmitted directly to the FC, and a second in which they are first quantized and then transmitted to further improve transmission efficiency. A copula-based generalized likelihood ratio test (GLRT) for censored data is proposed, with the continuous and discrete messages received at the FC corresponding to the two transmission strategies. We address the computational burden of the copula-based GLRTs, which involve multidimensional integrals, by presenting more efficient fusion rules based on the key idea of injecting controlled noise at the FC before fusion. Although introducing controlled noise at the receiver reduces the signal-to-noise ratio (SNR), simulation results demonstrate that the resulting noise-aided fusion approach performs very closely to the exact copula-based GLRTs. By exploiting the spatial dependence, the copula-based GLRTs and their noise-aided counterparts greatly improve detection performance compared with the fusion rule derived under the independence assumption.
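    The noise-aided idea for the quantized-transmission case can be sketched compactly: the FC adds controlled uniform noise to the discrete messages so that a continuous copula density can be evaluated directly instead of integrating the discrete likelihood over quantization cells. The sketch below assumes a Gaussian copula with equicorrelated sensors under H1 and unit-variance Gaussian marginals; these modeling choices and the parameter names (step, rho, mu1) are illustrative stand-ins, not the paper's exact construction.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(4)

        def noise_aided_gaussian_copula_llr(msgs, step, rho, mu1):
            """Noise-aided fusion of quantized sensor messages at the FC.
            msgs: (K, n) array of quantized observations from n sensors.
            Controlled noise U(-step/2, step/2) turns the discrete messages
            into continuous pseudo-observations, so the copula density can be
            evaluated directly rather than integrated over quantization cells."""
            z = msgs + rng.uniform(-step / 2, step / 2, size=msgs.shape)
            n = z.shape[1]
            # Marginal LLR terms: N(mu1, 1) under H1 vs N(0, 1) under H0.
            llr = (norm.logpdf(z, mu1, 1) - norm.logpdf(z, 0, 1)).sum(axis=1)
            # Gaussian-copula correction for equicorrelated sensors under H1
            # (independence assumed under H0, for simplicity).
            R = (1 - rho) * np.eye(n) + rho * np.ones((n, n))
            q = norm.ppf(np.clip(norm.cdf(z, mu1, 1), 1e-12, 1 - 1e-12))
            _, logdet = np.linalg.slogdet(R)
            quad = np.einsum('ki,ij,kj->k', q, np.linalg.inv(R) - np.eye(n), q)
            return llr - 0.5 * (logdet + quad)

        # K = 1000 fusion decisions from n = 5 spatially correlated sensors
        # whose messages were quantized to step 0.5 before transmission.
        step, n = 0.5, 5
        cov = 0.7 * np.eye(n) + 0.3 * np.ones((n, n))
        msgs = step * np.round(rng.multivariate_normal(0.5 * np.ones(n), cov, size=1000) / step)
        llr = noise_aided_gaussian_copula_llr(msgs, step=step, rho=0.3, mu1=0.5)
        # Compare llr to an NP threshold chosen for the target false-alarm rate.

    The injected noise slightly lowers the effective SNR, which is the trade-off the abstract reports as costing little relative to the exact copula-based GLRTs.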