
    Mean Estimation from One-Bit Measurements

    We consider the problem of estimating the mean of a symmetric log-concave distribution under the constraint that only a single bit per sample from this distribution is available to the estimator. We study the mean squared error as a function of the sample size (and hence the number of bits). We consider three settings: first, a centralized setting, where an encoder may release n bits given a sample of size n, and for which there is no asymptotic penalty for quantization; second, an adaptive setting in which each bit is a function of the current observation and previously recorded bits, where we show that the optimal relative efficiency compared to the sample mean is precisely the efficiency of the median; lastly, we show that in a distributed setting where each bit is only a function of a local sample, no estimator can achieve optimal efficiency uniformly over the parameter space. We additionally complement our results in the adaptive setting by showing that one round of adaptivity is sufficient to achieve the optimal mean squared error.
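    The median-efficiency benchmark mentioned in the adaptive setting can be checked numerically. The following sketch is a standalone illustration (not taken from the paper): it compares the mean squared error of the sample mean against the sample median for a Gaussian, whose ratio approaches the classical relative efficiency 2/π ≈ 0.637.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 501, 4000
samples = rng.normal(0.0, 1.0, size=(trials, n))

# MSE of the sample mean vs. the sample median for a standard normal.
# Var(median) ~ (pi/2) / n here, so mse_mean / mse_median -> 2/pi ~ 0.637.
mse_mean = np.mean(np.mean(samples, axis=1) ** 2)
mse_median = np.mean(np.median(samples, axis=1) ** 2)
print(mse_mean / mse_median)
```

    Swapping in a heavier-tailed log-concave law such as the Laplace flips the comparison, since there the median is the efficient estimator.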

    Mean Estimation from Adaptive One-bit Measurements

    We consider the problem of estimating the mean of a normal distribution under the following constraint: the estimator can access only a single bit from each sample from this distribution. We study the squared error risk in this estimation as a function of the number of samples and one-bit measurements n. We consider an adaptive estimation setting where the single bit sent at step n is a function of both the new sample and the previous n−1 acquired bits. For this setting, we show that no estimator can attain asymptotic mean squared error smaller than π/(2n) + O(n^{-2}) times the variance. In other words, the one-bit restriction increases the number of samples required for a prescribed accuracy of estimation by a factor of at least π/2 compared to the unrestricted case. In addition, we provide an explicit estimator that attains this asymptotic error, showing that, rather surprisingly, only π/2 times more samples are required in order to attain estimation performance equivalent to the unrestricted case.
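    The paper's explicit estimator is not reproduced here, but the π/2 factor can be illustrated with a simple Robbins-Monro-style recursion in which each transmitted bit is the sign of the current sample minus the running estimate. The gain choice and the recursion itself are assumptions of this sketch, not necessarily the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.0, 1.0, 20_000

# Each step uses one fresh sample and transmits only sign(x_k - theta_k),
# i.e. a single adaptively chosen bit.
a = sigma * np.sqrt(np.pi / 2)  # gain matched to the Gaussian density at 0
theta = 0.0
for k in range(1, n + 1):
    x = rng.normal(mu, sigma)
    bit = 1.0 if x >= theta else -1.0  # the one bit sent at step k
    theta += (a / k) * bit

print(theta)  # close to mu; asymptotic MSE ~ (pi/2) * sigma**2 / n
```

    With this gain the asymptotic variance of the recursion is (π/2)σ²/n, matching the lower bound quoted in the abstract.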

    One-bit Compressed Sensing in the Presence of Noise

    Many modern real-world systems generate large amounts of high-dimensional data, stressing the available computing and signal processing systems. In resource-constrained settings, it is desirable to process, store, and transmit as little data as possible. It has been shown that one can obtain acceptable performance for tasks such as inference and reconstruction using fewer bits of data by exploiting low-dimensional structure in the data, such as sparsity. This dissertation investigates the signal acquisition paradigm known as one-bit compressed sensing (one-bit CS) for signal reconstruction and parameter estimation. We first consider the problem of joint sparse support estimation with one-bit measurements in a distributed setting. Each node observes sparse signals with the same but unknown support. The goal is to minimize the probability of error of support estimation. First, we study the performance of maximum likelihood (ML) estimation of the support set from one-bit compressed measurements when all these measurements are available at the fusion center. We provide a lower bound on the number of one-bit measurements required per node for vanishing probability of error. Though the ML estimator is optimal, its computational complexity increases exponentially with the signal dimension. We therefore propose computationally tractable algorithms in a centralized setting. Further, we extend these algorithms to a decentralized setting where each node can communicate only with its one-hop neighbors. The proposed method shows excellent estimation performance even in the presence of noise. In the second part of the dissertation, we investigate the problem of sparse signal reconstruction from noisy one-bit compressed measurements, aided by a signal that is statistically dependent on the compressed signal. We refer to this signal as side-information (SI).
    We consider a generalized measurement model of one-bit CS where noise is assumed to be added at two stages of the measurement process: a) before quantization and b) after quantization. We model the noise before quantization as additive white Gaussian noise and the noise after quantization as sign-flip noise generated from a Bernoulli distribution. We assume that the SI at the receiver is noisy. The noise in the SI can be either in the support or in the amplitude, or both. This nature of the noise in the SI suggests that it has a sparse structure, which we model using additive independent and identically distributed Laplacian noise. In this setup, we develop tractable algorithms that approximate the minimum mean square error (MMSE) estimator of the signal. We consider the following three SI-based scenarios: 1. The side-information is a noisy version of the signal; the noise is independent of the signal and follows the Laplacian distribution, and we do not assume any temporal dependence in the signal. 2. The signal exhibits temporal dependence between the current and previous time instants, modeled using the birth-death-drift (BDD) model; the side-information is a noisy version of the previous time instant's signal, which is statistically dependent on the current signal as defined by the BDD model. 3. The SI available at the receiver is heterogeneous: the signal and side-information are from different modalities and may not share a joint sparse representation; we assume that the SI and the sparse signal are dependent and use a copula function to model the dependence. In each of these scenarios, we develop generalized approximate message passing-based algorithms to approximate the minimum mean square error estimate. Numerical results show the effectiveness of the proposed algorithms.
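    The two-stage noise model described above (additive Gaussian noise before quantization, Bernoulli sign flips after) is easy to state in code. The sketch below only generates measurements under assumed parameter values; the reconstruction algorithms themselves are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)
n_dim, m, k = 100, 60, 5      # signal dimension, measurements, sparsity
sigma_w, p_flip = 0.1, 0.05   # pre-quantization noise std, sign-flip prob

# k-sparse signal with a random support
x = np.zeros(n_dim)
support = rng.choice(n_dim, size=k, replace=False)
x[support] = rng.normal(size=k)

A = rng.normal(size=(m, n_dim)) / np.sqrt(m)

# a) additive white Gaussian noise before quantization
y = np.sign(A @ x + sigma_w * rng.normal(size=m))
# b) Bernoulli sign-flip noise after quantization
flips = rng.random(m) < p_flip
y[flips] *= -1
```

    Each measurement thus carries exactly one bit, corrupted both before and after the comparator, which is the setting the MMSE-approximating algorithms above are designed for.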
    In the final part of the dissertation, we propose two one-bit compressed sensing reconstruction algorithms that use a deep neural network as a prior on the signal. In the first algorithm, we use a trained generative model, such as a generative adversarial network or a variational autoencoder, as the prior. This trained network is used to reconstruct the compressed signal from one-bit measurements by searching over its range. We provide theoretical guarantees on the reconstruction accuracy and sample complexity of the presented algorithm. In the second algorithm, we investigate an untrained neural network architecture that acts as a good prior on natural signals such as images and audio, and formulate an optimization problem to reconstruct the signal from one-bit measurements using this untrained network. We demonstrate the superior performance of the proposed algorithms through numerical results. Further, in contrast to competing model-based algorithms, we demonstrate that the proposed algorithms estimate both the direction and the magnitude of the compressed signal from one-bit measurements.

    Signal Recovery From 1-Bit Quantized Noisy Samples via Adaptive Thresholding

    In this paper, we consider the problem of signal recovery from 1-bit noisy measurements. We present an efficient method to obtain an estimate of the signal of interest when the measurements are corrupted by white or colored noise. To the best of our knowledge, the proposed framework is the first in the area of 1-bit sampling and signal recovery to provide a unified treatment of noise with an arbitrary covariance matrix, including that of colored noise. The proposed method is based on a constrained quadratic program (CQP) formulation utilizing an adaptive quantization thresholding approach, which enables us to accurately recover the signal of interest from its 1-bit noisy measurements. In addition, due to its adaptive nature, the proposed method can recover both fixed and time-varying parameters from their quantized 1-bit samples. Comment: This is a pre-print version of the original conference paper that has been accepted at the 2018 IEEE Asilomar Conference on Signals, Systems, and Computers.
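    The CQP formulation itself is not reproduced here, but the role of adaptive thresholding can be illustrated with a minimal scalar sketch: a slowly varying parameter is tracked from 1-bit comparisons against a threshold held at the current estimate, and a constant gain (chosen here by assumption, not from the paper) preserves the ability to track time-varying parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
T, sigma, step = 5000, 0.5, 0.02

# slowly time-varying parameter to track
t = np.arange(T)
theta_true = np.sin(2 * np.pi * t / T)

theta_hat = 0.0
estimates = np.empty(T)
for i in range(T):
    x = theta_true[i] + sigma * rng.normal()
    # 1-bit sample: compare against the current (adaptive) threshold
    bit = 1.0 if x >= theta_hat else -1.0
    # constant-gain update keeps the estimator responsive to drift
    theta_hat += step * bit
    estimates[i] = theta_hat

# steady-state tracking error after the initial transient
rms = np.sqrt(np.mean((estimates[T // 2:] - theta_true[T // 2:]) ** 2))
print(rms)
```

    The key point mirrored from the abstract: because the threshold follows the estimate, the sign comparisons stay informative even as the underlying parameter moves.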

    Joint Estimation and Localization in Sensor Networks

    This paper addresses the problem of collaborative tracking of dynamic targets in wireless sensor networks. A novel distributed linear estimator, which is a version of a distributed Kalman filter, is derived. We prove that the filter is mean square consistent in the case of static target estimation. When large sensor networks are deployed, it is common that the sensors do not have good knowledge of their locations, which affects the target estimation procedure. Unlike most existing approaches to target tracking, we investigate the performance of our filter when the sensor poses need to be estimated by an auxiliary localization procedure. The sensors are localized via a distributed Jacobi algorithm from noisy relative measurements. We prove strong convergence guarantees for the localization method and, in turn, for the joint localization and target estimation approach. The performance of our algorithms is demonstrated in simulation on environmental monitoring and target tracking tasks. Comment: 9 pages (two-column); 5 figures; Manuscript submitted to the 2014 IEEE Conference on Decision and Control (CDC).
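    A minimal sketch of localization from noisy relative measurements via a Jacobi-style iteration, in the spirit of (but not identical to) the distributed algorithm in the paper: each non-anchor node repeatedly re-estimates its position as the average of its neighbors' estimates shifted by the measured offsets. The graph, anchor choice, and noise level are assumptions of this illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# 4 nodes on a line graph; node 0 is an anchor with known position 0.
pos = np.array([0.0, 1.0, 2.0, 3.0])
edges = [(0, 1), (1, 2), (2, 3)]
sigma = 0.05

# noisy relative measurements z_ij ~ pos[j] - pos[i]
z = {e: pos[e[1]] - pos[e[0]] + sigma * rng.normal() for e in edges}

est = np.zeros(4)
for _ in range(200):
    new = est.copy()
    for i in range(1, 4):  # the anchor stays fixed
        # Jacobi step: average neighbor estimates shifted by measurements
        terms = []
        for (u, v), m in z.items():
            if u == i:
                terms.append(est[v] - m)
            elif v == i:
                terms.append(est[u] + m)
        new[i] = np.mean(terms)
    est = new
print(est)  # close to pos, up to the measurement noise
```

    Each node only touches quantities held by its one-hop neighbors, which is why such Jacobi iterations lend themselves to the decentralized setting described above.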