
    Sensitivity Analysis for Binary Sampling Systems via Quantitative Fisher Information Lower Bounds

    This article addresses the sensitivity of sensor systems with minimal signal digitization complexity with respect to the estimation of analog model parameters. Digital measurements are available exclusively in hard-limited form, and the parameters of the analog received signals are to be inferred through efficient algorithms. As a benchmark, the achievable estimation accuracy is assessed through theoretical error bounds. To this end, a characterization of the parametric likelihood is required, which is challenging for multivariate binary distributions. In this context, we analyze the Fisher information matrix of the exponential family and derive a conservative approximation for arbitrary models. The conservative information matrix rests on a surrogate exponential family, defined by two equivalences to the real data-generating system. This probabilistic notion enables the design of estimators that consistently achieve the sensitivity level defined by the inverse of the conservative information matrix, without requiring a characterization of the distributions involved. For parameter estimation with multivariate binary samples, using an equivalent quadratic exponential distribution tames the computational complexity of the conservative information matrix such that a quantitative assessment of the achievable error level becomes tractable. We exploit this for a performance analysis of signal parameter estimation with an array of low-complexity binary sensors, examining the achievable sensitivity in comparison to an ideal system featuring receivers that support data acquisition with infinite amplitude resolution. Additionally, we demonstrate data-driven sensitivity analysis through the presented framework by learning the guaranteed achievable performance when processing sensor data obtained with recursive binary sampling schemes as implemented in ΣΔ-modulating analog-to-digital converters.
    Comment: Former title was "Fisher Information Lower Bounds with Applications in Hardware-Aware Nonlinear Signal Processing".
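
    As a concrete single-parameter illustration of the quantization loss the article quantifies (a textbook sketch, not the paper's conservative-information framework): for a hard-limited Gaussian measurement Y = sign(X) with X ~ N(theta, sigma^2), the Fisher information has a closed form and can be compared to the infinite-resolution benchmark 1/sigma^2; at theta = 0 the ratio is the classic 2/pi ≈ 0.64.

```python
# Minimal sketch (assumed scalar model, not the paper's framework):
# Fisher information for the mean theta of X ~ N(theta, sigma^2) observed
# only through the hard-limited sample Y = sign(X).
import numpy as np
from scipy.stats import norm

def fisher_1bit(theta, sigma=1.0):
    """F(theta) = phi(theta/sigma)^2 / (sigma^2 * p * (1 - p)), p = Phi(theta/sigma)."""
    z = theta / sigma
    p = norm.cdf(z)                        # P(Y = +1)
    return norm.pdf(z) ** 2 / (sigma ** 2 * p * (1.0 - p))

sigma = 1.0
f_ideal = 1.0 / sigma ** 2                 # infinite amplitude resolution
for theta in (0.0, 0.5, 1.0):
    f1 = fisher_1bit(theta, sigma)
    print(f"theta={theta:.1f}  F_1bit={f1:.4f}  F_ideal={f_ideal:.4f}  ratio={f1/f_ideal:.3f}")
```

    In the multivariate binary setting of the article, the role of this exact scalar quantity is played by the (computable) conservative information matrix.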

    On Binary Distributed Hypothesis Testing

    We consider the problem of distributed binary hypothesis testing of two sequences that are generated by an i.i.d. doubly-binary symmetric source. Each sequence is observed by a different terminal. The two hypotheses correspond to different levels of correlation between the two source components, i.e., to different crossover probabilities between the two. The terminals communicate with a decision function via rate-limited noiseless links. We analyze the tradeoff between the exponential decay of the two error probabilities associated with the hypothesis test and the communication rates. We first consider the side-information setting, where one encoder is allowed to send the full sequence. For this setting, previous work exploits the fact that a decoding error of the source does not necessarily lead to an erroneous decision on the hypothesis. We provide improved achievability results by carrying out a tighter analysis of the effect of binning errors; the results are also more complete, as they cover the full exponent tradeoff and all possible correlations. We then turn to the setting of symmetric rates, for which we utilize Körner-Marton coding to generalize the results, with little degradation relative to the performance under a one-sided constraint (the side-information setting).
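
    For orientation, the centralized benchmark of this problem is easy to state (a sketch assuming unconstrained rates, i.e. not the rate-limited regime the paper studies): with both sequences available, the test reduces to deciding the crossover probability of the i.i.d. binary difference sequence Z_i = X_i XOR Y_i, and the best common exponent for the two error probabilities is the Chernoff information between the two Bernoulli laws.

```python
# Centralized (rate-unconstrained) benchmark sketch: the best common error
# exponent for testing Bernoulli(p0) against Bernoulli(p1) is the Chernoff
# information C(p0, p1) = max_{0<s<1} -log( sum_z p0(z)^(1-s) * p1(z)^s ).
import numpy as np

def bernoulli_chernoff(p0, p1, grid=10001):
    s = np.linspace(1e-6, 1.0 - 1e-6, grid)
    mix = (1 - p0) ** (1 - s) * (1 - p1) ** s + p0 ** (1 - s) * p1 ** s
    return float(-np.log(mix).min())       # maximize -log(mix) over s

# Example: crossover probability 0.1 under H0 versus 0.3 under H1.
print(bernoulli_chernoff(0.1, 0.3))
```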

    Lower Bounds for Learning Distributions under Communication Constraints via Fisher Information

    We consider the problem of learning high-dimensional, nonparametric and structured (e.g. Gaussian) distributions in distributed networks, where each node in the network observes an independent sample from the underlying distribution and can use k bits to communicate its sample to a central processor. We consider three different models for communication. Under the independent model, each node communicates its sample to a central processor by independently encoding it into k bits. Under the more general sequential or blackboard communication models, nodes can share information interactively, but each node is restricted to write at most k bits on the final transcript. We characterize the impact of the communication constraint k on the minimax risk of estimating the underlying distribution under ℓ^2 loss. We develop minimax lower bounds that apply in a unified way to many common statistical models and reveal that the impact of the communication constraint can be qualitatively different depending on the tail behavior of the score function associated with each model. A key ingredient in our proofs is a geometric characterization of Fisher information from quantized samples.
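
    The quantity named in the last sentence can be computed explicitly in simple cases. Below is a hedged sketch (uniform thresholds on [-4, 4] are an assumption chosen for illustration, not taken from the paper) of the Fisher information carried by a k-bit quantization of a scalar Gaussian location sample, showing how the gap to the unquantized value 1 closes as k grows.

```python
# Sketch: Fisher information from k-bit quantized samples of X ~ N(theta, 1).
# Cells are induced by 2^k - 1 uniform thresholds on [-4, 4] (an assumption
# for illustration). F_Q(theta) = sum_i (dp_i/dtheta)^2 / p_i, where
# p_i(theta) = Phi(t_{i+1} - theta) - Phi(t_i - theta); unquantized F = 1.
import numpy as np
from scipy.stats import norm

def fisher_quantized(theta, k, lo=-4.0, hi=4.0):
    edges = np.linspace(lo, hi, 2 ** k + 1)[1:-1]    # interior thresholds
    t = np.concatenate(([-np.inf], edges, [np.inf]))
    p = norm.cdf(t[1:] - theta) - norm.cdf(t[:-1] - theta)    # cell probabilities
    dp = norm.pdf(t[:-1] - theta) - norm.pdf(t[1:] - theta)   # d p_i / d theta
    return float(np.sum(dp ** 2 / np.maximum(p, 1e-300)))

for k in (1, 2, 3, 4, 6):
    print(f"k={k}  F_Q={fisher_quantized(0.0, k):.4f}  (unquantized: 1.0)")
```

    For k = 1 this recovers the 2/pi ≈ 0.64 hard-limiter loss; by k = 4 the quantized Fisher information is already close to the unquantized value.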

    Geometric Lower Bounds for Distributed Parameter Estimation under Communication Constraints

    We consider parameter estimation in distributed networks, where each sensor in the network observes an independent sample from an underlying distribution and has k bits to communicate its sample to a centralized processor, which computes an estimate of a desired parameter. We develop lower bounds for the minimax risk of estimating the underlying parameter for a large class of losses and distributions. Our results show that under mild regularity conditions, the communication constraint reduces the effective sample size by a factor of d when k is small, where d is the dimension of the estimated parameter. Furthermore, this penalty decreases at most exponentially with increasing k, which is the case for some models, e.g., estimating high-dimensional distributions. For other models, however, we show that the sample size reduction is remedied only linearly with increasing k, e.g., when some sub-Gaussian structure is available. We apply our results to the distributed setting with the product Bernoulli model, the multinomial model, and dense/sparse Gaussian location models, which recover or strengthen existing results. Our approach deviates significantly from existing approaches for developing information-theoretic lower bounds for communication-efficient estimation. We circumvent the need for the strong data processing inequalities used in prior work and develop a geometric approach that builds on a new representation of the communication constraint. This approach allows us to strengthen and generalize existing results with simpler and more transparent proofs.
    Comment: Earlier versions (including the conference proceeding) of this paper had a mistake in the lower bound argument for blackboard communication protocols, and the current version (v3) fixes it.
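
    The factor-of-d effective sample size reduction is easy to see in a toy simulation (a hedged sketch of an assumed one-bit protocol, not the paper's constructions or its lower-bound machinery): if each of n nodes observing X ~ N(theta, I_d) may send only k = 1 bit, a natural scheme assigns each node one coordinate and transmits its sign, so every coordinate is estimated from only about n/d observations.

```python
# Simulation sketch (illustrative protocol, not from the paper): n nodes each
# observe X ~ N(theta, I_d) and send k = 1 bit. Node i is assigned coordinate
# i mod d and transmits sign(X) there; the center inverts P(X_j > 0) = Phi(theta_j).
# Each coordinate then rests on ~ n/d samples, so the squared-error risk is
# roughly d times the centralized risk d/n (times the 1-bit pi/2 loss).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, d, trials = 20000, 20, 50
theta = rng.uniform(-0.5, 0.5, size=d)

mse_1bit = mse_central = 0.0
for _ in range(trials):
    X = theta + rng.standard_normal((n, d))
    coord = np.arange(n) % d                      # coordinate assignment
    bits = X[np.arange(n), coord] > 0             # one sign bit per node
    phat = np.array([bits[coord == j].mean() for j in range(d)])
    phat = phat.clip(1e-4, 1 - 1e-4)              # keep Phi^{-1} finite
    est_1bit = norm.ppf(phat)                     # theta_j = Phi^{-1}(P(X_j > 0))
    est_central = X.mean(axis=0)                  # infinite-precision benchmark
    mse_1bit += np.sum((est_1bit - theta) ** 2) / trials
    mse_central += np.sum((est_central - theta) ** 2) / trials

print(f"1-bit MSE {mse_1bit:.4f} vs centralized {mse_central:.4f} "
      f"(ratio ~ d * pi/2 = {d * np.pi / 2:.1f})")
```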