
    Mean Estimation from Adaptive One-bit Measurements

    We consider the problem of estimating the mean of a normal distribution under the following constraint: the estimator can access only a single bit from each sample from this distribution. We study the squared error risk of this estimation as a function of the number of samples and one-bit measurements $n$. We consider an adaptive estimation setting where the single bit sent at step $n$ is a function of both the new sample and the previous $n-1$ acquired bits. For this setting, we show that no estimator can attain an asymptotic mean squared error smaller than $\pi/(2n) + O(n^{-2})$ times the variance. In other words, the one-bit restriction increases the number of samples required for a prescribed estimation accuracy by a factor of at least $\pi/2$ compared to the unrestricted case. In addition, we provide an explicit estimator that attains this asymptotic error, showing that, rather surprisingly, only $\pi/2$ times more samples are required to attain estimation performance equivalent to the unrestricted case.
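
    As a rough illustration of the $\pi/2$ factor, here is a minimal simulation sketch, assuming a sign-based stochastic-approximation scheme rather than the paper's exact construction: the bit sent at step $t$ is the sign of the new sample minus the running estimate, and the gain $\sigma\sqrt{\pi/2}$ is an illustrative choice that is asymptotically optimal for this particular update.

        import numpy as np

        rng = np.random.default_rng(0)
        theta, sigma, n, trials = 1.0, 1.0, 10_000, 200
        gain = sigma * np.sqrt(np.pi / 2)  # illustrative gain choice

        sq_err_onebit, sq_err_mean = [], []
        for _ in range(trials):
            x = rng.normal(theta, sigma, size=n)
            est = 0.0
            for t, xt in enumerate(x, start=1):
                bit = 1.0 if xt >= est else -1.0  # the single bit sent at step t
                est += gain * bit / t             # Robbins-Monro update
            sq_err_onebit.append((est - theta) ** 2)
            sq_err_mean.append((x.mean() - theta) ** 2)

        print("n * MSE, one-bit    :", n * np.mean(sq_err_onebit))  # close to (pi/2) * sigma^2
        print("n * MSE, sample mean:", n * np.mean(sq_err_mean))    # close to sigma^2

    With these choices the one-bit risk concentrates near $(\pi/2)\sigma^2/n$, consistent with the factor in the abstract, while the unrestricted sample mean attains $\sigma^2/n$.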

    Hypothesis testing via a comparator

    This paper investigates the best achievable performance by a hypothesis test satisfying a structural constraint: two functions are computed at two different terminals, and the detector consists of a simple comparator verifying whether the functions agree. Such tests arise in the study of the fundamental limits of channel coding, but are also useful in other contexts. A simple expression for the Stein exponent is found and applied to showing a strong converse in the problem of multi-terminal hypothesis testing with rate constraints. Connections to the Gács-Körner common information and to spectral properties of the conditional expectation operator are identified. Further tightening of the results hinges on finding λ-blocks of minimal weight; an application of Delsarte's linear programming method to this problem is described.
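
    For readers less familiar with the terminology, the Stein exponent above is the one from the classical Stein's lemma; as standard background (not this paper's constrained result), for the unconstrained i.i.d. test of $H_0\colon P$ against $H_1\colon Q$,

        \lim_{n \to \infty} -\frac{1}{n} \log \beta_n(\epsilon) = D(P \| Q), \qquad \epsilon \in (0, 1),

    where $\beta_n(\epsilon)$ is the smallest type-II error probability achievable with type-I error at most $\epsilon$ and $D(P \| Q)$ is the Kullback-Leibler divergence. The paper characterizes the analogous exponent when the test is forced through the comparator structure.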

    Some Results on the Vector Gaussian Hypothesis Testing Problem

    This paper studies the problem of discriminating between two multivariate Gaussian distributions in a distributed manner. Specifically, it characterizes, in a special case, the optimal type-II error exponent as a function of the available communication rate. As a side result, the paper also presents the optimal type-II error exponent of a slight generalization of the hypothesis testing against conditional independence problem, where the marginal distributions under the two hypotheses can be different. (To appear in the 2020 IEEE International Symposium on Information Theory, ISIT.)
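
    For context, the baseline that this generalizes is the classical testing-against-independence exponent of Ahlswede and Csiszár; in standard notation (quoted here as background, not as this paper's statement), with the observations of $X$ compressed at rate $R$ and $Y$ available at the detector, the optimal type-II exponent is

        \theta(R) = \max_{P_{U|X}:\ I(U;X) \le R,\ U - X - Y} I(U;Y),

    where $U - X - Y$ denotes a Markov chain. The variant here replaces independence with conditional independence and lets the marginals differ across hypotheses.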

    Distributed Hypothesis Testing with Privacy Constraints

    We revisit the distributed hypothesis testing (or hypothesis testing with communication constraints) problem from the viewpoint of privacy. Instead of observing the raw data directly, the transmitter observes a sanitized or randomized version of it. We impose an upper bound on the mutual information between the raw and randomized data. Under this scenario, the receiver, which is also provided with side information, is required to decide whether the null or the alternative hypothesis is in effect. We first provide a general lower bound on the type-II exponent for an arbitrary pair of hypotheses. Next, we show that if the distribution under the alternative hypothesis is the product of the marginals of the distribution under the null (i.e., testing against independence), then the exponent is known exactly. Moreover, we show that the strong converse property holds. Using ideas from Euclidean information theory, we also provide an approximate expression for the exponent when the communication rate is low and the privacy level is high. Finally, we illustrate our results with a binary and a Gaussian example.
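
    To make the privacy constraint concrete, here is a minimal numerical sketch, assuming the simplest binary setting rather than the paper's general model: a uniform raw bit X is sanitized by a binary symmetric randomizer with crossover probability delta, and the leakage I(X; Z) that the constraint bounds works out to 1 - h2(delta).

        import numpy as np

        def h2(p):
            """Binary entropy in bits, with arguments clipped away from 0 and 1."""
            p = np.clip(p, 1e-12, 1 - 1e-12)
            return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

        def leakage(delta):
            """I(X; Z) for X ~ Bernoulli(1/2) passed through a BSC(delta)."""
            return 1.0 - h2(delta)

        for delta in (0.0, 0.1, 0.25, 0.5):
            print(f"delta = {delta:.2f} -> I(X;Z) = {leakage(delta):.4f} bits")

    A crossover of 1/2 gives perfect privacy (zero leakage) at the cost of a useless observation; intermediate values trade leakage against the achievable type-II exponent.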

    A Strong Data Processing Inequality for Thinning Poisson Processes and Some Applications

    This paper derives a simple strong data processing inequality (DPI) for Poisson processes: after a Poisson process is passed through p-thinning (in which every arrival remains in the process with probability p and is erased otherwise, independently of the other points), the mutual information between the Poisson process and any other random variable is reduced to no more than p times its original value. This strong DPI is applied to prove tight converse bounds in several problems: a hypothesis test with communication constraints, a mutual information game, and a CEO problem.
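
    A minimal sketch of the p-thinning operation itself, assuming a homogeneous Poisson process on [0, T] (function names are illustrative):

        import numpy as np

        rng = np.random.default_rng(1)

        def poisson_process(lam, T):
            """Arrival times of a rate-lam homogeneous Poisson process on [0, T]."""
            n = rng.poisson(lam * T)
            return np.sort(rng.uniform(0.0, T, size=n))

        def p_thin(points, p):
            """Keep each arrival independently with probability p."""
            return points[rng.random(points.shape) < p]

        arrivals = poisson_process(lam=5.0, T=100.0)
        thinned = p_thin(arrivals, p=0.3)
        print(len(arrivals), len(thinned))  # counts concentrate near lam*T and p*lam*T

    The strong DPI then states that for any random variable Y, I(thinned process; Y) <= p * I(original process; Y), which is what drives the converse bounds listed above.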