
    Distributed Hypothesis Testing with Privacy Constraints

    We revisit the distributed hypothesis testing (or hypothesis testing with communication constraints) problem from the viewpoint of privacy. Instead of observing the raw data directly, the transmitter observes a sanitized or randomized version of it. We impose an upper bound on the mutual information between the raw and randomized data. Under this scenario, the receiver, which is also provided with side information, is required to make a decision on whether the null or alternative hypothesis is in effect. We first provide a general lower bound on the type-II exponent for an arbitrary pair of hypotheses. Next, we show that if the distribution under the alternative hypothesis is the product of the marginals of the distribution under the null (i.e., testing against independence), then the exponent is known exactly. Moreover, we show that the strong converse property holds. Using ideas from Euclidean information theory, we also provide an approximate expression for the exponent when the communication rate is low and the privacy level is high. Finally, we illustrate our results with a binary and a Gaussian example.
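
    For orientation, the no-privacy baseline of the testing-against-independence case is the classical single-letter exponent of Ahlswede and Csiszár; the second line below is only a hedged sketch of how the privacy layer might compose with it, with $Z$ denoting the sanitized observation and $L$ the leakage budget (both symbols introduced here for illustration, not taken from the paper):

        % Classical testing against independence (no privacy): transmitter sees X,
        % receiver has side information Y, and the communication rate is R.
        \theta(R) = \max_{P_{W|X}:\, I(X;W) \le R} I(W;Y)
        % Hedged sketch with privacy: the transmitter sees only a randomized Z
        % with I(X;Z) <= L, suggesting an outer optimization over sanitizers
        % P_{Z|X}; this composition is a guess, not the paper's stated result.
        \theta(R,L) \stackrel{?}{=} \max_{P_{Z|X}:\, I(X;Z) \le L} \; \max_{P_{W|Z}:\, I(Z;W) \le R} I(W;Y)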

    Some Results on the Vector Gaussian Hypothesis Testing Problem

    This paper studies the problem of discriminating between two multivariate Gaussian distributions in a distributed manner. Specifically, it characterizes, in a special case, the optimal type-II error exponent as a function of the available communication rate. As a side result, the paper also presents the optimal type-II error exponent of a slight generalization of the hypothesis testing against conditional independence problem, where the marginal distributions under the two hypotheses can be different.
    Comment: To appear in the 2020 IEEE International Symposium on Information Theory (ISIT 2020).
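
    As a reference for what is being characterized: writing $\beta_n(R,\epsilon)$ for the smallest type-II error probability over all blocklength-$n$ schemes of rate $R$ whose type-I error is at most $\epsilon$, the optimal type-II error exponent is the standard Stein-type quantity

        % Optimal type-II error exponent at communication rate R
        % (standard definition, stated here for context):
        \theta(R,\epsilon) = \liminf_{n \to \infty} -\frac{1}{n} \log \beta_n(R,\epsilon)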

    Exponent Trade-off for Hypothesis Testing Over Noisy Channels

    The distributed hypothesis testing (DHT) problem is considered, in which the joint distribution of a pair of sequences present at separated terminals is governed by one of two possible hypotheses. The decision needs to be made by one of the terminals (the "decoder"). The other terminal (the "encoder") uses a noisy channel in order to help the decoder with the decision. This problem can be seen as a generalization of the side-information variant of the DHT problem, in which the rate-limited link is replaced by a noisy channel. A recent work by Salehkalaibar and Wigger derived an achievable Stein exponent for this problem by employing concepts from the DHT scheme of Shimokawa et al. and from unequal error protection coding for a single special message. In this work, we extend the view to a trade-off between the two error exponents, additionally building on multiple codebooks and two special messages with unequal error protection. As a by-product, we also present an achievable exponent trade-off for a rate-limited link, which generalizes Shimokawa et al.
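
    The object of the trade-off is the standard pair of exponents: with $\alpha_n$ and $\beta_n$ the type-I and type-II error probabilities of a blocklength-$n$ scheme, a pair $(E_1, E_2)$ is achievable if some sequence of schemes satisfies both limits below simultaneously, and the trade-off is the boundary of the achievable region:

        % Standard definitions of the two error exponents:
        E_1 \le \liminf_{n\to\infty} -\frac{1}{n}\log \alpha_n,
        \qquad
        E_2 \le \liminf_{n\to\infty} -\frac{1}{n}\log \beta_n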

    On the Reliability Function of Distributed Hypothesis Testing Under Optimal Detection

    The distributed hypothesis testing problem with full side information is studied. The trade-off (reliability function) between the two types of error exponents under a rate constraint is analyzed in the following way. First, the problem is reduced to the problem of determining the reliability function of channel codes designed for detection (in analogy to a similar result connecting the reliability function of distributed lossless compression and ordinary channel codes). Second, a single-letter random-coding bound based on a hierarchical ensemble, as well as a single-letter expurgated bound, are derived for the reliability of channel-detection codes. Both bounds are derived for a system that employs the optimal detection rule. We conjecture that the resulting random-coding bound is ensemble-tight and, consequently, optimal within the class of quantization-and-binning schemes.
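
    For the channel-coding analogy invoked above, the classical reference objects are Gallager's random-coding exponent and the expurgated exponent of a DMC $W$; the channel-detection bounds derived in the paper are variants of these for the detection task, and only the familiar random-coding form is reproduced here for orientation:

        % Gallager's random-coding exponent of a DMC W(y|x) at rate R:
        E_r(R) = \max_{0 \le \rho \le 1} \max_{P} \big[ E_0(\rho,P) - \rho R \big],
        \quad
        E_0(\rho,P) = -\log \sum_{y} \Big( \sum_{x} P(x)\, W(y|x)^{\frac{1}{1+\rho}} \Big)^{1+\rho}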

    Distributed Hypothesis Testing over a Noisy Channel: Error-exponents Trade-off

    A two-terminal distributed binary hypothesis testing (HT) problem over a noisy channel is studied. The two terminals, called the observer and the decision maker, each have access to $n$ independent and identically distributed samples, denoted by $\mathbf{U}$ and $\mathbf{V}$, respectively. The observer communicates to the decision maker over a discrete memoryless channel (DMC), and the decision maker performs a binary hypothesis test on the joint probability distribution of $(\mathbf{U},\mathbf{V})$ based on $\mathbf{V}$ and the noisy information received from the observer. The trade-off between the exponents of the type I and type II error probabilities in HT is investigated. Two inner bounds are obtained: one using a separation-based scheme that involves type-based compression and unequal error-protection channel coding, and the other using a joint scheme that incorporates type-based hybrid coding. The separation-based scheme is shown to recover the inner bound obtained by Han and Kobayashi for the special case of a rate-limited noiseless channel, as well as the one obtained previously by the authors for a corner point of the trade-off. An exact single-letter characterization of the optimal trade-off is established for the special case of testing for the marginal distribution of $\mathbf{U}$ when $\mathbf{V}$ is unavailable. Our results imply that separation holds in this case, in the sense that the optimal trade-off is achieved by a scheme that performs independent HT and channel coding. Finally, we show via an example that the joint scheme achieves a strictly tighter bound than the separation-based scheme at some points of the error-exponent trade-off.
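
    The HT ingredient of the separation result in the special case above is plain Stein-exponent testing of the marginal of $\mathbf{U}$. A minimal numerical sketch of that ingredient, assuming a made-up binary example (the values p0, p1 and the type-I budget eps are illustrative, and the channel-coding half of the separation scheme is not simulated):

        import numpy as np
        from scipy.stats import binom

        # Illustrative binary example: U_i ~ Bernoulli(p0) under H0 and
        # ~ Bernoulli(p1) under H1, i.i.d. across i.
        p0, p1, eps = 0.2, 0.5, 0.05   # eps = allowed type-I error

        def kl(p, q):
            # KL divergence D(Bern(p) || Bern(q)) in nats.
            return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

        # With p0 < p1 the likelihood ratio is monotone in k = number of ones,
        # so the Neyman-Pearson test accepts H0 iff k <= c, where c is the
        # smallest cutoff whose type-I error P_{Bin(n,p0)}(k > c) is below eps.
        for n in (100, 400, 1600):
            c = binom.ppf(1 - eps, n, p0)          # cutoff meeting the type-I budget
            log_type2 = binom.logcdf(c, n, p1)     # log P_{Bin(n,p1)}(k <= c)
            print(f"n={n:5d}   -log(type-II)/n = {-log_type2 / n:.4f}")
        print(f"Stein's lemma limit: D(P0||P1) = {kl(p0, p1):.4f}")

    The printed exponents increase toward $D(P_0\|P_1)$ as $n$ grows, consistent with the statement that independent HT attains the optimal testing exponent in this special case.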

    Distributed hypothesis testing over noisy channels

    A distributed binary hypothesis testing problem, in which multiple observers transmit their observations to a detector over noisy channels, is studied. Based on the received signals together with its own observations, the detector aims to decide between two hypotheses for the joint distribution of the data. Single-letter upper and lower bounds on the optimal type 2 error exponent (T2-EE), when the type 1 error probability vanishes with the block length, are obtained. These bounds coincide and characterize the optimal T2-EE when only a single helper is involved. Our result shows that the optimal T2-EE depends on the marginal distributions of the data and the channels rather than their joint distribution. However, an operational separation between HT and channel coding does not hold, and the optimal T2-EE is achieved by generating channel inputs that are correlated with the observed data.
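
    The separation failure in the last sentence can be phrased structurally; the symbolwise map $x_i = f(u_i, w_i)$ below is a common device for creating such correlation and is mentioned only as an illustration, not as this paper's construction:

        % Separation-based coding: the channel input depends on the data only
        % through the compressed message M, i.e. the Markov chain
        U \to M \to X
        % holds. Achieving the optimal T2-EE here instead requires channel
        % inputs correlated with the data, e.g. symbolwise maps
        % x_i = f(u_i, w_i), which break this Markov chain.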