Quantization of Prior Probabilities for Hypothesis Testing
Bayesian hypothesis testing is investigated when the prior probabilities of
the hypotheses, taken as a random vector, are quantized. Nearest neighbor and
centroid conditions are derived using mean Bayes risk error as a distortion
measure for quantization. A high-resolution approximation to the
distortion-rate function is also obtained. Human decision making in segregated
populations is studied assuming Bayesian hypothesis testing with quantized
priors.
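The nearest-neighbor and centroid conditions can be alternated Lloyd-style. A minimal sketch, assuming a binary Gaussian shift model (H0: N(0,1) vs. H1: N(1,1)), equal error costs, a uniform ensemble of priors, and a grid-search centroid step; these modeling choices are illustrative, not taken from the paper:

```python
import math

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bayes_risk(p0, q0):
    # Risk of the likelihood-ratio test designed for assumed prior q0,
    # evaluated under true prior p0 (equal costs, H0: N(0,1), H1: N(1,1)).
    tau = 0.5 + math.log(q0 / (1.0 - q0))   # LRT threshold in observation space
    p_fa = 1.0 - Phi(tau)                   # P(decide H1 | H0)
    p_md = Phi(tau - 1.0)                   # P(decide H0 | H1)
    return p0 * p_fa + (1.0 - p0) * p_md

def bayes_risk_error(p0, q0):
    # excess risk from acting on the quantized prior q0 instead of p0
    return bayes_risk(p0, q0) - bayes_risk(p0, p0)

# Lloyd-style alternation of the two optimality conditions.
grid = [i / 200.0 for i in range(1, 200)]   # uniform ensemble of true priors
reps = [0.2, 0.5, 0.8]                      # K = 3 representation points
for _ in range(20):
    cells = [[] for _ in reps]
    for p in grid:                          # nearest-neighbor condition
        k = min(range(len(reps)), key=lambda k: bayes_risk_error(p, reps[k]))
        cells[k].append(p)
    reps = [min(grid, key=lambda q: sum(bayes_risk_error(p, q) for p in cell))
            for cell in cells if cell]      # centroid condition (grid search)
print([round(r, 3) for r in reps])
```

Because the distortion is the Bayes risk error rather than squared error, the "nearest" representation point need not be the numerically closest prior, which is exactly why the two conditions are rederived for this distortion measure.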
Quantization of Prior Probabilities for Collaborative Distributed Hypothesis Testing
This paper studies the quantization of prior probabilities, drawn from an
ensemble, for distributed detection and data fusion. Design and performance
equivalences between a team of N agents tied by a fixed fusion rule and a more
powerful single agent are obtained. Effects of identical quantization and
diverse quantization are compared. Consideration of perceived common risk
enables agents using diverse quantizers to collaborate in hypothesis testing,
and it is proven that the minimum mean Bayes risk error is achieved by diverse
quantization. The comparison shows that optimal diverse quantization with K
cells per quantizer performs as well as optimal identical quantization with
N(K-1)+1 cells per quantizer. Similar results are obtained for maximum Bayes
risk error as the distortion criterion.
Comment: 11 pages.
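The N(K-1)+1 comparison can be illustrated by a counting argument: if the N agents' breakpoints are all distinct, the team's effective partition of the prior space is induced by the union of the breakpoints, giving N(K-1)+1 cells versus only K for identical quantizers. A minimal sketch (the staggered uniform breakpoints here are illustrative, not the paper's optimal design):

```python
N, K = 3, 4

# Each agent's K-cell quantizer has K-1 interior breakpoints in (0, 1).
identical = [{(j + 1) / K for j in range(K - 1)} for _ in range(N)]
diverse = [{(j + 1 + i / N) / K for j in range(K - 1)} for i in range(N)]

def team_cells(breakpoint_sets):
    # the team's effective partition is induced by the union of breakpoints
    return len(set().union(*breakpoint_sets)) + 1

print(team_cells(identical), team_cells(diverse))  # 4 10, i.e. K vs. N(K-1)+1
```

With identical quantizers the breakpoints coincide, so adding agents adds no resolution; diverse quantizers place all N(K-1) breakpoints at distinct locations.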
On optimal quantization rules for some problems in sequential decentralized detection
We consider the design of systems for sequential decentralized detection, a
problem that entails several interdependent choices: the choice of a stopping
rule (specifying the sample size), a global decision function (a choice between
two competing hypotheses), and a set of quantization rules (the local decisions
on the basis of which the global decision is made). This paper addresses the
open problem of whether, in the Bayesian formulation of sequential decentralized
detection, optimal local decision functions can be found within the class of
stationary rules. We develop an asymptotic approximation to the optimal cost of
stationary quantization rules and exploit this approximation to show that
stationary quantizers are not optimal in a broad class of settings. We also
consider the class of blockwise stationary quantizers, and show that
asymptotically optimal quantizers are likelihood-based threshold rules.
Comment: Published in IEEE Transactions on Information Theory, Vol. 54(7),
3285-3295, 2008.
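A likelihood-based threshold rule quantizes each observation by comparing its likelihood ratio, not the raw sample, against a set of thresholds. A minimal sketch, under an assumed Gaussian shift model with illustrative thresholds (neither is taken from the paper):

```python
import math

def likelihood_ratio(x):
    # illustrative Gaussian shift model: H0: N(0,1) vs. H1: N(1,1)
    return math.exp(x - 0.5)

def lrt_quantizer(x, thresholds):
    # likelihood-based threshold rule: the local message is the number of
    # likelihood-ratio thresholds the observation's LR meets or exceeds
    lr = likelihood_ratio(x)
    return sum(lr >= t for t in sorted(thresholds))

msgs = [lrt_quantizer(x, [0.5, 1.0, 2.0]) for x in (-2.0, 0.3, 0.5, 3.0)]
print(msgs)  # [0, 1, 2, 3]
```

With D-1 thresholds the rule emits one of D messages, and because the likelihood ratio is a sufficient statistic, restricting attention to such rules loses nothing asymptotically in this setting.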