Quantization of Prior Probabilities for Hypothesis Testing
Bayesian hypothesis testing is investigated when the prior probabilities of
the hypotheses, taken as a random vector, are quantized. Nearest neighbor and
centroid conditions are derived using mean Bayes risk error as a distortion
measure for quantization. A high-resolution approximation to the
distortion-rate function is also obtained. Human decision making in segregated
populations is studied assuming Bayesian hypothesis testing with quantized
priors.
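A minimal sketch of the design conditions described above, under assumptions the abstract does not specify: a scalar binary test with H0 ~ N(0,1) and H1 ~ N(mu,1), uniform costs, and a prior drawn uniformly on (0,1). The Lloyd-style loop alternates the nearest-neighbor and centroid conditions with mean Bayes risk error as the distortion measure; all names and parameters below are illustrative, not taken from the paper.

```python
# Sketch: Lloyd-style quantizer design for a prior probability, using
# Bayes risk error as the distortion (assumed model, illustrative only).
import numpy as np
from scipy.stats import norm

MU = 2.0  # assumed mean shift under H1

def bayes_risk(p_true, p_assumed):
    """Error probability of the LRT designed for p_assumed, evaluated at p_true."""
    p_assumed = np.clip(p_assumed, 1e-9, 1 - 1e-9)
    tau = np.log(p_assumed / (1 - p_assumed)) / MU + MU / 2   # LRT threshold on x
    p_fa = norm.sf(tau)            # P(decide H1 | H0)
    p_md = norm.cdf(tau - MU)      # P(decide H0 | H1)
    return p_true * p_fa + (1 - p_true) * p_md

def bayes_risk_error(p_true, p_assumed):
    """Excess risk of acting on the quantized prior instead of the true prior."""
    return bayes_risk(p_true, p_assumed) - bayes_risk(p_true, p_true)

def lloyd_quantizer(K=4, grid=np.linspace(1e-3, 1 - 1e-3, 2000), iters=50):
    """Alternate nearest-neighbor and centroid steps on a discretized prior."""
    reps = np.linspace(0.1, 0.9, K)  # initial representation points
    for _ in range(iters):
        # nearest-neighbor condition: assign each prior to its minimum-BRE cell
        d = np.array([bayes_risk_error(grid, r) for r in reps])
        assign = np.argmin(d, axis=0)
        # centroid condition: pick the representative minimizing mean BRE per cell
        for k in range(K):
            cell = grid[assign == k]
            if cell.size:
                cand = np.linspace(cell.min(), cell.max(), 200)
                costs = [bayes_risk_error(cell, c).mean() for c in cand]
                reps[k] = cand[int(np.argmin(costs))]
    return np.sort(reps)

if __name__ == "__main__":
    print("representation points:", lloyd_quantizer())
```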
Theoretical Bounds in Minimax Decentralized Hypothesis Testing
Minimax decentralized detection is studied under two scenarios: with and
without a fusion center when the source of uncertainty is the Bayesian prior.
When there is no fusion center, the constraints in the network design are
determined. For both a single decision maker and multiple decision makers, the
maximum loss in detection performance due to minimax decision making is
obtained. In the presence of a fusion center, the maximum loss of detection
performance between the networks with and without a fusion center is derived,
assuming that both networks are minimax robust. The results are finally
generalized.
Comment: Submitted to IEEE Trans. on Signal Processing
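As a reading aid for the loss quantity discussed above, a hedged sketch in assumed notation (R, delta, and pi are not taken from the paper):

```latex
% R(\pi,\delta): Bayes risk of decision rule \delta under prior \pi (assumed notation).
\[
  \delta^{*} \in \arg\min_{\delta}\max_{\pi} R(\pi,\delta), \qquad
  \Delta \;=\; \max_{\pi}\Bigl[\, R(\pi,\delta^{*}) - \min_{\delta} R(\pi,\delta) \,\Bigr],
\]
% \Delta is the maximum loss in detection performance incurred by using the
% minimax-robust rule \delta^{*} instead of the Bayes rule matched to the true prior.
```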
Designing Discontinuities
Discontinuities can be fairly arbitrary yet still have a significant impact
on outcomes in social systems. Indeed, their arbitrariness is why they have
been used to infer causal relationships among variables in numerous settings.
Regression discontinuity from econometrics assumes the existence of a
discontinuous variable that splits the population into distinct partitions to
estimate the causal effects of a given phenomenon. Here we consider the design
of partitions for a given discontinuous variable to optimize a certain effect
previously studied using regression discontinuity. To do so, we propose a
quantization-theoretic approach to optimize the effect of interest, first
learning the causal effect size of a given discontinuous variable and then
applying dynamic programming for optimal quantization design of discontinuities
that balance the gain and loss in the effect size. We also develop a
computationally efficient reinforcement learning algorithm for the dynamic
programming formulation of optimal quantization. We demonstrate our approach by
designing optimal time zone borders for counterfactuals of social capital,
social mobility, and health. This is based on regression discontinuity analyses
we perform on novel data, which may be of independent empirical interest in
showing a causal relationship between sunset time and social capital.
Comment: A short version is accepted at the ICML Neural Compression Workshop, July
19th, 202
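A minimal sketch of dynamic programming for optimal scalar quantization as described above, using a placeholder squared-deviation cell cost rather than the learned causal-effect gain/loss criterion the abstract refers to; all function names are illustrative.

```python
# Sketch: DP over contiguous cell boundaries for a 1-D quantizer (illustrative cost).
import numpy as np

def cell_cost(x, i, j):
    """Cost of one cell covering sorted grid points x[i..j] (inclusive): squared
    deviation from the cell mean. Replace with an effect-size gain/loss criterion."""
    seg = x[i:j + 1]
    return float(np.sum((seg - seg.mean()) ** 2))

def optimal_quantizer(x, K):
    """dp[k, j] = best cost of covering x[0..j] with k contiguous cells."""
    n = len(x)
    dp = np.full((K + 1, n), np.inf)
    arg = np.zeros((K + 1, n), dtype=int)
    for j in range(n):
        dp[1, j] = cell_cost(x, 0, j)
    for k in range(2, K + 1):
        for j in range(k - 1, n):
            for i in range(k - 2, j):
                c = dp[k - 1, i] + cell_cost(x, i + 1, j)
                if c < dp[k, j]:
                    dp[k, j], arg[k, j] = c, i
    # backtrack the K-1 interior boundaries
    bounds, j = [], n - 1
    for k in range(K, 1, -1):
        i = arg[k, j]
        bounds.append(i + 1)
        j = i
    return dp[K, n - 1], sorted(bounds)

if __name__ == "__main__":
    x = np.sort(np.concatenate([np.random.normal(m, 0.1, 50) for m in (0.0, 1.0, 3.0)]))
    print(optimal_quantizer(x, K=3))
```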
Quantization of Prior Probabilities for Collaborative Distributed Hypothesis Testing
This paper studies the quantization of prior probabilities, drawn from an
ensemble, for distributed detection and data fusion. Design and performance
equivalences between a team of N agents tied by a fixed fusion rule and a more
powerful single agent are obtained. Effects of identical quantization and
diverse quantization are compared. Consideration of perceived common risk
enables agents using diverse quantizers to collaborate in hypothesis testing,
and it is proven that the minimum mean Bayes risk error is achieved by diverse
quantization. The comparison shows that optimal diverse quantization with K
cells per quantizer performs as well as optimal identical quantization with
N(K-1)+1 cells per quantizer. Similar results are obtained for maximum Bayes
risk error as the distortion criterion.
Comment: 11 pages
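To make the cell-count comparison concrete, a worked instance of the stated formula with illustrative numbers (N = 3 agents, each using a diverse quantizer with K = 4 cells):

```latex
\[
  N(K-1) + 1 \;=\; 3\,(4-1) + 1 \;=\; 10,
\]
```

so three collaborating agents with distinct 4-cell quantizers are claimed to perform as well as the same team sharing one identical 10-cell quantizer.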
Unequal a priori Probability Multiple Hypothesis Testing in Space Domain Awareness with the Space Surveillance Telescope
This paper investigates the ability to improve Space Domain Awareness (SDA) by increasing the number of Resident Space Objects (RSOs) detectable by space surveillance sensors. With matched filter based techniques, the expected impulse response, or Point Spread Function (PSF), is compared against the received data. When the images are spatially undersampled, the modeled PSF may not match the received data if the RSO does not fall in the center of a pixel. This aliasing can be accounted for with a Multiple Hypothesis Test (MHT). Previously proposed MHTs have implemented the test under an equal a priori probability assumption. This paper investigates an unequal a priori probability MHT. To determine accurate a priori probabilities, three metrics are computed: a correlation metric, a physical-distance metric, and an empirical metric. Using the calculated a priori probabilities, a new algorithm is developed, and images from the Space Surveillance Telescope (SST) are analyzed. The numbers of objects detected under equal and under unequal prior probabilities are compared while the false alarm rate is held constant. Any additional detected objects help improve SDA capabilities. Abstract © 2016 Optical Society of America
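A hedged sketch of the prior-weighted matched-filter MHT idea above, using a toy Gaussian PSF and a small set of sub-pixel shift hypotheses; the model, names, and threshold are illustrative assumptions, not taken from the SST processing chain.

```python
# Sketch: matched-filter multiple hypothesis test over sub-pixel PSF shifts
# with unequal a priori probabilities (toy model, illustrative only).
import numpy as np

def gaussian_psf(shape, center, sigma=1.0):
    """Toy undersampled PSF: a unit-norm 2-D Gaussian sampled on the pixel grid."""
    y, x = np.indices(shape)
    g = np.exp(-((x - center[1]) ** 2 + (y - center[0]) ** 2) / (2 * sigma ** 2))
    return g / np.linalg.norm(g)

def mht_detect(patch, shifts, priors, noise_sigma=1.0, threshold=5.0):
    """Score each sub-pixel-shift hypothesis; declare a detection if the best
    prior-weighted score exceeds the threshold (which would be chosen to hold
    the false alarm rate constant)."""
    cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
    scores = []
    for (dy, dx), prior in zip(shifts, priors):
        template = gaussian_psf(patch.shape, (cy + dy, cx + dx))
        mf = float(np.vdot(template, patch)) / noise_sigma   # matched-filter statistic
        scores.append(mf + np.log(prior))                    # prior-weighted score
    best = int(np.argmax(scores))
    return scores[best] > threshold, shifts[best]

if __name__ == "__main__":
    shifts = [(dy, dx) for dy in (-0.25, 0.0, 0.25) for dx in (-0.25, 0.0, 0.25)]
    priors = np.ones(len(shifts))
    priors[shifts.index((0.0, 0.0))] = 3.0    # unequal priors: favor the centered hypothesis
    priors /= priors.sum()
    patch = 10 * gaussian_psf((9, 9), (4.25, 4.0)) + np.random.normal(0, 1, (9, 9))
    print(mht_detect(patch, shifts, priors))
```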