2,059 research outputs found

    Minimum mean Bayes risk error quantization of prior probabilities

    Bayesian hypothesis testing is investigated when the prior probabilities of the hypotheses, taken as a random vector, must be quantized. Nearest neighbor and centroid conditions for quantizer optimality are derived using mean Bayes risk error as a distortion measure. An example of optimal quantization for hypothesis testing is provided. Human decision making is briefly studied assuming quantized prior Bayesian hypothesis testing; this model explains several experimental findings. Index Terms: quantization, categorization, Bayesian hypothesis testing, signal detection, Bayes risk error.
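    The nearest neighbor and centroid conditions can be alternated, Lloyd-style, to design such a quantizer numerically. Below is a minimal sketch under assumptions that are mine rather than the paper's: a scalar Gaussian shift-in-noise test with equal error costs, the prior on H0 drawn uniformly on (0,1), and illustrative values for the signal level MU and the number of cells K.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Illustrative setup (assumed, not from the paper): H0: X ~ N(0,1) vs
# H1: X ~ N(MU,1), equal error costs, p = P(H0) with p ~ Uniform(0,1).
MU = 2.0

def bayes_risk(p, a):
    """Bayes risk when the true prior on H0 is p but the likelihood-ratio
    threshold is designed for the (possibly quantized) prior a."""
    a = np.clip(a, 1e-9, 1 - 1e-9)
    thr = MU / 2 + np.log(a / (1 - a)) / MU  # LRT threshold on x
    p_false_alarm = norm.sf(thr)             # P(decide H1 | H0)
    p_miss = norm.cdf(thr - MU)              # P(decide H0 | H1)
    return p * p_false_alarm + (1 - p) * p_miss

def bre(p, a):
    """Bayes risk error: excess risk from acting on a instead of p."""
    return bayes_risk(p, a) - bayes_risk(p, p)

def lloyd_mbre(K=4, n_grid=2001, iters=100):
    """Alternate the nearest-neighbor and centroid conditions to minimize
    mean Bayes risk error over a fine grid of priors."""
    p = np.linspace(1e-4, 1 - 1e-4, n_grid)  # grid standing in for the density
    reps = np.linspace(0.1, 0.9, K)          # initial representation points
    for _ in range(iters):
        # Nearest-neighbor condition: send each p to the rep with least BRE.
        cells = np.argmin([bre(p, a) for a in reps], axis=0)
        # Centroid condition: each rep minimizes the mean BRE over its cell.
        for k in range(K):
            mask = cells == k
            if mask.any():
                reps[k] = minimize_scalar(
                    lambda a: bre(p[mask], a).mean(),
                    bounds=(1e-6, 1 - 1e-6), method="bounded").x
    return np.sort(reps), bre(p, reps[cells]).mean()

if __name__ == "__main__":
    reps, mbre = lloyd_mbre(K=4)
    print("representation points:", np.round(reps, 4))
    print("mean Bayes risk error:", mbre)
```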

    Quantization of Prior Probabilities for Collaborative Distributed Hypothesis Testing

    This paper studies the quantization of prior probabilities, drawn from an ensemble, for distributed detection and data fusion. Design and performance equivalences between a team of N agents tied by a fixed fusion rule and a more powerful single agent are obtained. Effects of identical quantization and diverse quantization are compared. Consideration of perceived common risk enables agents using diverse quantizers to collaborate in hypothesis testing, and it is proven that the minimum mean Bayes risk error is achieved by diverse quantization. The comparison shows that optimal diverse quantization with K cells per quantizer performs as well as optimal identical quantization with N(K-1)+1 cells per quantizer. Similar results are obtained for maximum Bayes risk error as the distortion criterion. Comment: 11 pages.
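    The N(K-1)+1 equivalence has a simple counting flavor: if the N agents' K-cell quantizers of the prior are staggered so that no two agents share an interior breakpoint, the team's pooled partition has N(K-1) distinct breakpoints and hence N(K-1)+1 cells. The toy script below only illustrates that count; the staggered construction is mine, not the paper's proof.

```python
import numpy as np

# N diverse quantizers, each with K cells on [0,1], with interior
# breakpoints staggered so no two agents share one (illustrative).
N, K = 3, 4
breakpoints = [
    (np.arange(1, K) + i / N) / K  # the K-1 interior breakpoints of agent i
    for i in range(N)
]
union = np.unique(np.concatenate(breakpoints))
print("distinct interior breakpoints:", union.size)      # N*(K-1) = 9
print("effective cells for the team :", union.size + 1)  # N*(K-1)+1 = 10
```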

    Quantization of Prior Probabilities for Hypothesis Testing

    Bayesian hypothesis testing is investigated when the prior probabilities of the hypotheses, taken as a random vector, are quantized. Nearest neighbor and centroid conditions are derived using mean Bayes risk error as a distortion measure for quantization. A high-resolution approximation to the distortion-rate function is also obtained. Human decision making in segregated populations is studied assuming Bayesian hypothesis testing with quantized priors.
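    One way to see the high-resolution behavior numerically: because the Bayes risk error is locally quadratic in the representation point around the true prior, the minimum mean Bayes risk error should fall off roughly as 1/K^2 in the number of cells K. Reusing the lloyd_mbre sketch given after the first abstract above (same illustrative Gaussian setup):

```python
# If the high-resolution approximation holds, K^2 * MBRE levels off.
for K in (2, 4, 8, 16):
    _, d = lloyd_mbre(K=K)
    print(f"K={K:2d}  MBRE={d:.2e}  K^2*MBRE={d * K**2:.3f}")
```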

    Beliefs in Decision-Making Cascades

    This work explores a social learning problem with agents having nonidentical noise variances and mismatched beliefs. We consider an N-agent binary hypothesis test in which each agent sequentially makes a decision based not only on a private observation, but also on preceding agents' decisions. In addition, the agents have their own beliefs instead of the true prior, and have nonidentical noise variances in the private signal. We focus on the Bayes risk of the last agent, where preceding agents are selfish. We first derive the optimal decision rule by recursive belief update and conclude, counterintuitively, that beliefs deviating from the true prior could be optimal in this setting. The effect of nonidentical noise levels in the two-agent case is also considered and analytical properties of the optimal belief curves are given. Next, we consider a predecessor selection problem wherein the subsequent agent of a certain belief chooses a predecessor from a set of candidates with varying beliefs. We characterize the decision region for choosing such a predecessor and argue that a subsequent agent with beliefs varying from the true prior often ends up selecting a suboptimal predecessor, indicating the need for a social planner. Lastly, we discuss an augmented intelligence design problem that uses a model of human behavior from cumulative prospect theory and investigate its near-optimality and suboptimality. Comment: final version, to appear in IEEE Transactions on Signal Processing.
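    To make the recursive belief update concrete, here is a minimal two-agent Monte Carlo sketch. The Gaussian observation model, noise levels, and belief values are illustrative choices of mine; the paper's N-agent formulation and its optimality analysis are not reproduced. Agent 2 folds agent 1's decision into its belief by Bayes' rule, then applies its own likelihood-ratio test.

```python
import numpy as np
from scipy.stats import norm

# Assumed model: under H0, Y_i ~ N(0, s_i^2); under H1, Y_i ~ N(MU, s_i^2).
# Agent i holds belief b_i = P(H1), which need not equal the true prior.
MU = 1.0
TRUE_PRIOR = 0.5  # true P(H1)

def lrt(y, belief, sigma):
    """Decide 1 iff the log-likelihood ratio exceeds the belief threshold."""
    llr = (2 * y - MU) * MU / (2 * sigma**2)
    return int(llr > np.log((1 - belief) / belief))

def agent1_error_probs(b1, s1):
    """(P(d1=1 | H0), P(d1=0 | H1)) for agent 1's threshold test."""
    thr = MU / 2 + s1**2 * np.log((1 - b1) / b1) / MU
    return norm.sf(thr / s1), norm.cdf((thr - MU) / s1)

def agent2_decision(y2, d1, b1, b2, s1, s2):
    """Agent 2 updates its belief with d1 by Bayes' rule, then runs an LRT."""
    pfa, pm = agent1_error_probs(b1, s1)
    like1 = (1 - pm) if d1 == 1 else pm    # P(d1 | H1)
    like0 = pfa if d1 == 1 else (1 - pfa)  # P(d1 | H0)
    post = b2 * like1 / (b2 * like1 + (1 - b2) * like0)
    return lrt(y2, post, s2)

def cascade_risk(b1, b2, s1=1.0, s2=0.8, n=50_000, seed=0):
    """Monte Carlo estimate of the last agent's probability of error."""
    rng = np.random.default_rng(seed)
    h = rng.random(n) < TRUE_PRIOR  # true hypothesis per trial
    y1 = rng.normal(h * MU, s1)
    y2 = rng.normal(h * MU, s2)
    errs = sum(agent2_decision(b, lrt(a, b1, s1), b1, b2, s1, s2) != hi
               for hi, a, b in zip(h, y1, y2))
    return errs / n

if __name__ == "__main__":
    print("matched beliefs:", cascade_risk(0.5, 0.5))
    print("tilted belief  :", cascade_risk(0.4, 0.5))
```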

    Beliefs and expertise in sequential decision making

    This work explores a sequential decision making problem with agents having diverse expertise and mismatched beliefs. We consider an N-agent sequential binary hypothesis test in which each agent sequentially makes a decision based not only on a private observation, but also on previous agents' decisions. In addition, the agents have their own beliefs instead of the true prior, and have varying expertise in terms of the noise variance in the private signal. We focus on the risk of the last-acting agent, where preceding agents are selfish. Thus, we call this advisor(s)-advisee sequential decision making. We first derive the optimal decision rule by recursive belief update and conclude, counterintuitively, that beliefs deviating from the true prior could be optimal in this setting. The impact of diverse noise levels (which means diverse expertise levels) in the two-agent case is also considered and the analytical properties of the optimal belief curves are given. These curves, for certain cases, resemble probability weighting functions from cumulative prospect theory, and so we also discuss the choice of Prelec weighting functions as an approximation for the optimal beliefs, and the possible psychophysical optimality of human beliefs. Next, we consider an advisor selection problem wherein the advisee of a certain belief chooses an advisor from a set of candidates with varying beliefs. We characterize the decision region for choosing such an advisor and argue that an advisee with beliefs varying from the true prior often ends up selecting a suboptimal advisor, indicating the need for a social planner. We close with a discussion on the implications of the study toward designing artificial intelligence systems for augmenting human intelligence. https://arxiv.org/abs/1812.04419 First author draft.
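    Since the abstract points to Prelec weighting functions as an approximation for the optimal beliefs, here is the standard Prelec form for reference; the parameter values are illustrative, not fitted to the paper's curves.

```python
import numpy as np

def prelec(p, alpha=0.65, beta=1.0):
    """Prelec weighting function w(p) = exp(-beta * (-ln p)^alpha).
    alpha < 1 yields the inverse-S shape familiar from cumulative
    prospect theory; alpha and beta here are illustrative values."""
    p = np.clip(p, 1e-12, 1.0)
    return np.exp(-beta * (-np.log(p)) ** alpha)

print(np.round(prelec(np.array([0.01, 0.1, 0.5, 0.9, 0.99])), 3))
```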

    Keep Ballots Secret: On the Futility of Social Learning in Decision Making by Voting

    We show that social learning is not useful in a model of team binary decision making by voting, where each vote carries equal weight. Specifically, we consider Bayesian binary hypothesis testing where agents have any conditionally-independent observation distribution and their local decisions are fused by any L-out-of-N fusion rule. The agents make local decisions sequentially, with each allowed to use its own private signal and all preceding local decisions. Though social learning generally occurs, in that preceding local decisions affect an agent's belief, optimal team performance is obtained when all preceding local decisions are ignored. Thus, social learning is futile, and secret ballots are optimal. This contrasts with typical studies of social learning because we include a fusion center rather than concentrating on the performance of the latest-acting agents.
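    For context on the fusion model: with conditionally i.i.d. local decisions (the secret-ballot strategy), the team error under an L-out-of-N rule reduces to two binomial tails. A small sketch with illustrative per-agent error probabilities:

```python
from math import comb

def team_error(N, L, pfa, pm, prior1=0.5):
    """P(error) at the fusion center for an L-out-of-N rule, assuming
    conditionally i.i.d. local decisions with per-agent error
    probabilities pfa = P(vote 1 | H0) and pm = P(vote 0 | H1)."""
    # False alarm: at least L of the N votes favor H1 under H0.
    team_fa = sum(comb(N, k) * pfa**k * (1 - pfa)**(N - k)
                  for k in range(L, N + 1))
    # Miss: fewer than L votes favor H1 under H1 (each succeeds w.p. 1-pm).
    team_miss = sum(comb(N, k) * (1 - pm)**k * pm**(N - k)
                    for k in range(L))
    return (1 - prior1) * team_fa + prior1 * team_miss

print(team_error(N=5, L=3, pfa=0.2, pm=0.2))  # majority vote, toy numbers
```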

    Decentralized Detection With Correlated Gaussian Observations: Parallel And Tandem Networks With Two Sensors

    Signal detection in cognitive radio involves determining the presence or absence of a primary user signal so that the secondary user may opportunistically gain access when the spectrum is unoccupied. In a decentralized sensing scheme, two or more secondary users sense the spectrum, process their individual observations, and then pass the quantized data to a fusion center, where a decision is made as to which hypothesis is true, that is, whether a signal is present or absent. In the second part of the thesis, we study the Bayes error performance of a two-sensor tandem network designed to detect the presence or absence of deterministic signals in correlated Gaussian noise; hence, the correlation coefficient remains identical under both hypotheses. Specifically, we address the question of which sensor ought to serve as the fusion center for optimal detection performance. In addressing this question, we draw some inferences parallel to the "Good, Bad and Ugly" signal regions formulated originally for the two-sensor, one-bit-per-sensor parallel fusion network by Willett et al. In the tandem "Good" region, numerical results conclusively show that placing the better sensor, i.e., the sensor with the higher signal-to-noise ratio, as the fusion center is preferred for better detection performance. In the first part of the thesis, we study the error performance of a parallel network consisting of two sensors. In the parallel configuration, each sensor quantizes its own observation into a single bit and transmits it to the fusion center. At the fusion center, the performance of the AND and OR rules is examined by assuming the observations at the two sensors are jointly Gaussian, with specific means, variances, and correlation coefficient, under hypothesis H1, whereas the observations under H0 are still Gaussian with specific means and variances but are statistically independent. The optimum quantizers at each sensor are found by minimizing the probability of error at the fusion center; we use a genetic algorithm (GA) to find a sub-optimal solution. It was observed that, when the prior probabilities of the hypotheses are equal, AND performs at least as well as OR.
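    As a rough illustration of the parallel-network design problem in the first part, the sketch below grid-searches the two single-bit thresholds to minimize the fusion-center error probability under the AND rule, with observations correlated under H1 and independent under H0. All numerical values are illustrative, and a plain grid search stands in for the thesis's genetic algorithm.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

MU, RHO, P0 = 1.0, 0.5, 0.5  # H1 mean, H1 correlation, P(H0); illustrative

def error_prob_and(t1, t2):
    """Fusion-center P(error) for the AND rule: decide H1 iff both
    sensors' observations exceed their quantizer thresholds."""
    # False alarm: both exceed under H0 (assumed independent standard normals).
    pfa = norm.sf(t1) * norm.sf(t2)
    # Detection: both exceed under H1 (correlated, means MU), using
    # P(X1 > t1, X2 > t2) = P(-X1 <= -t1, -X2 <= -t2).
    pd = multivariate_normal(mean=[-MU, -MU],
                             cov=[[1.0, RHO], [RHO, 1.0]]).cdf([-t1, -t2])
    return P0 * pfa + (1 - P0) * (1 - pd)

# Grid search over the two thresholds (stand-in for the GA).
grid = np.linspace(-1.0, 3.0, 41)
best = min((error_prob_and(a, b), a, b) for a in grid for b in grid)
print("AND rule: min P(e)=%.4f at thresholds (%.2f, %.2f)" % best)
```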