Beliefs in Decision-Making Cascades
This work explores a social learning problem with agents having nonidentical
noise variances and mismatched beliefs. We consider an N-agent binary
hypothesis test in which each agent sequentially makes a decision based not
only on a private observation, but also on preceding agents' decisions. In
addition, the agents have their own beliefs instead of the true prior, and have
nonidentical noise variances in the private signal. We focus on the Bayes risk
of the last agent when the preceding agents are selfish.
We first derive the optimal decision rule by recursive belief update and
conclude, counterintuitively, that beliefs deviating from the true prior could
be optimal in this setting. The effect of nonidentical noise levels in the
two-agent case is also considered and analytical properties of the optimal
belief curves are given. Next, we consider a predecessor selection problem
wherein a subsequent agent with a certain belief chooses a predecessor from a
set of candidates with varying beliefs. We characterize the decision region for
choosing such a predecessor and argue that a subsequent agent whose belief
deviates from the true prior often ends up selecting a suboptimal predecessor,
indicating the need for a social planner. Lastly, we discuss an augmented
intelligence design problem that uses a model of human behavior from cumulative
prospect theory and investigate its near-optimality and suboptimality.
Comment: final version, to appear in IEEE Transactions on Signal Processing.
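The two-agent version of such a cascade can be sketched as a small Monte Carlo experiment. This is a minimal illustration, not the paper's exact model: the signal model Y = H + N(0, σ²), the noise levels s1 and s2, the equal-cost MAP rules, and the sample sizes are all assumptions made for the sketch.

```python
import math
import random

random.seed(0)
p_true = 0.5            # true prior P(H = 1); illustrative value
s1, s2 = 1.0, 0.5       # nonidentical noise standard deviations (assumed)

def qfunc(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def threshold(belief, sigma):
    """MAP threshold on y for Y = H + N(0, sigma^2) under prior `belief` on H = 1.

    Decide 1 iff (y - 0.5)/sigma^2 > log((1 - belief)/belief).
    """
    return sigma**2 * math.log((1 - belief) / belief) + 0.5

def agent2_llr_from_u1(u1, belief1):
    """Log-likelihood ratio contributed by agent 1's (selfish) decision u1."""
    t1 = threshold(belief1, s1)
    p1 = qfunc((t1 - 1) / s1)    # P(u1 = 1 | H = 1)
    p0 = qfunc(t1 / s1)          # P(u1 = 1 | H = 0)
    return math.log(p1 / p0) if u1 else math.log((1 - p1) / (1 - p0))

def last_agent_risk(belief1, belief2, n=20000):
    """Monte Carlo estimate of agent 2's error probability (Bayes risk
    with 0-1 costs) when agent 2 fuses its private signal with u1."""
    t1 = threshold(belief1, s1)
    errors = 0
    for _ in range(n):
        h = 1 if random.random() < p_true else 0
        y1 = h + random.gauss(0, s1)
        y2 = h + random.gauss(0, s2)
        u1 = 1 if y1 > t1 else 0
        llr2 = (y2 - 0.5) / s2**2 + agent2_llr_from_u1(u1, belief1)
        u2 = 1 if llr2 > math.log((1 - belief2) / belief2) else 0
        errors += (u2 != h)
    return errors / n
```

Sweeping `belief1` over (0, 1) while holding `belief2` fixed is one way to probe numerically whether a belief deviating from the true prior lowers the last agent's risk.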
Towards the Design of Prospect-Theory Based Human Decision Rules for Hypothesis Testing
Detection rules have traditionally been designed for rational agents that minimize the Bayes risk (average decision cost). With the advent of crowd-sensing systems, there is a need to redesign binary hypothesis testing rules for behavioral agents, whose cognitive behavior is not captured by traditional utility functions such as the Bayes risk. In this paper, we adopt prospect-theory-based models for decision makers. We consider two special agent models, namely optimists and pessimists, and derive optimal detection rules under different scenarios. Using an illustrative example, we also show how the decision rule of a human agent deviates from the Bayesian decision rule under the various behavioral models considered in this paper.
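One common way to model such distorted perception is the Prelec probability weighting function from prospect theory. The sketch below is a hypothetical illustration, not the paper's derivation: the functions `prelec`, `lrt_threshold`, and `behavioral_threshold`, the parameter `alpha`, and the choice to apply the weighting to the priors are all assumptions.

```python
import math

def prelec(p, alpha=0.7):
    """Prelec weighting w(p) = exp(-(-ln p)^alpha); alpha < 1 overweights
    small probabilities and underweights large ones (alpha value assumed)."""
    return math.exp(-((-math.log(p)) ** alpha))

def lrt_threshold(prior1, c10=1.0, c01=1.0):
    """Likelihood-ratio threshold of the classical Bayes test:
    decide H1 iff LR(y) > c10*(1 - prior1) / (c01*prior1)."""
    return (c10 * (1 - prior1)) / (c01 * prior1)

def behavioral_threshold(prior1, alpha=0.7):
    """Threshold when the agent perceives the priors through the Prelec
    weighting (one simple behavioral model; assumed for illustration)."""
    return prelec(1 - prior1, alpha) / prelec(prior1, alpha)
```

Comparing `lrt_threshold(p)` with `behavioral_threshold(p)` across priors shows how the weighting pushes the operating point away from the Bayesian one, in opposite directions for probability-overweighting ("optimist") versus underweighting ("pessimist") parameterizations.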
Information-theoretic analysis of human-machine mixed systems
Many recent information technologies, such as crowdsourcing and social decision-making systems, are designed based on (near-)optimal information processing techniques for machines. However, in such applications, some parts of the systems that process information are humans, so the systems are affected by the bounded rationality of human behavior and overall performance is suboptimal. In this dissertation, we consider systems that include humans and study their information-theoretic limits. We investigate four problems in this direction and show fundamental limits in terms of capacity, Bayes risk, and rate-distortion.
A system with queue-length-dependent service quality, motivated by crowdsourcing platforms, is investigated. Since human service quality changes depending on workload, a job designer must take the level of work into account. We model the workload using queueing theory and characterize Shannon's information capacity for single-user and multiuser systems.
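A toy version of queue-length-dependent service quality can be simulated directly. Everything here is assumed for illustration, not taken from the dissertation: jobs pass through a binary symmetric channel whose crossover probability grows with the instantaneous queue length, and we average the per-job BSC capacity 1 − h2(ε) over the served jobs.

```python
import math
import random

random.seed(1)

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def error_prob(queue_len, base=0.02, slope=0.03, cap=0.4):
    """Assumed workload model: the worker's error rate rises linearly with
    the queue length and saturates at `cap` (all parameters illustrative)."""
    return min(base + slope * queue_len, cap)

def avg_capacity(arrival=0.6, service=1.0, steps=50000):
    """Crude discrete-time queue simulation: each served job sees a BSC
    whose crossover probability depends on the current queue length;
    return the average per-job capacity 1 - h2(eps)."""
    q, total, served = 0, 0.0, 0
    for _ in range(steps):
        if random.random() < arrival:             # Bernoulli arrival
            q += 1
        if q > 0 and random.random() < service:   # Bernoulli service
            total += 1 - h2(error_prob(q))        # per-job BSC capacity
            served += 1
            q -= 1
    return total / max(served, 1)
```

Lowering `service` relative to `arrival` lets the queue build up, which drives the per-job error rate toward `cap` and the average capacity down, the qualitative trade-off a job designer must account for.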
We also investigate social learning as sequential binary hypothesis testing. We find, somewhat counterintuitively, that unlike in basic binary hypothesis testing, the decision threshold determined by the true prior probability is no longer optimal, and a biased perception of the true prior can outperform the unbiased-perception system. The fact that the optimal belief curve resembles the Prelec weighting function from cumulative prospect theory gives insight, in the era of artificial intelligence (AI), into how to design machine AI that supports human decision-making.
The traditional CEO problem is a good model of collaborative decision-making. We extend the CEO problem to two continuous-alphabet settings, with general rth-power-of-difference and logarithmic distortions, and study matching asymptotics of the distortion as the number of agents and the sum rate grow without bound.
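The role of the number of agents in the CEO setting can be illustrated with a simple rate-unconstrained averaging estimator. This sketch ignores quantization and rate limits entirely (so it is not the regime the dissertation analyzes); it only shows, under an assumed Gaussian observation model, why squared-error distortion falls as agents are added.

```python
import random
import statistics

random.seed(2)

def empirical_distortion(num_agents, noise_std=1.0, trials=4000):
    """Mean-squared error of the CEO's estimate when it simply averages
    the agents' unquantized noisy observations of a N(0, 1) source.
    With independent N(0, noise_std^2) observation noise, the distortion
    of this estimator decays like noise_std^2 / L in the agent count L."""
    sq_errs = []
    for _ in range(trials):
        x = random.gauss(0, 1)                                  # source
        obs = [x + random.gauss(0, noise_std) for _ in range(num_agents)]
        x_hat = sum(obs) / num_agents                           # CEO estimate
        sq_errs.append((x_hat - x) ** 2)
    return statistics.fmean(sq_errs)
```

Quadrupling the number of agents should cut the empirical distortion by roughly a factor of four; the interesting regime in the CEO problem is how this decay changes once each agent's description is rate-limited.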