Bayesian Design of Tandem Networks for Distributed Detection With Multi-bit Sensor Decisions
We consider the problem of decentralized hypothesis testing under
communication constraints in a topology where several peripheral nodes are
arranged in tandem. Each node receives an observation and transmits a message
to its successor, and the last node then decides which hypothesis is true. We
assume that the observations at different nodes are, conditioned on the true
hypothesis, independent and the channel between any two successive nodes is
considered error-free but rate-constrained. We propose a cyclic numerical
algorithm for the design of the nodes, using a person-by-person methodology
with the minimum expected error probability as the design criterion; the
number of communicated messages is not necessarily equal to the number of
hypotheses. The number of peripheral nodes in the proposed method is in
principle arbitrary and the information rate constraints are satisfied by
quantizing the input of each node. The performance of the proposed method for
different information rate constraints, in a binary hypothesis test, is
compared to the optimum rate-one solution due to Swaszek and a method proposed
by Cover, and it is shown numerically that increasing the channel rate can
significantly enhance the performance of the tandem network. Simulation results
for M-ary hypothesis tests also show that increasing the channel rates
significantly improves the performance of the tandem network.
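The effect of the channel rate can be illustrated with a small numerical sketch (not the paper's optimized person-by-person design): a two-node tandem testing H0: N(0,1) against H1: N(1,1) with equal priors, where node 1 quantizes its observation with hand-picked thresholds and node 2 fuses the received index with its own observation via a likelihood-ratio test. All thresholds below are illustrative assumptions.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tandem_error(thresholds):
    """Exact error probability of a two-node tandem test of
    H0: N(0,1) vs H1: N(1,1) with equal priors.  Node 1 quantizes
    its observation into the cells defined by `thresholds` and
    sends the cell index; node 2 fuses the index with its own
    observation via a likelihood-ratio test."""
    edges = [-math.inf] + list(thresholds) + [math.inf]
    err = 0.0
    for lo, hi in zip(edges, edges[1:]):
        a = phi(hi) - phi(lo)          # P(message | H0)
        b = phi(hi - 1) - phi(lo - 1)  # P(message | H1)
        # Node 2's own LLR is x2 - 0.5, so it decides H1 iff x2 > t.
        t = math.log(a / b) + 0.5
        err += 0.5 * (a * (1 - phi(t)) + b * phi(t - 1))
    return err

err_1bit = tandem_error([0.5])               # rate-1 link: 2 messages
err_2bit = tandem_error([-0.25, 0.5, 1.25])  # rate-2 link: 4 messages
print(err_1bit, err_2bit)  # ~0.258 vs ~0.244 (single sensor: ~0.309)
```

Even with these unoptimized thresholds the extra bit measurably lowers the error; the paper's cyclic algorithm would instead optimize the node rules jointly.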
Hypothesis Testing in Feedforward Networks with Broadcast Failures
Consider a countably infinite set of nodes, which sequentially make decisions
between two given hypotheses. Each node takes a measurement of the underlying
truth, observes the decisions from some immediate predecessors, and makes a
decision between the given hypotheses. We consider two classes of broadcast
failures: 1) each node broadcasts a decision to the other nodes, subject to
random erasure in the form of a binary erasure channel; 2) each node broadcasts
a randomly flipped decision to the other nodes in the form of a binary
symmetric channel. We are interested in whether there exists a decision
strategy consisting of a sequence of likelihood ratio tests such that the node
decisions converge in probability to the underlying truth. In both cases, we
show that if each node only learns from a bounded number of immediate
predecessors, then there does not exist a decision strategy such that the
decisions converge in probability to the underlying truth. However, in case 1,
we show that if each node learns from an unboundedly growing number of
predecessors, then the decisions converge in probability to the underlying
truth, even when the erasure probabilities converge to 1. We also derive the
convergence rate of the error probability. In case 2, we show that if each node
learns from all of its previous predecessors, then the decisions converge in
probability to the underlying truth when the flipping probabilities of the
binary symmetric channels are bounded away from 1/2. In the case where the
flipping probabilities converge to 1/2, we derive a necessary condition on the
convergence rate of the flipping probabilities such that the decisions still
converge to the underlying truth. We also explicitly characterize the
relationship between the convergence rate of the error probability and the
convergence rate of the flipping probabilities.
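The role of an unboundedly growing neighbourhood can be conveyed by a simplified calculation. Assume, unlike the dependent decisions of the actual model, that each upstream decision is independently correct with probability q > 1/2 and is observed through a binary symmetric channel with flip probability p < 1/2; a majority vote over n such observations is then correct with probability tending to 1.

```python
from math import comb

def p_observed_correct(q, p_flip):
    """An upstream decision is correct w.p. q; the binary symmetric
    channel flips it w.p. p_flip, so the observed copy is correct w.p.
    q(1 - p_flip) + (1 - q)p_flip."""
    return q * (1 - p_flip) + (1 - q) * p_flip

def p_majority_correct(n, r):
    """P(majority of n observations is correct), n odd, each
    observation independently correct w.p. r (exact binomial tail)."""
    return sum(comb(n, k) * r**k * (1 - r)**(n - k)
               for k in range(n // 2 + 1, n + 1))

r = p_observed_correct(q=0.6, p_flip=0.3)  # r = 0.54, still above 1/2
print([round(p_majority_correct(n, r), 4) for n in (1, 11, 101, 1001)])
```

Because r stays bounded above 1/2, the majority is eventually correct with high probability; if the flip probability approaches 1/2 fast enough, r approaches 1/2 and this amplification stalls, which is why the convergence rate of the flipping probabilities matters in the paper's case 2.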
Stochastic Streams: Sample Complexity vs. Space Complexity
We address the trade-off between the computational resources needed to process a large data set and the number of samples available from the data set. Specifically, we consider the following abstraction: we receive a potentially infinite stream of IID samples from some unknown distribution D, and are tasked with computing some function f(D). If the stream is observed for time t, how much memory, s, is required to estimate f(D)? We refer to t as the sample complexity and s as the space complexity. The main focus of this paper is investigating the trade-offs between the space and sample complexity. We study these trade-offs for several canonical problems studied in the data stream model: estimating the collision probability, i.e., the second moment of a distribution; deciding if a graph is connected; and approximating the dimension of an unknown subspace. Our results are based on techniques for simulating different classical sampling procedures in this model, emulating random walks given a sequence of IID samples, and leveraging a characterization relating communication-bounded protocols to statistical query algorithms.
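For the collision-probability (second-moment) example, a minimal constant-space estimator can be sketched: consume the IID stream in disjoint pairs and count how often the two samples of a pair collide. Each pair collides with probability exactly the sum of p_i squared, so the sample complexity t = 2 * n_pairs trades off against space s = O(1). The uniform distribution over four symbols below is an illustrative stand-in for the unknown D.

```python
import random

def estimate_collision_prob(sample, n_pairs):
    """Estimate the collision probability sum_i p_i^2 of an unknown
    distribution, using O(1) memory: draw disjoint pairs of IID
    samples and return the fraction of pairs that collide."""
    hits = 0
    for _ in range(n_pairs):
        if sample() == sample():  # one disjoint pair per iteration
            hits += 1
    return hits / n_pairs

random.seed(0)
est = estimate_collision_prob(lambda: random.randrange(4), 200_000)
print(est)  # true value is 4 * (1/4)^2 = 0.25
```

With t = 400,000 samples the estimate concentrates within about 0.01 of the truth; shrinking t widens the error, which is the sample-versus-space trade-off in its simplest form.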
Distributed detection by a large team of sensors in tandem
Cover title. "This paper has been submitted for publication in the IEEE Transactions on Aerospace and Electronic Systems"--Cover. Includes bibliographical references (p. 17-19). Research was supported by the National Science Foundation (under a subcontract from the University of Connecticut), NSF/IRI-8902755, and by the Office of Naval Research, ONR/N00014-84-K-0519. By Jason D. Papastavrou and Michael Athans.
Decentralized detection
Cover title. "To appear in Advances in Statistical Signal Processing, Vol. 2: Signal Detection, H.V. Poor and J.B. Thomas, Editors."--Cover. Includes bibliographical references (p. 40-43). Research supported by the ONR, N00014-84-K-0519 (NR 649-003), and by the ARO, DAAL03-86-K-0171. By John N. Tsitsiklis.
On decision making in tandem networks
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. "September 2009." Cataloged from PDF version of thesis. Includes bibliographical references (p. 81-82).
We study the convergence of Bayesian learning in a tandem social network. Each agent receives a noisy signal about the underlying state of the world, and observes her predecessor's action before choosing her own. We characterize the conditions under which, as the network grows larger, agents' beliefs converge to the true state of the world. The literature has predominantly focused on the case where the number of possible actions equals the number of alternative states. We examine the case where agents pick three-valued actions to learn one of two possible states of the world. We focus on myopic strategies, and distinguish between learning in probability and learning almost surely. We show that ternary actions are not sufficient to achieve learning (almost surely or in probability) when the likelihood ratios of the private signals are bounded. When the private signals can be arbitrarily informative (unbounded likelihood ratios), we show that there is learning in probability. Finally, we report an experimental test of how individuals learn from the behavior of others. We explore sequential decision making in a game of three players, where each decision maker observes her immediate predecessor's binary or ternary action. Our experimental design uses Amazon Mechanical Turk, and is based on a setup with continuous signals, discrete actions, and a cutoff elicitation technique introduced in [QK05]. We replicate the findings of the experimental economics literature on observational learning in the binary-action case and use them as a benchmark. We find that herds are less frequent when subjects use three actions instead of two. In addition, our results suggest that with ternary actions, behavior in the laboratory is less consistent with the predictions of Bayesian behavior than with binary actions.
By Manal Dia. M.Eng.
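Why binary actions with bounded likelihood ratios fail to aggregate information can be seen in a small exact recursion (a simplified myopic binary-action model, not the thesis's ternary analysis; the signal accuracy 0.7 and the tie-breaking convention are assumptions): once the predecessor's action is at least as informative as the private signal, agents copy it, and the accuracy of the decision sequence plateaus at the single-signal level.

```python
import math

def step(p, q):
    """One tandem step.  The next agent sees only the predecessor's
    action (correct w.p. p) and her own signal (correct w.p. q), and
    myopically follows whichever carries the larger log-likelihood
    ratio, breaking ties in favour of her own signal."""
    L_pub = math.log(p / (1 - p))
    L_sig = math.log(q / (1 - q))
    if L_sig >= L_pub:
        return q  # in every disagreement she follows her signal
    return p      # predecessor dominates: she copies, belief is stuck

q = 0.7        # bounded signal: correct with probability 0.7
p = q          # the first agent simply follows her own signal
history = []
for _ in range(50):
    p = step(p, q)
    history.append(p)
print(history[-1])  # plateaus at 0.7, bounded away from 1
```

The accuracy never rises above a single signal's 0.7, illustrating the failure of learning that motivates asking whether a third action value can help.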
Observational learning with finite memory
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 113-114).
We study a model of sequential decision making under uncertainty by a population of agents. Each agent, prior to making a decision, receives a private signal regarding a binary underlying state of the world. Moreover, she observes the actions of her last K immediate predecessors. We distinguish between the cases of bounded and unbounded informativeness of the private signals. In contrast to the literature, which typically assumes myopic agents who choose the action that maximizes the probability of making the correct decision (the decision that correctly identifies the underlying state), in our model we assume that agents are forward looking, maximizing the discounted sum of the probabilities of a correct decision by all future agents, including their own. Therefore, when making a decision, an agent takes into account the impact that this decision will have on subsequent agents. We investigate whether, in a Perfect Bayesian Equilibrium of this model, individuals' decisions converge in probability to the correct state of the world, and we show that this cannot happen for any K and any discount factor if the private signals' informativeness is bounded. As a benchmark, we analyze the design limits associated with this problem, which entail constructing decision profiles that dictate each agent's action as a function of her information set, given by her private signal and the last K decisions. We investigate the case of bounded informativeness of the private signals and ask whether there exists a decision profile under which agents' actions converge to the correct state of the world, a property that we call learning. We first study almost sure learning and prove that it is impossible under any decision rule. We then explore learning in probability, where a dichotomy arises: if K = 1, learning in probability is impossible under any decision rule, while for K ≥ 2 we design a decision rule that achieves it.
By Kimon Drakopoulos. S.M.
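The flavour of the K = 1 impossibility can be illustrated by brute force under simplifying assumptions (time-invariant rules and binary signals of accuracy 0.7, whereas the thesis allows general time-varying decision profiles): enumerating all 16 maps from (own signal, last decision) to a decision, no rule achieves long-run accuracy above that of a single signal under both states of the world.

```python
from itertools import product

def limiting_accuracy(rule, q=0.7, steps=2000):
    """rule maps (signal_bit, prev_decision_bit) -> decision_bit.
    Iterate the exact one-step recursion for r = P(decision = 1)
    under each true state theta, with P(signal = theta) = q, and
    return the worse of the two long-run accuracies."""
    accs = []
    for theta in (0, 1):
        ps = q if theta == 1 else 1 - q   # P(signal = 1 | theta)
        r = ps                            # first agent follows her signal
        for _ in range(steps):
            r = sum((ps if s else 1 - ps) * (r if d else 1 - r)
                    for s, d in product((0, 1), (0, 1))
                    if rule[(s, d)] == 1)
        accs.append(r if theta == 1 else 1 - r)
    return min(accs)

# All 16 Boolean functions of (signal, previous decision).
rules = [{(s, d): (bits >> (2 * s + d)) & 1 for s in (0, 1) for d in (0, 1)}
         for bits in range(16)]
best = max(limiting_accuracy(rule) for rule in rules)
print(best)  # 0.7: no rule beats ignoring the predecessor entirely
```

A single remembered decision cannot amplify bounded signals under any fixed rule, which is consistent with, though far weaker than, the thesis's K = 1 impossibility result for general decision profiles.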