Hypothesis Testing in Feedforward Networks with Broadcast Failures
Consider a countably infinite set of nodes, which sequentially make decisions
between two given hypotheses. Each node takes a measurement of the underlying
truth, observes the decisions from some immediate predecessors, and makes a
decision between the given hypotheses. We consider two classes of broadcast
failures: 1) each node broadcasts a decision to the other nodes, subject to
random erasure in the form of a binary erasure channel; 2) each node broadcasts
a randomly flipped decision to the other nodes in the form of a binary
symmetric channel. We are interested in whether there exists a decision
strategy consisting of a sequence of likelihood ratio tests such that the node
decisions converge in probability to the underlying truth. In both cases, we
show that if each node only learns from a bounded number of immediate
predecessors, then there does not exist a decision strategy such that the
decisions converge in probability to the underlying truth. However, in case 1,
we show that if each node learns from an unboundedly growing number of
predecessors, then the decisions converge in probability to the underlying
truth, even when the erasure probabilities converge to 1. We also derive the
convergence rate of the error probability. In case 2, we show that if each node
learns from all of its predecessors, then the decisions converge in
probability to the underlying truth when the flipping probabilities of the
binary symmetric channels are bounded away from 1/2. In the case where the
flipping probabilities converge to 1/2, we derive a necessary condition on the
convergence rate of the flipping probabilities such that the decisions still
converge to the underlying truth. We also explicitly characterize the
relationship between the convergence rate of the error probability and the
convergence rate of the flipping probabilities.
IEEE Journal of Selected Topics in Signal Processing: Vol. 7, No. 5, October 2013
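The decision chain the abstract describes (each node fuses its own measurement with the possibly erased decisions of earlier nodes) can be illustrated with a rough toy simulation of case 1. This is not the paper's construction: the Gaussian measurement model, the fixed vote weight, and all parameter values below are illustrative assumptions.

```python
import random

def simulate(truth, n_nodes, window, erasure_p, mu=0.5, seed=0):
    """Toy feedforward chain under broadcast erasures (case 1).

    Node i draws a Gaussian measurement whose mean is +mu under
    hypothesis 1 and -mu under hypothesis 0, then observes the
    decisions of up to `window` immediate predecessors, each of
    which is independently erased with probability `erasure_p`
    (a binary erasure channel). The node combines everything into
    a log-likelihood-ratio-style statistic and thresholds at 0.
    """
    rng = random.Random(seed)
    decisions = []
    for i in range(n_nodes):
        x = rng.gauss(mu if truth == 1 else -mu, 1.0)
        # Exact LLR of the node's own measurement for N(+mu,1) vs N(-mu,1).
        stat = 2.0 * mu * x
        for d in decisions[-window:]:
            if rng.random() < erasure_p:
                continue  # this predecessor's broadcast was erased
            # Heuristic unit weight per received vote (an assumption,
            # not the likelihood ratio test from the paper).
            stat += 1.0 if d == 1 else -1.0
        decisions.append(1 if stat >= 0.0 else 0)
    return decisions

# Example: 200 nodes, each hearing from 10 predecessors, 30% erasures.
chain = simulate(truth=1, n_nodes=200, window=10, erasure_p=0.3, seed=1)
```

Increasing `window` with `i` (rather than keeping it bounded) mimics the unboundedly growing neighborhoods under which the paper proves convergence in probability; with a bounded `window`, the decisions can herd and lock onto the wrong hypothesis.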
1. Feature Search in the Grassmannian in Online Reinforcement Learning / Shalabh Bhatnagar, Vivek S. Borkar, Prabuchandran K.J.
2. Deterministic Sequencing of Exploration and Exploitation for Multi-Armed Bandit Problems / Sattar Vakili, Keqin Liu, Qing Zhao
3. Sequentiality and Adaptivity Gains in Active Hypothesis Testing / Mohammad Naghshvar, Tara Javidi
4. Multistage Adaptive Estimation of Sparse Signals / Dennis Wei, Alfred O. Hero
5. Hypothesis Testing in Feedforward Networks with Broadcast Failures / Zhenliang Zhang, et al.
6. Learning-Based Constraint Satisfaction with Sensing Restrictions / Alessandro Checco, Douglas J. Leith
7. Distributed Energy-Aware Diffusion Least Mean Squares: game-theoretic learning / Omid Namvar Gharehshiran, Vikram Krishnamurthy
8. Distributed Learning and Multiaccess of On-Off Channels / Shiyao Chen, Lang Tong
9. Winning the Lottery: learning perfect coordination with minimal feedback / William Zame, Jie Xu, Mihaela van der Schaar
10. Multiagent Reinforcement Learning Based Spectrum Sensing Policies for Cognitive Radio Networks / Jarmo Lunden, et al.
11. Opportunistic Spectrum Access by Exploiting Primary User Feedbacks in Underlay Cognitive Radio Systems: an optimality analysis / Kehao Wang, Lin Chen, Quan Liu
12. Maximizing Quality of Information from Multiple Sensor Devices: the exploration vs exploitation tradeoff / Ertugrul Necdet Ciftcioglu, Aylin Yener, Michael J. Neely
13. Transmit Power Control Policies for Energy Harvesting Sensors with Retransmissions / Anup Aprem, et al.
14. Robust Reputation Protocol Design for Online Communities: a stochastic stability analysis / Yu Zhang, Mihaela van der Schaar