
    Submodularity and Optimality of Fusion Rules in Balanced Binary Relay Trees

    We study the distributed detection problem in a balanced binary relay tree, where the leaves of the tree are sensors generating binary messages. The root of the tree is a fusion center that makes the overall decision. Every other node in the tree is a fusion node that fuses two binary messages from its child nodes into a new binary message and sends it to the parent node at the next level. We assume that the fusion nodes at the same level use the same fusion rule, and we call a string of fusion rules used at different levels a fusion strategy. We consider the problem of finding a fusion strategy that maximizes the reduction in the total error probability between the sensors and the fusion center. We formulate this problem as a deterministic dynamic program and express the solution in terms of Bellman's equations. We introduce the notion of string submodularity and show that the reduction in the total error probability is a string-submodular function. Consequently, we show that the greedy strategy, which maximizes only the level-wise reduction in the total error probability, is within a factor of the optimal strategy in terms of the reduction in the total error probability.
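
The level-wise greedy strategy can be sketched with a toy model: assume identical sensors characterized by a false-alarm probability alpha and a miss probability beta, and restrict each level to the two monotone fusion rules AND and OR (a hypothetical simplification for illustration; the paper treats general binary fusion rules). Greedy then picks, level by level, the rule with the largest reduction in total error probability alpha + beta:

```python
def fuse(alpha, beta, rule):
    # Propagate the false-alarm (alpha) and miss (beta) probabilities
    # of a level's messages through one level of pairwise fusion.
    if rule == "AND":  # declare H1 only if both children declared H1
        return alpha ** 2, 1 - (1 - beta) ** 2
    else:              # "OR": declare H1 if either child declared H1
        return 1 - (1 - alpha) ** 2, beta ** 2

def greedy_strategy(alpha, beta, levels):
    # Level-wise greedy choice: at each level, keep the rule that
    # minimizes the resulting total error probability alpha + beta.
    strategy = []
    for _ in range(levels):
        candidates = {r: fuse(alpha, beta, r) for r in ("AND", "OR")}
        best = min(candidates, key=lambda r: sum(candidates[r]))
        alpha, beta = candidates[best]
        strategy.append(best)
    return strategy, alpha + beta
```

Running `greedy_strategy(0.2, 0.1, 4)` drives the total error probability well below its initial value of 0.3; these per-level reductions are exactly the quantities that string submodularity bounds against the optimal strategy.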

    Detection Performance in Balanced Binary Relay Trees with Node and Link Failures

    We study the distributed detection problem in the context of a balanced binary relay tree, where the leaves of the tree correspond to $N$ identical and independent sensors generating binary messages. The root of the tree is a fusion center making an overall decision. Every other node is a relay node that aggregates the messages received from its child nodes into a new message and sends it up toward the fusion center. We derive upper and lower bounds for the total error probability $P_N$ as explicit functions of $N$ in the case where nodes and links fail with certain probabilities. These bounds characterize the asymptotic decay rate of the total error probability as $N$ goes to infinity. Naturally, this decay rate is not larger than that in the non-failure case, which is $\sqrt{N}$. However, we derive an explicit necessary and sufficient condition on the decay rate of the local failure probabilities $p_k$ (the combination of node and link failure probabilities at each level) such that the decay rate of the total error probability in the failure case is the same as in the non-failure case. More precisely, we show that $\log P_N^{-1} = \Theta(\sqrt{N})$ if and only if $\log p_k^{-1} = \Omega(2^{k/2})$.
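
The condition at the end of the abstract can be checked numerically over a finite number of levels. The sketch below is a finite-horizon proxy (the constant `c` and the level range are arbitrary illustrative choices, not from the paper): a failure-probability schedule $p_k$ passes if $\log p_k^{-1} \ge c \cdot 2^{k/2}$ at every level.

```python
import math

def satisfies_condition(p, levels, c=0.5):
    # Finite-horizon proxy for log p_k^{-1} = Omega(2^{k/2}):
    # require log(1/p(k)) >= c * 2^(k/2) for k = 1, ..., levels.
    return all(math.log(1.0 / p(k)) >= c * 2 ** (k / 2)
               for k in range(1, levels + 1))

fast = lambda k: math.exp(-2 ** (k / 2))  # failures vanish fast enough
slow = lambda k: 2.0 ** (-k)              # log p_k^{-1} grows only linearly in k
```

Here `fast` meets the condition, so the $\sqrt{N}$ decay rate of the non-failure case would be preserved, while `slow` violates it already at the first level.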

    Beliefs in Decision-Making Cascades

    This work explores a social learning problem with agents having nonidentical noise variances and mismatched beliefs. We consider an $N$-agent binary hypothesis test in which each agent sequentially makes a decision based not only on a private observation, but also on preceding agents' decisions. In addition, the agents have their own beliefs instead of the true prior, and have nonidentical noise variances in the private signal. We focus on the Bayes risk of the last agent, where preceding agents are selfish. We first derive the optimal decision rule by recursive belief update and conclude, counterintuitively, that beliefs deviating from the true prior could be optimal in this setting. The effect of nonidentical noise levels in the two-agent case is also considered, and analytical properties of the optimal belief curves are given. Next, we consider a predecessor selection problem wherein the subsequent agent of a certain belief chooses a predecessor from a set of candidates with varying beliefs. We characterize the decision region for choosing such a predecessor and argue that a subsequent agent with beliefs deviating from the true prior often ends up selecting a suboptimal predecessor, indicating the need for a social planner. Lastly, we discuss an augmented intelligence design problem that uses a model of human behavior from cumulative prospect theory and investigate its near-optimality and suboptimality.
    Comment: final version, to appear in IEEE Transactions on Signal Processing.
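
The recursive belief update for a two-agent cascade can be sketched under a Gaussian shift-in-mean model (a hypothetical instantiation for illustration; the function names and signal model are not from the paper). Agent 1 applies a MAP threshold under its own belief; agent 2 folds agent 1's public decision into its prior before using its private observation:

```python
import math

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def agent1_decision(y, belief, sigma, mu=1.0):
    # MAP rule for H1: mean mu vs H0: mean 0, under prior belief = P(H1).
    llr = (mu * y - mu ** 2 / 2) / sigma ** 2
    return 1 if llr >= math.log((1 - belief) / belief) else 0

def agent2_posterior(d1, y2, belief2, belief1, sigma1, sigma2, mu=1.0):
    # Agent 2 folds agent 1's public decision d1 into its own prior
    # (assuming it knows agent 1's belief and noise level), then
    # updates with its private observation y2.
    t = sigma1 ** 2 / mu * math.log((1 - belief1) / belief1) + mu / 2
    p_d1_h1 = 1 - phi((t - mu) / sigma1) if d1 else phi((t - mu) / sigma1)
    p_d1_h0 = 1 - phi(t / sigma1) if d1 else phi(t / sigma1)
    prior = belief2 * p_d1_h1 / (belief2 * p_d1_h1 + (1 - belief2) * p_d1_h0)
    like1 = math.exp(-(y2 - mu) ** 2 / (2 * sigma2 ** 2))
    like0 = math.exp(-(y2) ** 2 / (2 * sigma2 ** 2))
    return prior * like1 / (prior * like1 + (1 - prior) * like0)
```

With equal beliefs and noise levels, observing d1 = 1 raises agent 2's posterior for H1 relative to observing d1 = 0; mismatched beliefs enter through both agent 1's threshold and agent 2's prior, which is where the belief-design questions in the abstract arise.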

    Data Fusion Trees for Detection: Does Architecture Matter?

    Quantized Consensus by the Alternating Direction Method of Multipliers: Algorithms and Applications

    Collaborative in-network processing is a major tenet in the fields of control, signal processing, information theory, and computer science. Agents operating in a coordinated fashion can gain greater efficiency and operational capability than those performing solo missions. In many such applications the central task is to compute the global average of agents' data in a distributed manner. Much recent attention has been devoted to quantized consensus, where, due to practical constraints, only quantized communications are allowed between neighboring nodes in order to achieve the average consensus. This dissertation aims to develop efficient quantized consensus algorithms based on the alternating direction method of multipliers (ADMM) for networked applications, and in particular, consensus-based detection in large-scale sensor networks. We study the effects of two commonly used uniform quantization schemes, dithered and deterministic quantization, on an ADMM-based distributed averaging algorithm. With dithered quantization, this algorithm yields linear convergence to the desired average in the mean sense with a bounded variance. When deterministic quantization is employed, the distributed ADMM either converges to a consensus or cycles with a finite period after a finite number of iterations. In the cyclic case, the local quantized variables have the same sample mean over one period, and hence each node can still reach a consensus. We then obtain an upper bound on the consensus error, which depends only on the quantization resolution and the average degree of the network. This is preferred in large-scale networks, where the range of agents' data and the size of the network may be large.
Noticing that existing quantized consensus algorithms, including the above two, adopt infinite-bit quantizers unless a bound on agents' data is known a priori, we further develop an ADMM-based quantized consensus algorithm using finite-bit bounded quantizers for possibly unbounded agents' data. By picking a small enough ADMM step size, this algorithm obtains the same consensus result as the unbounded deterministic quantizer. We then apply this algorithm to distributed detection in connected sensor networks, where each node can only exchange information with its direct neighbors. We establish that, with each node employing an identical one-bit quantizer for local information exchange, our approach achieves the optimal asymptotic performance of centralized detection. The statement is true under three different detection frameworks: the Bayesian criterion where the maximum a posteriori detector is optimal, the Neyman-Pearson criterion with a constant type-I error constraint, and the Neyman-Pearson criterion with an exponential type-I error constraint. The key to achieving optimal asymptotic performance is the use of a one-bit deterministic quantizer with a controllable threshold that yields the desired consensus error bounds.
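
The dithered-quantization setting can be sketched with a generic decentralized consensus ADMM (the recursion below is a standard textbook form for average consensus, not necessarily the exact algorithm of the dissertation; the step size and variable names are illustrative). Each node keeps a primal value x_i and a dual accumulator alpha_i, and neighbors exchange only quantized values:

```python
import math
import random

def quantize(x, delta, rng):
    # Non-subtractive dithered uniform quantizer with step delta:
    # E[quantize(x)] = x, so the quantization error is zero-mean.
    if delta == 0:
        return x  # unquantized (infinite-resolution) exchange
    return delta * math.floor(x / delta + rng.random())

def admm_consensus(r, edges, rho=1.0, delta=0.0, iters=500, seed=0):
    # Decentralized ADMM for averaging the local data r over an
    # undirected graph given as an edge list; nodes exchange only
    # (possibly quantized) values with direct neighbors.
    rng = random.Random(seed)
    n = len(r)
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    x, alpha = list(r), [0.0] * n
    for _ in range(iters):
        q = [quantize(v, delta, rng) for v in x]   # transmitted values
        x = [(r[i] - alpha[i] + rho * sum(x[i] + q[j] for j in nbrs[i]))
             / (1.0 + 2.0 * rho * len(nbrs[i])) for i in range(n)]
        q = [quantize(v, delta, rng) for v in x]   # transmitted updates
        alpha = [alpha[i] + rho * sum(x[i] - q[j] for j in nbrs[i])
                 for i in range(n)]
    return x
```

With delta = 0 the recursion reduces to exact decentralized ADMM and converges to the global average; with delta > 0 the dither keeps the quantization error zero-mean, matching the convergence-in-the-mean behavior described above.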

    The Unlucky Broker

    2010 - 2011. This dissertation collects results of the work on the interpretation, characterization and quantification of a novel topic in the field of detection theory, the Unlucky Broker problem, and its asymptotic extension. The same problem can also be applied to the context of Wireless Sensor Networks (WSNs). Suppose that a WSN is engaged in a binary detection task. Each node of the system collects measurements about the state of nature (H0 or H1) to be discovered. A common fusion center receives the observations from the sensors and implements an optimal test (for example in the Bayesian sense), exploiting its knowledge of the a priori probabilities of the hypotheses. Later, the priors used in the test are revealed to be inaccurate and a refined pair is made available. Unfortunately, at that time, only a subset of the original data is still available, along with the original decision. In the thesis, we formulate the problem in statistical terms and consider a system made of n sensors engaged in a binary detection task. A successive reduction of the data set's cardinality occurs and multiple refinements are required. The sensors are devices programmed to take the decision from the previous node in the chain and the available data, implement some simple test to decide between the hypotheses, and forward the resulting decision to the next node. The first part of the thesis shows that the optimal test is very difficult to implement even with only two nodes (the unlucky broker problem), because of the strong correlation between the available data and the decision coming from the previous node. Then, to make the designed detector implementable in practice and to ensure analytical tractability, we consider suboptimal local tests.
We choose a simple local decision strategy, following the rationale behind the optimal detector that solves the unlucky broker problem: a decision in favor of H0 is always retained by the current node, while when the decision of the previous node is in favor of H1, a local log-likelihood-based test is implemented. The main result is that, asymptotically, if we fix the false alarm probability of the first node (the one observing the full data set), the false alarm probability decreases along the chain and is nonzero at the last stage. Moreover, very surprisingly, the missed-detection probability decays exponentially fast with the square root of the number of nodes, and we provide its closed-form exponent by exploiting tools from random processes and information theory. [edited by the author]
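
The suboptimal chain rule just described can be sketched as follows, assuming a Gaussian shift-in-mean model for the observations (a hypothetical choice for illustration; the thresholds and signal parameters are not from the thesis):

```python
def node_decision(prev_decision, data, threshold, mu=1.0, sigma=1.0):
    # A decision in favor of H0 is always retained; a decision in favor
    # of H1 triggers a local log-likelihood-ratio test on the data
    # still available at this node.
    if prev_decision == 0:
        return 0
    llr = sum((mu * y - mu ** 2 / 2) / sigma ** 2 for y in data)
    return 1 if llr >= threshold else 0

def chain(initial_decision, datasets, thresholds):
    # Each node in the chain sees a further-reduced data set and the
    # decision forwarded by its predecessor.
    decision = initial_decision
    for data, t in zip(datasets, thresholds):
        decision = node_decision(decision, data, t)
    return decision
```

An H0 decision propagates unchanged to the last node, which is why the false alarm probability can only decrease along the chain, while each H1 decision is re-tested on shrinking data, which is what drives the missed-detection analysis.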