
    On Power Allocation for Distributed Detection with Correlated Observations and Linear Fusion

    We consider a binary hypothesis testing problem in an inhomogeneous wireless sensor network, where a fusion center (FC) makes a global decision on the underlying hypothesis. We assume sensors' observations are correlated Gaussian and that sensors are unaware of this correlation when making decisions. Sensors send their modulated decisions over fading channels, subject to individual and/or total transmit power constraints. For the parallel-access channel (PAC) and multiple-access channel (MAC) models, we derive the modified deflection coefficient (MDC) of the test statistic at the FC with coherent reception. We propose a transmit power allocation scheme that maximizes the MDC of the test statistic under three different sets of transmit power constraints: a total power constraint, individual and total power constraints, and individual power constraints only. When analytical solutions to our constrained optimization problems are elusive, we discuss how these problems can be converted to convex ones. We study how correlation among sensors' observations, the reliability of local decisions, the communication channel model, channel qualities, and transmit power constraints affect the reliability of the global decision and the power allocation of the inhomogeneous sensors.
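
    A minimal sketch of the optimization step described above, assuming (hypothetically) a linear fusion statistic T = a^T y plus FC receiver noise, with mean vectors mu0, mu1 and H1 covariance Sigma1 of the received reports known. The modified deflection coefficient is maximized over per-sensor amplitudes a under a total power constraint; the model, variable names, and solver choice are illustrative, not the paper's derivation.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    K = 5                                   # number of sensors
    mu0 = np.zeros(K)                       # hypothetical report means under H0
    mu1 = rng.uniform(0.5, 1.5, K)          # hypothetical report means under H1
    A = rng.normal(size=(K, K))
    Sigma1 = A @ A.T + np.eye(K)            # hypothetical H1 report covariance (PSD)
    sigma2 = 0.5                            # FC receiver noise variance
    P_tot = 1.0                             # total transmit power budget

    def neg_mdc(a):
        # modified deflection coefficient: (a^T (mu1 - mu0))^2 / (a^T Sigma1 a + sigma2)
        num = (a @ (mu1 - mu0)) ** 2
        den = a @ Sigma1 @ a + sigma2
        return -num / den

    cons = {"type": "ineq", "fun": lambda a: P_tot - np.sum(a ** 2)}   # sum_k a_k^2 <= P_tot
    res = minimize(neg_mdc, x0=np.full(K, np.sqrt(P_tot / K)), constraints=cons)
    print("allocated amplitudes:", res.x)
    print("modified deflection coefficient:", -res.fun)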

    Distributed Detection in Sensor Networks with Limited Range Sensors

    We consider a multi-object detection problem over a sensor network (SNET) with limited-range sensors. This problem complements the widely considered decentralized detection problem, in which all sensors observe the same object. While the necessity for global collaboration is clear in the decentralized detection problem, the benefits of collaboration with limited-range sensors are unclear and have not been widely explored. In this paper we develop a distributed detection approach based on recent developments in false discovery rate (FDR) control. We first extend the FDR procedure and develop a transformation that exploits complete or partial knowledge of either the observed distributions at each sensor or the ensemble (mixture) distribution across all sensors. We then show that this transformation applies to multi-dimensional observations, thus extending FDR to multi-dimensional settings. We also extend FDR theory to cases where the distributions under both the null and positive hypotheses are uncertain. We then propose a robust distributed algorithm to perform detection. We further demonstrate scalability to large SNETs by showing that the upper bound on the communication complexity scales linearly with the number of sensors in the vicinity of objects and is independent of the total number of sensors. Finally, we deal with situations where the sensing model may be uncertain and establish the robustness of our techniques to such uncertainties. Comment: Submitted to IEEE Transactions on Signal Processing.
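
    The distributed procedure builds on FDR control; as a point of reference, here is a minimal sketch of the standard Benjamini-Hochberg step-up procedure applied to per-sensor p-values. The paper's transformation, multi-dimensional extension, and robust distributed variant are not reproduced here.

    import numpy as np

    def bh_fdr(p_values, alpha=0.05):
        """Return a boolean mask of rejected (i.e., detected) hypotheses."""
        p = np.asarray(p_values)
        m = p.size
        order = np.argsort(p)
        thresh = alpha * np.arange(1, m + 1) / m          # BH step-up thresholds
        below = p[order] <= thresh
        k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
        reject = np.zeros(m, dtype=bool)
        reject[order[:k]] = True                          # reject the k smallest p-values
        return reject

    # Illustrative use: sensors near an object tend to report small p-values.
    p = np.array([0.001, 0.20, 0.03, 0.64, 0.0004, 0.91])
    print(bh_fdr(p, alpha=0.1))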

    Bayesian Cluster Enumeration Criterion for Unsupervised Learning

    We derive a new Bayesian Information Criterion (BIC) by formulating the problem of estimating the number of clusters in an observed data set as maximization of the posterior probability of the candidate models. Given that some mild assumptions are satisfied, we provide a general BIC expression for a broad class of data distributions. This serves as a starting point when deriving the BIC for specific distributions. Along this line, we provide a closed-form BIC expression for multivariate Gaussian distributed variables. We show that incorporating the data structure of the clustering problem into the derivation of the BIC results in an expression whose penalty term is different from that of the original BIC. We propose a two-step cluster enumeration algorithm. First, a model-based unsupervised learning algorithm partitions the data according to a given set of candidate models. Subsequently, the number of clusters is determined as the one associated with the model for which the proposed BIC is maximal. The performance of the proposed two-step algorithm is tested using synthetic and real data sets. Comment: 14 pages, 7 figures.
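
    A minimal sketch of the two-step idea using scikit-learn: fit a Gaussian mixture for each candidate number of clusters, then pick the candidate with the best information-criterion score. Note that this uses the classical BIC provided by scikit-learn (lower is better), not the cluster-aware penalty term derived in the paper; the synthetic data set is illustrative.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # three well-separated 2-D Gaussian clusters of 100 points each
    X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2)) for c in ((0, 0), (3, 0), (0, 3))])

    candidates = range(1, 7)
    bics = []
    for k in candidates:
        gmm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(X)  # step 1: model-based partitioning
        bics.append(gmm.bic(X))                                                 # step 2: score the candidate model
    print("estimated number of clusters:", list(candidates)[int(np.argmin(bics))])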

    Detection of multiplicative noise in stationary random processes using second- and higher order statistics

    This paper addresses the problem of detecting the presence of colored multiplicative noise when the information process can be modeled as a parametric ARMA process. For the case of zero-mean multiplicative noise, a cumulant-based suboptimal detector is studied. This detector tests the nullity of a specific cumulant slice. A second detector is developed for the case of nonzero-mean multiplicative noise. This detector consists of filtering the data by an estimated AR filter; cumulants of the residual data are then shown to be well suited to the detection problem. Theoretical expressions for the asymptotic probability of detection are given, and simulation-derived finite-sample ROC curves are shown for different sets of model parameters.
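
    A minimal sketch in the spirit of a cumulant-slice nullity test, not the paper's detector: estimate the fourth-order cumulant slice c4(0,0,0) (the excess kurtosis) of the centered data, which is zero for a Gaussian information process alone but nonzero when zero-mean multiplicative noise is present. The chosen slice, the normalization, and the threshold are illustrative; the paper's asymptotic analysis is not reproduced.

    import numpy as np

    def c4_slice_test(x, threshold=3.0):
        x = np.asarray(x, dtype=float) - np.mean(x)
        n = x.size
        c4 = np.mean(x ** 4) - 3.0 * np.mean(x ** 2) ** 2            # sample cumulant slice c4(0,0,0)
        # crude large-sample standard error of the estimator (illustrative only)
        se = np.std(x ** 4 - 3.0 * np.mean(x ** 2) * x ** 2) / np.sqrt(n)
        return abs(c4) / se > threshold                              # declare "multiplicative noise present"

    rng = np.random.default_rng(3)
    s = rng.normal(size=20000)                     # Gaussian information process
    m = rng.normal(size=20000)                     # zero-mean multiplicative noise
    print(c4_slice_test(s), c4_slice_test(s * m))  # typically: False True (the product has excess kurtosis)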

    Asymmetric Pruning for Learning Cascade Detectors

    Cascade classifiers are one of the most important contributions to real-time object detection. Nonetheless, many challenging problems arise in training cascade detectors. One common issue is that the node classifier is trained with a symmetric learning objective: a low misclassification error rate does not guarantee the optimal node learning goal of cascade classifiers, i.e., an extremely high detection rate with a moderate false positive rate. In this work, we present a new approach to training an effective node classifier in a cascade detector. The algorithm is based on two key observations: 1) redundant weak classifiers can be safely discarded; and 2) the final detector should satisfy the asymmetric learning objective of the cascade architecture. To achieve this, we separate classifier training into two steps: finding a pool of discriminative weak classifiers/features, and training the final classifier by pruning the weak classifiers that contribute little to the asymmetric learning criterion (asymmetric classifier construction). Our model-reduction approach helps accelerate the learning time while achieving the pre-determined learning objective. Experimental results on both face and car data sets verify the effectiveness of the proposed algorithm. On the FDDB face data set, our approach achieves state-of-the-art performance, which demonstrates the advantage of our approach. Comment: 14 pages.
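
    A minimal sketch of the two-step idea: (1) learn a pool of weak classifiers with AdaBoost, then (2) greedily drop the weak classifiers whose removal costs the least under an asymmetric node goal (detection rate held near the full ensemble's, at the lowest achievable false-positive rate). The pruning criterion, data set, and thresholds below are simple stand-ins, not the paper's asymmetric objective or its feature pool.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier

    X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7, 0.3], random_state=0)
    ada = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

    preds = np.array([2 * est.predict(X) - 1 for est in ada.estimators_])   # cached weak outputs in {-1, +1}
    w = ada.estimator_weights_

    def rates(keep):
        score = w[keep] @ preds[keep]              # weighted vote of the kept weak classifiers
        pred = (score > 0).astype(int)
        det = np.mean(pred[y == 1] == 1)           # detection rate on positives
        fp = np.mean(pred[y == 0] == 1)            # false-positive rate on negatives
        return det, fp

    keep = list(range(len(ada.estimators_)))
    d_min = 0.95 * rates(keep)[0]                  # asymmetric node goal: retain most of the detection rate
    while len(keep) > 1:
        best = None
        for i in keep:                             # try removing each remaining weak classifier
            det, fp = rates([j for j in keep if j != i])
            if det >= d_min and (best is None or fp < best[1]):
                best = (i, fp)
        if best is None:                           # no removal meets the detection constraint
            break
        keep.remove(best[0])
    print("kept", len(keep), "of", preds.shape[0], "weak classifiers; (detection, false positive) =", rates(keep))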

    A Game-Theoretic Framework for Optimum Decision Fusion in the Presence of Byzantines

    Optimum decision fusion in the presence of malicious nodes, often referred to as Byzantines, is hindered by the necessity of exactly knowing the statistical behavior of the Byzantines. By focusing on a simple, yet widely studied, setup in which a Fusion Center (FC) is asked to make a binary decision about a sequence of system states by relying on the possibly corrupted decisions provided by local nodes, we propose a game-theoretic framework that makes it possible to exploit the superior performance of optimum decision fusion while limiting the amount of a priori knowledge required. We first derive the optimum decision strategy by assuming that the statistical behavior of the Byzantines is known. We then relax this assumption by casting the problem into a game-theoretic framework in which the FC tries to guess the behavior of the Byzantines, who, in turn, must fix their corruption strategy without knowing the guess made by the FC. We use numerical simulations to derive the equilibrium of the game, thus identifying the optimum behavior for both the FC and the Byzantines, and to evaluate the achievable performance at the equilibrium. We analyze several different setups, showing that in all cases the proposed solution improves the accuracy of data fusion. We also show that, in some instances, it is preferable for the Byzantines to minimize the mutual information between the state of the observed system and the reports submitted to the FC, rather than to always flip the decisions made by the local nodes, as is customarily assumed in previous works.
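
    A minimal sketch of the fusion side under a known Byzantine strategy, assuming a simplified i.i.d. model: each node is Byzantine with probability alpha and flips its local decision with probability p_mal, honest nodes err with probability eps, and the FC fuses the one-bit reports with a log-likelihood-ratio rule. The parameters and model are illustrative; the game-theoretic search for the equilibrium strategies is not reproduced.

    import numpy as np

    def fuse(reports, eps, alpha, p_mal):
        # effective probability that any single report disagrees with the true state
        delta = (1 - alpha) * eps + alpha * (eps * (1 - p_mal) + (1 - eps) * p_mal)
        r = np.asarray(reports)
        # LLR of H1 vs H0 for equiprobable states and i.i.d. reports
        llr = np.sum(r * np.log((1 - delta) / delta) + (1 - r) * np.log(delta / (1 - delta)))
        return int(llr > 0)

    rng = np.random.default_rng(4)
    state, eps, alpha, p_mal, n = 1, 0.1, 0.3, 1.0, 11
    byz = rng.random(n) < alpha                                  # which nodes are Byzantine
    local = np.where(rng.random(n) < eps, 1 - state, state)      # noisy local decisions
    reports = np.where(byz & (rng.random(n) < p_mal), 1 - local, local)
    print("FC decision:", fuse(reports, eps, alpha, p_mal), "| true state:", state)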