10 research outputs found

    Distributed Regression in Sensor Networks: Training Distributively with Alternating Projections

    Full text link
    Wireless sensor networks (WSNs) have attracted considerable attention in recent years and motivate a host of new challenges for distributed signal processing. The problem of distributed or decentralized estimation has often been considered in the context of parametric models. However, the success of parametric methods is limited by the appropriateness of the strong statistical assumptions made by the models. In this paper, a more flexible nonparametric model for distributed regression is considered that is applicable in a variety of WSN applications, including field estimation. Here, starting with the standard regularized kernel least-squares estimator, a message-passing algorithm for distributed estimation in WSNs is derived. The algorithm can be viewed as an instantiation of the successive orthogonal projection (SOP) algorithm. Various practical aspects of the algorithm are discussed and several numerical simulations validate the potential of the approach.
    Comment: To appear in the Proceedings of the SPIE Conference on Advanced Signal Processing Algorithms, Architectures and Implementations XV, San Diego, CA, July 31 - August 4, 200
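
As a point of reference for the abstract above, here is a minimal centralized sketch of the regularized kernel least-squares estimator it starts from (a Gaussian kernel and illustrative bandwidth/regularization values are assumed; the paper's distributed SOP message passing is not reproduced):

```python
import numpy as np

def gaussian_kernel(X, Z, bandwidth=0.5):
    """Gaussian (RBF) kernel matrix between row-stacked sample sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def kernel_ridge_fit(X, y, lam=1e-2, bandwidth=0.5):
    """Regularized kernel least squares: alpha = (K + lam*n*I)^{-1} y."""
    n = len(y)
    K = gaussian_kernel(X, X, bandwidth)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def kernel_ridge_predict(X_train, alpha, X_new, bandwidth=0.5):
    """Field estimate at new locations."""
    return gaussian_kernel(X_new, X_train, bandwidth) @ alpha

# Example: noisy samples of a scalar field taken at random sensor positions.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 2))                   # sensor locations
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = kernel_ridge_fit(X, y)
print(kernel_ridge_predict(X, alpha, X[:5]))
```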

    A scheme for robust distributed sensor fusion based on average consensus

    Get PDF
    We consider a network of distributed sensors, where each sensor takes a linear measurement of some unknown parameters, corrupted by independent Gaussian noises. We propose a simple distributed iterative scheme, based on distributed average consensus in the network, to compute the maximum-likelihood estimate of the parameters. This scheme does not involve explicit point-to-point message passing or routing; instead, it diffuses information across the network by updating each node's data with a weighted average of its neighbors' data (all nodes maintain the same data structure). At each step, every node can compute a local weighted least-squares estimate, which converges to the global maximum-likelihood solution. This scheme is robust to unreliable communication links. We show that it works in a network with dynamically changing topology, provided that the infinitely occurring communication graphs are jointly connected.
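
A minimal sketch of the consensus idea described above, under assumed conditions (ring topology, Metropolis-style weights, equal noise variances): each node runs average consensus on its local sufficient statistics (A_i^T A_i, A_i^T y_i), after which any node can solve for the global maximum-likelihood estimate locally.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim = 20, 2
theta_true = np.array([1.0, -0.5])

# Each node i takes one linear measurement y_i = a_i^T theta + noise.
A = rng.standard_normal((n_nodes, 1, dim))
y = A @ theta_true + 0.1 * rng.standard_normal((n_nodes, 1))

# Local sufficient statistics of the Gaussian ML problem (equal noise variances).
P = np.einsum('nij,nik->njk', A, A)   # A_i^T A_i, shape (n_nodes, dim, dim)
q = np.einsum('nij,ni->nj', A, y)     # A_i^T y_i, shape (n_nodes, dim)

# Assumed ring topology with Metropolis-style consensus weights (doubly stochastic).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in ((i - 1) % n_nodes, (i + 1) % n_nodes):
        W[i, j] = 1.0 / 3.0           # 1 / (max(deg_i, deg_j) + 1) on a ring
    W[i, i] = 1.0 - W[i].sum()

# Consensus iterations: every node replaces its data by a weighted neighbour average.
for _ in range(300):
    P = np.einsum('ij,jkl->ikl', W, P)
    q = W @ q

# All nodes now hold (nearly) the network-wide averages and can solve locally:
# (mean of A_i^T A_i)^{-1} (mean of A_i^T y_i) is the global ML / weighted LS estimate.
print(np.linalg.solve(P[0], q[0]))    # close to theta_true
```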

    Decentralized Maximum Likelihood Estimation for Sensor Networks Composed of Nonlinearly Coupled Dynamical Systems

    Full text link
    In this paper we propose a decentralized sensor network scheme capable of reaching a globally optimal maximum likelihood (ML) estimate through self-synchronization of nonlinearly coupled dynamical systems. Each node of the network is composed of a sensor and a first-order dynamical system initialized with the local measurements. Nearby nodes interact with each other by exchanging their state values, and the final estimate is associated with the state derivative of each dynamical system. We derive the conditions on the coupling mechanism guaranteeing that, if the network observes one common phenomenon, each node converges to the globally optimal ML estimate. We prove that the synchronized state is globally asymptotically stable if the coupling strength exceeds a given threshold. Acting on a single parameter, the coupling strength, we show how, in the case of nonlinear coupling, the network behavior can switch from a global consensus system to a spatial clustering system. Finally, we show the effect of the network topology on the scalability properties of the network and we validate our theoretical findings with simulation results.
    Comment: Journal paper accepted by IEEE Transactions on Signal Processing
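
A toy illustration of the self-synchronization idea (not the paper's exact model): each node integrates a Kuramoto-style first-order system whose natural frequency is its local measurement and reads off its estimate from the state derivative. With symmetric coupling the average state derivative always equals the measurement average, so once the coupling strength exceeds the synchronization threshold every node's derivative converges to the sample mean (the ML estimate for i.i.d. Gaussian noise); with weak coupling the derivatives stay near the local measurements.

```python
import numpy as np

def simulate_sync(K, steps=20000, dt=1e-3, seed=2):
    """Toy self-synchronization sketch: each node runs
    dx_i/dt = omega_i + K * sum_j a_ij * sin(x_j - x_i),
    where omega_i is its local noisy measurement, and its estimate is the
    state derivative after the network has settled."""
    rng = np.random.default_rng(seed)
    n = 10
    omega = 1.0 + 0.2 * rng.standard_normal(n)    # local measurements of a value near 1.0
    x = rng.uniform(-1.0, 1.0, n)                 # initial states
    A = np.zeros((n, n))                          # assumed ring coupling topology
    for i in range(n):
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0

    def deriv(x):
        return omega + K * (A * np.sin(x[None, :] - x[:, None])).sum(axis=1)

    for _ in range(steps):                        # forward-Euler integration
        x = x + dt * deriv(x)
    return deriv(x)                               # per-node estimates (state derivatives)

print(simulate_sync(K=5.0))    # strong coupling: every entry close to omega.mean()
print(simulate_sync(K=0.01))   # weak coupling: entries stay close to local measurements
```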

    Target Tracking in Wireless Sensor Networks

    Get PDF

    Robust Distributed Estimation in Sensor Networks using the Embedded Polygons Algorithm

    No full text
    Conference Paper
    We propose a new iterative distributed algorithm for linear minimum mean-squared-error (LMMSE) estimation in sensor networks whose measurements follow a Gaussian hidden Markov graphical model with cycles. The embedded polygons algorithm decomposes a loopy graphical model into a number of linked embedded polygons and then applies a parallel block Gauss-Seidel iteration comprising local LMMSE estimation on each polygon (involving inversion of a small matrix) followed by an information exchange between neighboring nodes and polygons. The algorithm is robust to temporary communication faults such as link failures and sleeping nodes and enjoys guaranteed convergence under mild conditions. A simulation study indicates that energy consumption for iterative estimation increases substantially as more links fail or nodes sleep. Thus, somewhat surprisingly, energy conservation strategies such as low-powered transmission and aggressive sleep schedules could actually be counterproductive.
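
For context, a generic block Gauss-Seidel sketch for the information-form equations J x = h of a Gaussian graphical model is given below (the paper's embedded-polygons decomposition, message schedule, and fault handling are not reproduced; the block partition here is an arbitrary illustration):

```python
import numpy as np

def block_gauss_seidel(J, h, blocks, n_iter=100):
    """Solve J x = h (Gaussian graphical model in information form) by sweeping
    over blocks of nodes: x_B <- J_BB^{-1} (h_B - J_{B,rest} x_rest).
    Converges when J is symmetric positive definite."""
    x = np.zeros_like(h)
    for _ in range(n_iter):
        for B in blocks:
            rest = np.setdiff1d(np.arange(len(h)), B)
            x[B] = np.linalg.solve(J[np.ix_(B, B)],
                                   h[B] - J[np.ix_(B, rest)] @ x[rest])
    return x

# Toy 4-node loopy model (a single cycle), partitioned into two 2-node blocks.
J = np.array([[ 2.0, -0.5,  0.0, -0.5],
              [-0.5,  2.0, -0.5,  0.0],
              [ 0.0, -0.5,  2.0, -0.5],
              [-0.5,  0.0, -0.5,  2.0]])
h = np.array([1.0, 0.0, -1.0, 0.5])
x = block_gauss_seidel(J, h, blocks=[np.array([0, 1]), np.array([2, 3])])
print(np.allclose(x, np.linalg.solve(J, h)))  # True: matches the exact LMMSE solution
```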

    Detección y estimación en redes de sensores inalámbricas con centro de fusión (Detection and estimation in wireless sensor networks with a fusion center)

    Get PDF
    The main goal of this project is to analyze wireless sensor networks and how decentralized detection and estimation can be performed in them using the information gathered by the sensors. To this end, a scenario is considered in which the sensors are uniformly distributed over the monitored area and the aim is to detect a mobile target crossing that area. In particular, a global presence detector, which corresponds to the data fusion center of the network, has been implemented in Matlab using a support vector machine (SVM). The global decision rule of this center is generated from the local decisions made by each sensor node. The need for local decisions, and hence for decentralized inference, stems from the limits imposed on the capacity of the communication channel between the sensors and the fusion center: the information each node can send in parallel to the fusion center is reduced as far as possible (down to 1 bit). Each sensor therefore applies a local decision rule designed under the Neyman-Pearson criterion, which guarantees programmable error-probability levels; from these, power thresholds are derived and compared against the received power in a Gaussian noise environment. The scenario is then modified to give each sensor the option of abstaining from transmitting when it considers its local information too ambiguous, which indirectly increases the information received by the fusion center. Once the global detectors have been obtained and evaluated, the project is completed with a first approach to estimation, checking to what extent it is possible to estimate the target's position and reconstruct its trajectory once the target has been detected.
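
A simplified sketch of the local Neyman-Pearson rule described above, assuming the local statistic is Gaussian under the noise-only hypothesis (the project uses received-power statistics and an SVM-based fusion rule, neither of which is reproduced here):

```python
import numpy as np
from scipy.stats import norm

# Simplified local Neyman-Pearson rule: under H0 the local statistic is
# Gaussian noise N(0, sigma^2); fixing the false-alarm probability P_FA
# gives the threshold tau = sigma * Q^{-1}(P_FA).
sigma, p_fa = 1.0, 0.05
tau = sigma * norm.isf(p_fa)           # roughly 1.645 * sigma

rng = np.random.default_rng(3)
n_sensors = 30
signal_present = True
mean_shift = 1.0 if signal_present else 0.0
stats = mean_shift + sigma * rng.standard_normal(n_sensors)

bits = (stats > tau).astype(int)       # 1-bit local decisions sent to the fusion center
print(bits.sum(), "of", n_sensors, "sensors report a detection")
```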

    Convex relaxation methods for graphical models: Lagrangian and maximum entropy approaches

    Get PDF
    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 241-257).
    Graphical models provide compact representations of complex probability distributions of many random variables through a collection of potential functions defined on small subsets of these variables. This representation is defined with respect to a graph in which nodes represent random variables and edges represent the interactions among those random variables. Graphical models provide a powerful and flexible approach to many problems in science and engineering, but also present serious challenges owing to the intractability of optimal inference and estimation over general graphs. In this thesis, we consider convex optimization methods to address two central problems that commonly arise for graphical models.
    First, we consider the problem of determining the most probable configuration, also known as the maximum a posteriori (MAP) estimate, of all variables in a graphical model, conditioned on (possibly noisy) measurements of some variables. This general problem is intractable, so we consider a Lagrangian relaxation (LR) approach to obtain a tractable dual problem. This involves using the Lagrangian decomposition technique to break up an intractable graph into tractable subgraphs, such as small "blocks" of nodes, embedded trees or thin subgraphs. We develop a distributed, iterative algorithm that minimizes the Lagrangian dual function by block coordinate descent. This results in an iterative marginal-matching procedure that enforces consistency among the subgraphs using an adaptation of the well-known iterative scaling algorithm. This approach is developed both for discrete-variable and Gaussian graphical models. In discrete models, we also introduce a deterministic annealing procedure, which introduces a temperature parameter to define a smoothed dual function and then gradually reduces the temperature to recover the (non-differentiable) Lagrangian dual. When strong duality holds, we recover the optimal MAP estimate. We show that this occurs for a broad class of "convex decomposable" Gaussian graphical models, which generalizes the "pairwise normalizable" condition known to be important for iterative estimation in Gaussian models. In certain "frustrated" discrete models a duality gap can occur using simple versions of our approach. We consider methods that adaptively enhance the dual formulation, by including more complex subgraphs, so as to reduce the duality gap. In many cases we are able to eliminate the duality gap and obtain the optimal MAP estimate in a tractable manner. We also propose a heuristic method to obtain approximate solutions in cases where there is a duality gap.
    Second, we consider the problem of learning a graphical model (both the graph and its potential functions) from sample data. We propose the maximum entropy relaxation (MER) method, which is the convex optimization problem of selecting the least informative (maximum entropy) model over an exponential family of graphical models subject to constraints that small subsets of variables should have marginal distributions that are close to the distribution of sample data. We use relative entropy to measure the divergence between marginal probability distributions. We find that MER leads naturally to selection of sparse graphical models. To identify this sparse graph efficiently, we use a "bootstrap" method that constructs the MER solution by solving a sequence of tractable subproblems defined over thin graphs, including new edges at each step to correct for large marginal divergences that violate the MER constraint. The MER problem on each of these subgraphs is efficiently solved using the primal-dual interior point method (implemented so as to take advantage of efficient inference methods for thin graphical models). We also consider a dual formulation of MER that minimizes a convex function of the potentials of the graphical model. This MER dual problem can be interpreted as a robust version of maximum-likelihood parameter estimation, where the MER constraints specify the uncertainty in the sufficient statistics of the model. This also corresponds to a regularized maximum-likelihood approach, in which an information-geometric regularization term favors selection of sparse potential representations. We develop a relaxed version of the iterative scaling method to solve this MER dual problem.
    by Jason K. Johnson. Ph.D.
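
A schematic statement of the Lagrangian decomposition bound underlying the first part of the thesis, written for a generic split of the MAP objective into subgraph terms (the notation is generic, not the thesis's):

```latex
% MAP objective split over subgraphs k, with local copies x^k tied to x:
%   max_x \sum_k f_k(x)  =  max_{x, \{x^k\}} \sum_k f_k(x^k)  subject to  x^k = x .
% Dualizing the consistency constraints with multipliers \lambda^k gives a
% separable upper bound, which is then minimized over the multipliers
% (e.g. by block coordinate descent):
\[
  \max_x \sum_k f_k(x)
  \;\le\;
  g(\lambda) \;=\; \sum_k \max_{x^k} \Bigl( f_k(x^k) + \langle \lambda^k, x^k \rangle \Bigr),
  \qquad \text{whenever } \sum_k \lambda^k = 0 .
\]
% When strong duality holds, the minimized bound equals the MAP value and the
% subproblem maximizers agree, recovering the optimal MAP configuration.
```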