An objective based classification of aggregation techniques for wireless sensor networks
Wireless Sensor Networks have gained immense popularity in recent years due to their ever-increasing capabilities and wide range of critical applications. A large body of research has been dedicated to finding ways to utilize the limited resources of sensor nodes efficiently. One common way to minimize energy consumption is aggregation of input data. We note that every aggregation technique has an improvement objective with respect to the output it produces: each technique is designed to achieve some target, e.g. reduce data size, minimize transmission energy, or enhance accuracy. This paper presents a comprehensive survey of aggregation techniques that can be used in a distributed manner to improve the lifetime and energy conservation of wireless sensor networks. The main contribution of this work is the proposal of a novel classification of such techniques based on the type of improvement they offer when applied to WSNs. Because a myriad of definitions of aggregation exist, we first review the meanings of the term aggregation that apply to WSNs. The concept is then associated with the proposed classes. Each class of techniques is divided into a number of subclasses, and a brief literature review of related WSN work for each of these is also presented.
Fault-Tolerant Aggregation: Flow-Updating Meets Mass-Distribution
Flow-Updating (FU) is a fault-tolerant technique that has proved to be
efficient in practice for the distributed computation of aggregate functions in
communication networks where individual processors do not have access to global
information. Previous distributed aggregation protocols, based on repeated
sharing of input values (or mass) among processors, sometimes called
Mass-Distribution (MD) protocols, are not resilient to communication failures
(or message loss) because such failures yield a loss of mass. In this paper, we
present a protocol which we call Mass-Distribution with Flow-Updating (MDFU).
We obtain MDFU by applying FU techniques to classic MD. We analyze the
convergence time of MDFU showing that stochastic message loss produces low
overhead. This is the first convergence proof of an FU-based algorithm. We
evaluate MDFU experimentally, comparing it with previous MD and FU protocols,
and verifying the behavior predicted by the analysis. Finally, given that MDFU
incurs a fixed deviation proportional to the message-loss rate, we adjust the
accuracy of MDFU heuristically in a new protocol called MDFU with Linear
Prediction (MDFU-LP). The evaluation shows that both MDFU and MDFU-LP behave
very well in practice, even under high message-loss rates and even when the input values change dynamically.
Comment: 18 pages, 5 figures, to appear in OPODIS 201
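The flow idea behind MDFU can be illustrated with a minimal sketch of plain Flow-Updating for distributed averaging. This is our own toy rendering under assumed names, not the MDFU protocol itself: each node derives its estimate from per-neighbor flows, and flow antisymmetry conserves the total mass, so a lost message merely leaves a flow stale instead of destroying mass.

```python
import random

def flow_updating(values, edges, rounds, p_loss=0.0, seed=0):
    """Minimal synchronous Flow-Updating sketch for distributed averaging.

    Each node i keeps a flow f[i][j] per neighbor j and derives its
    estimate as e_i = v_i - sum_j f[i][j]. Flow antisymmetry
    (f[i][j] = -f[j][i]) keeps sum_i e_i = sum_i v_i at the fixed point,
    so a dropped message only delays convergence.
    """
    rng = random.Random(seed)
    n = len(values)
    nbrs = [set() for _ in range(n)]
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    f = [{j: 0.0 for j in nbrs[i]} for i in range(n)]

    def est(i):
        return values[i] - sum(f[i].values())

    for _ in range(rounds):
        # every node broadcasts (flow towards j, current estimate)
        msgs = [{j: (f[i][j], est(i)) for j in nbrs[i]} for i in range(n)]
        new_f = [dict(f[i]) for i in range(n)]
        for i in range(n):
            # lost messages are simply skipped; stale flows stay in place
            recv = {j: msgs[j][i] for j in nbrs[i] if rng.random() >= p_loss}
            for j, (fji, _) in recv.items():
                new_f[i][j] = -fji                  # restore antisymmetry
            e_i = values[i] - sum(new_f[i].values())
            if recv:
                avg = (e_i + sum(e for _, e in recv.values())) / (1 + len(recv))
                for j, (_, e_j) in recv.items():
                    new_f[i][j] += avg - e_j        # push neighbor toward avg
        f = new_f
    return [est(i) for i in range(n)]

# A path of three nodes with inputs 0, 0, 30 converges to the average 10.
est = flow_updating([0.0, 0.0, 30.0], [(0, 1), (1, 2)], rounds=50)
print([round(e, 6) for e in est])  # → [10.0, 10.0, 10.0]
```

Because estimates are always recomputed from the flows rather than passed around as mass, rerunning the sketch with `p_loss > 0` still drifts toward the true average, only more slowly.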
A New Approach to Linear/Nonlinear Distributed Fusion Estimation Problem
In practical systems, disturbance noises are always bounded, while fusion
estimation aims to make the best use of multiple noisy sensor measurements to
estimate a quantity--a parameter or process. However, few results address the
information fusion estimation problem under bounded noises.
In this paper, we study the distributed fusion estimation problem for linear
time-varying systems and nonlinear systems with bounded noises, where the
addressed noises do not provide any statistical information, and are unknown
but bounded. When considering linear time-varying fusion systems with bounded
noises, a new local Kalman-like estimator is designed such that the square
error of the estimator is bounded as time goes to infinity. A novel
constructive method is proposed to find an upper bound of fusion estimation
error, then a convex optimization problem on the design of an optimal weighting
fusion criterion is established in terms of linear matrix inequalities, which
can be solved by standard software packages. Furthermore, according to the
design method of linear time-varying fusion systems, each local nonlinear
estimator is derived for nonlinear systems with bounded noises by using Taylor
series expansion, and a corresponding distributed fusion criterion is obtained
by solving a convex optimization problem. Finally, a target tracking system and
the localization of a mobile robot are presented to show the advantages and
effectiveness of the proposed methods.
Comment: 9 pages, 3 figures
Optimal Distributed Fault-Tolerant Sensor Fusion: Fundamental Limits and Efficient Algorithms
Distributed estimation is a fundamental problem in signal processing which
finds applications in a variety of scenarios of interest including distributed
sensor networks, robotics, group decision problems, and monitoring and
surveillance applications. The problem considers a scenario where distributed
agents are given a set of measurements, and are tasked with estimating a target
variable. This work considers distributed estimation in the context of sensor
networks, where a subset of the sensor measurements is faulty and the
distributed agents do not know which measurements are faulty. The objective is to
minimize i) the mean square error in estimating the target variable at each
node (accuracy objective), and ii) the mean square distance between the
estimates at each pair of nodes (consensus objective). It is shown that there
is an inherent tradeoff between satisfying the former and latter objectives.
The tradeoff is explicitly characterized and the fundamental performance limits
are derived under specific statistical assumptions on the sensor output
statistics. Assuming a general stochastic model, the sensor fusion algorithm
optimizing this tradeoff is characterized through a computable optimization
problem. Since finding the optimal sensor fusion algorithm is computationally
complex, a general class of low-complexity Brooks-Iyengar algorithms is
introduced, and their performance, in terms of the accuracy and consensus
objectives, is compared to that of optimal linear estimators through
case-study simulations of various scenarios.
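For context, the classic Brooks-Iyengar algorithm behind the family referenced above can be sketched as follows. This is a simplified illustration over scalar intervals (it sweeps elementary segments between endpoints rather than merged maximal regions), not the estimators analyzed in the paper: each sensor reports an interval, and the fused value averages the midpoints of the regions covered by at least n − f intervals, weighted by how many sensors cover each region.

```python
def brooks_iyengar(intervals, f):
    """Simplified Brooks-Iyengar fusion over scalar intervals.

    intervals: list of (low, high) reports, at most f of which are faulty.
    Returns a fused point estimate: the average of midpoints of the
    elementary regions covered by at least n - f intervals, weighted by
    the number of intervals covering each region.
    """
    n = len(intervals)
    points = sorted({p for iv in intervals for p in iv})
    weighted_sum = total_weight = 0.0
    for a, b in zip(points, points[1:]):
        mid = (a + b) / 2.0
        cover = sum(1 for lo, hi in intervals if lo <= mid <= hi)
        if cover >= n - f:               # region survives up to f faults
            weighted_sum += cover * mid
            total_weight += cover
    return weighted_sum / total_weight

# Two agreeing sensors around 1.5 outvote one faulty outlier (f = 1).
print(brooks_iyengar([(0, 2), (1, 3), (10, 12)], f=1))  # → 1.5
```

The n − f threshold is what gives the scheme its fault tolerance: any region reported only by the (up to f) faulty sensors is discarded outright.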
The effect of forgetting on the performance of a synchronizer
We study variants of the α-synchronizer by Awerbuch (1985) within a distributed message-passing system with probabilistic message loss. The purpose of a synchronizer is to maintain a virtual (lock-step) round structure, which simplifies the design of higher-level distributed algorithms. The underlying idea of an α-synchronizer is to let processes continuously exchange round numbers and to allow a process to proceed to the next round only after it has witnessed that all processes have already started the current round.

In this work, we study the performance of several synchronizers in an environment with probabilistic message loss. In particular, we analyze how different strategies of forgetting affect the round durations. The synchronizer variants considered differ in the times at which processes discard part of their accumulated knowledge during the execution. Possible applications can be found, e.g., in sensor fusion, where sensor data become outdated, and thus invalid, after a certain amount of time.

For all synchronizer variants considered, we develop corresponding Markov chain models and quantify the performance degradation using both analytic approaches and Monte-Carlo simulations. Our results allow us to explicitly calculate the asymptotic behavior of the round durations: while in systems with very reliable communication the effect of forgetting is negligible, the effect is more profound in systems with less reliable communication. Our study thus provides computationally efficient bounds on the performance of the (non-forgetting) α-synchronizer and allows us to quantitatively assess the effect accumulated knowledge has on the performance.
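The α-synchronizer rule described above lends itself to a small Monte-Carlo sketch. This is a hypothetical, non-forgetting variant with names of our own choosing, not the paper's Markov chain models: each tick, every process broadcasts its round number, each message is delivered independently with some probability, and a process advances once it has witnessed every process in its current round.

```python
import random

def simulate_alpha_sync(n, rounds, p_deliver, seed=0):
    """Ticks needed for n processes to each complete `rounds` rounds.

    Non-forgetting variant: process i remembers the highest round number
    ever heard from each peer and advances to round r+1 once all peers
    are known to have reached round r.
    """
    rng = random.Random(seed)
    r = [0] * n                              # current round of each process
    heard = [[0] * n for _ in range(n)]      # heard[i][j]: max round seen from j
    ticks = 0
    while min(r) < rounds:
        ticks += 1
        for i in range(n):                   # broadcast round numbers
            for j in range(n):
                if i == j or rng.random() < p_deliver:
                    heard[j][i] = max(heard[j][i], r[i])
        for i in range(n):                   # advance once all have caught up
            if min(heard[i]) >= r[i]:
                r[i] += 1
    return ticks

# With perfectly reliable links each round costs exactly one tick ...
print(simulate_alpha_sync(n=5, rounds=10, p_deliver=1.0))        # → 10
# ... while message loss can only stretch the average round duration.
print(simulate_alpha_sync(n=5, rounds=10, p_deliver=0.8) >= 10)  # → True
```

A forgetting variant would periodically reset entries of `heard`, which is exactly the knob whose performance impact the paper quantifies.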
Collaborative Solutions to Visual Sensor Networks
Visual sensor networks (VSNs) merge the computer vision, image processing, and wireless sensor network disciplines to solve problems in multi-camera applications over large surveillance areas. Although potentially powerful, VSNs also present unique challenges that could hinder their practical deployment, arising from camera-specific features: much higher data rates, directional sensing characteristics, and the existence of visual occlusions.
In this dissertation, we first present a collaborative approach for target localization in VSNs. Traditionally, the problem is solved by localizing targets at the intersections of the back-projected 2D cones of each target. However, visual occlusions among targets generate many false alarms. Instead of resolving the uncertainty about target existence at the intersections, we identify and study the non-occupied areas in the 2D cones and generate a so-called certainty map of target non-existence. We also propose distributed integration of local certainty maps following a dynamic itinerary, along which the entire map is progressively clarified.
The accuracy of target localization is affected by faulty nodes in VSNs. Therefore, we present the design of a fault-tolerant localization algorithm that not only accurately localizes targets but also detects faults in camera orientations, tolerates these errors, and corrects them before they cascade. Based on the locations of detected targets in the final fault-tolerant certainty map, we construct a generative image model that estimates the camera orientations, detects inaccuracies, and corrects them.
In order to ensure the visual coverage required to accurately localize targets or tolerate faulty nodes, we need to calculate the coverage before deploying sensors. Therefore, we derive a closed-form solution for coverage estimation based on a certainty-based detection model that takes the directional sensing of cameras and the existence of visual occlusions into account.
The effectiveness of the proposed collaborative and fault-tolerant target localization algorithms, in terms of localization accuracy as well as fault detection and correction performance, has been validated through both simulation and real experiments. In addition, the simulations show close agreement with the theoretical closed-form solution for visual coverage estimation, especially when the boundary effect is considered.
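The certainty-map integration described above can be illustrated with a toy sketch of our own construction (assumed names and a trivial grid model, not the dissertation's algorithm): each camera contributes the set of grid cells it has verified to be empty inside its cone, the aggregated map travels camera to camera along an itinerary and accumulates these cells, and a target can only lie in cells no camera has cleared.

```python
def fuse_certainty_maps(local_maps, width, height):
    """Itinerary-style fusion of per-camera certainty maps.

    local_maps: per camera, the set of (x, y) grid cells that the camera
    has verified to be empty (inside its 2D cone, before any occlusion).
    Returns the cells where a target could still be located.
    """
    certainly_empty = set()
    for cleared in local_maps:          # itinerary: visit each node once
        certainly_empty |= cleared      # progressive clarification of the map
    all_cells = {(x, y) for x in range(width) for y in range(height)}
    return all_cells - certainly_empty  # candidate target locations

cam1 = {(x, y) for x in range(4) for y in range(2)}   # clears the lower band
cam2 = {(x, y) for x in range(2) for y in range(4)}   # clears the left band
candidates = fuse_certainty_maps([cam1, cam2], width=4, height=4)
print(sorted(candidates))  # → [(2, 2), (2, 3), (3, 2), (3, 3)]
```

Working with non-existence rather than existence is what sidesteps the false alarms at cone intersections: empty space is certain, whereas an intersection may or may not hold a target.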
Self-Calibration Methods for Uncontrolled Environments in Sensor Networks: A Reference Survey
Growing progress in sensor technology has constantly expanded the number and
range of low-cost, small, and portable sensors on the market, increasing the
number and type of physical phenomena that can be measured with wirelessly
connected sensors. Large-scale deployments of wireless sensor networks (WSN)
involving hundreds or thousands of devices and limited budgets often constrain
the choice of sensing hardware, which generally has reduced accuracy,
precision, and reliability. Therefore, it is challenging to achieve good data
quality and maintain error-free measurements during the whole system lifetime.
Self-calibration or recalibration in ad hoc sensor networks to preserve data
quality is essential, yet challenging, for several reasons, such as the
existence of random noise and the absence of suitable general models.
Calibration performed in the field, without accurate and controlled
instrumentation, is said to take place in an uncontrolled environment. This
paper surveys current and fundamental self-calibration approaches and models
for wireless sensor networks in uncontrolled environments.
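A common building block of such in-field calibration schemes is fitting a per-sensor gain and offset against a trusted co-located reference by least squares. The sketch below is a generic illustration with assumed names, not a method taken from the survey:

```python
def fit_gain_offset(raw, reference):
    """Least-squares fit of reference ≈ gain * raw + offset.

    raw:       readings from the uncalibrated low-cost sensor
    reference: co-located trusted readings taken at the same times
    Returns (gain, offset) for calibrating future raw readings.
    """
    n = len(raw)
    mean_x = sum(raw) / n
    mean_y = sum(reference) / n
    sxx = sum((x - mean_x) ** 2 for x in raw)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(raw, reference))
    gain = sxy / sxx
    offset = mean_y - gain * mean_x
    return gain, offset

# A sensor whose response is half the truth minus a bias is corrected back.
gain, offset = fit_gain_offset([1.0, 2.0, 3.0], [3.0, 5.0, 7.0])
print(gain, offset)  # → 2.0 1.0
```

Blind or macro-calibration variants replace the trusted reference with overlapping readings from neighboring sensors, which is precisely where the random noise and missing general models mentioned above make the problem hard.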
Resilient Multidimensional Sensor Fusion Using Measurement History
This work considers the problem of performing resilient sensor fusion using past sensor measurements. In particular, we consider a system with n sensors measuring the same physical variable, where some sensors might be attacked or faulty. We consider a setup in which each sensor provides the controller with a set of possible values for the true value; here, more precise sensors provide smaller sets. Since many modern sensors provide multidimensional measurements (e.g., position in three dimensions), the sets considered in this work are multidimensional polyhedra.
Given the assumption that some sensors can be attacked or faulty, the paper provides a sensor fusion algorithm that obtains a fusion polyhedron which is guaranteed to contain the true value and is minimal in size. A bound on the volume of the fusion polyhedron is also proved based on the number of faulty or attacked sensors. In addition, we incorporate system dynamics in order to utilize past measurements and further reduce the size of the fusion polyhedron. We describe several ways of mapping previous measurements to current time and compare them, under different assumptions, using the volume of the fusion polyhedron. Finally, we illustrate the implementation of the best of these methods and show its effectiveness using a case study with sensor values from a real robot.
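The measurement-history idea above can be illustrated in one dimension. This is a toy sketch under assumed names and a simple bounded-velocity model, not the paper's polyhedral algorithm: fuse the current intervals into the region covered by at least n − f of them, then shrink it further by intersecting with the previous fusion interval inflated by the maximum distance the variable could have traveled since.

```python
def fuse(intervals, f):
    """Smallest interval containing all points covered by >= n - f sensors."""
    n = len(intervals)
    points = sorted({p for iv in intervals for p in iv})
    lo = hi = None
    for a, b in zip(points, points[1:]):
        mid = (a + b) / 2.0
        if sum(l <= mid <= h for l, h in intervals) >= n - f:
            lo = a if lo is None else lo    # leftmost qualifying segment
            hi = b                          # rightmost qualifying segment
    return (lo, hi)

def fuse_with_history(current, prev_fused, v_max, dt, f):
    """Intersect the current fusion with the time-shifted previous one."""
    lo, hi = fuse(current, f)
    plo = prev_fused[0] - v_max * dt        # inflate by max possible motion
    phi = prev_fused[1] + v_max * dt
    return (max(lo, plo), min(hi, phi))

sensors = [(0.0, 4.0), (1.0, 5.0), (3.0, 9.0)]   # at most one may be faulty
print(fuse(sensors, f=1))                         # → (1.0, 5.0)
print(fuse_with_history(sensors, prev_fused=(2.0, 3.0), v_max=0.5, dt=1.0, f=1))
# → (1.5, 3.5)
```

The history term can only shrink the fusion set: any point outside the inflated previous interval is unreachable under the bounded-velocity dynamics, so discarding it preserves the guarantee that the true value is contained.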