An objective based classification of aggregation techniques for wireless sensor networks
Wireless sensor networks (WSNs) have gained immense popularity in recent years due to their ever-increasing capabilities and wide range of critical applications. A huge body of research has been dedicated to finding ways to utilize the limited resources of sensor nodes efficiently. One of the most common ways to minimize energy consumption is aggregation of input data. We note that every aggregation technique has an improvement objective with respect to the output it produces; each technique is designed to achieve some target, e.g. reducing data size, minimizing transmission energy, or enhancing accuracy. This paper presents a comprehensive survey of aggregation techniques that can be applied in a distributed manner to improve the lifetime and energy conservation of wireless sensor networks. The main contribution of this work is a novel classification of such techniques based on the type of improvement they offer when applied to WSNs. Because a myriad of definitions of aggregation exist, we first review the meanings of the term that apply to WSNs. The concept is then associated with the proposed classes. Each class of techniques is divided into a number of subclasses, and a brief literature review of related WSN work for each of these is also presented.
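The energy-saving idea behind in-network aggregation can be illustrated with a minimal sketch: instead of forwarding every raw reading to the sink, each node sends a single (sum, count) pair up a routing tree, so message size stays constant regardless of subtree size. The topology, node names, and readings below are purely hypothetical.

```python
# Minimal sketch of in-network average aggregation over a sensor tree.
# Each node forwards one (sum, count) pair instead of all raw readings.

def aggregate(node, children, readings):
    """Return (sum, count) for the subtree rooted at `node`."""
    total, count = readings[node], 1
    for child in children.get(node, []):
        s, c = aggregate(child, children, readings)
        total += s
        count += c
    return total, count

# Hypothetical topology: sink <- {a, b}, a <- {c, d}
children = {"sink": ["a", "b"], "a": ["c", "d"]}
readings = {"sink": 20.0, "a": 22.0, "b": 18.0, "c": 21.0, "d": 19.0}

s, c = aggregate("sink", children, readings)
print(s / c)  # network-wide mean -> 20.0, one message per link
```

Here aggregation serves the "minimize transmission energy" objective: five readings reach the sink via four fixed-size messages rather than being relayed individually.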
Structure Learning in Coupled Dynamical Systems and Dynamic Causal Modelling
Identifying a coupled dynamical system out of many plausible candidates, each
of which could serve as the underlying generator of some observed measurements,
is a profoundly ill-posed problem that commonly arises when modelling
real-world phenomena. In this review, we detail a set of statistical procedures for
inferring the structure of nonlinear coupled dynamical systems (structure
learning), which has proved useful in neuroscience research. A key focus here
is the comparison of competing models of (i.e., hypotheses about) network
architectures and implicit coupling functions in terms of their Bayesian model
evidence. These methods are collectively referred to as dynamic causal
modelling (DCM). We focus on a relatively new approach that is proving
remarkably useful; namely, Bayesian model reduction (BMR), which enables rapid
evaluation and comparison of models that differ in their network architecture.
We illustrate the usefulness of these techniques through modelling
neurovascular coupling (cellular pathways linking neuronal and vascular
systems), whose function is an active focus of research in neurobiology and the
imaging of coupled neuronal systems.
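The core operation of comparing hypotheses by Bayesian model evidence can be sketched in a much simpler setting than DCM itself. The example below (a hedged illustration, not the DCM machinery) scores two candidate linear models by their exact log marginal likelihood under a Gaussian linear model, y ~ N(0, σ²I + XP₀Xᵀ), where P₀ is the prior covariance of the coupling parameters; all data and design matrices are synthetic.

```python
# Hedged sketch: Bayesian model comparison via exact log model evidence
# for a Gaussian linear model (illustrative stand-in for DCM/BMR).
import numpy as np

def log_evidence(y, X, sigma2=1.0, prior_var=1.0):
    """Exact log p(y | model) for Bayesian linear regression with a
    zero-mean Gaussian prior on the weights and known noise variance."""
    n = len(y)
    C = sigma2 * np.eye(n) + prior_var * (X @ X.T)  # marginal covariance
    sign, logdet = np.linalg.slogdet(C)
    quad = y @ np.linalg.solve(C, y)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

rng = np.random.default_rng(0)
X_true = rng.standard_normal((50, 3))   # model 1: the generating regressors
X_alt = rng.standard_normal((50, 3))    # model 2: unrelated regressors
w = np.array([1.0, -2.0, 0.5])
y = X_true @ w + 0.5 * rng.standard_normal(50)

f1, f2 = log_evidence(y, X_true), log_evidence(y, X_alt)
print(f1 > f2)  # evidence favours the generating model
```

Bayesian model reduction extends this idea by scoring reduced models directly from the full model's posterior, avoiding a separate fit per hypothesis.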
Adaptive Sampling with Mobile Sensor Networks
Mobile sensor networks have unique advantages compared with wireless sensor networks. Mobility enables mobile sensors to flexibly reconfigure themselves to meet sensing requirements. In this dissertation, an adaptive sampling method for mobile sensor networks is presented. In consideration of sensing resource constraints, computing abilities, and onboard energy limitations, the adaptive sampling method follows a down-sampling scheme, which reduces the total number of measurements and lowers sampling cost. Compressive sensing is a recently developed down-sampling method that uses a small number of randomly distributed measurements for signal reconstruction. However, according to the Shannon sampling theorem, original signals cannot be reconstructed directly from such condensed measurements. The measurements have to be processed in a sparse domain, and convex optimization methods must be applied to reconstruct the original signals. The restricted isometry property guarantees that signals can be recovered with little information loss. While compressive sensing can effectively lower sampling cost, signal reconstruction remains a great research challenge. Compressive sensing collects random measurements, whose information content cannot be determined a priori. If each measurement is instead optimized to be the most informative one, reconstruction performance can be much better.
Based on the above considerations, this dissertation focuses on an adaptive sampling approach that finds the most informative measurements in unknown environments and reconstructs the original signals. With mobile sensors, measurements are collected sequentially, giving the chance to optimize each of them individually. When a mobile sensor is about to collect a new measurement from its surroundings, existing information is shared among the networked sensors so that each sensor has a global view of the entire environment. The shared information is analyzed in the Haar wavelet domain, in which most natural signals appear sparse, to infer a model of the environment. The most informative measurements can then be determined by optimizing the model parameters. As a result, all the measurements collected by the mobile sensor network are the most informative ones given existing information, and a perfect reconstruction would be expected.
To present the adaptive sampling method, a series of research issues will be addressed, including measurement evaluation and collection, mobile network establishment, data fusion, sensor motion, and signal reconstruction. A two-dimensional scalar field will be reconstructed using the proposed method. Both single mobile sensors and mobile sensor networks will be deployed in the environment, and the reconstruction performance of both will be compared. In addition, a particular mobile sensor, a quadrotor UAV, is developed so that the adaptive sampling method can be used in three-dimensional scenarios.
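The compressive-sensing pipeline the abstract describes, random measurements of a sparse signal followed by sparse recovery, can be sketched with a toy example. This sketch assumes a Gaussian random measurement matrix and uses orthogonal matching pursuit, a common greedy stand-in for the convex l1 solvers the text mentions; all dimensions and signal values are illustrative.

```python
# Toy compressive-sensing sketch: random measurements + greedy recovery.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A @ x by orthogonal matching pursuit."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                       # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.5, -2.0, 1.0]    # sparse ground truth
y = A @ x_true                            # m << n compressed measurements

x_hat = omp(A, y, k)
print(np.allclose(x_hat, x_true, atol=1e-6))
```

The adaptive sampling idea in the dissertation replaces the random rows of A with measurements chosen sequentially to be maximally informative given what the network has already observed.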
Compressive Visual Question Answering
Compressive sensing theory allows one to sense and reconstruct signals/images at a lower sampling rate than the Nyquist rate. Applications in resource-constrained environments stand to benefit from this theory, while it opens up many possibilities for new applications at the same time. The traditional computer-vision inference pipeline begins by reconstructing the image from the compressive measurements. However, reconstruction is a computationally expensive step that also gives poor results at high compression rates. There have been several successful attempts to perform inference tasks, such as activity recognition, directly on compressive measurements. In this thesis, I tackle a more challenging vision problem, visual question answering (VQA), without reconstructing the compressive images. I investigate the feasibility of this problem with a series of experiments, evaluate the proposed methods on a VQA dataset, and discuss promising results and directions for future work. Masters Thesis, Computer Engineering, 201
Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications
Wireless sensor networks monitor dynamic environments that change rapidly
over time. This dynamic behavior is either caused by external factors or
initiated by the system designers themselves. To adapt to such conditions,
sensor networks often adopt machine learning techniques to eliminate the need
for unnecessary redesign. Machine learning also inspires many practical
solutions that maximize resource utilization and prolong the lifespan of the
network. In this paper, we present an extensive literature review over the
period 2002-2013 of machine learning methods that were used to address common
issues in wireless sensor networks (WSNs). The advantages and disadvantages of
each proposed algorithm are evaluated against the corresponding problem. We
also provide a comparative guide to aid WSN designers in developing suitable
machine learning solutions for their specific application challenges. Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
Distributed Inference and Learning with Byzantine Data
We are living in an increasingly networked world with sensing networks of varying shapes and sizes: the network often comprises several tiny devices (or nodes) communicating with each other via different topologies. To make the problem even more complicated, the nodes in the network can be unreliable for a variety of reasons (noise, faults, and attacks), thus providing
corrupted data. Although statistical inference has been an active area of research in the
past, distributed learning and inference in a networked setup with potentially unreliable components
has only gained attention recently. The emergence of the big and dirty data era demands new
distributed learning and inference solutions to tackle the problem of inference with corrupted data.
Distributed inference networks (DINs) consist of a group of networked entities which acquire
observations regarding a phenomenon of interest (POI), collaborate with other entities in the network
by sharing their inference via different topologies to make a global inference. The central
goal of this thesis is to analyze the effect of corrupted (or falsified) data on the inference performance
of DINs and design robust strategies to ensure reliable overall performance for several
practical network architectures. Specifically, the inference (or learning) process can be detection,
estimation, or classification, and the topology of the system can be parallel, hierarchical,
or fully decentralized (peer-to-peer).
Note that the corrupted data model may seem similar to the scenario where local decisions
are transmitted over a Binary Symmetric Channel (BSC) with a certain crossover probability;
however, there are fundamental differences. Over the last three decades, the research community
has extensively studied the impact of transmission channels or faults on distributed detection
systems and related problems, due to their importance in several applications. However, corrupted
(Byzantine) data models considered in this thesis are philosophically different from the BSC or
the faulty sensor cases. Byzantines are intentional and intelligent; therefore, they can optimize
over the data corruption parameters. Thus, in contrast to channel-aware detection, both the FC and
the Byzantines can optimize their utility by choosing their actions based on the knowledge of their
opponent’s behavior. The study of these practically motivated scenarios in the presence of Byzantines
is of utmost importance, and is missing from the channel-aware detection and fault-tolerant detection
literature. This thesis advances the distributed inference literature by providing fundamental
limits of distributed inference with Byzantine data and provides optimal counter-measures (using
the insights provided by these fundamental limits) from a network designer’s perspective. Note
that the analysis of problems related to the strategic interaction between Byzantines and the network
designer is very challenging (NP-hard in many cases). However, we show that by utilizing the
properties of the network architecture, efficient solutions can be obtained. Specifically, we found
that several problems related to the design of optimal counter-measures in the inference context
are, in fact, special cases of these NP-hard problems which can be solved in polynomial time.
First, we consider the problem of distributed Bayesian detection in the presence of data falsification
(or Byzantine) attacks in the parallel topology. Byzantines considered in this thesis are those
nodes that are compromised and reprogrammed by an adversary to transmit false information to
a centralized fusion center (FC) to degrade detection performance. We show that above a certain
fraction of Byzantine attackers in the network, the detection scheme becomes completely incapable
of utilizing the sensor data for detection (the FC is "blind"). When the fraction of Byzantines is not
sufficient to blind the FC, we also provide closed-form expressions for the optimal attacking strategies
for the Byzantines that most degrade the detection performance. Optimal attacking strategies
in certain cases have the minimax property and, therefore, the knowledge of these strategies has
practical significance and can be used to implement a robust detector at the FC.
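The blinding effect can be illustrated with a small numeric sketch, under the assumed (and simplest) attack model in which each Byzantine flips its local binary decision with probability 1. With attacker fraction alpha, the probability that the FC receives a 1 is (1-alpha)·p + alpha·(1-p); at alpha = 0.5 this equals 0.5 under both hypotheses, so the received data carries no information about the phenomenon of interest. The local probabilities below are hypothetical.

```python
# Sketch of FC 'blinding' under a decision-flipping Byzantine attack.

def received_one_prob(p_local_one, alpha, flip_prob=1.0):
    """Probability the FC receives a 1 from a randomly chosen node,
    when a fraction alpha of nodes flip their decision w.p. flip_prob."""
    honest = (1 - alpha) * p_local_one
    byzantine = alpha * (flip_prob * (1 - p_local_one)
                         + (1 - flip_prob) * p_local_one)
    return honest + byzantine

pd, pf = 0.8, 0.2   # hypothetical local detection / false-alarm probabilities
for alpha in (0.0, 0.25, 0.5):
    gap = received_one_prob(pd, alpha) - received_one_prob(pf, alpha)
    print(alpha, round(gap, 3))   # gap: 0.6, 0.3, 0.0 -> FC blind at 0.5
```

The shrinking gap between the two hypotheses' report distributions is exactly what "incapable of utilizing the sensor data" means: once the distributions coincide, no fusion rule at the FC can do better than guessing.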
In several practical situations, the parallel topology cannot be implemented due to limiting factors,
such as the FC being outside the communication range of the nodes or the limited energy budget of
the nodes. In such scenarios, a multi-hop network is employed, where nodes are organized hierarchically
into multiple levels (tree networks). Next, we study the problem of distributed inference
in tree topologies in the presence of Byzantines under several practical scenarios. We analytically
characterize the effect of Byzantines on the inference performance of the system. We also look at
the possible counter-measures from the FC’s perspective to protect the network from these Byzantines.
These counter-measures are of two kinds: Byzantine identification schemes and Byzantine
tolerant schemes. Using learning based techniques, Byzantine identification schemes are designed
that learn the identity of Byzantines in the network and use this information to improve system
performance. For scenarios where this is not possible, Byzantine tolerant schemes, which use
game theory and error-correcting codes, are developed that tolerate the effect of Byzantines while
maintaining a reasonably good inference performance in the network.
Going a step further, we also consider scenarios where a centralized FC is not available. In
such scenarios, a solution is to employ detection approaches which are based on fully distributed
consensus algorithms, where all of the nodes exchange information only with their neighbors. For
such networks, we analytically characterize the negative effect of Byzantines on the steady-state
and transient detection performance of conventional consensus-based detection schemes. To avoid
performance deterioration, we propose a distributed weighted average consensus algorithm that is
robust to Byzantine attacks. Next, we exploit the statistical distribution of the nodes’ data to devise
techniques for mitigating the influence of data falsifying Byzantines on the distributed detection
system. Since some parameters of the statistical distribution of the nodes’ data might not be known
a priori, we propose learning based techniques to enable an adaptive design of the local fusion or
update rules.
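The consensus primitive underlying these fully decentralized schemes can be sketched briefly. The example below shows plain average consensus on an undirected graph using Metropolis weights (the topology and measurements are hypothetical); the Byzantine-robust weighted variant described in the text would additionally down-weight or clip neighbours whose values deviate suspiciously from the local statistics.

```python
# Minimal average-consensus sketch with Metropolis weights.
import numpy as np

def metropolis_weights(adj):
    """Doubly stochastic weight matrix for an undirected graph."""
    n = len(adj)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()   # self-weight keeps rows summing to 1
    return W

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.array([10.0, 20.0, 30.0, 40.0])   # hypothetical local statistics

W = metropolis_weights(adj)
for _ in range(200):                      # x <- W x: repeated local averaging
    x = W @ x
print(np.round(x, 3))                     # every node approaches the mean, 25.0
```

Because each node only ever combines its own value with its neighbours', a single data-falsifying node can steer the whole network's limit, which is why the thesis replaces the uniform weights above with adaptively learned, robust ones.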
The above considerations highlight the negative effect of the corrupted data on the inference
performance. However, it is possible for a system designer to utilize the corrupted data for network’s
benefit. Finally, we consider the problem of detecting a high dimensional signal based on
compressed measurements with secrecy guarantees. We consider a scenario where the network
operates in the presence of an eavesdropper who wants to discover the state of the nature being
monitored by the system. To keep the data secret from the eavesdropper, we propose to use cooperating
trustworthy nodes that assist the FC by injecting corrupted data in the system to deceive the
eavesdropper. We also design the system by determining the optimal values of parameters which
maximize the detection performance at the FC while ensuring perfect secrecy at the eavesdropper
- …