Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications
Wireless sensor networks monitor dynamic environments that change rapidly
over time. This dynamic behavior is either caused by external factors or
initiated by the system designers themselves. To adapt to such conditions,
sensor networks often adopt machine learning techniques to eliminate the need
for unnecessary redesign. Machine learning also inspires many practical
solutions that maximize resource utilization and prolong the lifespan of the
network. In this paper, we present an extensive literature review over the
period 2002-2013 of machine learning methods that were used to address common
issues in wireless sensor networks (WSNs). The advantages and disadvantages of
each proposed algorithm are evaluated against the corresponding problem. We
also provide a comparative guide to aid WSN designers in developing suitable
machine learning solutions for their specific application challenges.
Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
Self-Calibration Methods for Uncontrolled Environments in Sensor Networks: A Reference Survey
Growing progress in sensor technology has constantly expanded the number and
range of low-cost, small, and portable sensors on the market, increasing the
number and type of physical phenomena that can be measured with wirelessly
connected sensors. Large-scale deployments of wireless sensor networks (WSN)
involving hundreds or thousands of devices and limited budgets often constrain
the choice of sensing hardware, which generally has reduced accuracy,
precision, and reliability. Therefore, it is challenging to achieve good data
quality and maintain error-free measurements during the whole system lifetime.
Self-calibration or recalibration in ad hoc sensor networks to preserve data
quality is essential, yet challenging, for several reasons, such as the
existence of random noise and the absence of suitable general models.
Calibration performed in the field, without accurate and controlled
instrumentation, is said to be in an uncontrolled environment. This paper
surveys current and fundamental self-calibration approaches and models for
wireless sensor networks in uncontrolled environments.
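One of the simplest self-calibration models in this family is a linear gain/offset correction fitted against a co-located, more trustworthy reference. The sketch below is an illustrative assumption, not a method from the survey; the data values and function names are hypothetical.

```python
# Hypothetical sketch: fit a gain/offset correction for a low-cost sensor by
# ordinary least squares against a co-located reference sensor.

def fit_gain_offset(raw, reference):
    """Fit reference ~ gain * raw + offset by least squares."""
    n = len(raw)
    mean_x = sum(raw) / n
    mean_y = sum(reference) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(raw, reference))
    var_x = sum((x - mean_x) ** 2 for x in raw)
    gain = cov_xy / var_x
    offset = mean_y - gain * mean_x
    return gain, offset

def calibrate(raw, gain, offset):
    """Apply the fitted correction to new raw readings."""
    return [gain * x + offset for x in raw]

# A drifted sensor that reads roughly 1.1x the true value plus 0.5:
raw = [10.0, 12.0, 15.0, 20.0]
reference = [1.1 * x + 0.5 for x in raw]  # exact relation, for the demo
gain, offset = fit_gain_offset(raw, reference)
```

In the field, `reference` would come from periodic co-location with a calibrated instrument or a trusted neighboring node, and the fit would be repeated as the sensor drifts.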
Pushing towards the Limit of Sampling Rate: Adaptive Chasing Sampling
Measurement samples are often taken in various monitoring applications. To
reduce the sensing cost, it is desirable to achieve better sensing quality
while using fewer samples. The Compressive Sensing (CS) technique applies
when the signal to be sampled meets certain sparsity requirements. In this
paper we investigate the possibility and basic techniques that could further
reduce the number of samples involved in conventional CS theory by exploiting
learning-based non-uniform adaptive sampling.
Based on a typical signal sensing application, we illustrate and evaluate the
performance of two of our algorithms, Individual Chasing and Centroid Chasing,
for signals of different distribution features. Our proposed learning-based
adaptive sampling schemes complement existing efforts in CS fields and do not
depend on any specific signal reconstruction technique. Compared to
conventional sparse sampling methods, the simulation results demonstrate that
our algorithms require fewer samples for accurate signal reconstruction and
achieve a smaller signal reconstruction error under the same noise conditions.
Comment: 9 pages, IEEE MASS 201
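The core idea of learning-based non-uniform sampling can be illustrated with a toy "chasing" loop: start from a coarse uniform scan, then spend the remaining sample budget near the region that currently looks most informative. This is a hedged sketch of that general idea, not the paper's Individual Chasing or Centroid Chasing algorithms; the signal and all names are assumptions.

```python
# Illustrative sketch of adaptive non-uniform sampling: refine samples around
# the strongest feature found so far, instead of sampling uniformly.

def signal(x):
    """A toy unimodal signal with one sharp feature near x = 0.7."""
    return 1.0 / (1.0 + 200.0 * (x - 0.7) ** 2)

def adaptive_sample(f, coarse=8, rounds=4, budget_per_round=4):
    # Start with a coarse uniform scan over [0, 1].
    xs = [i / (coarse - 1) for i in range(coarse)]
    samples = {x: f(x) for x in xs}
    width = 1.0 / (coarse - 1)
    for _ in range(rounds):
        # "Chase" the current maximum: place new samples around the best one.
        best = max(samples, key=samples.get)
        width /= 2
        for i in range(budget_per_round):
            x = best - width + (2 * width) * i / (budget_per_round - 1)
            x = min(max(x, 0.0), 1.0)
            samples[x] = f(x)
    return samples

samples = adaptive_sample(signal)
peak = max(samples, key=samples.get)
```

With 8 coarse samples plus 4 rounds of 4 refinements (at most 24 samples total), the estimated peak lands close to 0.7, whereas a uniform grid of the same size would locate it less precisely.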
FastDeepIoT: Towards Understanding and Optimizing Neural Network Execution Time on Mobile and Embedded Devices
Deep neural networks show great potential as solutions to many sensing
application problems, but their excessive resource demand slows down execution
time, posing a serious impediment to deployment on low-end devices. To address
this challenge, recent literature focused on compressing neural network size to
improve performance. We show that changing neural network size does not
proportionally affect performance attributes of interest, such as execution
time. Rather, extreme run-time nonlinearities exist over the network
configuration space. Hence, we propose a novel framework, called FastDeepIoT,
that uncovers the non-linear relation between neural network structure and
execution time, then exploits that understanding to find network configurations
that significantly improve the trade-off between execution time and accuracy on
mobile and embedded devices. FastDeepIoT makes two key contributions. First,
FastDeepIoT automatically learns an accurate and highly interpretable execution
time model for deep neural networks on the target device. This is done without
prior knowledge of either the hardware specifications or the detailed
implementation of the used deep learning library. Second, FastDeepIoT informs a
compression algorithm how to minimize execution time on the profiled device
without impacting accuracy. We evaluate FastDeepIoT using three different
sensing-related tasks on two mobile devices: Nexus 5 and Galaxy Nexus.
FastDeepIoT further reduces neural network execution time and energy
consumption compared with the state-of-the-art compression algorithms.
Comment: Accepted by SenSys '1
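The first contribution, an interpretable execution-time model learned from on-device profiling, can be sketched as a small least-squares fit over measured runs. The feature choice (FLOPs and layer count) and the synthetic timings below are illustrative assumptions, not FastDeepIoT's actual model or data.

```python
# Hypothetical sketch: learn a linear execution-time model
#   t ~ w0 * flops + w1 * layers + w2
# from profiled (flops, layers, measured_ms) tuples, via normal equations.

def solve(A, b):
    """Solve a small linear system A x = b by Gaussian elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_time_model(profiles):
    """Least-squares fit of t ~ w0*flops + w1*layers + w2."""
    rows = [[f, l, 1.0] for f, l, _ in profiles]
    ts = [t for _, _, t in profiles]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Atb = [sum(r[i] * t for r, t in zip(rows, ts)) for i in range(3)]
    return solve(AtA, Atb)

# Synthetic profiled runs that happen to follow t = 3.0*flops + 0.5*layers + 2.0:
profiles = [(1.0, 4, 7.0), (2.0, 8, 12.0), (0.5, 2, 4.5), (4.0, 10, 19.0)]
w = fit_time_model(profiles)
```

A compression algorithm can then query such a model to predict how a candidate configuration trades execution time against accuracy before deploying it.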
Gossip Algorithms for Distributed Signal Processing
Gossip algorithms are attractive for in-network processing in sensor networks
because they do not require any specialized routing, there is no bottleneck or
single point of failure, and they are robust to unreliable wireless network
conditions. Recently, there has been a surge of activity in the computer
science, control, signal processing, and information theory communities,
developing faster and more robust gossip algorithms and deriving theoretical
performance guarantees. This article presents an overview of recent work in the
area. We describe convergence rate results, which are related to the number of
transmitted messages and thus the amount of energy consumed in the network for
gossiping. We discuss issues related to gossiping over wireless links,
including the effects of quantization and noise, and we illustrate the use of
gossip algorithms for canonical signal processing tasks including distributed
estimation, source localization, and compression.
Comment: Submitted to Proceedings of the IEEE, 29 pages
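The canonical example in this literature is randomized pairwise gossip for distributed averaging: at each step, two neighboring nodes exchange values and replace both with their average, so every node converges to the network-wide mean with no routing or coordinator. The sketch below is a minimal standard version of that scheme; the 5-node ring topology and values are illustrative assumptions.

```python
# Minimal randomized pairwise gossip averaging on an undirected graph.
# Each step picks a random edge; its two endpoints average their values.
# The sum (hence the global average) is preserved at every step.

import random

def gossip_average(values, edges, steps=2000, seed=0):
    rng = random.Random(seed)
    x = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)
        avg = (x[i] + x[j]) / 2
        x[i] = x[j] = avg
    return x

# A 5-node ring network with initial sensor readings:
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
values = [10.0, 0.0, 4.0, 6.0, 5.0]
x = gossip_average(values, edges)
```

After enough steps every entry of `x` is close to the true average (5.0 here); the convergence-rate results surveyed in the article bound how many such exchanges, and hence how much transmission energy, this takes.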