Rate-Distortion Classification for Self-Tuning IoT Networks
Many future wireless sensor networks and the Internet of Things are expected
to follow a software defined paradigm, where protocol parameters and behaviors
will be dynamically tuned as a function of the signal statistics. New protocols
will then be injected as software when certain events occur. For instance, new
data compressors could be (re)programmed on-the-fly as the monitored signal
type or its statistical properties change. We consider a lossy compression
scenario, where the application tolerates some distortion of the gathered
signal in return for improved energy efficiency. To reap the full benefits of
this paradigm, we discuss an automatic sensor profiling approach where the
signal class, and in particular the corresponding rate-distortion curve, is
automatically assessed using machine learning tools (namely, support vector
machines and neural networks). We show that this curve can be reliably
estimated on-the-fly through the computation of a small number (from ten to
twenty) of statistical features on time windows of a few hundred samples.
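The feature-and-classify pipeline described above can be sketched as follows. This is a minimal stand-in, not the paper's method: only three window features are computed (the paper uses ten to twenty) and a nearest-centroid rule replaces the SVM/neural-network classifiers; all names are illustrative.

```python
import math
import random
import statistics

def window_features(x):
    # A small set of per-window statistics (stand-in for the paper's
    # ten-to-twenty features): mean, spread, and lag-1 autocorrelation.
    mu = statistics.fmean(x)
    sd = statistics.pstdev(x)
    num = sum((a - mu) * (b - mu) for a, b in zip(x, x[1:]))
    den = sum((a - mu) ** 2 for a in x) or 1.0
    return (mu, sd, num / den)   # smooth signals: autocorrelation near 1

def classify(feats, centroids):
    # Nearest-centroid stand-in for the SVM/NN classifier: pick the signal
    # class whose feature centroid is closest; the class then indexes a
    # pre-computed rate-distortion curve.
    return min(centroids, key=lambda c: math.dist(feats, centroids[c]))

random.seed(0)
smooth = [math.sin(0.05 * i) for i in range(400)]       # slowly varying signal
noisy = [random.gauss(0.0, 1.0) for _ in range(400)]    # white noise
centroids = {"smooth": window_features(smooth), "noisy": window_features(noisy)}

probe = [math.sin(0.05 * i + 0.3) for i in range(400)]  # unseen smooth window
print(classify(window_features(probe), centroids))      # → smooth
```

The lag-1 autocorrelation alone separates these two toy classes; a real deployment would need the richer feature set and trained classifiers the abstract describes.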
Rate-distortion Balanced Data Compression for Wireless Sensor Networks
This paper presents a data compression algorithm with error bound guarantee
for wireless sensor networks (WSNs) using compressing neural networks. The
proposed algorithm minimizes data congestion and reduces energy consumption by
exploiting spatio-temporal correlations among data samples. The adaptive
rate-distortion feature balances the compressed data size (data rate) with the
required error bound guarantee (distortion level). This compression relieves
the strain on energy and bandwidth resources while collecting WSN data within
tolerable error margins, thereby increasing the scale of WSNs. The algorithm is
evaluated using real-world datasets and compared with conventional methods for
temporal and spatial data compression. The experimental validation reveals that
the proposed algorithm outperforms several existing WSN data compression
methods in terms of compression efficiency and signal reconstruction. Moreover,
an energy analysis shows that compressing the data can reduce the energy
expenditure, and hence extend the service lifespan severalfold.
Efficient Data Compression with Error Bound Guarantee in Wireless Sensor Networks
We present a data compression and dimensionality reduction scheme for data
fusion and aggregation applications to prevent data congestion and reduce
energy consumption at network connecting points such as cluster heads and
gateways. Our in-network approach can be easily tuned to analyze the data
temporal or spatial correlation using an unsupervised neural network scheme,
namely the autoencoders. In particular, our algorithm extracts intrinsic data
features from previously collected historical samples to transform the raw data
into a low dimensional representation. Moreover, the proposed framework
provides an error bound guarantee mechanism. We evaluate the proposed solution
using real-world data sets and compare it with traditional methods for temporal
and spatial data compression. The experimental validation reveals that our
approach outperforms several existing wireless sensor network data
compression methods in terms of compression efficiency and signal
reconstruction.
Comment: ACM MSWiM 201
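The error-bound guarantee mechanism that both compression abstracts describe can be sketched independently of the autoencoder itself: compress, reconstruct, and transmit the code only if the worst-case reconstruction error stays within the bound. The two-endpoint linear encoder below is an illustrative stand-in for the trained autoencoder, and all names are assumptions.

```python
def compress_with_bound(window, encode, decode, eps):
    # Error-bound guarantee wrapper: transmit the compressed code only if
    # reconstruction stays within eps at every sample; otherwise fall back
    # to the raw window, preserving the guarantee at full cost.
    code = encode(window)
    recon = decode(code)
    if max(abs(a - b) for a, b in zip(window, recon)) <= eps:
        return ("code", code)   # small payload, bounded error
    return ("raw", window)      # bound would be violated: send raw data

# Stand-in encoder/decoder: keep only the endpoints and interpolate
# (the papers train an autoencoder on historical samples instead).
def encode(w):
    return (w[0], w[-1], len(w))

def decode(c):
    a, b, n = c
    return [a + (b - a) * i / (n - 1) for i in range(n)]

ramp = [0.1 * i for i in range(10)]      # perfectly linear: code passes
spike = ramp[:5] + [5.0] + ramp[6:]      # mid-window spike: bound fails
print(compress_with_bound(ramp, encode, decode, 0.01)[0])   # → code
print(compress_with_bound(spike, encode, decode, 0.01)[0])  # → raw
```

The fallback path is what turns a lossy compressor into one with a hard per-sample guarantee: the receiver always sees data within eps of the truth, whatever the encoder does.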
EC-CENTRIC: An Energy- and Context-Centric Perspective on IoT Systems and Protocol Design
The radio transceiver of an IoT device is often where most of the energy is consumed. For this reason, most research so far has focused on low power circuit and energy efficient physical layer designs, with the goal of reducing the average energy per information bit required for communication. While these efforts are valuable per se, their actual effectiveness can be partially neutralized by ill-designed network, processing and resource management solutions, which can become a primary factor of performance degradation, in terms of throughput, responsiveness and energy efficiency. The objective of this paper is to describe an energy-centric and context-aware optimization framework that accounts for the energy impact of the fundamental functionalities of an IoT system and that proceeds along three main technical thrusts: 1) balancing signal-dependent processing techniques (compression and feature extraction) and communication tasks; 2) jointly designing channel access and routing protocols to maximize the network lifetime; 3) providing self-adaptability to different operating conditions through the adoption of suitable learning architectures and of flexible/reconfigurable algorithms and protocols. After discussing this framework, we present some preliminary results that validate the effectiveness of our proposed line of action, and show how the use of adaptive signal processing and channel access techniques allows an IoT network to dynamically trade lifetime for signal distortion, according to the requirements dictated by the application.
Enabling Compression in Tiny Wireless Sensor Nodes
A Wireless Sensor Network (WSN) is a network composed of sensor nodes communicating among themselves and deployed in large scale (from tens to thousands) for applications such as environmental, habitat and structural monitoring, disaster management, equipment diagnostics, alarm detection, and target classification. In WSNs, typically, sensor nodes are randomly distributed over the area under observation with very high density. Each node is a small device able to collect information from the surrounding environment through one or more sensors, to process this information locally and to communicate it to a data collection centre called sink or base station. WSNs are currently an active research area mainly due to the potential of their applications. However, the deployment of a large scale WSN still requires solutions to a number of technical challenges that stem primarily from the features of the sensor nodes such as limited computational power, reduced communication bandwidth and small storage capacity. Further, since sensor nodes are typically powered by batteries with a limited capacity, energy is a primary constraint in the design and deployment of WSNs. Datasheets of commercial sensor nodes show that data communication is very expensive in terms of energy consumption, whereas data processing consumes significantly less: the energy cost of receiving or transmitting a single bit of information is approximately the same as that required by the processing unit for executing a thousand operations. On the other hand, the energy consumption of the sensing unit depends on the specific sensor type. In several cases, however, it is negligible with respect to the energy consumed by the communication unit and sometimes also by the processing unit. Thus, to extend the lifetime of a WSN, most of the energy conservation schemes proposed in the literature aim to minimize the energy consumption of the communication unit (Croce et al., 2008).
To achieve this objective, two main approaches have been followed: power saving through duty cycling and in-network processing. Duty cycling schemes define coordinated sleep/wakeup schedules among nodes in the network. A detailed description of these techniques applied to WSNs can be found in (Anastasi et al., 2009). On the other hand, in-network processing consists of reducing the amount of information to be transmitted by means of aggregation (Boulis et al., 2003; Croce et al., 2008; Di Bacco et al., 2004; Fan et al., 2007).
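The one-bit-per-thousand-operations figure quoted above yields a simple break-even test for in-network processing: compressing pays off only when the transmission energy saved exceeds the processing energy spent. A minimal sketch, assuming energy is counted in CPU-operation equivalents and that the 1000:1 ratio holds (it is a datasheet-dependent figure, not a universal constant):

```python
def compression_pays_off(bits_saved, cpu_ops, tx_ops_per_bit=1000):
    # Break-even check: sending one bit costs roughly as much energy as
    # tx_ops_per_bit CPU operations, so compression helps only when the
    # bits it removes outweigh the operations it spends. The default
    # ratio of 1000 is the rule of thumb cited in the text; a real
    # deployment would read it off the node's datasheet.
    return bits_saved * tx_ops_per_bit > cpu_ops

# Saving 200 bits at a cost of 50,000 operations: 200 * 1000 > 50,000
print(compression_pays_off(200, 50_000))   # → True
# Saving only 10 bits for the same work is a net energy loss:
print(compression_pays_off(10, 50_000))    # → False
```

This is why lightweight compressors dominate on sensor nodes: an algorithm that saves many bits but burns more than a thousand operations per bit saved makes the energy budget worse, not better.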
Evaluation of Tunable Data Compression in Energy-Aware Wireless Sensor Networks
Energy is an important consideration in wireless sensor networks. In current compression evaluations, traditional indices are still used, while energy efficiency is often neglected. Moreover, various evaluation biases significantly affect the final results. All these factors lead to a subjective evaluation. In this paper, a new criterion is proposed and a series of tunable compression algorithms are reevaluated. The results show that the new criterion makes the evaluation more objective. Additionally, it indicates the situations in which compression is unnecessary. A new adaptive compression arbitration system is proposed based on the evaluation results, which improves the performance of compression algorithms.
Multi-dimensional data stream compression for embedded systems
The rise of embedded systems and wireless technologies led to the emergence of
the Internet of Things (IoT). Connected objects in IoT communicate with each
other by transferring data streams over the network. For instance, in Wireless
Sensor Networks (WSNs), sensor-equipped devices capture physical quantities,
such as temperature or acceleration, and send 1D or nD data streams
to a host system. Power consumption is a critical problem for connected objects
that have to work for a long time without being recharged, as it greatly affects
their lifetime and usability. Data summarization is key for energy-constrained
connected devices, as transmitting fewer data can reduce energy usage during
transmission. Data compression, in particular, can compress the data stream
while preserving information to a great extent. Many compression methods have
been proposed in previous research. However, most of them are either not
applicable to connected objects, due to resource limitation, or only handle
one-dimensional streams while data acquired in connected objects are often
multi-dimensional. Lightweight Temporal Compression (LTC) is among the lossy
stream compression methods that provide the highest compression rate for the
lowest CPU and memory consumption. In this thesis, we investigate the extension
of LTC to multi-dimensional streams. First, we provide a formulation of the
algorithm in an arbitrary vectorial space of dimension n. Then, we implement the
algorithm for the infinity and Euclidean norms, in spaces of dimension 2D+t and
3D+t. We evaluate our implementation on 3D acceleration streams of human
activities, on Neblina, a module integrating multiple sensors developed by our
partner Motsai. Results show that the 3D implementation of LTC can save up to
20% in energy consumption for slow-paced activities, with a memory usage of
about 100 B. Finally, we compare our method with polynomial regression
compression methods in different dimensions. Our results show that our extension
of LTC gives a higher compression ratio than the polynomial regression method,
while using less memory and CPU.
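The slope-cone idea behind LTC can be sketched in 1-D as follows. This is a simplified sketch, not the thesis implementation: the endpoint-selection rule (cone midpoint) is one admissible choice, and the n-dimensional extension under the infinity norm is only described in the comment.

```python
def ltc(samples, eps):
    # Lightweight Temporal Compression, 1-D sketch. From the last emitted
    # point it keeps a cone [lo, hi] of line slopes that keep every skipped
    # sample within +/-eps; when a new sample empties the cone, a segment
    # endpoint is emitted and the cone restarts. Under the infinity norm,
    # the n-D extension studied in the thesis runs one such cone per
    # dimension and restarts all of them when any one empties.
    out = [(0, samples[0])]
    t0, v0 = 0, samples[0]
    hi, lo = float("inf"), float("-inf")
    t = 1
    while t < len(samples):
        dt = t - t0
        new_hi = min(hi, (samples[t] + eps - v0) / dt)
        new_lo = max(lo, (samples[t] - eps - v0) / dt)
        if new_lo <= new_hi:                 # sample fits: tighten the cone
            hi, lo = new_hi, new_lo
            t += 1
        else:                                # cone empty: emit endpoint at t-1
            v0 = v0 + 0.5 * (hi + lo) * (t - 1 - t0)
            t0 = t - 1
            out.append((t0, v0))
            hi, lo = float("inf"), float("-inf")
    last = len(samples) - 1
    if last > t0:                            # flush the open segment
        out.append((last, v0 + 0.5 * (hi + lo) * (last - t0)))
    return out

# A ramp with a jump: 7 samples compress to 4 segment endpoints, and
# linear interpolation between them stays within eps of every sample.
points = ltc([0, 1, 2, 3, 10, 11, 12], 0.5)
print([t for t, _ in points])   # → [0, 3, 4, 6]
```

Per appended point the state is just `(t0, v0, hi, lo)`, which is why LTC's memory footprint stays in the tens of bytes per stream dimension, consistent with the ~100 B figure reported above for the 3D case.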