Data aggregation and recovery for the Internet of Things: A compressive demixing approach
Large-scale wireless sensor networks (WSNs) and Internet-of-Things (IoT) applications involve diverse sensing devices collecting and transmitting massive amounts of heterogeneous data. In this paper, we propose a novel compressive data aggregation and recovery mechanism that reduces the global communication cost without introducing computational overhead at the network nodes. Following the principles of compressive demixing, each node of the network collects measurement readings from multiple sources and mixes them with readings from other nodes into a single low-dimensional measurement vector, which is then relayed to other nodes; the constituent signals are recovered at the sink using convex optimization. Our design achieves a significant reduction in the overall network data rates compared to prior schemes based on (distributed) compressed sensing or compressed sensing with (multiple) side information. Experiments using real large-scale air-quality data demonstrate the superior performance of the proposed framework against state-of-the-art solutions, with and without measurement and transmission noise.
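The aggregation idea above can be illustrated with a toy numerical sketch. All sizes, matrix names, and the greedy decoder below are assumptions for illustration: two sparse sources are mixed through random projections into one low-dimensional vector, and the sink demixes them by sparse recovery over the stacked dictionary. Orthogonal matching pursuit stands in for the convex-optimization decoder the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressive-demixing setup: two nodes each hold a k-sparse signal,
# and the network relays only the single mixed measurement vector
# y = A1 @ x1 + A2 @ x2 (sizes below are illustrative assumptions).
n, m, k = 200, 120, 5          # signal length, measurements, sparsity

def sparse_signal(n, k):
    x = np.zeros(n)
    idx = rng.choice(n, size=k, replace=False)
    x[idx] = rng.normal(size=k)
    return x

x1, x2 = sparse_signal(n, k), sparse_signal(n, k)
A1 = rng.normal(size=(m, n)) / np.sqrt(m)
A2 = rng.normal(size=(m, n)) / np.sqrt(m)
y = A1 @ x1 + A2 @ x2          # the only vector that is relayed

# At the sink, demixing reduces to sparse recovery over the stacked
# dictionary [A1 A2]; greedy orthogonal matching pursuit stands in for
# the paper's convex-optimization decoder.
A = np.hstack([A1, A2])
residual, support = y.copy(), []
coef = np.zeros(0)
for _ in range(4 * k):         # a few extra iterations for robustness
    j = int(np.argmax(np.abs(A.T @ residual)))
    if j not in support:
        support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(2 * n)
x_hat[support] = coef
x_true = np.concatenate([x1, x2])
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {err:.2e}")
```

In the noiseless case the stacked system recovers both constituent signals essentially exactly, which is what makes relaying a single mixed vector cheaper than forwarding each node's readings separately.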
Rate-distortion Balanced Data Compression for Wireless Sensor Networks
This paper presents a data compression algorithm with an error-bound
guarantee for wireless sensor networks (WSNs) using compressing neural
networks. The proposed algorithm minimizes data congestion and reduces
energy consumption by exploiting spatio-temporal correlations among data
samples. The adaptive
rate-distortion feature balances the compressed data size (data rate) with the
required error bound guarantee (distortion level). This compression relieves
the strain on energy and bandwidth resources while collecting WSN data within
tolerable error margins, thereby increasing the scale of WSNs. The algorithm is
evaluated using real-world datasets and compared with conventional methods for
temporal and spatial data compression. The experimental validation reveals that
the proposed algorithm outperforms several existing WSN data compression
methods in terms of compression efficiency and signal reconstruction. Moreover,
an energy analysis shows that compressing the data can reduce the energy
expenditure and hence extend the service lifespan severalfold.
Comment: arXiv admin note: text overlap with arXiv:1408.294
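The "error bound guarantee" notion above can be made concrete with a much simpler stand-in than the paper's compressing neural networks: a uniform quantizer with step `2 * eps` bounds the per-sample reconstruction error by `eps` by construction. All names and numbers below are assumptions for illustration only.

```python
import numpy as np

# Error-bound-guaranteed compression via uniform quantization: rounding
# to a grid of step 2*eps guarantees |x - x_hat| <= eps per sample, so
# the distortion level is controlled directly by the chosen bound.
eps = 0.05                                       # tolerable per-sample error
rng = np.random.default_rng(1)
readings = np.cumsum(rng.normal(0, 0.1, 1000))   # correlated sensor trace

codes = np.round(readings / (2 * eps)).astype(np.int64)  # compressed side
recovered = codes * 2 * eps                              # decoder side

max_err = float(np.max(np.abs(readings - recovered)))
print(f"max reconstruction error: {max_err:.4f} (bound: {eps})")
```

Loosening `eps` shrinks the integer codes' entropy (fewer distinct levels) at the cost of distortion, which is the same rate-distortion dial the abstract's adaptive scheme tunes.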
Rate-Distortion Classification for Self-Tuning IoT Networks
Many future wireless sensor networks and Internet of Things deployments
are expected to follow a software-defined paradigm, where protocol
parameters and behaviors are dynamically tuned as a function of the signal
statistics. New protocols will then be injected as software as certain
events occur. For instance, new data compressors could be (re)programmed
on-the-fly as the monitored signal type or its statistical properties
change. We consider a lossy compression scenario, where the application
tolerates some distortion of the gathered signal in return for improved
energy efficiency. To reap the full benefits of this paradigm, we discuss
an automatic sensor profiling approach in which the signal class, and in
particular the corresponding rate-distortion curve, is automatically
assessed using machine learning tools (namely, support vector machines and
neural networks). We show that this curve can be reliably estimated
on-the-fly by computing a small number (ten to twenty) of statistical
features on time windows of a few hundred samples.
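The profiling step can be sketched as follows: compute a few statistical features per time window, then classify the signal class from them. A nearest-centroid rule stands in here for the SVM/neural-network classifiers the abstract mentions, and the two synthetic classes (smooth vs. white noise) and every parameter are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def features(w):
    # Three cheap per-window statistics, computable on-node.
    d = np.diff(w)
    return np.array([w.std(),                            # spread
                     np.abs(d).mean(),                   # roughness
                     np.corrcoef(w[:-1], w[1:])[0, 1]])  # lag-1 autocorr

def window(kind, n=300):
    t = np.arange(n)
    if kind == "smooth":                                 # slowly varying
        return np.sin(2 * np.pi * t / 100) + 0.05 * rng.normal(size=n)
    return rng.normal(size=n)                            # white noise

# Per-class feature centroids learned from a handful of training windows.
train = {k: np.mean([features(window(k)) for _ in range(20)], axis=0)
         for k in ("smooth", "noise")}

def classify(w):
    f = features(w)
    return min(train, key=lambda k: np.linalg.norm(f - train[k]))

tests = [("smooth", window("smooth")) for _ in range(25)] + \
        [("noise", window("noise")) for _ in range(25)]
accuracy = np.mean([classify(w) == k for k, w in tests])
print(f"window-classification accuracy: {accuracy:.2f}")
```

Once the class (and hence its rate-distortion curve) is identified, the node can pick the matching compressor, which is the self-tuning behavior the abstract describes.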
Optimal Compression and Transmission Rate Control for Node-Lifetime Maximization
We consider a system that is composed of an energy constrained sensor node
and a sink node, and devise optimal data compression and transmission policies
with an objective to prolong the lifetime of the sensor node. While applying
compression before transmission reduces the energy consumption of transmitting
the sensed data, blindly applying too much compression may even exceed the cost
of transmitting raw data, thereby losing its purpose. Hence, it is important to
investigate the trade-off between data compression and transmission energy
costs. In this paper, we study the joint optimal compression-transmission
design in three scenarios which differ in terms of the available channel
information at the sensor node, and cover a wide range of practical situations.
We formulate and solve joint optimization problems aiming to maximize the
lifetime of the sensor node whilst satisfying specific delay and bit error rate
(BER) constraints. Our results show that a jointly optimized
compression-transmission policy achieves significantly longer lifetime (90% to
2000%) as compared to optimizing transmission only without compression.
Importantly, this performance advantage is most pronounced when the delay
constraint is stringent, which demonstrates its suitability for low-latency
communication in future wireless networks.
Comment: accepted for publication in IEEE Transactions on Wireless Communications
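The compression-versus-transmission trade-off discussed above can be illustrated with a toy energy model (the cost constants and functional forms are assumptions, not the paper's model): transmission energy scales with the compressed fraction of the payload, while compression energy grows as the data is squeezed harder.

```python
import numpy as np

# c is the compressed fraction of the raw payload: transmission energy
# scales with c, compression energy grows as ~1/c (squeezing harder
# costs more CPU energy). Constants are illustrative assumptions.
E_TX, E_CMP = 4.0, 1.0                    # assumed per-unit energy costs

def total_energy(c):
    return E_CMP / c + E_TX * c           # compression + transmission

c = np.linspace(0.01, 1.0, 100)
c_opt = float(c[np.argmin(total_energy(c))])
print(f"optimal compressed fraction: {c_opt:.2f}")
```

Neither extreme wins: sending raw data (c = 1) and maximal compression (c close to 0) both cost more than the interior optimum sqrt(E_CMP / E_TX), which is the "blindly applying too much compression may even exceed the cost of transmitting raw data" point made in the abstract.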
Middleware Technologies for Cloud of Things - a survey
The next wave of communication and applications relies on the new services
provided by the Internet of Things (IoT), which is becoming an important
part of the future of humans and machines. IoT services are a key enabler
of smart environments in homes, buildings, and cities. In the era of a
massive number of connected things and objects growing at a high rate,
several challenges have been raised, such as the management, aggregation,
and storage of the large volumes of data produced. To tackle some of these
issues, cloud computing has been combined with IoT as the Cloud of Things
(CoT), which provides virtually unlimited cloud services to enhance
large-scale IoT platforms. Several factors must be considered in the
design and implementation of a CoT platform. One of the most important and
challenging problems is the heterogeneity of the connected objects. This
problem can be addressed by deploying suitable "middleware": software that
sits between things and applications and provides a reliable platform for
communication among things with different interfaces, operating systems,
and architectures. The main aim of this paper is to study middleware
technologies for CoT. Toward this end, we first present the main features
and characteristics of middleware. Next, we study different architectural
styles and service domains. Then we present several middleware platforms
suitable for CoT-based systems, and lastly we discuss a list of current
challenges and issues in the design of CoT-based middleware.
Comment: http://www.sciencedirect.com/science/article/pii/S2352864817301268,
Digital Communications and Networks, Elsevier (2017)