
    IETF standardization in the field of the Internet of Things (IoT): a survey

    Smart embedded objects will become an important part of what is called the Internet of Things. However, the integration of embedded devices into the Internet introduces several challenges, since many of the existing Internet technologies and protocols were not designed for this class of devices. In the past few years, there have been many efforts to enable the extension of Internet technologies to constrained devices. Initially, this resulted in proprietary protocols and architectures. Later, the integration of constrained devices into the Internet was embraced by the IETF, moving towards standardized IP-based protocols. In this paper, we briefly review the history of integrating constrained devices into the Internet, followed by an extensive overview of IETF standardization work in the 6LoWPAN, ROLL and CoRE working groups. This is complemented with a broad overview of related research results that illustrate how this work can be extended or used to tackle other problems, and with a discussion of open issues and challenges. As such, the aim of this paper is twofold: apart from giving readers solid insights into IETF standardization work on the Internet of Things, it also aims to encourage readers to further explore the world of Internet-connected objects, pointing to future research opportunities.

    Rate-distortion Balanced Data Compression for Wireless Sensor Networks

    This paper presents a data compression algorithm with an error bound guarantee for wireless sensor networks (WSNs) using compressing neural networks. The proposed algorithm minimizes data congestion and reduces energy consumption by exploiting spatio-temporal correlations among data samples. The adaptive rate-distortion feature balances the compressed data size (data rate) against the required error bound guarantee (distortion level). This compression relieves the strain on energy and bandwidth resources while collecting WSN data within tolerable error margins, thereby allowing WSNs to scale up. The algorithm is evaluated using real-world datasets and compared with conventional methods for temporal and spatial data compression. The experimental validation reveals that the proposed algorithm outperforms several existing WSN data compression methods in terms of compression efficiency and signal reconstruction. Moreover, an energy analysis shows that compressing the data can reduce the energy expenditure, and hence extend the service lifespan severalfold. Comment: arXiv admin note: text overlap with arXiv:1408.294
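    To make the rate-distortion balance concrete, here is a minimal Python sketch of how a learned compressor can pick the smallest code size that still satisfies a given error bound. The SVD-based LinearCompressor is only an illustrative stand-in for the paper's compressing neural network, and every name and parameter below (encode, decode, error_bound) is a hypothetical choice, not taken from the paper.

```python
import numpy as np

class LinearCompressor:
    """Illustrative stand-in for a compressing neural network: a linear
    autoencoder fitted with an SVD on previously collected samples."""

    def __init__(self, history):                    # history: (n_samples, n_dims)
        self.mean = history.mean(axis=0)
        # Principal directions of the historical data.
        _, _, self.components = np.linalg.svd(history - self.mean, full_matrices=False)

    def encode(self, x, k):
        return (x - self.mean) @ self.components[:k].T     # k-dimensional code

    def decode(self, code, k):
        return code @ self.components[:k] + self.mean

def compress_with_error_bound(compressor, x, error_bound):
    """Pick the smallest code size whose reconstruction error stays within
    the tolerated distortion: the rate-distortion balance in miniature."""
    for k in range(1, len(x) + 1):
        code = compressor.encode(x, k)
        err = np.max(np.abs(compressor.decode(code, k) - x))
        if err <= error_bound:
            return code, k, err                     # smallest rate meeting the bound
    return x, len(x), 0.0                           # fall back to sending raw data

# Example: fit on 200 historical windows of 32 readings, then compress a new window.
rng = np.random.default_rng(0)
history = np.cumsum(rng.normal(size=(200, 32)), axis=1)   # smooth, correlated signals
comp = LinearCompressor(history)
new_window = np.cumsum(rng.normal(size=32))
code, k, err = compress_with_error_bound(comp, new_window, error_bound=0.5)
print(f"compressed 32 -> {k} values, max abs error {err:.3f}")
```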

    SimpleTrack: Adaptive Trajectory Compression with Deterministic Projection Matrix for Mobile Sensor Networks

    Some mobile sensor network applications require the sensor nodes to transfer their trajectories to a data sink. This paper proposes an adaptive (lossy) trajectory compression algorithm based on compressive sensing. The algorithm has two innovative elements. First, we propose a method to compute a deterministic projection matrix from a learnt dictionary. Second, we propose a method for the mobile nodes to adaptively predict the number of projections needed based on the speed of the mobile nodes. Extensive evaluation of the proposed algorithm using six datasets shows that our proposed algorithm can achieve sub-metre accuracy. In addition, our method of computing projection matrices outperforms two existing methods. Finally, comparison of our algorithm against a state-of-the-art trajectory compression algorithm shows that our algorithm can reduce the error by 10-60 cm for the same compression ratio.
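    The sketch below illustrates these two ingredients under simplifying assumptions: a deterministic projection matrix derived from a dictionary, and a speed-dependent choice of the number of projections. The construction in deterministic_projection, the adaptation rule in num_projections, and the least-squares reconstruction at the sink are plausible placeholders, not the paper's actual methods, and the dictionary here is randomly generated rather than learnt.

```python
import numpy as np

def num_projections(avg_speed, m_min=8, m_max=32, v_max=2.0):
    """Hypothetical speed-adaptive rule: faster movement -> less compressible
    trajectory -> more projections (the paper's exact rule is not shown here)."""
    frac = min(avg_speed / v_max, 1.0)
    return int(round(m_min + frac * (m_max - m_min)))

def deterministic_projection(dictionary, m):
    """One plausible deterministic construction: use the m leading left singular
    vectors of the learnt dictionary as projection rows (illustrative only)."""
    u, _, _ = np.linalg.svd(dictionary, full_matrices=False)
    return u[:, :m].T                                       # shape (m, n)

def reconstruct(y, phi, dictionary):
    """Sink side: least-squares recovery of the dictionary coefficients, standing
    in for a proper sparse solver such as OMP or basis pursuit."""
    a, *_ = np.linalg.lstsq(phi @ dictionary, y, rcond=None)
    return dictionary @ a

# Toy example: a 1-D trajectory segment of n samples and a random "learnt" dictionary.
rng = np.random.default_rng(1)
n = 64
dictionary = np.linalg.qr(rng.normal(size=(n, n)))[0]       # placeholder dictionary
x = np.sin(np.linspace(0, 3 * np.pi, n))                    # smooth trajectory coordinate
m = num_projections(avg_speed=1.2)
phi = deterministic_projection(dictionary, m)
y = phi @ x                                                 # node sends m values instead of n
x_hat = reconstruct(y, phi, dictionary)
print(f"{m} projections, RMSE {np.sqrt(np.mean((x - x_hat) ** 2)):.3f}")
```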

    Efficient Data Compression with Error Bound Guarantee in Wireless Sensor Networks

    We present a data compression and dimensionality reduction scheme for data fusion and aggregation applications to prevent data congestion and reduce energy consumption at network connecting points such as cluster heads and gateways. Our in-network approach can be easily tuned to analyze the temporal or spatial correlation of the data using an unsupervised neural network scheme, namely autoencoders. In particular, our algorithm extracts intrinsic data features from previously collected historical samples to transform the raw data into a low-dimensional representation. Moreover, the proposed framework provides an error bound guarantee mechanism. We evaluate the proposed solution using real-world data sets and compare it with traditional methods for temporal and spatial data compression. The experimental validation reveals that our approach outperforms several existing wireless sensor network data compression methods in terms of compression efficiency and signal reconstruction. Comment: ACM MSWiM 201
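    As a rough illustration of the in-network idea, the following Python sketch trains a tiny linear autoencoder on historical readings and lets a cluster head forward either the low-dimensional code or the raw window, depending on whether the reconstruction meets the error bound. The architecture, training loop, synthetic data, and threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Spatially correlated readings from 16 neighbouring sensors (synthetic data).
base = np.cumsum(rng.normal(size=(500, 1)), axis=0)
readings = base + 0.1 * rng.normal(size=(500, 16))          # (windows, sensors)

# Standardize, then train a tiny linear autoencoder by gradient descent on MSE.
mean, std = readings.mean(axis=0), readings.std(axis=0)
X = (readings - mean) / std
n_in, n_code, lr = 16, 3, 0.01
W_enc = 0.1 * rng.normal(size=(n_in, n_code))
W_dec = 0.1 * rng.normal(size=(n_code, n_in))
for _ in range(3000):
    code = X @ W_enc                        # encode: 16 readings -> 3 values
    X_hat = code @ W_dec                    # decode: 3 values -> 16 readings
    grad_out = 2 * (X_hat - X) / len(X)     # gradient of the MSE loss
    grad_dec = code.T @ grad_out
    grad_enc = X.T @ (grad_out @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

def aggregate(window, error_bound=0.5):
    """Error bound guarantee at the cluster head: forward the 3-value code only
    if its reconstruction stays within the bound, otherwise send raw data."""
    code = ((window - mean) / std) @ W_enc
    recon = (code @ W_dec) * std + mean
    if np.max(np.abs(recon - window)) <= error_bound:
        return "code", code
    return "raw", window

kind, payload = aggregate(readings[0])
print(kind, payload.shape)
```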

    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review, covering the period 2002-2013, of machine learning methods that were used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges. Comment: Accepted for publication in IEEE Communications Surveys and Tutorial

    A low-power opportunistic communication protocol for wearable applications

    © 2015 IEEE. Recent trends in wearable applications demand flexible architectures able to monitor people as they move in free-living environments. Current solutions use either store-download-offline processing or simple communication schemes with real-time streaming of sensor data. This limits the applicability of wearable applications to controlled environments (e.g., clinics, homes, or laboratories), because they need to maintain connectivity with the base station throughout the monitoring process. In this paper, we present the design and implementation of an opportunistic communication framework that simplifies the general use of wearable devices in free-living environments. It relies on a low-power data collection protocol that allows the end user to opportunistically, yet seamlessly, manage the transmission of sensor data. We validate the feasibility of the framework by demonstrating its use for swimming, where normal wireless communication is constantly interfered with by the environment.
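    A minimal sketch of the store-and-forward pattern that such an opportunistic protocol relies on is shown below; the radio object with its link_up() and send() methods is a hypothetical placeholder rather than the framework's API, and the buffering policy is an assumption made for illustration.

```python
import collections

class OpportunisticCollector:
    """Sketch of an opportunistic store-and-forward loop: buffer samples locally
    and flush them whenever the base station happens to be reachable."""

    def __init__(self, radio, max_buffer=10_000):
        self.radio = radio                                   # hypothetical radio interface
        self.buffer = collections.deque(maxlen=max_buffer)   # oldest samples dropped if full

    def on_sample(self, sample):
        self.buffer.append(sample)        # always log, regardless of connectivity
        self.try_flush()

    def try_flush(self, batch_size=32):
        # Transmit small batches only while the link is up, so the node never
        # blocks on the radio and keeps sampling through connectivity gaps.
        while self.buffer and self.radio.link_up():
            batch = [self.buffer.popleft()
                     for _ in range(min(batch_size, len(self.buffer)))]
            if not self.radio.send(batch):            # send failed: keep data for later
                self.buffer.extendleft(reversed(batch))
                break
```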

    Rate-Distortion Classification for Self-Tuning IoT Networks

    Many future wireless sensor networks and the Internet of Things are expected to follow a software-defined paradigm, where protocol parameters and behaviors will be dynamically tuned as a function of the signal statistics. New protocols will then be injected as software as certain events occur. For instance, new data compressors could be (re)programmed on the fly as the monitored signal type or its statistical properties change. We consider a lossy compression scenario, where the application tolerates some distortion of the gathered signal in return for improved energy efficiency. To reap the full benefits of this paradigm, we discuss an automatic sensor profiling approach where the signal class, and in particular the corresponding rate-distortion curve, is automatically assessed using machine learning tools (namely, support vector machines and neural networks). We show that this curve can be reliably estimated on the fly through the computation of a small number (from ten to twenty) of statistical features on time windows of a few hundred samples.
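    The profiling step can be pictured as a small feature-extraction-plus-classification pipeline, sketched below with scikit-learn as a stand-in. The particular features, window length, and toy signal classes are assumptions made for illustration, not the ones used in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(w):
    """A handful of cheap statistics per window (an illustrative feature set,
    not necessarily the ten-to-twenty features used in the paper)."""
    return [w.mean(), w.std(), np.diff(w).std(),
            np.corrcoef(w[:-1], w[1:])[0, 1],              # lag-1 autocorrelation
            np.abs(np.fft.rfft(w - w.mean()))[1:5].sum()]  # low-frequency energy

# Toy training set: two signal classes with different rate-distortion behaviour.
rng = np.random.default_rng(3)
def make_window(cls, n=300):
    t = np.arange(n)
    return (np.sin(0.05 * t) + 0.1 * rng.normal(size=n) if cls == 0
            else np.cumsum(rng.normal(size=n)))            # smooth vs. random-walk signal

labels = [0, 1] * 100
X = np.array([window_features(make_window(c)) for c in labels])
y = np.array(labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)

# On the node: classify an incoming window of samples, then look up the stored
# rate-distortion curve of that class to tune the compressor accordingly.
signal_class = clf.predict([window_features(make_window(1))])[0]
print("predicted signal class:", signal_class)
```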