26 research outputs found

    Constructive Interference in 802.15.4: A Tutorial

    Get PDF
    Constructive Interference (CI) can happen when multiple wireless devices send the same frame at the same time. If the time offset between the transmissions is less than 500 ns, a receiver will successfully decode the frame with high probability. CI can be useful for achieving low-latency communication or low-overhead flooding in a multi-hop low-power wireless network. The contribution of this article is threefold. First, we present the current state-of-the-art CI-based protocols. Second, we provide a detailed hands-on tutorial on how to implement CI-based protocols on TelosB motes, with well-documented open-source code. Third, we discuss the issues and challenges of CI-based protocols, and list open issues and research directions. This article is targeted at the level of practicing engineers and advanced researchers and can serve both as a primer on CI technology and a reference for its implementation.
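
    As an illustrative sketch of the timing condition above (not code from the tutorial; the timestamps and function name are hypothetical, only the 500 ns bound comes from the text):

        # Sketch of the CI timing condition: concurrent transmissions of the
        # same frame should start within 500 ns of each other.
        CI_OFFSET_BOUND_NS = 500

        def ci_likely(tx_start_times_ns):
            """Return True if all transmissions fall within the CI bound."""
            spread = max(tx_start_times_ns) - min(tx_start_times_ns)
            return spread < CI_OFFSET_BOUND_NS

        # Three relays rebroadcasting the same frame (hypothetical times):
        print(ci_likely([1_000_000, 1_000_120, 1_000_430]))  # True: 430 ns spread
        print(ci_likely([1_000_000, 1_000_700]))             # False: 700 ns spread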

    Reduced overhead routing in short-range low-power and lossy wireless networks

    Get PDF
    In this paper we present the enhanced routing protocol for low-power and lossy networks (ERPL), a reduced-overhead routing protocol for short-range low-power and lossy wireless networks based on RPL. ERPL enhances peer-to-peer (P2P) route construction and data packet forwarding in RPL's storing and non-storing modes of operation (MoPs). To minimize source routing overhead, it encodes routing paths in Bloom filters (BFs). The salient features of ERPL include: (i) optimized P2P routing and data forwarding; (ii) no additional control messages; and (iii) minimized source routing overhead. We extensively evaluated ERPL against RPL using emulation, simulation, and physical testbed-based experiments. Our results demonstrate that ERPL outperforms standard RPL in P2P communication, and that its optimized P2P route construction and data forwarding algorithms also positively impact the protocol's performance in multi-point-to-point (MP2P) and point-to-multi-point (P2MP) communications. The results show that the BF-based approach to compressed source routing information is feasible for the kinds of networks considered in this paper, yielding 65% lower source routing control overhead compared to RPL. They also provide new insights into the performance of MP2P, P2MP, and P2P communications relative to the depth of RPL's destination-oriented directed acyclic graph (DODAG): a deeper DODAG negatively impacts the performance of MP2P and P2MP communications but positively impacts P2P communication, while the reverse holds for a relatively shallow DODAG.
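
    To make the Bloom-filter encoding concrete, the sketch below shows the general idea under stated assumptions: a 64-bit filter, three SHA-256-derived hash functions, and IPv6-style addresses are illustrative choices, not ERPL's actual parameters.

        import hashlib

        class RouteBloomFilter:
            """Minimal Bloom filter over node addresses (illustrative, not
            ERPL's exact construction): m bits, k hashes from SHA-256."""
            def __init__(self, m=64, k=3):
                self.m, self.k, self.bits = m, k, 0

            def _positions(self, addr):
                for i in range(self.k):
                    h = hashlib.sha256(f"{i}:{addr}".encode()).digest()
                    yield int.from_bytes(h[:4], "big") % self.m

            def add(self, addr):
                for p in self._positions(addr):
                    self.bits |= 1 << p

            def contains(self, addr):
                # May yield false positives, never false negatives.
                return all(self.bits >> p & 1 for p in self._positions(addr))

        # Encode a P2P path; forwarders test neighbors against the filter.
        bf = RouteBloomFilter()
        for hop in ["fe80::1", "fe80::7", "fe80::c"]:
            bf.add(hop)
        print(bf.contains("fe80::7"))  # True: on the encoded path
        print(bf.contains("fe80::9"))  # likely False: not on the path

    Membership tests can return false positives, so a frame may occasionally be forwarded over an extra link; accepting this is the trade-off for compressing an explicit hop list into a fixed-size filter.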

    The Contiki-NG open source operating system for next generation IoT devices

    Get PDF
    Contiki-NG (Next Generation) is an open-source, cross-platform operating system for severely constrained wireless embedded devices. It focuses on dependable (reliable and secure) low-power communications and standardised protocols, such as 6LoWPAN, IPv6, 6TiSCH, RPL, and CoAP. Its primary aims are to (i) facilitate rapid prototyping and evaluation of Internet of Things research ideas, (ii) reduce time-to-market for Internet of Things applications, and (iii) provide an easy-to-use platform for teaching embedded systems-related courses in higher education. Contiki-NG started as a fork of the Contiki OS and retains many of its original features. In this paper, we discuss the motivation behind the creation of Contiki-NG, present the most recent version (v4.7), and highlight the impact of Contiki-NG through specific examples.

    Are Microcontrollers Ready for Deep Learning-Based Human Activity Recognition?

    Get PDF
    The last decade has seen exponential growth in the field of deep learning, with deep learning on microcontrollers emerging as a new frontier for this research area. This paper presents a case study about machine learning on microcontrollers, with a focus on human activity recognition using accelerometer data. We build machine learning classifiers suitable for execution on modern microcontrollers and evaluate their performance. Specifically, we compare Random Forests (RF), a classical machine learning technique, with Convolutional Neural Networks (CNN) in terms of classification accuracy and inference speed. The results show that RF classifiers achieve similar levels of classification accuracy while being several times faster than a small custom CNN model designed for the task. Both the RF and the custom CNN are several orders of magnitude faster than state-of-the-art deep learning models. On the one hand, these findings confirm the feasibility of using deep learning on modern microcontrollers. On the other hand, they cast doubt on whether deep learning is the best approach for this application, especially if high inference speed and, thus, low energy consumption are the key objectives.
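
    The sketch below illustrates the RF side of such a pipeline under assumptions: synthetic data, simple mean/std features, and scikit-learn on a PC stand in for the paper's dataset, features, and on-device implementation.

        import time
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)

        # Synthetic stand-in for windowed 3-axis accelerometer data:
        # 400 windows x 50 samples x 3 axes, 4 hypothetical activity classes.
        windows = rng.normal(size=(400, 50, 3))
        labels = rng.integers(0, 4, size=400)

        # Simple per-axis statistics as features (mean and std per axis).
        features = np.concatenate(
            [windows.mean(axis=1), windows.std(axis=1)], axis=1)  # (400, 6)

        clf = RandomForestClassifier(n_estimators=20, max_depth=8,
                                     random_state=0)
        clf.fit(features, labels)

        start = time.perf_counter()
        clf.predict(features[:1])  # single-window inference
        print(f"inference: {(time.perf_counter() - start) * 1e3:.2f} ms")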

    Crocs: Cross-Technology Clock Synchronization for WiFi and ZigBee

    Full text link
    Clock synchronization is a key function in embedded wireless systems and networks. This issue is equally important and more challenging in today's IoT systems, which often include heterogeneous wireless devices that follow different wireless standards. Conventional solutions employ gateway-based indirect synchronization, which suffers from low accuracy. This paper is the first to study the problem of cross-technology clock synchronization. Our proposal, Crocs, synchronizes WiFi and ZigBee devices by direct cross-technology communication. Crocs decouples the synchronization signal from the transmission of the timestamp. By combining a Barker-code-based beacon for time alignment with cross-technology transmission of timestamps, Crocs achieves robust and accurate synchronization among WiFi and ZigBee devices, with synchronization error below 1 millisecond. We further implement different cross-technology communication methods in Crocs and provide insights into the achievable accuracy and expected overhead.
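
    As a sketch of the Barker-code alignment step (the 11-chip Barker sequence and the synthetic signal below are common illustrative choices, not necessarily Crocs' exact beacon design):

        import numpy as np

        # 11-chip Barker code, a common choice for timing beacons.
        BARKER11 = np.array([1, 1, 1, -1, -1, -1, 1, -1, -1, 1, -1])

        rng = np.random.default_rng(1)

        # Hypothetical received samples: noise, with the beacon at index 40.
        samples = rng.normal(0, 0.4, size=100)
        samples[40:51] += BARKER11

        # Cross-correlate and take the peak as the time-alignment point.
        corr = np.correlate(samples, BARKER11, mode="valid")
        align_index = int(np.argmax(corr))
        print(align_index)  # expected: 40

    In this simplified view, both radios timestamp the correlation peak with their local clocks; because the sender's timestamp is transmitted separately, the receiver's clock offset is the difference between the two readings.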

    Side Channel Attacks on IoT Applications

    Get PDF

    Exploring Potential 6LoWPAN Traffic Side Channels.

    Get PDF
    The Internet of Things (IoT) has become a reality: small connected devices feature in everyday objects including children's toys, TVs, fridges, heating control units, etc. Supply chains feature sensors throughout, and significant investments go into researching next-generation healthcare, where sensors monitor wellbeing. A future in which sensors and other (small) devices interact to create sophisticated applications seems just around the corner. All of these applications have a fundamental need for security and privacy, and thus cryptography is deployed as part of an attempt to secure them. In this paper we explore a particular type of flaw, namely side channel information at the protocol level, which can exist despite the use of cryptography. Our research investigates the potential for utilising packet length and timing information (both are easily obtained) to extract interesting information from a system. We find that using these side channels we can distinguish between devices and between different programs running on the same device, including which sensor is accessed. We also find it is possible to distinguish between different types of ICMP messages despite the use of encryption. Based on our findings, we provide a set of recommendations to efficiently mitigate these side channels in the IoT context.
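
    A sketch of the kind of traffic analysis described above, on synthetic data (the two device profiles and the classifier below are hypothetical; only the use of packet length and timing as side-channel features comes from the paper):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(2)

        # Synthetic observations for two hypothetical devices: each sample is
        # (encrypted packet length in bytes, inter-arrival time in ms).
        dev_a = np.column_stack(
            [rng.normal(84, 3, 300), rng.normal(250, 20, 300)])
        dev_b = np.column_stack(
            [rng.normal(112, 3, 300), rng.normal(60, 10, 300)])
        X = np.vstack([dev_a, dev_b])
        y = np.array([0] * 300 + [1] * 300)

        # Even a shallow tree separates the devices from metadata alone.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
        print(f"device-identification accuracy: {clf.score(X_te, y_te):.2f}")

    Mitigations in this space typically work by removing exactly these features, e.g. padding packets to uniform lengths and randomizing transmission times.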

    A Survey on LoRaWAN Technology: Recent Trends, Opportunities, Simulation Tools and Future Directions

    Get PDF
    Low-power wide-area network (LPWAN) technologies play a pivotal role in IoT applications, owing to their capability to meet the key IoT requirements (e.g., long range, low cost, small data volumes, massive device numbers, and low energy consumption). Among the available LPWAN technologies, long-range wide-area network (LoRaWAN) technology has attracted much interest from both industry and academia due to its autonomous network architecture and open-standard specification. This paper presents a comparative review of five leading LPWAN technologies: NB-IoT, SigFox, Telensa, Ingenu (RPMA), and LoRa/LoRaWAN. The comparison shows that LoRa/LoRaWAN and SigFox surpass the other technologies in terms of device lifetime, network capacity, adaptive data rate, and cost, whereas NB-IoT technology excels in latency and quality of service. Furthermore, we present a technical overview of LoRa/LoRaWAN technology, considering its main features, opportunities, and open issues. We also compare the most important recently developed simulation tools for investigating and analyzing LoRa/LoRaWAN network performance, and introduce a comparative evaluation of LoRa simulators to highlight their features. In addition, we classify recent efforts to improve LoRa/LoRaWAN performance in terms of energy consumption, pure data extraction rate, network scalability, network coverage, quality of service, and security. Finally, although we focus mostly on LoRa/LoRaWAN issues and solutions, we provide guidance and directions for future research on LPWAN technologies.
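
    Because airtime drives energy consumption, duty-cycle limits, and scalability, the standard Semtech SX127x time-on-air formula is useful background here (general LoRa background, not a result of this survey; the parameter defaults below are illustrative):

        import math

        def lora_time_on_air_ms(payload_bytes, sf=7, bw_hz=125_000, cr=1,
                                preamble_len=8, explicit_header=True,
                                crc=True):
            """Standard Semtech SX127x time-on-air formula. cr=1 means
            coding rate 4/5; low-data-rate optimization (DE) is enabled for
            SF11/SF12 at 125 kHz, as the datasheet recommends."""
            t_sym = (2 ** sf) / bw_hz  # symbol duration, seconds
            de = 1 if (sf >= 11 and bw_hz == 125_000) else 0
            ih = 0 if explicit_header else 1
            payload_syms = 8 + max(
                math.ceil((8 * payload_bytes - 4 * sf + 28
                           + 16 * int(crc) - 20 * ih)
                          / (4 * (sf - 2 * de))) * (cr + 4), 0)
            t_preamble = (preamble_len + 4.25) * t_sym
            return (t_preamble + payload_syms * t_sym) * 1000  # ms

        print(f"{lora_time_on_air_ms(20, sf=7):.1f} ms")   # ~57 ms at SF7
        print(f"{lora_time_on_air_ms(20, sf=12):.0f} ms")  # ~1.3 s at SF12

    The roughly 20x spread between SF7 and SF12 for the same 20-byte payload is what makes the spreading factor central to the energy and scalability trade-offs the survey discusses.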

    Exploring Artificial Neural Networks Efficiency in Tiny Wearable Devices for Human Activity Recognition

    Get PDF
    The increasing diffusion of tiny wearable devices and, at the same time, the advent of machine learning techniques that can perform sophisticated inference represent a valuable opportunity for the development of pervasive computing applications. Moreover, pushing inference to edge devices can in principle improve application responsiveness, reduce energy consumption, and mitigate privacy and security issues. However, devices with small size, low power consumption, and a small form factor, like those dedicated to wearable platforms, impose strict computational, memory, and energy constraints that designers must address. The main purpose of this study is to empirically explore this trade-off through the characterization of the memory usage, energy consumption, and execution time needed by different types of neural networks (namely multilayer and convolutional neural networks) trained for human activity recognition on board a typical low-power wearable device. Through extensive experimental results, obtained on a public human activity recognition dataset, we derive Pareto curves that demonstrate the possibility of achieving a 4× reduction in memory usage and a 36× reduction in energy consumption, at fixed accuracy levels, for a multilayer perceptron network with respect to a more sophisticated convolutional network model.
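
    As a back-of-the-envelope companion to the memory comparison (the layer shapes below are hypothetical, not the paper's architectures), parameter counts alone already show why a small multilayer perceptron can undercut a convolutional model's memory footprint:

        def mlp_params(layer_sizes):
            """Weights + biases for a fully connected net, [in, h1, ..., out]."""
            return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

        def conv1d_params(in_ch, out_ch, kernel):
            """Weights + biases for one 1-D convolutional layer."""
            return in_ch * out_ch * kernel + out_ch

        # Hypothetical HAR models: 3-axis input, 50-sample windows, 4 classes.
        mlp = mlp_params([150, 32, 16, 4])

        cnn = (conv1d_params(3, 32, 5)        # conv layer 1
               + conv1d_params(32, 64, 5)     # conv layer 2
               + mlp_params([64 * 42, 4]))    # dense head; 50 -> 46 -> 42
                                              # samples with kernel 5, no pad

        for name, n in [("MLP", mlp), ("CNN", cnn)]:
            print(f"{name}: {n:5d} params = {n * 4 / 1024:.1f} KiB as float32")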