5 research outputs found

    Extending the LoRa modulation to add further parallel channels and improve the LoRaWAN network performance

    In this paper we present a new modulation, called DLoRa, similar in principle to the conventional LoRa modulation and compatible with it in terms of bandwidth and numerology. DLoRa departs from conventional LoRa in that its chirps use a decreasing instantaneous frequency instead of an increasing one. Furthermore, we describe a software environment to accurately evaluate the "isolation" of the different virtual channels created by both LoRa and DLoRa when using different Spreading Factors. Our results agree with those in the literature for the conventional LoRa modulation and show that it is possible to double the number of channels by using LoRa and DLoRa simultaneously. This higher (double) number of subchannels is the key to improving the network-level performance of LoRa-based networks.
    Comment: This work was submitted on Feb. 1, 2020 to the European Wireless 2020 conference for possible presentation and subsequent publication by the IEEE.
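The core difference between the two modulations can be sketched in a few lines: a DLoRa-style downchirp (decreasing instantaneous frequency) is the complex conjugate of the LoRa base upchirp, which makes the two quasi-orthogonal. The snippet below is a minimal illustration of that principle, not the authors' evaluation environment; the parameter choices are illustrative.

```python
import numpy as np

def base_chirp(sf: int, bw: float, down: bool = False) -> np.ndarray:
    """One base (unmodulated) chirp sampled at the chip rate fs = bw.

    Upchirp: instantaneous frequency sweeps -bw/2 -> +bw/2 over
    T = 2**sf / bw seconds. A DLoRa-style downchirp sweeps the other
    way, which makes it the complex conjugate of the upchirp.
    """
    n = 2 ** sf                       # samples per symbol at fs = bw
    t = np.arange(n) / bw             # time axis
    T = n / bw                        # symbol duration
    phase = 2 * np.pi * (-bw / 2 * t + bw / (2 * T) * t ** 2)
    up = np.exp(1j * phase)
    return np.conj(up) if down else up

sf, bw = 7, 125e3
up = base_chirp(sf, bw)
down = base_chirp(sf, bw, down=True)
n = len(up)

# Normalized correlations: a chirp with itself vs. with its conjugate.
self_corr = abs(np.vdot(up, up)) / n     # 1.0: perfectly matched
cross_corr = abs(np.vdot(up, down)) / n  # small: quasi-orthogonal channels
```

The residual cross-correlation is non-zero (it is a Gauss-sum term of magnitude roughly 1/sqrt(2**sf)), which is why the paper's channel-"isolation" measurements matter: the up- and down-chirp families are only quasi-orthogonal, not perfectly so.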

    Predicting LoRaWAN behavior. How machine learning can help

    Large-scale deployments of Internet of Things (IoT) networks are becoming a reality. From a technology perspective, a lot of information related to device parameters, channel states, and network and application data is stored in databases and can be used for extensive analysis to improve the functionality of IoT systems in terms of network performance and user services. LoRaWAN (Long Range Wide Area Network) is one of the emerging IoT technologies, with a simple protocol based on LoRa modulation. In this work, we discuss how machine learning approaches can be used to improve network performance (and whether and how they can help). To this aim, we describe a methodology to process LoRaWAN packets and apply a machine learning pipeline to: (i) perform device profiling, and (ii) predict the inter-arrival time of IoT packets. This latter analysis is closely related to channel and network usage and can be leveraged in the future for system performance enhancements. Our analysis mainly focuses on the use of k-means, Long Short-Term Memory neural networks, and decision trees. We test these approaches on a real large-scale LoRaWAN network where the overall captured traffic is stored in a proprietary database. Our study shows how profiling techniques enable a machine learning prediction algorithm even when training is not possible because of the high error rates perceived by some devices. In this challenging case, the prediction of the inter-arrival time of packets has an error of about 3.5% for 77% of real sequence cases.
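As a rough illustration of the profiling step, devices can be clustered by simple inter-arrival statistics (mean and standard deviation of packet gaps). The sketch below uses synthetic traffic and a minimal Lloyd's-iteration k-means in place of a library implementation; the two device classes and all parameters are invented for illustration and do not come from the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def interarrival_features(timestamps):
    """Per-device features: mean and std of packet inter-arrival times."""
    gaps = np.diff(np.sort(timestamps))
    return np.array([gaps.mean(), gaps.std()])

# Hypothetical traffic: 10 periodic sensors (~60 s period, low jitter)
# and 10 bursty devices (~600 s exponential gaps).
X = np.array(
    [interarrival_features(np.cumsum(rng.normal(60.0, 2.0, 200)))
     for _ in range(10)] +
    [interarrival_features(np.cumsum(rng.exponential(600.0, 200)))
     for _ in range(10)]
)

def kmeans(X, k=2, iters=20):
    # Minimal Lloyd's iteration; naive evenly-spaced init
    # (a real pipeline would use e.g. k-means++).
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(X)
```

Once devices are grouped into traffic profiles like this, a per-profile predictor (the paper uses LSTMs and decision trees) can be trained on the cluster's sequences rather than on a single noisy device.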

    The SF12 well in LoRaWAN: problem and end-device-based solutions

    © 2021 by the authors; licensee MDPI, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/).
    LoRaWAN has become a popular technology for Internet of Things (IoT) device connectivity. One of the expected properties of LoRaWAN is high network scalability. However, LoRaWAN network performance may be compromised when even a relatively small number of devices use link-layer reliability. After a failed frame delivery, such devices typically reduce their physical-layer bit rate by increasing their spreading factor (SF). This reaction increases channel utilization, which may further degrade network performance, even into congestion collapse. When this problem arises, all the devices performing reliable frame transmission end up using SF12 (i.e., the highest SF in LoRaWAN). In this paper, we identify and characterize the described network condition, which we call the SF12 Well, in a range of scenarios and by means of extensive simulations. The results show that by using alternative SF-management techniques it is possible to avoid the problem while achieving a packet delivery ratio increase of up to a factor of 4.7.
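The feedback loop behind the SF12 Well can be pictured with a toy model: LoRa symbol time doubles with each SF step, so a device that naively raises its SF after every failed confirmed frame quickly maximizes its own airtime, which in turn raises collision rates for everyone else. This is a deliberately simplified sketch; real LoRaWAN backoff rules and time-on-air formulas are more involved.

```python
def symbol_time(sf, bw=125e3):
    """LoRa symbol duration: 2**sf chips at chip rate bw (seconds)."""
    return 2 ** sf / bw

def sf_after_failures(start_sf, failures, max_sf=12):
    """Toy backoff: raise SF by one per failed delivery, capped at SF12.
    (Real LoRaWAN backoff steps less aggressively, but converges the
    same way under persistent losses.)"""
    return min(start_sf + failures, max_sf)

# A device starting at SF7 under persistent losses lands in the SF12
# well, where each symbol occupies 32x the airtime it had at SF7.
final_sf = sf_after_failures(7, failures=10)        # -> 12
airtime_ratio = symbol_time(12) / symbol_time(7)    # -> 32.0
```

The 32x per-symbol airtime inflation is the mechanism: every device pushed to SF12 consumes far more channel time, making failures (and therefore further SF increases by other devices) more likely.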

    Capture Aware Sequential Waterfilling for LoRaWAN Adaptive Data Rate

    LoRaWAN (Long Range Wide Area Network) is an attractive network infrastructure and protocol suite for ultra-low-power Internet of Things devices. Even though the technology itself is quite mature and well specified, the currently deployed wireless resource allocation strategies are still coarse and based on rough heuristics. This paper proposes an innovative “sequential waterfilling” strategy for assigning spreading factors to End Devices. Our design relies on three complementary approaches: (i) equalize the Time-on-Air of packets transmitted by the system’s End Devices in each spreading factor’s group; (ii) balance the spreading factors across multiple gateways; and (iii) take into account channel capture, which our experimental results show to be very substantial in LoRa. While retaining an extremely simple and scalable implementation, this strategy yields a significant improvement (up to 38%) in network capacity over the Adaptive Data Rate used by many network operators on the basis of the design suggested by Semtech, and appears to be extremely robust to different operating/load conditions and network topology configurations.
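A minimal way to picture the waterfilling idea is a greedy balancer that always places the next device in the SF group with the lowest aggregate airtime load. The sketch below ignores per-device link-budget constraints, multiple gateways, and capture, all of which the paper's actual scheme accounts for; the time-on-air model and traffic rates are illustrative only.

```python
import heapq

def toa(sf, bw=125e3, n_symbols=40):
    # Rough per-packet time-on-air: symbol duration 2**sf / bw times a
    # fixed symbol count (preamble/header/coding-rate details omitted).
    return n_symbols * 2 ** sf / bw

def waterfill(rates, sfs=range(7, 13)):
    """Greedy balancing sketch: assign each device (heaviest first) to
    the SF group whose aggregate airtime load is currently lowest."""
    heap = [(0.0, sf) for sf in sfs]       # (aggregate load, SF)
    heapq.heapify(heap)
    assignment = {}
    for dev, rate in sorted(enumerate(rates), key=lambda d: -d[1]):
        load, sf = heapq.heappop(heap)
        assignment[dev] = sf
        heapq.heappush(heap, (load + rate * toa(sf), sf))
    return assignment

# 12 devices with equal packet rates: low SFs absorb more devices
# because their per-packet airtime is smaller.
assignment = waterfill([1.0] * 12)
```

Because time-on-air grows exponentially with SF, a load-equalizing assignment naturally packs most devices into the low-SF groups, which is the intuition behind equalizing per-group Time-on-Air in the paper's design.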