
    Packet Size Optimization for Multiple Input Multiple Output Cognitive Radio Sensor Networks aided Internet of Things

    The determination of the Optimal Packet Size (OPS) for Cognitive Radio assisted Sensor Network (CRSN) architectures is non-trivial. The state of the art in this area describes various complex techniques to determine the OPS for CRSNs. However, it is observed that under high interference from surrounding users, a feasible optimal packet size for data transmission cannot be determined under the simple point-to-point CRSN network topology. This is primarily due to the peak transmit power constraint of the cognitive nodes. To address this specific challenge, this paper proposes a Multiple Input Multiple Output based Cognitive Radio Sensor Network (MIMO-CRSN) architecture for emerging technologies such as the Internet of Things (IoT) and machine-to-machine (M2M) communications. A joint optimization problem is formulated taking into account network constraints such as the overall end-to-end latency, the interference duration caused to non-cognitive users, the average BER and the transmit power. We propose Algorithm-1, based on a generic exhaustive search technique, to solve the optimization problem. Furthermore, a low-complexity suboptimal Algorithm-2, based on solving the classical Karush-Kuhn-Tucker (KKT) conditions, is proposed. These algorithms for MIMO-CRSNs are implemented in conjunction with two different channel access schemes: Time Slotted Distributed Cognitive Medium Access Control, denoted MIMO-DTS-CMAC, and CSMA/CA assisted Centralized Common Control Channel based Cognitive Medium Access Control, denoted MIMO-CC-CMAC. Simulations reveal that the proposed MIMO-based CRSN network outperforms the conventional point-to-point CRSN network in terms of overall energy consumption. Moreover, the results of Algorithm-1 and Algorithm-2 match perfectly, while the implementation complexity of Algorithm-2 is much lower than that of Algorithm-1. Algorithm-1 takes almost 680 ms to execute and provides the OPS value for a given number of users, while Algorithm-2 takes 4 to 5 ms on average to find the optimal packet size for the proposed MIMO-CRSN framework.
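The exhaustive-search idea behind Algorithm-1 can be illustrated with a minimal sketch. The energy model below (a fixed per-packet overhead plus a per-bit transmit cost, with i.i.d. bit errors) is an illustrative assumption, not the authors' MIMO-CRSN formulation, and all parameter values are hypothetical:

```python
# Hedged sketch of a generic exhaustive search for an optimal packet size,
# in the spirit of Algorithm-1. The energy model and constants below are
# assumptions for illustration, not the paper's actual formulation.

def energy_per_useful_bit(payload_bits, header_bits=48,
                          e_bit=1e-9, e_overhead=5e-8, ber=1e-4):
    """Expected transmit energy per successfully delivered payload bit."""
    total_bits = payload_bits + header_bits
    # Probability the whole packet arrives error-free (i.i.d. bit errors).
    p_success = (1.0 - ber) ** total_bits
    tx_energy = e_overhead + e_bit * total_bits
    return tx_energy / (payload_bits * p_success)

def optimal_packet_size(candidates=range(8, 2049, 8), **model):
    """Exhaustive search: return the payload size minimising energy per bit."""
    return min(candidates, key=lambda L: energy_per_useful_bit(L, **model))

ops = optimal_packet_size()
```

The search captures the usual trade-off: small payloads waste energy on overhead, while long payloads suffer a low packet-success probability, so the minimum lies in between.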

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential in terms of supporting a broad range of complex and compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML, elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.

    Internet of Things-aided Smart Grid: Technologies, Architectures, Applications, Prototypes, and Future Research Directions

    Traditional power grids are being transformed into Smart Grids (SGs) to address the issues of existing power systems, such as uni-directional information flow, energy wastage, growing energy demand, and reliability and security concerns. SGs offer bi-directional energy flow between service providers and consumers, involving power generation, transmission, distribution and utilization systems. SGs employ various devices for the monitoring, analysis and control of the grid, deployed in very large numbers at power plants, distribution centers and in consumers' premises. Hence, an SG requires connectivity, automation and the tracking of such devices. This is achieved with the help of the Internet of Things (IoT). IoT helps SG systems to support various network functions throughout the generation, transmission, distribution and consumption of energy by incorporating IoT devices (such as sensors, actuators and smart meters), as well as by providing the connectivity, automation and tracking for such devices. In this paper, we provide a comprehensive survey on IoT-aided SG systems, which includes the existing architectures, applications and prototypes of IoT-aided SG systems. This survey also highlights the open issues, challenges and future research directions for IoT-aided SG systems.

    Predicting lorawan behavior. How machine learning can help

    Large-scale deployments of Internet of Things (IoT) networks are becoming reality. From a technology perspective, a lot of information related to device parameters, channel states, and network and application data is stored in databases and can be used for extensive analysis to improve the functionality of IoT systems in terms of network performance and user services. LoRaWAN (Long Range Wide Area Network) is one of the emerging IoT technologies, with a simple protocol based on LoRa modulation. In this work, we discuss how machine learning approaches can be used to improve network performance (and if and how they can help). To this aim, we describe a methodology to process LoRaWAN packets and apply a machine learning pipeline to: (i) perform device profiling, and (ii) predict the inter-arrival times of IoT packets. This latter analysis is closely related to channel and network usage and can be leveraged in the future for system performance enhancements. Our analysis mainly focuses on the use of k-means, Long Short-Term Memory (LSTM) neural networks and decision trees. We test these approaches on a real large-scale LoRaWAN network where the overall captured traffic is stored in a proprietary database. Our study shows how profiling techniques enable a machine learning prediction algorithm even when training is not possible because of high error rates perceived by some devices. In this challenging case, the prediction of the inter-arrival time of packets has an error of about 3.5% for 77% of real sequence cases.
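The profile-then-predict idea can be sketched as follows. This toy example clusters devices by a simple traffic feature (mean inter-arrival time) with a minimal 1-D k-means, then predicts each device's next inter-arrival from its cluster profile. The synthetic traces, the feature choice and the cluster-mean predictor are all assumptions for illustration, standing in for the paper's full k-means / LSTM / decision-tree pipeline:

```python
# Hedged sketch of device profiling plus inter-arrival prediction.
# Everything below (device names, trace statistics, predictor) is a
# simplified assumption, not the authors' actual pipeline or data.
import random

random.seed(7)

# Synthetic inter-arrival traces (seconds) for two hypothetical device
# classes: metering-style devices (~60 s) and alarm-style devices (~10 s).
traces = {f"meter{i}": [random.gauss(60, 2) for _ in range(50)] for i in range(5)}
traces.update({f"alarm{i}": [random.gauss(10, 1) for _ in range(50)] for i in range(5)})

def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means returning (centroids, labels)."""
    centroids = random.sample(values, k)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(v - centroids[j]))
                  for v in values]
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

devices = list(traces)
features = [sum(t) / len(t) for t in traces.values()]  # mean inter-arrival
centroids, labels = kmeans_1d(features, k=2)

# Naive predictor: each device's next inter-arrival is its cluster
# centroid; measure the mean relative error on the last sample.
errors = []
for dev, lab in zip(devices, labels):
    actual = traces[dev][-1]
    predicted = centroids[lab]
    errors.append(abs(predicted - actual) / actual)
mean_rel_error = sum(errors) / len(errors)
```

In this simplified setting the cluster centroid already gives a usable per-device baseline; the paper's LSTM and decision-tree models refine such predictions using the actual packet sequences.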
