
    Packet Size Optimization for Cognitive Radio Sensor Networks Aided Internet of Things

    Cognitive Radio Sensor Networks (CRSNs) are a state-of-the-art communication paradigm for power-constrained, short-range data communication. They are among the potential technologies adopted for the Internet of Things (IoT) and other futuristic Machine-to-Machine (M2M) applications. Many of these applications are power constrained and delay sensitive; the CRSN architecture must therefore be coupled with adaptive and robust communication schemes that address delay and energy efficiency at the same time. Considering the tradeoff between energy efficiency and overhead delay for a given data packet length, it is proposed to transmit the physical-layer payload with an optimal packet size (OPS) depending on the network condition. Furthermore, due to the cognitive features of the CRSN architecture, the overhead energy consumed by channel sensing and channel handoff plays a critical role. Based on these premises, this paper proposes a heuristic exhaustive-search-based Algorithm-1 and a computationally efficient, low-complexity suboptimal Algorithm-2 based on the Karush-Kuhn-Tucker (KKT) conditions to determine the optimal packet size in a CRSN architecture using variable-rate M-QAM modulation. The proposed algorithms are implemented along with two main cognitive-radio-assisted channel access strategies, Distributed Time Slotted Cognitive Medium Access Control (DTS-CMAC) and Centralized Common Control Channel based Cognitive Medium Access Control (CC-CMAC), and their performances are compared. The simulation results reveal that Algorithm-2 outperforms Algorithm-1 by a significant margin in implementation time: the exhaustive-search-based Algorithm-1 takes on average 1.2 seconds to determine the OPS for a given number of cognitive users, while the KKT-based Algorithm-2 takes on the order of 5 to 10 ms. CC-CMAC with OPS is the most efficient in terms of overall energy consumption but incurs more delay than DTS-CMAC with OPS.
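The exhaustive-search idea behind an Algorithm-1-style OPS computation can be illustrated with a toy energy model. The function names, the header/overhead/BER values and the candidate range below are assumptions for this sketch, not the formulation used in the paper:

```python
# Illustrative exhaustive search for an optimal packet size (OPS).
# Model and all parameter values are assumptions for this sketch.

def energy_per_useful_bit(payload_bits, header_bits=48,
                          e_bit=1e-9, e_overhead=2e-6, ber=1e-3):
    """Energy spent per successfully delivered payload bit.

    e_overhead lumps the per-packet costs (e.g. channel sensing and
    handoff) that push the optimum toward larger packets, while the
    bit error rate pushes it back toward smaller ones.
    """
    frame = payload_bits + header_bits
    p_success = (1.0 - ber) ** frame           # i.i.d. bit errors, no FEC
    energy = e_bit * frame + e_overhead        # per-frame energy cost
    return energy / (payload_bits * p_success)

def exhaustive_ops(candidates=range(8, 2048, 8)):
    """Algorithm-1 style search: score every candidate payload size."""
    return min(candidates, key=energy_per_useful_bit)
```

With these assumed numbers the cost curve is U-shaped, so the search returns an interior optimum rather than the smallest or largest candidate; the point of the sketch is that every candidate must be evaluated, which is why the exhaustive approach scales poorly.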

    Packet Size Optimization for Multiple Input Multiple Output Cognitive Radio Sensor Networks aided Internet of Things

    The determination of the Optimal Packet Size (OPS) for Cognitive Radio assisted Sensor Network (CRSN) architectures is non-trivial. The state of the art in this area describes various complex techniques to determine the OPS for CRSNs. However, under high interference from surrounding users, it is not possible to determine a feasible optimal packet size for data transmission under the simple point-to-point CRSN network topology. This is primarily due to the peak transmit power constraint of the cognitive nodes. To address this specific challenge, this paper proposes a Multiple Input Multiple Output based Cognitive Radio Sensor Network (MIMO-CRSN) architecture for futuristic technologies such as the Internet of Things (IoT) and machine-to-machine (M2M) communications. A joint optimization problem is formulated taking into account network constraints such as the overall end-to-end latency, the interference duration caused to non-cognitive users, the average BER and the transmit power. We propose Algorithm-1, based on a generic exhaustive search technique, to solve the optimization problem. Furthermore, a low-complexity suboptimal Algorithm-2 based on solving the classical Karush-Kuhn-Tucker (KKT) conditions is proposed. These algorithms for MIMO-CRSNs are implemented in conjunction with two different channel access schemes: Time Slotted Distributed Cognitive Medium Access Control, denoted MIMO-DTS-CMAC, and CSMA/CA assisted Centralized Common Control Channel based Cognitive Medium Access Control, denoted MIMO-CC-CMAC. Simulations reveal that the proposed MIMO-based CRSN network outperforms the conventional point-to-point CRSN network in terms of overall energy consumption. Moreover, the results of Algorithm-1 and Algorithm-2 match perfectly, and the implementation complexity of Algorithm-2 is much lower than that of Algorithm-1: Algorithm-1 takes almost 680 ms to determine the OPS for a given number of users, while Algorithm-2 takes 4 to 5 ms on average for the proposed MIMO-CRSN framework.
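The speed gap between the two algorithm styles comes from replacing the full sweep with the first-order optimality (KKT stationarity) condition, which reduces to a single scalar equation in the payload length. The sketch below uses a generic toy energy model and illustrative parameter values, not the paper's actual MIMO formulation:

```python
import math

# Illustrative "Algorithm-2 style" alternative to exhaustive search: under
# an assumed energy-per-useful-bit model, setting the derivative to zero
# gives a monotone scalar condition in L, solvable by bisection in
# microseconds instead of sweeping every candidate packet size.
# All parameter values are assumptions for this sketch.

E_BIT, E_OH, HEADER, BER = 1e-9, 2e-6, 48, 1e-3
B = -math.log(1.0 - BER)  # per-bit error exponent

def stationarity(L):
    """Derivative of the log of energy-per-useful-bit; zero at the optimum."""
    return E_BIT / (E_BIT * (L + HEADER) + E_OH) + B - 1.0 / L

def kkt_ops(lo=1.0, hi=1e5, iters=60):
    """Bisection on the monotone stationarity condition
    (continuous relaxation of the integer packet size)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if stationarity(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return round(0.5 * (lo + hi))
```

Because bisection needs only a few dozen scalar evaluations regardless of the candidate range, this kind of condition-solving approach plausibly explains the 680 ms versus 4-5 ms contrast reported above.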

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential in terms of supporting a broad range of complex, compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.

    Internet of Things-aided Smart Grid: Technologies, Architectures, Applications, Prototypes, and Future Research Directions

    Traditional power grids are being transformed into Smart Grids (SGs) to address the issues in existing power systems due to uni-directional information flow, energy wastage, growing energy demand, reliability and security. SGs offer bi-directional energy flow between service providers and consumers, involving power generation, transmission, distribution and utilization systems. SGs employ various devices for the monitoring, analysis and control of the grid, deployed at power plants, distribution centers and in consumers' premises in very large numbers. Hence, an SG requires connectivity, automation and the tracking of such devices. This is achieved with the help of the Internet of Things (IoT). IoT helps SG systems support various network functions throughout the generation, transmission, distribution and consumption of energy by incorporating IoT devices (such as sensors, actuators and smart meters), as well as by providing the connectivity, automation and tracking for such devices. In this paper, we provide a comprehensive survey of IoT-aided SG systems, covering the existing architectures, applications and prototypes. This survey also highlights the open issues, challenges and future research directions for IoT-aided SG systems.

    A critical analysis of research potential, challenges and future directives in industrial wireless sensor networks

    In recent years, Industrial Wireless Sensor Networks (IWSNs) have emerged as an important research theme, with applications spanning a wide range of industries including automation, monitoring, process control, feedback systems and automotive. The wide scope of IWSN applications, ranging from small production units and large oil and gas industries to nuclear fission control, enables fast-paced research in this field. Although IWSNs offer the advantages of low cost, flexibility, scalability, self-healing, easy deployment and reformation, they pose certain limitations on available potential and introduce challenges on multiple fronts due to their susceptibility to highly complex and uncertain industrial environments. In this paper, a detailed discussion of design objectives, challenges and solutions for IWSNs is presented. A careful evaluation of industrial systems, deadlines and possible hazards in the industrial atmosphere is provided. The paper also presents a thorough review of the existing standards and industrial protocols, critically evaluates their potential, and discusses in detail the available hardware platforms, industry-specific energy harvesting techniques and their capabilities. The paper lists the main providers of IWSN solutions and gives insight into future trends and research gaps in the field of IWSNs.

    Deep Reinforcement Learning for Joint Cruise Control and Intelligent Data Acquisition in UAVs-Assisted Sensor Networks

    Unmanned aerial vehicle (UAV)-assisted sensor networks (UASNets), which play a crucial role in creating new opportunities, are experiencing significant growth in civil applications worldwide. UASNets improve disaster management through timely surveillance and advance precision agriculture with detailed crop monitoring, thereby significantly transforming the commercial economy through greater efficiency, safety and cost-effectiveness. A fundamental aspect of these new capabilities is the collection of data from rugged and remote areas: thanks to their excellent mobility and maneuverability, UAVs are employed to collect data from ground sensors in harsh environments, such as natural disaster monitoring, border surveillance and emergency response. One major challenge in these scenarios is that the movement of UAVs affects channel conditions. Fast movement leads to poor channel conditions and rapid signal degradation, resulting in packet loss; slow movement, on the other hand, can cause buffer overflows at the ground sensors, as newly arrived data is not promptly collected. Our proposal to address this challenge is to minimize packet loss by jointly optimizing the velocity controls and data collection schedules of multiple UAVs. Furthermore, in UASNets, swift movement of UAVs results in poor channel conditions and fast signal attenuation, leading to an extended age of information (AoI), whereas slow movement prolongs flight time, thereby also extending the AoI of the ground sensors. To address this challenge, we propose a new mean-field flight resource allocation optimization to minimize the AoI of the sensory data.

    Relaying in the Internet of Things (IoT): A Survey

    The deployment of relays between Internet of Things (IoT) end devices and gateways can improve link quality. In cellular-based IoT, relays have the potential to reduce base station overload. The energy expended in single-hop long-range communication can be reduced if relays listen to transmissions of end devices and forward these observations to gateways. However, incorporating relays into IoT networks faces some challenges. IoT end devices are designed primarily for uplink communication of small-sized observations toward the network; hence, opportunistically using end devices as relays requires a redesign of the medium access control (MAC) layer protocol of such end devices and possibly the addition of new communication interfaces. Additionally, the wake-up time of IoT end devices needs to be synchronized with that of the relays. For cellular-based IoT, the possibility of using infrastructure relays exists, and non-cellular IoT networks can leverage the presence of mobile devices for relaying, for example, in remote healthcare. However, the latter presents problems of incentivizing relay participation and managing the mobility of relays. Furthermore, although relays can increase the lifetime of IoT networks, deploying relays implies the need for additional batteries to power them. This can erode the energy efficiency gain that relays offer. Therefore, designing relay-assisted IoT networks that provide acceptable trade-offs is key, and this goes beyond adding an extra transmit RF chain to a relay-enabled IoT end device. There has been increasing research interest in IoT relaying, as demonstrated in the available literature. Works that consider these issues are surveyed in this paper to provide insight into the state of the art, offer design guidance for network designers and motivate future research directions.