Goodbye, ALOHA!
©2016 IEEE. The vision of the Internet of Things (IoT) to interconnect and Internet-connect everyday people, objects, and machines poses new challenges in the design of wireless communication networks. The design of medium access control (MAC) protocols has traditionally been an intense area of research due to their high impact on the overall performance of wireless communications. The majority of research activities in this field deal with variations of protocols based on ALOHA, either with or without listen-before-talk, i.e., carrier sense multiple access. These protocols operate well under low traffic loads and a low number of simultaneous devices, but they suffer from congestion as the traffic load and the number of devices increase. For this reason, unless revisited, the MAC layer can become a bottleneck for the success of the IoT. In this paper, we provide an overview of the existing MAC solutions for the IoT, describing current limitations and envisioned challenges for the near future. Motivated by those, we identify a family of simple algorithms based on distributed queueing (DQ), which can operate for an infinite number of devices generating any traffic load and pattern. A description of the DQ mechanism is provided, and the most relevant existing studies of DQ applied in different scenarios are described. In addition, we provide a novel performance evaluation of DQ when applied to the IoT. Finally, a description of the very first demo of DQ for its use in the IoT is also included.
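The DQ mechanism summarized above resolves contention with a tree-splitting discipline and two logical queues: collided groups of devices are appended to a collision resolution queue (CRQ) and retry in order, while successful devices join a data transmission queue (DTQ). The following Python sketch is a minimal illustration of those rules only; the number of minislots (m = 3), the burst size, and the frame loop are assumptions chosen for the example, not parameters taken from the paper.

```python
import random

def dq_frame(contending, crq, dtq, m=3):
    """One DQ frame: each contending device picks one of m minislots.

    Collided minislots form new groups appended to the collision
    resolution queue (CRQ); devices alone in a minislot succeed and
    join the data transmission queue (DTQ).
    """
    slots = {}
    for dev in contending:
        slots.setdefault(random.randrange(m), []).append(dev)
    for i in range(m):
        group = slots.get(i, [])
        if len(group) == 1:
            dtq.append(group[0])   # success: waits for a collision-free data slot
        elif len(group) > 1:
            crq.append(group)      # collision: group splits again in a later frame
    return crq, dtq

# Resolve an initial burst of 20 devices (fixed seed for reproducibility).
random.seed(0)
crq, dtq = [list(range(20))], []
frames = 0
while crq:
    group = crq.pop(0)
    crq, dtq = dq_frame(group, crq, dtq)
    frames += 1
print(f"all {len(dtq)} devices scheduled after {frames} contention frames")
```

Because every collided group eventually splits into singletons, the burst is resolved in a bounded expected number of frames regardless of the initial load, which is the property the abstract highlights.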
A NOVEL DUAL MODE GATEWAY FOR WIRELESS SENSOR NETWORK AND LTE-A NETWORK CONVERGENCE
In recent years, the number of machine-to-machine (M2M) networks, which do not require direct human intervention, has been increasing at a rapid pace. Meanwhile, the need for a wireless platform to control and monitor these M2M networks, one with both a vast coverage area and a low network deployment cost, continues to be unmet. Mobile cellular networks (MCNs) and wireless sensor networks (WSNs) are emerging as two heterogeneous networks that can meet the challenges of M2M communication through network convergence. In this paper, a model for network convergence between a Long Term Evolution-Advanced (LTE-A) cellular network and a WSN is proposed. Quality-of-Service (QoS) issues are assessed by a comparative study of the network delay in tightly coupled and loosely coupled LTE-A configurations. Simulation results indicate that the network delay in this proposed converged network is acceptable for various M2M applications. Additionally, it is demonstrated through simulation that the energy consumed by the implementation of the proposed protocol is suitable for resource-constrained devices.
Ultra-Reliable Low Latency Communication (URLLC) using Interface Diversity
An important ingredient of the future 5G systems will be Ultra-Reliable
Low-Latency Communication (URLLC). A way to offer URLLC without intervention in
the baseband/PHY layer design is to use interface diversity and integrate
multiple communication interfaces, each interface based on a different
technology. In this work, we propose to use coding to seamlessly distribute
coded payload and redundancy data across multiple available communication
interfaces. We formulate an optimization problem to find the payload allocation
weights that maximize the reliability at specific target latency values. In
order to estimate the performance in terms of latency and reliability of such
an integrated communication system, we propose an analysis framework that
combines traditional reliability models with technology-specific latency
probability distributions. Our model can account for failure
correlation among interfaces/technologies. By considering different scenarios,
we find that optimized strategies can in some cases significantly outperform
strategies based on k-out-of-n erasure codes, where the latter do not
account for the characteristics of the different interfaces. The model has been
validated through simulation and is supported by experimental results. (Accepted for publication in IEEE Transactions on Communications.)
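As a toy illustration of why interface diversity improves reliability at a target latency, the sketch below assumes exponentially distributed per-interface latencies, full payload replication on every interface, and independent failures. These are simplifying assumptions for the example: the paper's framework uses technology-specific latency distributions, coded payload splits with optimized allocation weights, and can model failure correlation, none of which are captured here.

```python
import math

# Toy latency models: P(latency <= t) for each interface.
# The mean latencies (50 ms for LTE, 20 ms for Wi-Fi) are illustrative
# assumptions, not measurements from the paper.
def exp_cdf(rate):
    return lambda t: 1.0 - math.exp(-rate * t)

interfaces = {
    "LTE":  exp_cdf(rate=1 / 0.050),   # mean 50 ms
    "WiFi": exp_cdf(rate=1 / 0.020),   # mean 20 ms
}

def replication_reliability(cdfs, deadline):
    """Full payload duplicated on every interface: the packet is on time
    if at least one copy arrives before the deadline (assuming
    independent interfaces)."""
    p_all_late = 1.0
    for cdf in cdfs:
        p_all_late *= 1.0 - cdf(deadline)
    return 1.0 - p_all_late

r = replication_reliability(interfaces.values(), deadline=0.100)
print(f"reliability at a 100 ms deadline: {r:.5f}")
```

Even in this crude model, combining a slow and a fast interface pushes the deadline-hit probability well beyond what either interface achieves alone, which is the intuition behind distributing coded payload across interfaces.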
Performance Analysis and Optimal Access Class Barring Parameter Configuration in LTE-A Networks With Massive M2M Traffic
Over the coming years, it is expected that the number of machine-to-machine (M2M) devices that communicate through long term evolution advanced (LTE-A) networks will rise significantly, providing ubiquitous information and services. However, LTE-A was devised to handle human-to-human traffic, and its current design is not capable of handling massive M2M communications. Access class barring (ACB) is a congestion control scheme included in the LTE-A standard that aims to spread the accesses of user equipments (UEs) over time so that the signaling capabilities of the evolved Node B are not exceeded. Notwithstanding its relevance, the potential benefits of implementing ACB are rarely analyzed accurately. In this paper, we conduct a thorough performance analysis of the LTE-A random access channel and ACB as defined in the 3GPP specifications. Specifically, we seek to enhance the performance of LTE-A in massive M2M scenarios by modifying certain configuration parameters and by implementing ACB. We observed that ACB is appropriate for handling sporadic periods of congestion. Concretely, our results reflect that the access success probability of M2M UEs in the most extreme test scenario suggested by the 3GPP improves from approximately 30%, without any congestion control scheme, to 100% by implementing ACB and setting its configuration parameters properly.

This work was supported in part by the Ministry of Economy and Competitiveness of Spain under Grants TIN2013-47272-C2-1-R and TEC2015-71932-REDT. The work of L. Tello-Oquendo was supported in part by Programa de Ayudas de Investigación y Desarrollo (PAID), Universitat Politècnica de València. The work of I. Leyva-Mayorga was supported in part by Grant 383936 CONACYT-Gobierno del Estado de México 2014.

Tello-Oquendo, L.; Leyva-Mayorga, I.; Pla, V.; Martínez Bauset, J.; Vidal Catalá, J.R.; Casares-Giner, V.; Guijarro, L. (2018). Performance Analysis and Optimal Access Class Barring Parameter Configuration in LTE-A Networks With Massive M2M Traffic. IEEE Transactions on Vehicular Technology, 67(4):3505-3520. https://doi.org/10.1109/TVT.2017.2776868
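The ACB check itself is simple to state: before attempting random access, the UE draws a uniform random number and compares it to the broadcast barring factor; if the draw fails, the UE is barred for (0.7 + 0.6 · rand) · ac-BarringTime seconds, as specified in 3GPP TS 36.331. The Python sketch below illustrates only this per-UE check; the configuration values are illustrative, not the optimal parameters derived in the paper.

```python
import random

def acb_check(barring_factor, barring_time):
    """One access class barring check (3GPP TS 36.331).

    Returns (passed, barring_duration): if the UE's uniform draw is
    below the barring factor it may proceed to random access;
    otherwise it is barred for (0.7 + 0.6 * rand) * barring_time
    seconds before re-checking.
    """
    if random.random() < barring_factor:
        return True, 0.0
    backoff = (0.7 + 0.6 * random.random()) * barring_time
    return False, backoff

# Illustrative configuration: barring factor 0.5, barring time 4 s.
random.seed(1)
passed = sum(acb_check(0.5, 4.0)[0] for _ in range(10_000))
print(f"{passed / 10_000:.2%} of checks passed")  # close to the barring factor
```

In the long run the fraction of checks that pass matches the barring factor, which is how ACB thins a burst of simultaneous access attempts over time; the paper's contribution is choosing the factor and barring time so that this thinning keeps the random access channel below its signaling capacity.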
Internet of Things-aided Smart Grid: Technologies, Architectures, Applications, Prototypes, and Future Research Directions
Traditional power grids are being transformed into Smart Grids (SGs) to
address the issues in existing power system due to uni-directional information
flow, energy wastage, growing energy demand, reliability and security. SGs
offer bi-directional energy flow between service providers and consumers,
involving power generation, transmission, distribution and utilization systems.
SGs employ various devices for the monitoring, analysis and control of the
grid, deployed at power plants, distribution centers and in consumers' premises
in a very large number. Hence, an SG requires connectivity, automation and the
tracking of such devices. This is achieved with the help of Internet of Things
(IoT). IoT helps SG systems to support various network functions throughout the
generation, transmission, distribution and consumption of energy by
incorporating IoT devices (such as sensors, actuators and smart meters), as
well as by providing the connectivity, automation and tracking for such
devices. In this paper, we provide a comprehensive survey on IoT-aided SG
systems, which includes the existing architectures, applications and prototypes
of IoT-aided SG systems. This survey also highlights the open issues,
challenges, and future research directions for IoT-aided SG systems.
PERFORMANCE STUDY FOR CAPILLARY MACHINE-TO-MACHINE NETWORKS
Wireless machine-to-machine (M2M) communication is becoming widely and rapidly pervasive, enabling data transfer among devices without human intervention. Capillary M2M networks are a candidate for providing reliable M2M connectivity. In this thesis, we propose a wireless network architecture that aims at supporting a wide range of M2M applications (either real-time or non-real-time) with an acceptable quality-of-service (QoS) level. The architecture uses capillary gateways to reduce the number of devices communicating directly with a cellular network such as LTE. Moreover, the proposed architecture reduces the traffic load on the cellular network by providing capillary gateways with dual wireless interfaces. One interface is connected to the cellular network, whereas the other communicates with the intended destination via a Wi-Fi-based mesh backbone for cost-effectiveness. We study the performance of the proposed architecture with the aid of the ns-2 simulator. An M2M capillary network is simulated in different scenarios by varying multiple factors that affect system performance. The simulation results measure average packet delay and packet loss to evaluate the QoS of the proposed architecture. Our results reveal that the proposed architecture can satisfy the required level of QoS with a low traffic load on the cellular network. It also outperforms both a cellular-based and a Wi-Fi-based capillary M2M network. This implies a low cost of operation for the service provider while meeting a high-bandwidth service level agreement. In addition, we investigate how the proposed architecture behaves with different factors such as the number of capillary gateways, application traffic rates, the number of backbone routers with different routing protocols, the number of destination servers, and the data rates provided by the LTE and Wi-Fi technologies.
Furthermore, the simulation results show that the proposed architecture remains reliable in terms of packet delay and packet loss even under a large number of nodes and high application traffic rates.