57 research outputs found

    Statistical priority-based uplink scheduling for M2M communications

    Currently, the worldwide network is witnessing major efforts to transform it from being the Internet of humans only to becoming the Internet of Things (IoT). It is expected that Machine Type Communication Devices (MTCDs) will overwhelm cellular networks with the huge traffic of data that they collect from their environments and send to other remote MTCDs for processing, thus forming what is known as Machine-to-Machine (M2M) communications. Long Term Evolution (LTE) and LTE-Advanced (LTE-A) appear to be the best technologies to support M2M communications due to their native IP support. LTE can provide high capacity, flexible radio resource allocation and scalability, which are the required pillars for supporting the expected large numbers of deployed MTCDs. Supporting M2M communications over LTE faces many challenges, including medium access control and the allocation of radio resources among MTCDs. The problem of radio resource allocation, or scheduling, originates from the nature of M2M traffic: a large number of small data packets, with specific deadlines, generated by a potentially massive number of MTCDs. M2M traffic is therefore mostly in the uplink direction, i.e. from the MTCDs to the base station (known as the eNB in LTE terminology). These characteristics impose design requirements on M2M scheduling techniques, such as the need to carry a huge amount of traffic within certain deadlines over insufficient radio resources. This is the main motivation behind this thesis. In this thesis, we introduce a novel M2M scheduling scheme that utilizes what we term "statistical priority" in determining the importance of the information carried by data packets. Statistical priority is calculated from the statistical features of the data, such as value similarity, trend similarity and auto-correlation. These calculations are made by the MTCDs and reported to the serving eNBs along with other reports such as channel state. Statistical priority is then used to assign priorities to data packets so that the scarce radio resources are allocated to the MTCDs that are sending statistically important information. This helps avoid spending limited radio resources on redundant or repetitive data, a common situation in M2M communications. To validate our technique, we perform a simulation-based comparison between the main scheduling techniques and our proposed statistical priority-based scheduling technique. The comparison was conducted in a network that includes different types of MTCDs, such as environmental monitoring sensors, surveillance cameras and alarms. The results show that our proposed statistical priority-based scheduler outperforms the other schedulers in terms of having the lowest losses of alarm data packets and the highest rate of sending critical data packets that carry non-redundant information, for both environmental monitoring and video traffic. This indicates that the proposed technique makes the most efficient use of the limited radio resources compared to the other techniques.
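
    The abstract does not give the exact formulas behind these statistical features; the sketch below is a minimal Python illustration of how an MTCD might combine value similarity, trend similarity and lag-1 auto-correlation into a single score. The feature definitions and equal weights are assumptions for illustration, not the thesis's actual method.

        import numpy as np

        def statistical_priority(window, prev_window):
            """Toy statistical-priority score for an MTCD data window.

            High score -> data looks novel and is worth scheduling first;
            low score  -> data is redundant w.r.t. the previous window.
            Features and equal weights are illustrative assumptions.
            """
            window = np.asarray(window, dtype=float)
            prev_window = np.asarray(prev_window, dtype=float)

            def slope(w):
                # Least-squares slope of the window, as a simple trend estimate.
                return np.polyfit(np.arange(len(w)), w, 1)[0]

            # Value similarity: closeness of the two windows' mean values.
            value_sim = 1.0 / (1.0 + abs(window.mean() - prev_window.mean()))
            # Trend similarity: closeness of the two windows' slopes.
            trend_sim = 1.0 / (1.0 + abs(slope(window) - slope(prev_window)))
            # Lag-1 auto-correlation: highly self-correlated data is predictable.
            w = window - window.mean()
            denom = float((w * w).sum())
            autocorr = float((w[:-1] * w[1:]).sum()) / denom if denom > 0 else 0.0

            # Similar/predictable data gets a low priority, novel data a high one.
            return 1.0 - (value_sim + trend_sim + abs(autocorr)) / 3.0

        # A repetitive stream scores near 0; a sharply changing one scores higher.
        print(statistical_priority([20.1, 20.0, 20.1, 20.0], [20.0, 20.1, 20.0, 20.1]))
        print(statistical_priority([20.0, 24.0, 29.0, 35.0], [20.0, 20.1, 20.0, 20.1]))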

    A comprehensive simulation analysis of LTE Discontinuous Reception (DRX)

    In an LTE cell, Discontinuous Reception (DRX) allows the central base station to configure User Equipments for periodic wake/sleep cycles, so as to save energy. DRX operation depends on several parameters, which can be tuned to achieve optimal performance with different traffic profiles (e.g., CBR vs. bursty, periodic vs. sporadic). This work investigates how to configure these parameters and explores the trade-off between power saving, on one side, and per-user QoS, on the other. Unlike previous work, chiefly based on analytical models that neglect key aspects of LTE, our evaluation is carried out via simulation. We use a fully-fledged packet simulator, which includes models of the entire protocol stack, the applications and the relevant QoS metrics, and employ factorial analysis to assess the impact of the many simulation factors in a statistically rigorous way. This allows us to analyze a wider spectrum of scenarios, assessing the interplay of the LTE mechanisms and DRX, and to derive configuration guidelines.
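
    As a companion to the trade-off described above, here is a minimal Python sketch of a simplified DRX timing loop: it estimates the fraction of time the UE receiver stays on for a given packet-arrival trace. Parameter names follow the standard DRX vocabulary (on-duration, inactivity timer, long cycle), but the default values and the omission of short cycles and HARQ timers are simplifying assumptions, not the paper's simulator.

        def drx_awake_fraction(arrivals, on_duration=5, inactivity_timer=10,
                               long_cycle=80, horizon=10_000):
            """Fraction of time (in ms) the UE receiver is on under simplified DRX.

            arrivals -- set of ms instants at which downlink data reaches the UE.
            Short DRX cycles and HARQ-related timers are omitted for brevity;
            the default values are illustrative, not standard-mandated.
            """
            awake_ms = 0
            inactivity_left = 0
            for t in range(horizon):
                in_on_duration = (t % long_cycle) < on_duration
                if in_on_duration or inactivity_left > 0:
                    awake_ms += 1
                    if t in arrivals:
                        # Data received while awake restarts the inactivity timer.
                        inactivity_left = inactivity_timer
                    elif inactivity_left > 0:
                        inactivity_left -= 1
                # While asleep, arrivals wait for the next on-duration: this
                # buffering delay is the QoS side of the trade-off studied.
            return awake_ms / horizon

        # Sporadic traffic (one packet every 500 ms) keeps the radio mostly off.
        print(drx_awake_fraction(arrivals=set(range(0, 10_000, 500))))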

    QoS downlink schedulers in LTE towards 5G network

    LTE is expected to be the dominant system used by operators in the coming years due to its promising solutions for achieving high capacity and data rates. However, LTE packet scheduling, i.e. distributing resources among users, remains a key challenge because of the unfairness and low performance that can occur when allocating resources. In this paper, these challenges are studied and analysed, focusing on three schedulers: Proportional Fair (PF), Maximum Throughput (MT) and Blind Equal Throughput (BET). These methods do not provide QoS to users with different types of traffic flows. The algorithm proposed in this paper modifies the PF scheduler to fulfil the QoS criteria, maximizing throughput and minimizing delay for real-time services. VoIP and video are selected as real-time traffic and best effort as non-real-time traffic. The LTE-Sim simulator is used to compare the schedulers in terms of throughput, delay, packet loss ratio and spectral efficiency.
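
    The three baseline schedulers have well-known per-user metrics; the Python sketch below shows them, together with a delay-aware PF variant of the general kind the paper proposes. The exact modification is not given in the abstract, so the urgency weighting here is an illustrative assumption.

        def mt_metric(inst_rate, avg_rate, hol_delay, deadline):
            return inst_rate                        # Maximum Throughput: channel only

        def bet_metric(inst_rate, avg_rate, hol_delay, deadline):
            return 1.0 / max(avg_rate, 1e-9)        # Blind Equal Throughput: fairness only

        def pf_metric(inst_rate, avg_rate, hol_delay, deadline):
            return inst_rate / max(avg_rate, 1e-9)  # Proportional Fair: balance of both

        def delay_aware_pf(inst_rate, avg_rate, hol_delay, deadline):
            # Assumption: scale PF by head-of-line (HoL) delay urgency so real-time
            # flows (VoIP/video) approaching their deadline win the resource block.
            urgency = hol_delay / max(deadline - hol_delay, 1e-3)
            return pf_metric(inst_rate, avg_rate, hol_delay, deadline) * (1.0 + urgency)

        # Per TTI, each resource block goes to the user maximizing the metric:
        users = [  # (inst_rate Mb/s, avg_rate Mb/s, HoL delay ms, deadline ms)
            (10.0, 4.0, 5.0, 100.0),   # best-effort user with a good channel
            (3.0, 2.0, 90.0, 100.0),   # VoIP user close to its delay budget
        ]
        winner = max(range(len(users)), key=lambda i: delay_aware_pf(*users[i]))
        print("RB assigned to user", winner)  # the urgent VoIP user (1) wins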

    On Supporting Small M2M Data Transmissions in LTE/LTE-A Networks

    In Machine-to-Machine (M2M) applications, devices monitor events (e.g., temperature, inventory level), which are relayed through a communication network infrastructure (e.g., the Internet, LTE) to an application (a software program running on a server connected to the Internet) that translates the monitored events into meaningful information, so that collaborative decisions can be taken with limited or no human intervention. With the availability of IPv6 addresses, it is possible to interconnect everything in this universe. By using the concept of interconnecting things, several applications can be envisioned to make the world smarter. The Internet of Things (IoT) is a paradigm whose aim is to implement this concept of interconnecting everything by using all possible technologies and other means. M2M communication is one of the components of the IoT, whose goal is to make communication smooth and seamless between any two network-enabled devices. According to researchers, 1.5 billion devices were expected to be part of M2M communication by the end of 2014, and 20 billion by the end of 2020.

    Scheduling M2M traffic over LTE uplink of a dense small cell network

    We present an approach to scheduling Long Term Evolution (LTE) uplink (UL) Machine-to-Machine (M2M) traffic in a densely deployed heterogeneous network, over the street lights of a large boulevard, for smart city applications. The small cells operate with frequency reuse 1, so inter-cell interference (ICI) is a critical issue to manage. We consider a 3rd Generation Partnership Project (3GPP) compliant scenario, where single-carrier frequency-division multiple access (SC-FDMA) is the multiple access scheme, which requires that all resource blocks (RBs) allocated to a single user be contiguous in frequency within each time slot. This adjacency constraint limits the flexibility of frequency-domain packet scheduling (FDPS) and inter-cell interference coordination (ICIC) when trying to maximize the scheduling objectives, and it makes the problem NP-hard. We aim to solve a multi-objective optimization problem: maximize the overall throughput, maximize radio resource usage and minimize the ICI. This can be modelled as a mixed-integer linear program (MILP) and solved through a heuristic implementable within the standards. We propose two models. The first allocates resources based on the three optimization criteria, while the second is more compact and is shown through numerical evaluation in CPLEX to be equivalent in complexity, while performing better and executing faster. We present simulation results in a 3GPP-compliant network simulator implementing the overall protocol stack, which support the effectiveness of our algorithm for different M2M applications with respect to state-of-the-art approaches.
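
    The adjacency constraint is the crux of the problem. The following Python sketch shows a greedy heuristic (not the paper's MILP) that assigns each user one free contiguous span of RBs maximizing its summed per-RB utility, under stated assumptions about the utility values.

        def contiguous_rb_allocation(utilities, demands):
            """Greedy frequency-domain scheduling with the SC-FDMA adjacency rule.

            utilities[u][r] -- utility of resource block r for user u (e.g., a
                               PF metric discounted by estimated ICI); illustrative.
            demands[u]      -- number of contiguous RBs user u requests.
            Returns {user: (first_rb, last_rb)}. A heuristic sketch only.
            """
            n_rbs = len(utilities[0])
            free = [True] * n_rbs
            allocation = {}
            # Serve users in decreasing order of their best per-RB utility.
            for u in sorted(range(len(utilities)),
                            key=lambda v: -max(utilities[v])):
                d, best, best_start = demands[u], float("-inf"), None
                for start in range(n_rbs - d + 1):
                    if all(free[start:start + d]):          # span must be free
                        val = sum(utilities[u][start:start + d])
                        if val > best:
                            best, best_start = val, start
                if best_start is not None:
                    allocation[u] = (best_start, best_start + d - 1)
                    for r in range(best_start, best_start + d):
                        free[r] = False
            return allocation

        print(contiguous_rb_allocation(
            utilities=[[1, 3, 3, 1, 0, 0], [0, 1, 2, 4, 4, 1]], demands=[2, 2]))
        # {1: (3, 4), 0: (1, 2)}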

    Cellular networks for smart grid communication

    The next-generation electric power system, known as the smart grid, relies on a robust and reliable underlying communication infrastructure to improve the efficiency of electricity distribution. Cellular networks, e.g., LTE/LTE-A systems, appear to be a promising technology to facilitate the smart grid evolution. Their inherent performance characteristics and well-established ecosystem could potentially unlock unprecedented use cases, enabling real-time and autonomous distribution grid operations. However, cellular technology was not originally intended for smart grid communication, which involves highly reliable message exchange and massive device connectivity requirements. The fundamental differences between smart grid and human-type communication challenge the classical design of cellular networks and introduce important research questions that have not been sufficiently addressed so far. Motivated by these challenges, this doctoral thesis investigates novel radio access network (RAN) design principles and performance analysis for the seamless integration of smart grid traffic in future cellular networks. Specifically, we focus on addressing the fundamental RAN problems of network scalability in massive smart grid deployments and radio resource management for smart grid and human-type traffic. The main objective of the thesis lies in the design, analysis and performance evaluation of RAN mechanisms that would render cellular networks the key enabler for emerging smart grid applications. The first part of the thesis addresses the radio access limitations of LTE-based networks for reliable and scalable smart grid communication. We first identify the congestion problem in LTE random access that arises in large-scale smart grid deployments. To overcome this, a novel random access mechanism is proposed that can efficiently support real-time distribution automation services with negligible impact on the background traffic. Motivated by the stringent reliability requirements of various smart grid operations, we then develop an analytical model of the LTE random access procedure that allows us to assess the performance of event-based monitoring traffic under various load conditions and network configurations. We further extend our analysis to include the relation between the cell size and the availability of orthogonal random access resources, and we identify an additional challenge for reliable smart grid connectivity. To this end, we devise an interference- and load-aware cell planning mechanism that enhances reliability in substation automation services. Finally, we couple the problem of state estimation in wide-area monitoring systems with the reliability challenges in information acquisition. Using our analytical framework, we quantify the impact of imperfect communication reliability on state estimation accuracy and provide useful insights for the design of reliability-aware state estimators. The second part of the thesis builds on the first and focuses on the RAN problem of resource scheduling and sharing for smart grid and human-type traffic. We introduce a novel scheduler that achieves low latency for distribution automation traffic while keeping the degradation of cellular users to a minimum. In addition, we investigate the benefits of the Device-to-Device (D2D) transmission mode for event-based message exchange in substation automation scenarios.
    We design a joint mode selection and resource allocation mechanism which results in higher data rates with respect to the conventional transmission mode via the base station. An orthogonal resource partition scheme between cellular and D2D links is further proposed to prevent the underutilization of the scarce cellular spectrum. The research findings of this thesis aim to deliver novel solutions to important RAN performance issues that arise when cellular networks support smart grid communication.
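
    The random access congestion analysed in the first part of the thesis is commonly modelled, to first order, by preamble collisions: assuming M contending devices each pick one of N orthogonal preambles uniformly at random, a device succeeds when no other device picks its preamble. A minimal Python illustration of this standard calculation follows; the thesis's full analytical model is more detailed.

        def preamble_success_probability(m_devices, n_preambles=54):
            """P(one device's random-access attempt avoids collision).

            First-order model of a single RA opportunity: with M devices each
            picking one of N preambles uniformly at random,
            P_success = (1 - 1/N) ** (M - 1). The value 54 is a typical number
            of contention-based preambles per LTE cell, used for illustration.
            """
            return (1.0 - 1.0 / n_preambles) ** (m_devices - 1)

        # Success degrades quickly in massive smart grid deployments:
        for m in (10, 100, 500, 1000):
            print(f"{m:5d} devices -> P(success) = "
                  f"{preamble_success_probability(m):.3f}")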

    A Survey of Scheduling in 5G URLLC and Outlook for Emerging 6G Systems

    Future wireless communication is expected to be a paradigm shift built on the three basic service classes of the 5th Generation (5G): enhanced Mobile Broadband (eMBB), Ultra-Reliable and Low-Latency Communication (URLLC) and massive Machine Type Communication (mMTC). Integrating these three heterogeneous services into a single system is a challenging task, raising several design issues that include scheduling network resources across the services. In particular, scheduling URLLC packets alongside eMBB and mMTC packets needs particular attention, as URLLC is a promising service of 5G and beyond systems: it must meet stringent Quality of Service (QoS) requirements and is used in time-critical applications. A thorough understanding of packet scheduling issues in existing systems and of potential future challenges is therefore necessary. This paper surveys recent work on packet scheduling algorithms for 5G and beyond systems. It provides a state-of-the-art review covering three main perspectives: decentralised, centralised and joint scheduling techniques. The conventional decentralised algorithms are discussed first, followed by the centralised algorithms, with a specific focus on single- and multi-connected network perspectives. Joint scheduling algorithms are also discussed in detail. To provide an in-depth understanding of the key scheduling approaches, the performance of some prominent scheduling algorithms is evaluated and analysed. The paper also provides insight into potential challenges and future research directions from the scheduling perspective.
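
    A recurring idea in the joint-scheduling literature this kind of survey covers is to let sporadic URLLC arrivals preempt (puncture) ongoing eMBB allocations at mini-slot granularity. The Python sketch below is a toy illustration of that idea, not an algorithm taken from the survey; the queue model and granularity are assumptions.

        def schedule_minislot(embb_queue, urllc_queue, n_resources):
            """Toy joint scheduler: URLLC packets preempt (puncture) eMBB.

            Returns the per-resource assignment for one mini-slot. The
            puncturing rule and queue contents are illustrative assumptions.
            """
            assignment = []
            for _ in range(n_resources):
                if urllc_queue:
                    # Latency-critical traffic is served immediately, even if
                    # an eMBB user loses the resource; the punctured eMBB data
                    # is recovered by retransmission, at some throughput cost.
                    assignment.append(("URLLC", urllc_queue.pop(0)))
                elif embb_queue:
                    assignment.append(("eMBB", embb_queue.pop(0)))
                else:
                    assignment.append(("idle", None))
            return assignment

        print(schedule_minislot(embb_queue=["e1", "e2", "e3"],
                                urllc_queue=["u1"], n_resources=3))
        # [('URLLC', 'u1'), ('eMBB', 'e1'), ('eMBB', 'e2')]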

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as eMBB, mMTC and URLLC, mMTC brings the unique technical challenge of supporting a huge number of MTC devices, which is the main focus of this paper. The related challenges include QoS provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. After highlighting the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms of the emerging cellular IoT standards, namely LTE-M and NB-IoT. Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of existing and emerging solutions to the RAN congestion problem, and then identify potential advantages, challenges and use cases for the application of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of a low-complexity Q-learning approach in mMTC scenarios. Finally, we discuss some open research challenges and promising future research directions.
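
    In the mMTC literature, low-complexity Q-learning of the kind mentioned above is often applied to random access collision avoidance, with each device independently learning which RA slot to use. The following Python sketch is a minimal illustration under those assumptions; the reward definition and parameter values are illustrative, not the paper's specific formulation.

        import random

        def q_learning_ra(n_devices=20, n_slots=20, frames=2000,
                          alpha=0.1, epsilon=0.1):
            """Devices independently learn a RACH slot via stateless Q-learning.

            Reward is +1 when a device transmits alone in its slot and -1 on
            collision -- a common formulation in the literature.
            """
            q = [[0.0] * n_slots for _ in range(n_devices)]
            choices = []
            for _ in range(frames):
                # Epsilon-greedy slot choice per device.
                choices = [random.randrange(n_slots) if random.random() < epsilon
                           else max(range(n_slots), key=q[d].__getitem__)
                           for d in range(n_devices)]
                for d, s in enumerate(choices):
                    reward = 1.0 if choices.count(s) == 1 else -1.0
                    q[d][s] += alpha * (reward - q[d][s])  # stateless Q-update
            # Collisions remaining in the final frame (ideally zero).
            return sum(1 for s in set(choices) if choices.count(s) > 1)

        print("colliding slots in final frame:", q_learning_ra())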

    Towards Tactile Internet in Beyond 5G Era: Recent Advances, Current Issues and Future Directions

    Tactile Internet (TI) is envisioned to create a paradigm shift from content-oriented communications to steering/control-based communications by enabling the real-time transmission of haptic information (i.e., touch, actuation, motion, vibration, surface texture) over the Internet, in addition to conventional audiovisual and data traffic. This emerging TI technology, also considered the next evolutionary phase of the Internet of Things (IoT), is expected to create numerous opportunities for technology markets in a wide variety of applications, ranging from teleoperation systems and Augmented/Virtual Reality (AR/VR) to automotive safety and eHealthcare, towards addressing the complex problems of human society. However, the realization of TI over wireless media in the upcoming Fifth Generation (5G) and beyond networks creates various non-conventional communication challenges and stringent requirements in terms of ultra-low latency, ultra-high reliability, high data-rate connectivity, resource allocation, multiple access and the quality-latency-rate tradeoff. To this end, this paper aims to provide a holistic view of wireless TI along with a thorough review of the existing state of the art, to identify and analyze the involved technical issues, to highlight potential solutions and to propose future research directions. First, starting with the vision of TI, recent advances and a review of related survey/overview articles, we present a generalized framework for wireless TI in the beyond-5G era, including a TI architecture, the main technical requirements, the key application areas and potential enabling technologies. Subsequently, we provide a comprehensive review of existing TI works by broadly categorizing them into three main paradigms, namely haptic communications; wireless AR/VR; and autonomous, intelligent and cooperative mobility systems. Next, potential enabling technologies across the physical/Medium Access Control (MAC) and network layers are identified and discussed in detail. Security and privacy issues of TI applications are also discussed, along with some promising enablers. Finally, we present some open research challenges and recommend promising future research directions.