
    Dynamic Bandwidth Allocation in Heterogeneous OFDMA-PONs Featuring Intelligent LTE-A Traffic Queuing

    This work was supported by the ACCORDANCE project, through the 7th ICT Framework Programme. This is an Accepted Manuscript of an article accepted for publication in the Journal of Lightwave Technology following peer review. © 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    A heterogeneous, optical/wireless dynamic bandwidth allocation framework is presented, exhibiting intelligent traffic queuing for practically controlling the quality of service (QoS) of mobile traffic backhauled via orthogonal frequency division multiple access–PON (OFDMA-PON) networks. A converged data link layer is presented between long term evolution-advanced (LTE-A) and next-generation passive optical network (NGPON) topologies, extending beyond NGPON2. This is achieved by incorporating, in a new protocol design, a consistent mapping between LTE-A QoS class identifiers (QCIs) and OFDMA-PON queues. Novel inter-ONU algorithms have been developed, based on the distribution of weights, to allocate subcarriers to both enhanced node B/optical network units (eNB/ONUs) and residential ONUs sharing the same infrastructure. A weighted, intra-ONU scheduling mechanism is also introduced to further control QoS across the network load. The inter- and intra-ONU algorithms are both dynamic and adaptive, providing customized bandwidth allocation for different priority queues at different network traffic loads while exhibiting practical fairness in bandwidth distribution. As a result, middle- and low-priority packets are not unjustifiably deprived of bandwidth in favor of high-priority packets at low network traffic loads. Still, the protocol's adaptability allows the high-priority queues to automatically outperform the lower-priority queues when the traffic load increases and the available bandwidth has to be rationally redistributed. Computer simulations confirm that, following the application of adaptive weights, the fairness index of the new scheme (representing the achieved throughput of each queue) improves to above 0.9 across the traffic load. A packet delay reduction of more than 40 ms is recorded for the low-priority queues, while the high-priority queues still achieve sufficiently low packet delays, in the range of 20 to 30 ms.
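    As a worked illustration of the fairness figure quoted above, the short Python sketch below computes Jain's fairness index over per-queue throughput (assuming that is the index meant) and applies a purely hypothetical load-dependent weight adjustment; it is not the paper's inter/intra-ONU algorithm.

```python
# Illustrative sketch only: Jain's fairness index over per-queue throughput and a
# hypothetical load-dependent weight adjustment. Neither is taken from the paper.

def jain_fairness(throughputs):
    """Jain's index in (0, 1]; 1.0 means perfectly even throughput across queues."""
    n = len(throughputs)
    total = sum(throughputs)
    sum_sq = sum(t * t for t in throughputs)
    return (total * total) / (n * sum_sq) if sum_sq else 0.0

def adapt_weights(weights, load, bias=0.5):
    """Shift weight toward high-priority queues as the normalized load grows.

    weights: per-queue weights ordered from highest to lowest priority (>= 2 queues).
    load:    normalized network traffic load in [0, 1].
    """
    n = len(weights)
    boosted = [w * (1.0 + bias * load * (n - 1 - i) / (n - 1))
               for i, w in enumerate(weights)]
    total = sum(boosted)
    return [w / total for w in boosted]

if __name__ == "__main__":
    print(round(jain_fairness([9.5, 10.0, 10.5]), 3))          # close to 1.0
    print(adapt_weights([0.25, 0.25, 0.25, 0.25], load=0.9))   # favors the head queues
```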

    Mobility Management for Small Cells in LTE-A Networks

    Department of Telecommunication Technology (Katedra telekomunikační techniky)

    Mobility management in 5G heterogeneous networks

    In recent years, mobile data traffic has increased exponentially as a result of the widespread popularity and uptake of portable devices such as smartphones, tablets and laptops. This growth has placed enormous stress on network service providers, who are committed to offering the best quality of service to consumer groups. Consequently, telecommunication engineers are investigating innovative solutions to accommodate the additional load offered by growing numbers of mobile users. The fifth generation (5G) wireless communication standard is expected to provide numerous innovative solutions to meet the growing demand of consumer groups. Accordingly, the ultimate goal is to achieve several key technological milestones, including up to 1000 times higher wireless area capacity and a significant cut in power consumption. Massive deployment of small cells is likely to be a key innovation in 5G, enabling denser frequency reuse and higher data rates. Small cells, however, present a major challenge for nodes moving at vehicular speeds, because the smaller coverage areas of small cells result in frequent handovers, which lead to lower throughput and longer delays. In this thesis, a new mobility management technique is introduced that reduces the number of handovers in a 5G heterogeneous network. This research also investigates techniques to accommodate low-latency applications on nodes moving at vehicular speeds.
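    The effect of cell size on handover frequency can be illustrated with the classical fluid-flow approximation, which is not taken from the thesis but shows why vehicular nodes in small cells hand over so often: for a circular cell the mean handover rate is roughly 2v/(πR).

```python
import math

# Fluid-flow approximation of the mean handover rate for a circular cell:
# rate = v * perimeter / (pi * area) = 2 * v / (pi * R). Illustrative values only.

def handover_rate(speed_mps, cell_radius_m):
    return 2.0 * speed_mps / (math.pi * cell_radius_m)

if __name__ == "__main__":
    v = 30.0                                   # ~108 km/h, vehicular speed
    for radius in (500.0, 100.0, 50.0):        # macro cell vs. small cells
        rate = handover_rate(v, radius)
        print(f"R = {radius:5.0f} m -> {rate:.3f} handovers/s "
              f"(one every {1.0 / rate:.1f} s)")
```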

    TD-SCDMA Relay Networks

    When this research was started, TD-SCDMA (Time Division Synchronous Code Division Multiple Access) was still in the research and development phase, but now, at the time of writing this thesis, it is in commercial use in 10 large cities in China, including Beijing and Shanghai, with HSDPA enabled in all of them. The roll-out of the commercial deployment is progressing fast, with installations underway in another 28 cities. However, during the pre-commercial TD-SCDMA trial in China, which started in 2006, some interference problems were noticed, especially in the network planning and initialization phases. Interference is always an issue in any network, and the goal of the work reported in this thesis is to improve network coverage and capacity in the presence of interference. Based on an analysis of TD-SCDMA issues and how network interference arises, this thesis proposes two enhancements to the network in addition to the standard N-frequency technique. These are (i) the introduction of the concentric circle cell concept and (ii) the addition of a relay network that makes use of other users at the cell boundary. This overall approach not only improves resilience to interference but also increases network coverage without adding more Node Bs. Based on the cell planning parameters from the research, TD-SCDMA HSDPA services in dense urban areas and non-HSDPA services in rural areas were simulated to investigate the network performance impact of introducing the relay network into a TD-SCDMA network. The results for HSDPA applications show that the TD-SCDMA relay network significantly improves both network capacity and interference performance compared to standard TD-SCDMA networks. The results for non-HSDPA services show that, although the network capacity does not change after adding the relay network (due to the code limitation in TD-SCDMA), the TD-SCDMA relay network has better interference performance and greater coverage.
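    The coverage benefit of relaying through a user at the cell boundary can be sketched with a toy half-duplex two-hop rate comparison; the SNR values and the simple Shannon-capacity model are illustrative assumptions, not results or models from the thesis.

```python
import math

# Toy comparison of a weak direct link against a half-duplex two-hop relay through
# a user at the cell boundary. The relay rate is the bottleneck of the two hops,
# halved to account for the two time slots. SNR values are invented illustrations.

BANDWIDTH_HZ = 1.6e6   # nominal TD-SCDMA carrier bandwidth

def rate(snr_db, bandwidth_hz=BANDWIDTH_HZ):
    """Shannon rate in bit/s for a single link."""
    return bandwidth_hz * math.log2(1.0 + 10 ** (snr_db / 10.0))

def two_hop_rate(snr_bs_relay_db, snr_relay_ue_db, bandwidth_hz=BANDWIDTH_HZ):
    return 0.5 * min(rate(snr_bs_relay_db, bandwidth_hz),
                     rate(snr_relay_ue_db, bandwidth_hz))

if __name__ == "__main__":
    direct = rate(-2.0)                  # weak, interference-limited edge user
    relayed = two_hop_rate(12.0, 10.0)   # two shorter, stronger hops
    print(f"direct: {direct / 1e6:.2f} Mbit/s, via relay: {relayed / 1e6:.2f} Mbit/s")
```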

    Admission control and resource allocation for LTE uplink systems

    Long Term Evolution (LTE) radio technologies aim not only to increase the capacity of mobile telephone networks, but also to provide high throughput, low latency, improved end-to-end Quality of Service (QoS) and a simple architecture. The Third Generation Partnership Project (3GPP) has defined Single Carrier FDMA (SC-FDMA) as the access technique for the uplink and Orthogonal Frequency Division Multiple Access (OFDMA) for the downlink. It is well known that scheduling and admission control play an important role in QoS provisioning and that they are strongly related. Taking full advantage of this property, we can design an admission control mechanism that reuses the design criterion of the scheduling scheme. In this thesis, we develop two new algorithms for single-class resource allocation, two algorithms for multi-class resource allocation, and a new admission control scheme for handling multi-class Grade of Service (GoS) and QoS in LTE uplink systems. We also present a combined solution that uses the resource allocation and admission control properties to satisfy the GoS and QoS requirements. System performance is evaluated using simulations. Numerical results show that the proposed scheduling algorithms can handle multi-class QoS in LTE uplink systems with little increase in complexity and can be used in conjunction with admission control to meet the LTE requirements. In addition, with the proposed admission control algorithm, the gain for the most sensitive traffic can be increased without sacrificing the overall system capacity, while guaranteeing GoS and maintaining the basic QoS requirements for all admitted requests.
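    A minimal sketch of the idea that admission control can reuse the scheduler's criterion is given below; the guaranteed-bit-rate-to-PRB mapping and all numbers are hypothetical and are not the algorithms developed in the thesis.

```python
import math

# Toy sketch of scheduling-aware admission control for an LTE-like uplink. The
# GBR-to-PRB mapping and all numbers are hypothetical, not the thesis's algorithms.

TOTAL_PRBS = 100         # uplink physical resource blocks available per subframe
PRB_RATE_BPS = 150_000   # assumed average throughput one PRB yields for a user

class UplinkAdmissionControl:
    def __init__(self, total_prbs=TOTAL_PRBS):
        self.total_prbs = total_prbs
        self.reserved = {}   # flow id -> PRBs reserved for its guaranteed bit rate

    def prbs_needed(self, gbr_bps):
        """Reuse the scheduler's own estimate of how many PRBs sustain the GBR."""
        return math.ceil(gbr_bps / PRB_RATE_BPS)

    def admit(self, flow_id, gbr_bps):
        """Admit the flow only if its demand fits alongside already admitted flows."""
        needed = self.prbs_needed(gbr_bps)
        if sum(self.reserved.values()) + needed <= self.total_prbs:
            self.reserved[flow_id] = needed
            return True
        return False

if __name__ == "__main__":
    ac = UplinkAdmissionControl()
    print(ac.admit("voip-1", 64_000))        # True: needs 1 PRB
    print(ac.admit("video-1", 12_000_000))   # True: needs 80 PRBs
    print(ac.admit("video-2", 12_000_000))   # False: would exceed 100 PRBs
```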

    Cooperative Resource Allocation in Wireless Communication Networks

    The concept of cooperation, where two or more parties work together to pursue a common goal, is applicable to almost every aspect of today's life. For instance, in upcoming car-to-car communications, vehicles exchange information regarding their current status and potential threats on the road in order to avoid accidents. With the evolution of wireless communication systems and the advent of new services and devices with more capabilities, the demand for higher data rates is ever increasing. In cellular networks, the achievable data rates of the users are limited by inter-cell interference, which is caused by the simultaneous utilization of the time/frequency resources. In particular, the data rates of users located in the vicinity of neighboring base stations are affected by inter-cell interference. Hence, in this dissertation, cooperation in cellular downlink networks is investigated, where the base stations coordinate their operation in order to mitigate the impact of co-channel inter-cell interference, so that the constantly increasing user demand can be satisfied. Cooperative resource allocation schemes are derived, where practical conditions and side constraints regarding the available channel state information at the base stations are taken into account. Cooperation in the form of power control and joint time/frequency scheduling is mainly studied. In the former type of cooperation, the base stations dynamically adjust their own transmit powers to cause less inter-cell interference to the users connected to neighboring base stations. In the case of cooperative scheduling, the available time/frequency resources are jointly allocated by the base stations in order to trade off user throughput and inter-cell interference. The cooperative scheduling schemes apply two special cases of the power control approach, where the base stations either serve their connected users with maximum transmit power, or abstain from transmitting data, i.e., muting, in order to reduce the interference caused to users served by neighboring base stations. One major contribution of this work is the formulation of the cooperative resource allocation problems by considering the availability of channel state information at the transmitter in the form of data rate measurement reports, which follows standard-compliant procedures of current mobile networks such as LTE and LTE-Advanced. From a system perspective, two parameters are considered throughout this dissertation in order to derive the proposed cooperative schemes: the cooperation architecture and the traffic model characterizing the demand of the connected users. Regarding the cooperation architecture, centralized and decentralized schemes are studied. In the former, a central controller performs the cooperative schemes based on global knowledge of the channel state information, and in the latter, the cooperative decisions are carried out independently per base station based on local information exchanged with adjacent base stations. The centralized architecture is expected to provide the best performance; however, the gap to the decentralized approaches shrinks significantly under practical network assumptions, as demonstrated in this work through numerical simulations. With respect to the traffic model, the user demand is characterized by full-buffer and non-full-buffer models.
The first model is applied in order to assess the performance of the proposed cooperative schemes from a capacity enhancement perspective, where all users constantly demand as much data as possible. The non-full-buffer model, on the other hand, represents a more practical network scenario with dynamic utilization of the network resources. In the non-full-buffer case, the proposed schemes are derived in order to improve the link adaptation procedures at the base stations serving users with bursty traffic. These link adaptation procedures establish the transmission parameters used on each serving link, e.g., the transmit power and the modulation and coding schemes. Specifically, a cooperative power control scheme with a closed-form solution is derived, where base stations dynamically control their own transmit powers to satisfy the data rate requirements of the users connected to neighboring base stations. Moreover, centralized and decentralized coordinated scheduling with muting is studied to improve the user throughput. For the centralized case, an integer linear programming formulation is proposed, which is solved optimally using commercial solvers; the optimal solution is used as a benchmark to evaluate heuristic algorithms. In the case of decentralized coordinated scheduling with muting, a heuristic approach is derived which requires only a small number of messages to be exchanged between the base stations in order to coordinate the cooperation. Finally, an integer linear program is formulated to improve the link adaptation procedures of networks whose user demand is characterized by bursty traffic. This improvement results in a reduction of the transmission error rates and an increase of the experienced data rates. Compared with non-cooperative approaches and state-of-the-art solutions, a significant improvement of the achievable user throughput is obtained by applying the proposed cooperative schemes, especially for users experiencing severe inter-cell interference.
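    A toy version of coordinated scheduling with muting is sketched below: for three base stations it exhaustively searches the transmit/mute patterns and picks the one maximizing the network sum rate. The channel gains, power and noise are invented for illustration, and the brute-force search merely stands in for the integer linear program solved with commercial solvers in the dissertation.

```python
import itertools
import math

# Toy coordinated scheduling with muting: each of three base stations either
# transmits at full power or mutes, and we exhaustively search for the on/off
# pattern maximizing the network sum rate. Gains, power and noise are invented.

TX_POWER = 1.0      # W per active base station
NOISE = 2e-11       # illustrative receiver noise power in W

# GAINS[u][b]: channel gain from base station b to user u; user u is served by BS u.
GAINS = [
    [1.0e-10, 2.0e-11, 8.0e-11],
    [3.0e-11, 1.2e-10, 9.0e-11],
    [1.0e-11, 1.5e-11, 2.0e-11],
]

def sum_rate(active):
    """Sum of per-user Shannon rates (bit/s/Hz) for one transmit/mute pattern."""
    total = 0.0
    for u, row in enumerate(GAINS):
        if not active[u]:                  # a muted BS serves no one this subframe
            continue
        signal = TX_POWER * row[u]
        interference = sum(TX_POWER * row[b]
                           for b in range(len(active)) if b != u and active[b])
        total += math.log2(1.0 + signal / (interference + NOISE))
    return total

best = max(itertools.product([True, False], repeat=len(GAINS)), key=sum_rate)
print("best pattern (True = transmit):", best, "| sum rate:", round(sum_rate(best), 2))
```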

    On the Feasibility of Utilizing Commercial 4G LTE Systems for Mission-Critical IoT Applications

    Emerging Internet of Things (IoT) applications and services, including e-healthcare, intelligent transportation systems, the smart grid, and smart homes, smart cities and smart workplaces, are poised to become part of every aspect of our daily lives. The IoT will enable billions of sensors, actuators, and smart devices to be interconnected and managed remotely via the Internet. Cellular-based Machine-to-Machine (M2M) communication is one of the key IoT enabling technologies, with huge market potential for cellular service providers deploying Long Term Evolution (LTE) networks. There is an emerging consensus that Fourth Generation (4G) and 5G cellular technologies will enable and support these applications, as they will provide global mobile connectivity to the anticipated tens of billions of things/devices that will be attached to the Internet. Many vital utilities and service industries are considering the use of commercially available LTE cellular networks to provide critical connections to users, sensors, and smart M2M devices on their networks, due to their low cost and availability. Many of these emerging IoT applications are mission-critical, with stringent requirements in terms of reliability and end-to-end (E2E) delay bound. The delay bound specified for each application refers to the device-to-device latency, defined as the combined delay resulting from application-level processing time and communication latency. Each IoT application has its own distinct performance requirements in terms of latency, availability, and reliability. Typically, the uplink (UL) traffic of these IoT applications dominates the network traffic (it is much higher than the total downlink (DL) traffic). Thus, efficient LTE UL scheduling algorithms at the base station (the “Evolved NodeB (eNB)” per 3GPP standards) are especially critical for M2M applications. LTE, however, was not originally intended for IoT applications, where the traffic generated by M2M devices (running IoT applications) has totally different characteristics from traditional Human-to-Human (H2H) voice/video and data communications. In addition, due to the anticipated massive deployment of M2M devices and the limited available radio spectrum, the problem of efficient radio resource management (RRM) and UL scheduling poses a serious challenge in adopting LTE for M2M communications. Existing LTE quality of service (QoS) standards and UL scheduling algorithms were mainly optimized for H2H services and cannot accommodate such a wide range of diverging performance requirements of these M2M-based IoT applications. Although 4G LTE networks can support a very low Packet Loss Ratio (PLR) at the physical layer, such reliability comes at the expense of increased latency, from tens to hundreds of milliseconds, due to the aggressive use of retransmission mechanisms. Current 4G LTE technologies may satisfy a single performance metric of these mission-critical applications, but not the simultaneous support of ultra-high reliability, low latency, and high data rates. Numerous QoS-aware LTE UL scheduling algorithms for supporting M2M applications as well as H2H services have been reported in the literature. Most of these algorithms, however, were not intended for the support of mission-critical IoT applications, as they are not latency-aware. In addition, these algorithms are simplified and do not fully conform to LTE's signaling and QoS standards.
For instance, a common assumption is that the time-domain UL scheduler located at the eNB prioritizes user equipment (UE)/M2M device connection requests based on the head-of-line (HOL) packet waiting time in the UE/device transmission buffer. However, as detailed in this thesis, the LTE standard does not support a mechanism that enables the UEs/devices to inform the eNB uplink scheduler of the waiting time of uplink packets residing in their transmission buffers. The Ultra-Reliable Low-Latency Communication (URLLC) paradigm has recently emerged to enable a new range of mission-critical applications and services, including industrial automation, real-time operation and control of the smart grid, and inter-vehicular communications for improved safety and self-driving vehicles. URLLC is one of the most innovative 5G New Radio (NR) features. URLLC and its supporting 5G NR technologies might become a commercial reality in the future, but it may be a rather distant future. Thus, deploying viable mission-critical IoT applications would have to be postponed until URLLC and 5G NR technologies are commercially feasible. Because IoT applications, particularly mission-critical ones, will have a significant impact on the welfare of humanity, their immediate or near-term deployment is of utmost importance. It is the purpose of this thesis to explore whether current commercial 4G LTE cellular networks have the potential to support some of the emerging mission-critical IoT applications. The smart grid is selected in this work as an illustrative IoT example because it is one of the most demanding IoT applications, with diverse use cases ranging from mission-critical applications that have stringent E2E latency and reliability requirements to those that require support of a massive number of connected M2M devices with relaxed latency and reliability requirements. The purpose of the thesis is twofold. First, a user-friendly MATLAB-based open-source software package to model commercial 4G LTE systems is developed. In contrast to mainstream commercial LTE software packages, the developed package is specifically tailored to accurately model mission-critical IoT applications and, above all, fully conforms to commercial 4G LTE signaling and QoS standards. Second, utilizing the developed software package, we present a detailed, realistic LTE UL performance analysis to assess the feasibility of commercial 4G LTE cellular networks when used to support such a diverse set of emerging IoT applications as well as typical H2H services.
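    To make the scheduling discussion concrete, here is a simplified, hypothetical delay-aware uplink scheduler in Python. Because the LTE standard does not let a UE report head-of-line delay, the urgency metric is estimated at the eNB from how long a buffer status report has been pending; the metric, field names and numbers are assumptions for illustration, not the scheduler or the MATLAB package developed in the thesis.

```python
from dataclasses import dataclass

# Simplified, hypothetical delay-aware LTE uplink scheduler. LTE does not signal
# head-of-line packet delay, so urgency is estimated from how long a buffer status
# report (BSR) has been pending at the eNB. Not the thesis's scheduler or package.

@dataclass
class UplinkRequest:
    ue_id: str
    bsr_bytes: int           # buffered data reported by the UE in its BSR
    delay_budget_ms: float   # packet delay budget of the bearer's QoS class
    bsr_age_ms: float        # time elapsed since the BSR was received at the eNB

def schedule(requests, prbs_available, bytes_per_prb=100):
    """Grant PRBs to the most urgent requests first.

    Urgency = fraction of the delay budget already consumed while waiting.
    """
    grants = {}
    for req in sorted(requests, key=lambda r: r.bsr_age_ms / r.delay_budget_ms,
                      reverse=True):
        if prbs_available == 0:
            break
        needed = -(-req.bsr_bytes // bytes_per_prb)     # ceiling division
        granted = min(needed, prbs_available)
        grants[req.ue_id] = granted
        prbs_available -= granted
    return grants

if __name__ == "__main__":
    reqs = [UplinkRequest("meter-17", 300, delay_budget_ms=300.0, bsr_age_ms=20.0),
            UplinkRequest("pmu-04", 600, delay_budget_ms=10.0, bsr_age_ms=4.0),
            UplinkRequest("phone-02", 5000, delay_budget_ms=100.0, bsr_age_ms=30.0)]
    print(schedule(reqs, prbs_available=20))   # the most urgent device is served first
```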

    Efficient resource allocation algorithm for dense femtocell networks

    Poor indoor coverage and low user capacity represent two major challenges for cellular operators. Several solutions (such as distributed antennas) have been proposed to address these problems. However, none of these solutions provides the desired level of scalability, and they lack practicality. For these reasons, an attractive solution characterized by its low power and low cost, known as the femtocell, has been introduced to offer better user capacity and coverage. Despite all the benefits brought by the integration of this new femtocell technology, several new challenges have emerged, mainly in the form of two kinds of interference, known as cross-tier interference and co-tier interference. While the impact of cross-tier interference (caused by sharing the frequency spectrum) can be reduced by implementing efficient frequency reuse algorithms, co-tier interference continues to pose a difficult challenge for operators and researchers in the field of cellular networks. The unplanned and poorly organized deployment of femtocell base stations results in a drastic reduction of user capacity that can lead to users being disconnected. The impact of co-tier interference becomes even more challenging in dense femtocell deployments where users request real-time services (e.g., a constant data rate). In order to reduce co-tier interference, several solutions have been proposed in the literature, including power control algorithms, advanced detection techniques, and intelligent resource allocation schemes. In this project, we propose an intelligent frequency assignment strategy together with an advanced femtocell base station association strategy for LTE-based femtocell networks. The objective of the two proposed strategies is to mitigate co-tier interference and reduce the user outage probability by increasing the number of active users per femtocell base station. Simulations demonstrate the effectiveness of the proposed solution.
    Author keywords: femtocell base station, interference management, resource block assignment, base station assignment, outage probability
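    As an illustration of co-tier interference avoidance through intelligent frequency assignment, the sketch below greedily colors an interference graph so that strongly interfering femtocells do not share a sub-band; the graph, the greedy order and the fallback rule are illustrative assumptions, not the strategy proposed in the project.

```python
# Greedy graph-coloring sketch for femtocell sub-band assignment: femtocells that
# interfere strongly (edges in the graph) should not share a sub-band. The graph,
# the greedy order and the fallback rule are illustrative, not the proposed scheme.

def assign_subbands(interference_graph, num_subbands):
    """interference_graph: dict mapping femtocell id -> set of interfering neighbors."""
    assignment = {}
    # Color the most constrained cells first (highest number of interfering neighbors).
    for cell in sorted(interference_graph,
                       key=lambda c: len(interference_graph[c]), reverse=True):
        used = {assignment[n] for n in interference_graph[cell] if n in assignment}
        free = [b for b in range(num_subbands) if b not in used]
        # Fall back to the least-used sub-band if a dense cluster exhausts them all.
        assignment[cell] = free[0] if free else min(
            range(num_subbands), key=lambda b: list(assignment.values()).count(b))
    return assignment

if __name__ == "__main__":
    graph = {"F1": {"F2", "F3"}, "F2": {"F1", "F3"}, "F3": {"F1", "F2"}, "F4": {"F3"}}
    print(assign_subbands(graph, num_subbands=3))   # no two neighbors share a sub-band
```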