223 research outputs found

    TCP Flow Level Performance Evaluation on Error Rate Aware Scheduling Algorithms in Evolved UTRA and UTRAN Networks

    We present a TCP flow-level performance evaluation of error rate aware scheduling algorithms in Evolved UTRA and UTRAN networks. By introducing the error rate, i.e., the probability of transmission failure under a given wireless condition and instantaneous transmission rate, transmission efficiency can be improved without sacrificing the balance between system performance and user fairness. The performance comparison with and without error rate awareness is carried out for various TCP traffic models, user channel conditions, schedulers with different fairness constraints, and automatic repeat request (ARQ) types. The results indicate that error rate awareness makes resource allocation more reasonable and effectively improves both system and individual performance, especially for users in poor channel conditions.
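One plausible reading of the error-rate-aware idea is a proportional-fair metric where the instantaneous rate is discounted by the success probability. The sketch below is a hypothetical illustration under that assumption, not the authors' actual algorithm; user data and the metric form are invented.

```python
# Hypothetical sketch: a proportional-fair scheduling metric extended with
# error-rate awareness (expected goodput over average throughput).
def schedule(users):
    """Pick the user maximizing expected goodput / average throughput.

    Each user is (name, inst_rate, error_rate, avg_throughput).
    inst_rate * (1 - error_rate) discounts transmissions that are
    likely to fail under the current wireless conditions.
    """
    def metric(u):
        name, rate, err, avg = u
        return rate * (1.0 - err) / max(avg, 1e-9)
    return max(users, key=metric)[0]

users = [
    ("good_channel", 10.0, 0.01, 8.0),  # high rate, reliable, well served
    ("poor_channel", 2.0, 0.05, 0.5),   # low rate but starved so far
]
print(schedule(users))  # the starved user wins on the fairness-weighted metric
```

The fairness denominator is what keeps poor-channel users from being locked out even though their expected goodput is lower.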

    Optimization and Performance Analysis of High Speed Mobile Access Networks

    The end-to-end performance evaluation of high speed broadband mobile access networks is the main focus of this work. Novel transport network adaptive flow control and enhanced congestion control algorithms are proposed, implemented, tested and validated using a comprehensive High Speed Packet Access (HSPA) system simulator. The simulation analysis confirms that the aforementioned algorithms are able to provide reliable and guaranteed services for both network operators and end users cost-effectively. Further, two novel analytical models, one for congestion control and the other for the combined flow control and congestion control, both based on Markov chains, are designed and developed to perform the aforementioned analysis efficiently compared with time-consuming detailed system simulations. In addition, the effects of the Long Term Evolution (LTE) transport network (S1 and X2 interfaces) on end-user performance are investigated and analysed by introducing a novel comprehensive MAC scheduling scheme and a novel transport service differentiation model.
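The analytical models mentioned above rest on solving a Markov chain for its stationary distribution. The toy computation below illustrates only that underlying step, with an invented two-state congestion model; the thesis' actual state spaces and transition rates are not reproduced here.

```python
# Illustrative only: stationary distribution of a small discrete-time
# Markov chain via power iteration (pi = pi * P for row-stochastic P).
def steady_state(P, iters=10000):
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy 2-state congestion model: state 0 = uncongested, state 1 = congested.
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = steady_state(P)
print([round(p, 4) for p in pi])  # long-run fraction of time in each state
```

For this chain the balance equation 0.1*pi0 = 0.5*pi1 gives pi = (5/6, 1/6), which is the kind of closed-form check that makes such models much faster than detailed simulation.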

    Cross-layer scheduling and resource allocation for heterogeneous traffic in 3G LTE

    3G Long Term Evolution (LTE) imposes stringent requirements for providing different kinds of traffic with Quality of Service (QoS) guarantees. A major problem is that LTE does not define a standard scheduling algorithm to control the assignment of resources and thereby improve user satisfaction. This remains an open subject, and a variety of scheduling algorithms, often quite challenging and complex, have been proposed. To address this issue, in this paper we investigate how our proposed algorithm improves user satisfaction for heterogeneous traffic, that is, best-effort traffic such as File Transfer Protocol (FTP) and real-time traffic such as Voice over Internet Protocol (VoIP). Our proposed algorithm is formulated using a cross-layer technique. Its goal is to maximize the expected total user satisfaction (total utility) under different constraints. We compare our algorithm with proportional fair (PF), exponential proportional fair (EXP-PF), and U-delay. In simulations, our algorithm improves the performance of real-time traffic in terms of throughput, VoIP delay, and VoIP packet loss ratio, while PF performs better for best-effort traffic in terms of FTP traffic received, FTP packet loss ratio, and FTP throughput.
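A utility-maximizing cross-layer scheduler of the kind described can be approximated by greedily assigning each resource block to the flow with the highest marginal utility. The sketch below assumes a logarithmic utility for best-effort flows and a delay-urgency weight for real-time flows; both shapes are illustrative assumptions, not the paper's exact formulation.

```python
import math

# Hypothetical greedy marginal-utility allocation of resource blocks (RBs)
# across heterogeneous flows: VoIP urgency grows with head-of-line delay,
# FTP utility has diminishing returns (log shape).
def allocate(flows, num_rbs):
    alloc = {f["name"]: 0 for f in flows}

    def marginal(f):
        r = alloc[f["name"]]
        if f["type"] == "voip":
            urgency = f["hol_delay"] / f["deadline"]  # -> 1 near the deadline
            return urgency * f["rate"] / (r + 1)
        # Best-effort: marginal gain of one more RB under log utility.
        return f["rate"] * (math.log(r + 2) - math.log(r + 1))

    for _ in range(num_rbs):
        best = max(flows, key=marginal)
        alloc[best["name"]] += 1
    return alloc

flows = [
    {"name": "voip1", "type": "voip", "rate": 1.0, "hol_delay": 40, "deadline": 50},
    {"name": "ftp1", "type": "ftp", "rate": 5.0},
]
print(allocate(flows, 6))
```

The fast FTP flow takes most blocks early, but as its marginal log-utility decays the near-deadline VoIP flow is guaranteed service, which is the satisfaction trade-off the abstract describes.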

    A Review of MAC Scheduling Algorithms in LTE System

    Recent wireless communication networks rely on the new technology named Long Term Evolution (LTE) to offer high data rate real-time (RT) traffic with better Quality of Service (QoS) for the increasing customer demand. LTE provides low latency for real-time services with high throughput with the help of two-level packet retransmission. Hybrid Automatic Repeat Request (HARQ) retransmission at the Medium Access Control (MAC) layer of LTE networks achieves error-free data transmission. The performance of LTE networks mainly depends on how effectively HARQ is adopted alongside the earlier communication standard, the Universal Mobile Telecommunication System (UMTS). The major challenge in LTE is to balance QoS and fairness among users. Hence, it is essential to design a downlink scheduling scheme that delivers the expected service quality to customers and utilizes system resources efficiently. This paper provides a comprehensive literature review of the LTE MAC layer and six types of QoS/channel-aware downlink scheduling algorithms designed for this purpose. The contributions of this paper are to identify the gap of knowledge in the downlink scheduling procedure and to point out future research directions. Based on the comparative study of the reviewed algorithms, the paper concludes that the EXP Rule scheduler is the most suitable for LTE networks due to its low Packet Loss Ratio (PLR), low Packet Delay (PD), high throughput, fairness and spectral efficiency.
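The EXP Rule favoured by the review combines a spectral-efficiency term with an exponential of normalized head-of-line delay. The sketch below follows the commonly cited form of that rule; the parameter values and user data are illustrative assumptions, not taken from the paper.

```python
import math

# Sketch of the EXP Rule downlink metric: (r_i / avg_r_i) *
# exp(a_i * W_i / (1 + sqrt(mean of a_j * W_j))), where W_i is the
# head-of-line delay and a_i = a / max_delay_i.
def exp_rule(users, a=6.0):
    """Return the user with the highest EXP-Rule metric.

    Each user is (name, inst_rate, avg_rate, hol_delay, max_delay).
    """
    avg_w = sum((a / u[4]) * u[3] for u in users) / len(users)
    def metric(u):
        name, r, avg_r, w, max_d = u
        return (r / avg_r) * math.exp((a / max_d) * w / (1 + math.sqrt(avg_w)))
    return max(users, key=metric)[0]

users = [
    ("near_deadline", 1.0, 1.0, 90.0, 100.0),  # delay close to its budget
    ("fresh_packet", 2.0, 1.0, 5.0, 100.0),    # better channel, small delay
]
print(exp_rule(users))
```

The exponential term dominates as a packet's delay approaches its budget, which is why the rule achieves the low packet-delay and low loss-ratio behaviour the review highlights.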

    Final report on the evaluation of RRM/CRRM algorithms

    Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15, evolved and refined in an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14. Preprint

    Performance Comparison of Dual Connectivity and Hard Handover for LTE-5G Tight Integration in mmWave Cellular Networks

    MmWave communications are expected to play a major role in the fifth generation of mobile networks. They offer a potential multi-gigabit throughput and an ultra-low radio latency, but at the same time suffer from high isotropic pathloss and a coverage area much smaller than that of LTE macrocells. In order to address these issues, highly directional beamforming and a very high-density deployment of mmWave base stations have been proposed. This thesis aims to improve the reliability and performance of the 5G network by studying its tight and seamless integration with the current LTE cellular network. In particular, the LTE base stations can provide a coverage layer for 5G mobile terminals, because they operate on microwave frequencies, which are less sensitive to blockage and have a lower pathloss. This document is a copy of the Master's Thesis carried out by Mr. Michele Polese under the supervision of Dr. Marco Mezzavilla and Prof. Michele Zorzi. It proposes an LTE-5G tight integration architecture, based on mobile terminals' dual connectivity to LTE and 5G radio access networks, and evaluates the new network procedures needed to support it. Moreover, this new architecture is implemented in the ns-3 simulator, and a thorough simulation campaign is conducted in order to evaluate its performance with respect to the baseline of handover between LTE and 5G.

    Active queue management for LTE uplink in eNodeB

    Long-Term Evolution (LTE) is an evolved radio access technology of third-generation mobile communication. It provides high peak bit rates and good end-to-end Quality of Service (QoS). Nevertheless, the wireless link is still likely to be the bottleneck of an end-to-end connection, so a sophisticated method to manage the queues of the mobile terminal is important. For Wideband Code Division Multiple Access (WCDMA), an Active Queue Management (AQM) algorithm managing the buffer based on queue size was proposed. In LTE, due to its widely varying bit rates, queue-size-based approaches are no longer suitable, so earlier studies have proposed a delay-based AQM to provide better performance. For the LTE uplink, the existing algorithm is supposed to be implemented in the User Equipment (UE). However, implementing an AQM in the UE is not mandatory; so far, only a rather simple delay-based queue management method called Packet Data Convergence Protocol (PDCP) discard has been standardized by 3GPP. This method is not adaptive and thus cannot guarantee good throughput. The purpose of this thesis is to develop an AQM method for the LTE uplink to enhance the performance of TCP traffic. To give the network side better control of LTE uplink traffic, the AQM algorithm is proposed to be implemented in the eNodeB. It retains the delay-based approach; to achieve this, a method is developed to estimate the queuing delays of the UE from the eNodeB side. The delay estimation is based on changes in Buffer Status Reports (BSRs) and the amount of data delivered in the eNodeB. In LTE, BSRs are created and transmitted by the UE to report the queue length waiting for uplink transmission. A number of simulations are run to study the performance of the delay estimation and the resulting AQM algorithm. The new AQM algorithm is also compared with other algorithms, i.e., delay-based AQM implemented in the UE, PDCP discard, and drop-from-front. The results show that the delay-based algorithm implemented in the eNodeB performs almost as well as when implemented in the UE. They also show that the advantage of delay-based algorithms over drop-from-front and PDCP discard is evident: they maintain high throughput and low end-to-end delay in most scenarios.
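The core estimation idea, inferring UE queuing delay at the eNodeB from BSR growth and served data, can be sketched as a FIFO of timestamped byte chunks. This is a simplified illustration of the concept under that assumption; class name, units, and numbers are invented, not the thesis' implementation.

```python
from collections import deque

# Simplified sketch: the eNodeB tracks UE queue contents from Buffer Status
# Reports (BSRs) and the uplink data it has served, so it can estimate how
# long the head-of-line bytes have been waiting in the UE.
class DelayEstimator:
    def __init__(self):
        self.queue = deque()  # (arrival_time, bytes) chunks still queued in the UE

    def on_bsr(self, now, reported_bytes):
        """New BSR: any growth beyond what we track arrived just now."""
        tracked = sum(b for _, b in self.queue)
        if reported_bytes > tracked:
            self.queue.append((now, reported_bytes - tracked))

    def on_delivered(self, served_bytes):
        """Uplink data served: drain the oldest bytes first (FIFO)."""
        while served_bytes > 0 and self.queue:
            t, b = self.queue[0]
            take = min(b, served_bytes)
            served_bytes -= take
            if take == b:
                self.queue.popleft()
            else:
                self.queue[0] = (t, b - take)

    def head_delay(self, now):
        """Estimated queuing delay of the head-of-line byte (seconds)."""
        return now - self.queue[0][0] if self.queue else 0.0

est = DelayEstimator()
est.on_bsr(now=0.0, reported_bytes=1000)
est.on_bsr(now=0.1, reported_bytes=1500)  # 500 new bytes reported at t=0.1
est.on_delivered(served_bytes=1000)       # the t=0.0 chunk is fully served
print(est.head_delay(now=0.2))            # head of line is now the t=0.1 chunk
```

An AQM running in the eNodeB could then drop or mark packets whenever this estimated delay exceeds a target, without any UE-side support.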

    LTE Optimization and Resource Management in Wireless Heterogeneous Networks

    Mobile communication technology is evolving at a great pace. The development of the Long Term Evolution (LTE) mobile system by 3GPP is one of the milestones in this direction. This work highlights a few areas in the LTE radio access network where the proposed innovative mechanisms can substantially improve overall LTE system performance. In order to further extend the capacity of LTE networks, an integration with non-3GPP networks (e.g., WLAN, WiMAX) is also proposed in this work, and it is discussed how bandwidth resources should be managed in such heterogeneous networks. The work proposes a comprehensive system architecture as an overlay of the 3GPP-defined SAE architecture, effective resource management mechanisms, and a Linear Programming based analytical solution for the optimal network resource allocation problem. In addition, alternative computationally efficient heuristic algorithms have been designed to achieve near-optimal performance.
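To make the heuristic-versus-LP idea concrete, the sketch below places flows on heterogeneous access networks with a generic first-fit-decreasing heuristic, the classic computationally cheap substitute for solving the allocation LP exactly. This is an invented illustration of that general technique, not the thesis' algorithm.

```python
# Hypothetical near-optimal heuristic: place bandwidth demands on the
# access network (LTE, WLAN, ...) with the most spare capacity, largest
# demands first (first-fit-decreasing reduces fragmentation).
def assign_flows(flows, capacity):
    """flows: {flow: demand}; capacity: {network: capacity}.
    Returns {flow: chosen network}, or None when nothing fits."""
    spare = dict(capacity)
    placement = {}
    for flow, demand in sorted(flows.items(), key=lambda kv: -kv[1]):
        best = max(spare, key=spare.get)
        if spare[best] >= demand:
            spare[best] -= demand
            placement[flow] = best
        else:
            placement[flow] = None  # would require admission control
    return placement

caps = {"LTE": 10.0, "WLAN": 6.0}
flows = {"video": 7.0, "voip": 0.5, "web": 4.0}
print(assign_flows(flows, caps))
```

An exact LP/ILP solver would certify optimality, but a pass like this runs in O(n log n) and is the kind of trade-off the abstract's "near-optimal" heuristics make.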

    Capacity and congestion aware flow control mechanism for efficient traffic aggregation in multi-radio dual connectivity

    © 2021 IEEE. Multi-Radio Dual Connectivity (MR-DC) is a key 3GPP technology that enables traffic aggregation between two base stations (BSs), thus increasing the per-user data rate. However, the schemes for traffic aggregation management in such technology are left to vendor implementation. In this paper we show the importance of using an efficient traffic aggregation method to increase the throughput performance of both TCP- and UDP-based applications in MR-DC operation. Addressing this gap in the state of the art, we propose a cross-layer flow control mechanism, which efficiently aggregates traffic based on the instantaneous available radio resources and buffering delay of both BSs. The aggregation is performed independently of the MR-DC architecture option, MAC scheduler logic, and transport layer protocol in use. By means of exhaustive testbed experiments, we show that the proposed method exceeds the performance of benchmark and state-of-the-art flow control solutions and achieves at least 85% and 95% of the theoretical aggregate throughput expected from the use of MR-DC for TCP and UDP traffic, respectively.
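The splitting policy described, weighting each leg by instantaneous capacity while backing off from a leg whose buffering delay is excessive, can be sketched in a few lines. The policy shape, threshold, and numbers below are assumptions for illustration, not the paper's exact mechanism.

```python
# Illustrative capacity-and-delay-aware traffic split for MR-DC: data for one
# scheduling interval goes to the two base-station legs in proportion to
# capacity, but a leg over its buffering-delay budget gets nothing.
def split(data, legs, delay_budget=0.05):
    """legs: list of (name, capacity_bps, buffer_delay_s).
    Returns {name: bytes} for this interval."""
    weights = {n: (c if d <= delay_budget else 0.0) for n, c, d in legs}
    total = sum(weights.values())
    if total == 0:
        # Both legs congested: fall back to an even split to keep data moving.
        return {n: data / len(legs) for n, _, _ in legs}
    return {n: data * w / total for n, w in weights.items()}

# Master node healthy; secondary node's buffer delay exceeds the budget.
legs = [("MN", 100e6, 0.01), ("SN", 50e6, 0.08)]
print(split(1500, legs))
```

Gating on buffering delay is what prevents the classic dual-connectivity pathology where a slow leg inflates end-to-end delay and causes TCP reordering.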

    Future Mobile Communications: LTE Optimization and Mobile Network Virtualization

    Providing QoS while optimizing the LTE network in a cost-efficient manner is very challenging. Thus, radio scheduling is one of the most important functions in mobile broadband networks. The design of a mobile network radio scheduler must satisfy several objectives: the scheduler needs to maximize radio performance by efficiently distributing the limited radio resources, since the operator's revenue depends on it, and it has to guarantee users' demands in terms of their Quality of Service (QoS). The design of an effective scheduler is therefore a complex task. In this thesis, the author proposes the design of a radio scheduler that is optimized towards QoS guarantees and system performance, called the Optimized Service Aware Scheduler (OSA). The OSA scheduler is tested and analyzed in several scenarios and compared against other well-known schedulers. A novel wireless network virtualization framework is also proposed in this thesis. The framework applies the concepts of wireless virtualization within the 3GPP Long Term Evolution (LTE) system. LTE represents one of the new mobile communication systems just entering the market and was therefore chosen as a case study to demonstrate the proposed wireless virtualization framework. The framework is implemented in the LTE network simulator and analyzed, highlighting the many advantages and the potential gains that the virtualization process can achieve. Two potential gain scenarios resulting from network virtualization in LTE systems are analyzed: multiplexing gain from spectrum sharing, and multi-user diversity gain. Several LTE radio analytical models based on Continuous Time Markov Chains (CTMC) are designed and developed in this thesis. These models target three different time-domain radio schedulers: Maximum Throughput (MaxT), Blind Equal Throughput (BET), and the Optimized Service Aware Scheduler (OSA). The models are used to obtain results quickly (in seconds to minutes), whereas simulations can take considerably longer, such as hours or sometimes even days. The model results are compared against the simulation results, and it is shown that they provide a good match; the models can thus be used for fast radio dimensioning. Overall, the concepts, investigations, and analytical models presented in this thesis can help mobile network operators optimize their radio networks and provide the necessary means to support QoS differentiation and guarantees for services. In addition, the network virtualization concepts provide an excellent tool that can enable operators to share their resources and reduce their costs, and give smaller operators a better chance to enter the market.
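The two classical time-domain schedulers modeled alongside OSA differ in one line of selection logic, which is worth seeing side by side. The sketch below contrasts their standard textbook definitions; the OSA scheduler itself is the thesis' contribution and is not reproduced here, and the user data is invented.

```python
# Toy contrast of the two classical time-domain schedulers the CTMC models
# cover. MaxT maximizes cell capacity; BET equalizes user throughput.
def max_throughput(users):
    """MaxT: serve the user with the best instantaneous rate."""
    return max(users, key=lambda u: u["inst_rate"])["name"]

def blind_equal_throughput(users):
    """BET: serve the user with the lowest past average throughput,
    regardless of current channel quality (hence 'blind')."""
    return min(users, key=lambda u: u["avg_tput"])["name"]

users = [
    {"name": "cell_center", "inst_rate": 20.0, "avg_tput": 15.0},
    {"name": "cell_edge", "inst_rate": 3.0, "avg_tput": 1.0},
]
print(max_throughput(users))          # capacity-oriented choice
print(blind_equal_throughput(users))  # fairness-oriented choice
```

A service-aware scheduler like OSA sits between these extremes, which is why modeling all three gives useful bounds for radio dimensioning.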