
    Performance Analysis of PCFICH and PDCCH LTE Control Channels

    Control channels play a key role in the evaluation of mobile system performance. The purpose of our paper is to evaluate the performance of the control channel implementation in the Long Term Evolution (LTE) system. The paper deals with the simulation of the complete signal processing chain for the Physical Control Format Indicator Channel (PCFICH) and the Physical Downlink Control Channel (PDCCH) in LTE Release 8. We implemented the complete signal processing chain for the downlink control channels as an extension of an existing MATLAB LTE downlink simulator. The paper presents the results of a simulation-based performance analysis of the PCFICH and PDCCH control channels under various channel conditions; the results can be compared with the performance of the data channels.
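
    A minimal sketch of the PCFICH building blocks may help fix ideas. The snippet below (Python/NumPy, not the authors' MATLAB chain) encodes the Control Format Indicator (CFI) with the 32-bit codewords of 3GPP TS 36.212, maps them to QPSK symbols, and recovers the CFI by maximum-likelihood correlation; scrambling, layer mapping, and resource-element mapping are omitted, and the AWGN channel is only a stand-in for the simulator's fading models.

```python
import numpy as np

# 32-bit CFI codewords per 3GPP TS 36.212, Table 5.3.4-1
# (the repeating patterns 011, 101, 110, truncated to 32 bits).
CFI_CODEWORDS = {
    cfi: np.array((pattern * 11)[:32])
    for cfi, pattern in ((1, [0, 1, 1]), (2, [1, 0, 1]), (3, [1, 1, 0]))
}

def qpsk_mod(bits):
    """Map bit pairs to QPSK symbols (TS 36.211 constellation)."""
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def encode_cfi(cfi):
    """CFI -> 16 QPSK symbols (scrambling and RE mapping omitted)."""
    return qpsk_mod(CFI_CODEWORDS[cfi])

def decode_cfi(rx):
    """ML detection: pick the CFI whose reference waveform correlates best."""
    return max(CFI_CODEWORDS,
               key=lambda cfi: np.real(np.vdot(qpsk_mod(CFI_CODEWORDS[cfi]), rx)))

# Toy AWGN check (the channel model here is a stand-in, not the paper's):
rng = np.random.default_rng(0)
rx = encode_cfi(2) + 0.5 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))
print(decode_cfi(rx))  # -> 2 with high probability at this noise level
```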

    Scheduling Policies in Time and Frequency Domains for LTE Downlink Channel: A Performance Comparison

    A key feature of the Long Term Evolution (LTE) system is that the packet scheduler can make use of the channel quality information (CQI) that is periodically reported by user equipment, either in aggregate form for the whole downlink channel or separately for each available subchannel. This mechanism allows wide discretion in resource allocation and has thus prompted the development of several scheduling algorithms with different purposes. It is therefore of great interest to compare the performance of such algorithms under different scenarios. Here, we carry out a thorough performance analysis of different scheduling algorithms for saturated User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) traffic sources, considering both the time- and frequency-domain versions of the schedulers and both flat and frequency-selective channels. The analysis makes it possible to appreciate the differences among the scheduling algorithms and to assess the performance gain, in terms of cell capacity, user fairness, and packet service time, obtained by exploiting the richer, but heavier, information carried by subchannel CQI. An important part of this analysis is a throughput guarantee scheduler, which we propose in this paper. The analysis reveals that the proposed scheduler provides a good tradeoff between cell capacity and fairness for both TCP and UDP traffic sources.
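
    To make the time/frequency distinction concrete, here is a compact sketch of the proportional fair (PF) metric, one of the classical schedulers typically included in such comparisons; the variable names and the EWMA constant are illustrative, not taken from the paper. In the frequency-domain version each resource block (RB) is assigned from its own CQI-derived rate, while the time-domain version is the special case of a single wideband column.

```python
import numpy as np

def pf_schedule(rates, avg_thr):
    """Frequency-domain PF: assign each resource block (RB) to the user
    maximizing instantaneous rate / average throughput, computed per RB
    from the subchannel CQI.  rates: (n_users, n_rbs); avg_thr: (n_users,).
    With a single column (wideband CQI) this reduces to time-domain PF."""
    return np.argmax(rates / avg_thr[:, None], axis=0)  # winning user per RB

def update_avg_thr(avg_thr, winners, rates, alpha=0.05):
    """EWMA update of past throughput, the denominator of the PF metric."""
    new = (1 - alpha) * avg_thr
    for rb, user in enumerate(winners):
        new[user] += alpha * rates[user, rb]
    return new

# One TTI with 3 users and 4 RBs (the rate numbers are arbitrary):
rates = np.array([[2.0, 1.0, 3.0, 1.5],
                  [1.0, 2.5, 1.0, 2.0],
                  [1.5, 1.5, 1.5, 1.5]])
avg = np.ones(3)
winners = pf_schedule(rates, avg)
avg = update_avg_thr(avg, winners, rates)
print(winners, avg)
```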

    End-to-End Simulation of 5G mmWave Networks

    Due to its potential for multi-gigabit and low-latency wireless links, millimeter wave (mmWave) technology is expected to play a central role in fifth-generation (5G) cellular systems. While there has been considerable progress in understanding the mmWave physical layer, innovations will be required at all layers of the protocol stack, in both the access and the core network. Discrete-event network simulation is essential for end-to-end, cross-layer research and development. This paper provides a tutorial on a recently developed full-stack mmWave module integrated into the widely used open-source ns-3 simulator. The module includes a number of detailed statistical channel models as well as the ability to incorporate real measurements or ray-tracing data. The Physical (PHY) and Medium Access Control (MAC) layers are modular and highly customizable, making it easy to integrate algorithms or compare Orthogonal Frequency Division Multiplexing (OFDM) numerologies, for example. The module is interfaced with the core network of the ns-3 Long Term Evolution (LTE) module for full-stack simulations of end-to-end connectivity, and advanced architectural features, such as dual connectivity, are also available. To facilitate the understanding of the module, and to verify its correct functioning, we provide several examples that show the performance of the custom mmWave stack as well as custom congestion control algorithms designed specifically for efficient utilization of the mmWave channel.
    Comment: 25 pages, 16 figures, submitted to IEEE Communications Surveys and Tutorials (revised Jan. 2018)

    Cross-layer scheduling and resource allocation for heterogeneous traffic in 3G LTE

    3G Long Term Evolution (LTE) imposes stringent requirements for providing different kinds of traffic with their Quality of Service (QoS) characteristics. A major problem is that LTE does not mandate any particular scheduling algorithm to control the assignment of resources and thereby improve user satisfaction. This has become an open research topic, and various scheduling algorithms, often quite complex, have been proposed. To address this issue, in this paper we investigate how our proposed algorithm improves user satisfaction for heterogeneous traffic, that is, best-effort traffic such as File Transfer Protocol (FTP) and real-time traffic such as Voice over Internet Protocol (VoIP). Our proposed algorithm is formulated using a cross-layer technique, and its goal is to maximize the expected total user satisfaction (total utility) under different constraints. We compared our proposed algorithm with proportional fair (PF), exponential proportional fair (EXP-PF), and U-delay. In simulations, our proposed algorithm improved the performance of real-time traffic in terms of throughput, VoIP delay, and VoIP packet loss ratio, while PF improved the performance of best-effort traffic in terms of FTP traffic received, FTP packet loss ratio, and FTP throughput.
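
    Although the abstract does not spell out the utility functions, the generic cross-layer pattern it describes (serve the flow with the largest marginal utility times achievable rate) can be sketched as follows; the log utility for FTP and the delay-based urgency for VoIP are illustrative shapes, and all field names are hypothetical.

```python
import math

def marginal_utility(flow):
    """Illustrative per-flow marginal utility: log utility for best-effort
    FTP (its derivative 1/throughput yields PF-like behaviour) and an
    urgency that grows as a VoIP packet's head-of-line delay approaches
    its delay budget. The shapes are assumptions, not the paper's."""
    if flow["kind"] == "ftp":
        return 1.0 / max(flow["avg_thr"], 1e-9)
    return math.exp(flow["hol_delay"] / flow["delay_budget"])

def schedule(flows, rate_of):
    """Utility-gradient rule: pick the flow maximizing U'(flow) * rate."""
    return max(flows, key=lambda f: marginal_utility(f) * rate_of[f["name"]])

flows = [
    {"name": "ftp1",  "kind": "ftp",  "avg_thr": 2.0},
    {"name": "voip1", "kind": "voip", "hol_delay": 0.04, "delay_budget": 0.05},
]
print(schedule(flows, {"ftp1": 3.0, "voip1": 1.0})["name"])  # VoIP wins here
```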

    Optimized Performance Evaluation of LTE Hard Handover Algorithm with Average RSRP Constraint

    A hard handover mechanism is adopted in 3GPP Long Term Evolution (LTE) in order to reduce the complexity of the LTE network architecture. This mechanism, however, comes with degraded system throughput and higher system delay. This paper proposes a new handover algorithm, the LTE Hard Handover Algorithm with Average Reference Signal Received Power (RSRP) Constraint (LHHAARC), designed to minimize the number of handovers and the system delay while maximizing the system throughput. The optimized system performance of LHHAARC is evaluated and compared with three well-known handover algorithms via computer simulation. The simulation results show that LHHAARC outperforms all three, achieving a lower average number of handovers per UE per second and a shorter total system delay while maintaining a higher total system throughput.
    Comment: 16 pages, 9 figures, International Journal of Wireless & Mobile Networks (IJWMN)
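
    The abstract does not give the exact LHHAARC decision rule, but the general pattern of an average-RSRP-constrained hard handover can be sketched as follows; the window length and hysteresis margin are hypothetical parameters, not values from the paper.

```python
from collections import deque

class AvgRsrpHandover:
    """Illustrative hard-handover decision using windowed average RSRP:
    hand over only when a neighbour's average RSRP exceeds the serving
    cell's by a hysteresis margin, which suppresses ping-pong handovers
    caused by momentary fading dips."""

    def __init__(self, window=10, hysteresis_db=3.0):
        self.hyst = hysteresis_db
        self.window = window
        self.samples = {}  # cell id -> recent RSRP measurements (dBm)

    def report(self, cell, rsrp_dbm):
        self.samples.setdefault(cell, deque(maxlen=self.window)).append(rsrp_dbm)

    def avg_rsrp(self, cell):
        s = self.samples.get(cell)
        return sum(s) / len(s) if s else float("-inf")

    def decide(self, serving, neighbours):
        """Return the cell to camp on after this measurement round."""
        best = max(neighbours, key=self.avg_rsrp)
        if self.avg_rsrp(best) > self.avg_rsrp(serving) + self.hyst:
            return best   # trigger hard handover
        return serving    # constraint not met: stay

# Example: neighbour B must beat serving cell A by 3 dB on average to win.
ho = AvgRsrpHandover(window=3)
for rsrp_a, rsrp_b in [(-95, -93), (-96, -91), (-97, -90)]:
    ho.report("A", rsrp_a); ho.report("B", rsrp_b)
print(ho.decide("A", ["B"]))  # -> "B"
```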

    Multi-threaded Simulation of 4G Cellular Systems within the LTE-Sim Framework

    Nowadays, an ever increasing number of researchers and industries are putting great effort into the design and implementation of protocols, algorithms, and network architectures targeted at the emerging 4G cellular technology. In this context, multi-core/multi-processor simulation tools can accelerate their activities by drastically reducing the time required to simulate complex scenarios. Unfortunately, today's available tools are mostly single-threaded and cannot exploit the performance gain offered by parallel programming approaches. To bridge this gap, we have significantly upgraded the LTE-Sim framework by implementing a concurrent scheduling algorithm, namely the Multi-Master Scheduler, aimed at efficiently handling events in parallel while guaranteeing the correct execution of the simulation itself. Experimental results demonstrate the effectiveness of our proposal and the performance gain that can be achieved with respect to other classical event scheduling algorithms.
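
    As a structural illustration of concurrent event handling (not LTE-Sim's actual C++ Multi-Master Scheduler), the sketch below runs all events that share a timestamp in a worker pool and then barriers before advancing simulated time, which preserves causality as long as same-time events are independent; a real implementation must also synchronize event insertion from worker threads.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

class ParallelEventScheduler:
    """Run same-timestamp events concurrently, with a barrier between
    timestamps so no event ever sees effects from its own future.
    Python threads illustrate the structure only; in a simulator the
    speedup comes from native threads on multi-core hardware."""

    def __init__(self, workers=4):
        self._queue = []   # min-heap of (time, seq, callback)
        self._seq = 0      # tie-breaker so the heap never compares callbacks
        self._pool = ThreadPoolExecutor(max_workers=workers)

    def schedule(self, time, callback):
        heapq.heappush(self._queue, (time, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._queue:
            now = self._queue[0][0]
            batch = []
            while self._queue and self._queue[0][0] == now:
                batch.append(heapq.heappop(self._queue)[2])
            # execute the whole timestamp batch in parallel, then barrier
            list(self._pool.map(lambda cb: cb(), batch))
        self._pool.shutdown()

sim = ParallelEventScheduler()
for i in range(4):
    sim.schedule(1.0, lambda i=i: print(f"t=1.0 event {i}"))
sim.schedule(2.0, lambda: print("t=2.0 event"))
sim.run()
```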