23 research outputs found

    Final report on the evaluation of RRM/CRRM algorithms

    Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15 and evolved and refined in an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14.

    Estudio y optimización de los procedimientos de adaptación al enlace en HSDPA [Study and optimization of link adaptation procedures in HSDPA]

    HSDPA (High Speed Downlink Packet Access) is an evolution of UMTS created to increase downlink transmission capacity. Its improvement is based on the efficient management of a shared channel by the base station (employing a packet scheduler), the use of advanced retransmission and combining mechanisms (hybrid ARQ), and the availability of high-order modulations (16QAM and 64QAM). The latter two features would be worthless without good link adaptation procedures that adjust transmission parameters according to the radio link quality. This thesis deals with the study and optimization of link adaptation mechanisms in HSDPA. Two strategies are followed. First, a generic link adaptation scheme is studied with the aim of reaching general conclusions that transfer readily to particular systems such as HSDPA. Second, a more detailed study is carried out for HSDPA, providing solutions to specific problems such as link adaptation failures at low load. Martín-Sacristán Gandía, D. (2007). Estudio y optimización de los procedimientos de adaptación al enlace en HSDPA. http://hdl.handle.net/10251/12494
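    As context for the link-adaptation mechanisms the thesis optimizes, the following is a minimal sketch of CQI-driven modulation and coding selection in an HSDPA-like downlink. The thresholds and table entries are illustrative assumptions, not values taken from the thesis or from the 3GPP CQI tables; only the threshold-lookup structure is the point.

```python
# Illustrative CQI -> transport-format mapping for an HSDPA-like downlink.
# Thresholds and entries are hypothetical; real mappings are standardized
# per UE category in 3GPP TS 25.214.
MCS_TABLE = [
    # (min_cqi, modulation, nominal_code_rate)
    (0,  "QPSK",  1 / 3),
    (16, "16QAM", 1 / 2),
    (26, "64QAM", 3 / 4),
]

def select_mcs(cqi: int):
    """Pick the highest-order scheme whose CQI threshold the report satisfies."""
    chosen = MCS_TABLE[0]
    for entry in MCS_TABLE:
        if cqi >= entry[0]:
            chosen = entry
    return chosen

print(select_mcs(18))  # -> (16, '16QAM', 0.5) under these assumed thresholds
```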

    Packet scheduling in satellite HSDPA networks.

    Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2010. The continuous growth in wireless networks shows no sign of slowing down as new services, new technologies and new mobile users continue to emerge. Satellite networks are expected to complement terrestrial networks and be a valid option for providing broadband communications services to both fixed and mobile users in scenarios where terrestrial networks cannot be used for technical or economic reasons. In emerging satellite networks, where users with traffic demands ranging from multimedia and voice to data must share limited capacity, Radio Resource Management (RRM) is considered one of the most significant and challenging aspects of providing an acceptable quality of service that meets the requirements of the different mobile users. This dissertation considers packet scheduling in the Satellite High Speed Downlink Packet Access (S-HSDPA) network. Its main focus is to propose a new cross-layer packet scheduling scheme (packet scheduling being one of the functions of RRM) called the Queue Aware Channel Based (QACB) Scheduler. The proposed scheduler, which attempts to sustain the quality-of-service requirements of different traffic requests, improves system performance compared to existing schedulers. The performance comparison in terms of throughput, delay and fairness is carried out through simulations; these metrics were chosen because they are three major performance indices used in wireless communications. Due to the long propagation delay in HSDPA via GEO satellite, there is a misalignment between the instantaneous channel condition of the mobile user and the one reported to the base station (Node B) in S-HSDPA. This reduces the effectiveness of channel-based packet schedulers and leads either to under-utilization of resources or to loss of packets. Hence, this dissertation investigates the introduction of a Signal-to-Noise Ratio (SNR) margin to mitigate the effect of the long propagation delay on the performance of S-HSDPA, and determines the appropriate SNR margin using both a semi-analytical and a simulation approach. The results show that an SNR margin of 1.5 dB produces the best performance. Finally, the dissertation investigates the effect of the Radio Link Control (RLC) transmission modes, Acknowledged Mode (AM) and Unacknowledged Mode (UM), on different traffic types and schedulers in S-HSDPA. The Proportional Fair (PF) scheduler and the proposed QACB scheduler are considered in this investigation. The results show that traffic types are sensitive to the RLC transmission mode and that the QACB scheduler outperforms the PF scheduler in both RLC modes considered.
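    The abstract does not give the exact QACB priority metric, so the sketch below is a plausible reconstruction under stated assumptions, not the dissertation's actual formula: a proportional-fair channel term scaled by queue occupancy, with the stale reported SNR backed off by the 1.5 dB margin the dissertation found to perform best.

```python
import math

def effective_rate(reported_snr_db: float, margin_db: float = 1.5) -> float:
    """Back off the (stale) reported SNR by a margin before rate estimation.
    The 1.5 dB default is the value the dissertation found to work best."""
    snr = 10 ** ((reported_snr_db - margin_db) / 10)
    return math.log2(1 + snr)  # Shannon-style rate proxy in bit/s/Hz

def qacb_metric(reported_snr_db, avg_throughput, queue_bytes, queue_limit):
    """Hypothetical queue-aware channel-based priority: a proportional-fair
    channel term scaled by how full the user's transmission queue is."""
    pf_term = effective_rate(reported_snr_db) / max(avg_throughput, 1e-9)
    queue_term = min(queue_bytes / queue_limit, 1.0)
    return pf_term * queue_term

# Schedule the user with the largest metric this TTI.
users = [
    {"snr": 12.0, "avg_tput": 1.0, "q": 4000, "qmax": 8000},
    {"snr": 7.0,  "avg_tput": 0.3, "q": 7500, "qmax": 8000},
]
best = max(users, key=lambda u: qacb_metric(u["snr"], u["avg_tput"], u["q"], u["qmax"]))
print(best)
```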

    Analysis of Factors Affecting the Use of the 64QAM Modulation on the Long-Term Evolution Network by Using Random Forest Method

    Internet traffic on cellular telecommunication networks is increasing very rapidly. Good LTE (Long-Term Evolution) network performance is very important for any telecommunication operator to maintain customer satisfaction; poor network performance can cause customers to switch to other operators. One of the indicator variables for observing the radio quality of an LTE network is the penetration of 64QAM modulation. 64QAM can carry higher bit rates with lower power usage, and it is used only when the Channel Quality Indicator (CQI) is very good. Network quality can be improved by adding new base stations (BTS) or by optimizing existing ones. Adding a new BTS increases coverage, quality and capacity, but the cost is high and construction takes a long time, whereas optimizing an existing BTS, for example by purchasing LTE features, keeps costs relatively low. To increase the penetration of 64QAM modulation, the other variables affecting it must be analyzed. The traditional way of improving this Key Performance Indicator (KPI) requires experts and professionals, is often inaccurate, and takes a long time to find the causal factors. To solve this problem, the random forest method is proposed. By knowing which variables significantly affect network quality, the capital expenditure of cellular operators on network quality improvement becomes more effective and efficient, because investment is focused on the influential variables: LTE network features are purchased only where they relate to these variables. As the result of this study, we construct a CQI improvement flow based on random forest classification, which produces feature/variable importance rankings.
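    A minimal sketch of the proposed approach with scikit-learn, using synthetic data and hypothetical KPI column names (avg_cqi, dl_prb_util, etc.); in practice the samples would come from network counter exports rather than a random generator.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-cell KPI table; real data would be loaded from
# network-counter exports, e.g. pd.read_csv("cell_kpis.csv").
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "avg_cqi": rng.uniform(3, 15, n),
    "dl_prb_util": rng.uniform(0, 1, n),
    "avg_sinr_db": rng.normal(10, 5, n),
    "inter_site_km": rng.uniform(0.2, 5, n),
})
# Toy label: high 64QAM penetration mostly where CQI/SINR are good.
df["high_64qam"] = ((df.avg_cqi + 0.5 * df.avg_sinr_db) > 15).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="high_64qam"), df["high_64qam"], random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("accuracy:", rf.score(X_test, y_test))

# Rank the variables that most influence 64QAM penetration.
for name, imp in sorted(zip(X_train.columns, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```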

    Outage-based ergodic link adaptation for fading channels with delayed CSIT

    Link adaptation, in which the transmission data rate is dynamically adjusted according to channel variation, is often used to deal with the time-varying nature of the wireless channel. When channel state information at the transmitter (CSIT) is delayed by more than the channel coherence time due to feedback delay, however, the benefit of link adaptation can be lost if this delay is not taken into account. One way to deal with such delay is to predict the current channel quality from the available observations, but this inevitably results in prediction error. In this paper, an algorithm with a different viewpoint is proposed. Using the conditional cdf of the current channel given the observation, the outage probability can be computed for each value of the transmission rate R. By assuming that the transmission block error rate (BLER) is dominated by the outage probability, the expected throughput can also be computed, and R can be chosen to maximize it. The proposed scheme is designed to be optimal if the channel is ergodic, and it is shown to considerably outperform conventional schemes in a certain Rayleigh fading channel model.
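    A sketch of this rate selection under the paper's Rayleigh-fading setting: with a delayed observation h_obs and time correlation rho (Jakes model), the current gain is conditionally CN(rho*h_obs, 1-rho^2), so |h|^2 follows a scaled noncentral chi-square law and the outage probability has a closed form. The parameter values below (Doppler, delay, SNR) are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.stats import ncx2
from scipy.special import j0

def outage_prob(rate, h_obs, snr, rho):
    """P(log2(1 + snr*|h|^2) < rate), with h | h_obs ~ CN(rho*h_obs, 1-rho^2).
    |h|^2 / (sigma2/2) is noncentral chi-square with 2 dof."""
    sigma2 = 1.0 - rho**2                          # conditional variance
    nc = 2.0 * (abs(rho * h_obs) ** 2) / sigma2    # noncentrality 2|mu|^2/sigma2
    x = (2.0 ** rate - 1.0) / snr                  # outage threshold on |h|^2
    return ncx2.cdf(2.0 * x / sigma2, df=2, nc=nc)

def pick_rate(h_obs, snr, rho, grid=np.linspace(0.1, 10, 200)):
    """Choose R maximizing expected throughput R * (1 - P_out(R)),
    i.e. assuming BLER is dominated by the outage probability."""
    tput = grid * (1.0 - outage_prob(grid, h_obs, snr, rho))
    return grid[np.argmax(tput)]

# Example: 5 ms feedback delay, 50 Hz Doppler, delayed channel estimate h_obs.
rho = j0(2 * np.pi * 50 * 5e-3)   # Jakes correlation between delayed and current gain
print(pick_rate(h_obs=1.2 + 0.3j, snr=10.0, rho=rho))
```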

    Macro Diversity Combining Optimization in HSPA flat architecture

    This thesis, Macro Diversity Combining Optimization in High Speed Packet Access (HSPA) flat architecture, concentrates on analyzing implementation alternatives for Macro Diversity Combining (MDC) in a flat architecture. When centralized elements such as the Radio Network Controller (RNC) are removed from the architecture, centralized functionalities need to be implemented differently. One of the most important centralized functionalities is Macro Diversity Combining, which collects traffic from multiple base stations and improves radio performance such as bit rate and coverage area. When this functionality is implemented inside the base station, traffic needs to be sent between base stations; this creates new requirements for the transport network and potentially also increases the operator's transport cost. In short, if MDC is fully implemented, traffic between base stations is maximized; conversely, if MDC is left out, radio performance is reduced. The thesis starts with an overview of the Universal Mobile Telecommunication System (UMTS) network, discussing the architecture of the UMTS packet-switched network and the main Radio Resource Management (RRM) functionalities: power control and handover control. A deeper look is taken at the evolution of 3GPP packet access, namely High Speed Downlink Packet Access (HSDPA) and High Speed Uplink Packet Access (HSUPA), and the relevant HSDPA cell change and HSUPA handovers are covered, along with a short glance at the gains introduced by MDC. The thesis then evaluates four proposals presented in 3GPP to improve MDC with regard to transport network utilization, implementation complexity, radio performance, latency, and the number of additions to existing 3GPP specifications. Finally, an implementation alternative for MDC optimization in a flat architecture is presented based on the 3GPP proposals.
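    The thesis evaluates several 3GPP proposals rather than a single combining rule; as one concrete illustration of what MDC does, below is a minimal sketch of per-TTI selection combining across base stations, with a hypothetical frame-copy interface that is not taken from the thesis.

```python
def select_combine(copies):
    """Per-TTI selection combining across base stations: keep a frame copy
    that passed its CRC, preferring the one with the highest measured SNR.
    `copies` is a list of (crc_ok, snr_db, payload) tuples; a hypothetical
    interface, since the thesis compares several 3GPP proposals instead."""
    good = [c for c in copies if c[0]]
    if not good:
        return None                 # all copies failed; trigger retransmission
    return max(good, key=lambda c: c[1])[2]

frame = select_combine([(True, 4.2, b"copy-a"), (False, 9.1, b"copy-b"), (True, 6.7, b"copy-c")])
print(frame)  # -> b'copy-c': best-SNR copy among those that passed CRC
```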

    On the Feasibility of Utilizing Commercial 4G LTE Systems for Mission-Critical IoT Applications

    Emerging Internet of Things (IoT) applications and services, including e-healthcare, intelligent transportation systems, the smart grid, smart homes, smart cities and smart workplaces, are poised to become part of every aspect of our daily lives. The IoT will enable billions of sensors, actuators, and smart devices to be interconnected and managed remotely via the Internet. Cellular-based Machine-to-Machine (M2M) communication is one of the key IoT enabling technologies, with huge market potential for cellular service providers deploying Long Term Evolution (LTE) networks. There is an emerging consensus that Fourth Generation (4G) and 5G cellular technologies will enable and support these applications, as they will provide global mobile connectivity to the anticipated tens of billions of things/devices attached to the Internet. Many vital utilities and service industries are considering the use of commercially available LTE cellular networks to provide critical connections to users, sensors, and smart M2M devices on their networks, due to LTE's low cost and availability. Many of these emerging IoT applications are mission-critical, with stringent requirements in terms of reliability and end-to-end (E2E) delay bound. The delay bound specified for each application refers to the device-to-device latency, defined as the combined delay resulting from both application-level processing time and communication latency. Each IoT application has its own distinct performance requirements in terms of latency, availability, and reliability. Typically, the uplink (UL) traffic of these IoT applications dominates the network traffic (much higher than the total downlink (DL) traffic), so efficient LTE UL scheduling algorithms at the base station ("Evolved NodeB (eNB)" per 3GPP standards) are especially critical for M2M applications. LTE, however, was not originally intended for IoT applications, where the traffic generated by M2M devices has totally different characteristics from traditional Human-to-Human (H2H) voice/video and data communications. In addition, due to the anticipated massive deployment of M2M devices and the limited available radio spectrum, efficient radio resource management (RRM) and UL scheduling pose a serious challenge in adopting LTE for M2M communications. The existing LTE quality of service (QoS) standard and UL scheduling algorithms were mainly optimized for H2H services and cannot accommodate the wide range of diverging performance requirements of M2M-based IoT applications. Though 4G LTE networks can support a very low Packet Loss Ratio (PLR) at the physical layer, such reliability comes at the expense of latency increased from tens to hundreds of milliseconds due to the aggressive use of retransmission mechanisms. Current 4G LTE technologies may satisfy a single performance metric of these mission-critical applications, but not the simultaneous support of ultra-high reliability and low latency as well as high data rates. Numerous QoS-aware LTE UL scheduling algorithms supporting M2M applications as well as H2H services have been reported in the literature. Most of these algorithms, however, were not intended to support mission-critical IoT applications, as they are not latency-aware. In addition, these algorithms are simplified and do not fully conform to LTE's signaling and QoS standards.
    For instance, a common practice is the assumption that the time-domain UL scheduler located at the eNB prioritizes user equipment (UE)/M2M device connection requests based on the head-of-line (HOL) packet waiting time at the UE/device transmission buffer. However, as will be detailed below, the LTE standard does not provide a mechanism that enables UEs/devices to inform the eNB uplink scheduler of the waiting time of uplink packets residing in their transmission buffers. The Ultra-Reliable Low-Latency Communication (URLLC) paradigm has recently emerged to enable a new range of mission-critical applications and services, including industrial automation, real-time operation and control of the smart grid, and inter-vehicular communication for improved safety and self-driving vehicles. URLLC is one of the most innovative 5G New Radio (NR) features. URLLC and its supporting 5G NR technologies might become a commercial reality in the future, but it may be a rather distant future; deploying viable mission-critical IoT applications would then have to be postponed until URLLC and 5G NR are commercially feasible. Because IoT applications, especially mission-critical ones, will have a significant impact on the welfare of all humanity, their immediate or near-term deployment is of utmost importance. It is the purpose of this thesis to explore whether current commercial 4G LTE cellular networks have the potential to support some of the emerging mission-critical IoT applications. The smart grid is selected as an illustrative IoT example because it is one of the most demanding IoT applications, with use cases ranging from mission-critical applications that have stringent E2E latency and reliability requirements to those that require support of a massive number of connected M2M devices with relaxed latency and reliability requirements. The purpose of the thesis is twofold. First, a user-friendly MATLAB-based open-source software package to model commercial 4G LTE systems is developed. In contrast to mainstream commercial LTE software packages, the developed package is specifically tailored to accurately model mission-critical IoT applications and, above all, fully conforms to commercial 4G LTE signaling and QoS standards. Second, utilizing the developed software package, we present a detailed and realistic LTE UL performance analysis to assess the feasibility of commercial 4G LTE cellular networks for supporting such a diverse set of emerging IoT applications as well as typical H2H services.
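    As an illustration of the HOL-delay-based prioritization that the literature commonly assumes (and which, as the abstract notes, the LTE standard does not actually signal to the eNB), here is a minimal sketch with hypothetical field names; it is not the thesis's scheduler.

```python
def hol_priority(ue, now):
    """Urgency of a UE's head-of-line packet relative to its delay budget.
    Assumes the scheduler somehow knows hol_arrival, which, per the abstract,
    the LTE standard does not actually provide to the eNB."""
    waited = now - ue["hol_arrival"]
    return waited / ue["delay_budget"]    # >= 1.0 means the budget is exhausted

def schedule_uplink(ues, n_rbs, now):
    """Grant resource blocks to the most delay-urgent connection requests."""
    ranked = sorted(ues, key=lambda u: hol_priority(u, now), reverse=True)
    grants = []
    for ue in ranked:
        if n_rbs <= 0:
            break
        take = min(ue["rb_demand"], n_rbs)
        grants.append((ue["id"], take))
        n_rbs -= take
    return grants

ues = [{"id": 1, "hol_arrival": 0.00, "delay_budget": 0.10, "rb_demand": 6},
       {"id": 2, "hol_arrival": 0.04, "delay_budget": 0.05, "rb_demand": 4}]
print(schedule_uplink(ues, n_rbs=8, now=0.08))  # UE 2 is more urgent, granted first
```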

    LTE Performance Analysis on 800 and 1800 MHz Bands

    Long Term Evolution (LTE) is a high-speed wireless technology based on OFDM. Unlike its predecessors, its bandwidth can be scaled from 1.4 MHz to 20 MHz. The maximum theoretical downlink throughput of an LTE network can exceed 300 Mbps, though practical values are limited by channel overheads, path loss and cell loading. Its ability to deliver high throughput rests on its OFDM radio access technology and its use of wide bandwidths. Another important feature of LTE is its deployment in multiple spectrum bands. The primary focus of this thesis is the 800 MHz band and its comparison with the 1800 MHz band and with UMTS coverage. Coverage, capacity and throughput are the essential dimensions studied. A test network was set up for LTE measurements, and the primary measurement parameters, RSRP, RSRQ, SINR and throughput, were observed for different measurement cases. The measurement files were analysed from different perspectives to draw conclusions about the coverage of the network. The basic LTE radio parameters RSRP, RSRQ and SINR degrade as the UE moves towards the cell edge, in a pattern similar to the free-space loss model. These parameters are interrelated and eventually prove decisive for the downlink throughput, which follows the trend of the other radio parameters and decreases towards the cell edge. The measured values have been compared to the theoretical results defined by the link budget and Shannon's limit; the comparison shows that the measured values are confined within the theoretical constraints. These constraints, along with the minimum requirements set by the operator, have been used to measure the performance of the sites. A similar analysis was performed on the LTE 1800 network and the UMTS 900 network, and the results were compared to the coverage of LTE 800. A steeper attenuation slope was observed with LTE 1800 than with LTE 800, limiting its coverage area; comparison of the radio parameters RSRP, RSRQ and SINR confirms the coverage difference and its consequence for downlink throughput. The performance of UMTS 900, however, was not much different from LTE 800 in terms of coverage. The measurements were carried out on an LTE test network in Kuusamo, Finland, and on the commercial UMTS network operated by TeliaSonera, for the comparison of LTE 800 with LTE 1800 and UMTS 900.
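    A minimal sketch of the two theoretical yardsticks used in the analysis, Shannon's limit and free-space path loss; the 25 % overhead factor is an illustrative assumption, not a figure from the thesis.

```python
import numpy as np

def shannon_mbps(bandwidth_mhz, sinr_db, overhead=0.25):
    """Upper bound on DL throughput: B*log2(1+SINR), reduced by an assumed
    25 % control/reference-signal overhead (illustrative figure)."""
    sinr = 10 ** (sinr_db / 10)
    return bandwidth_mhz * np.log2(1 + sinr) * (1 - overhead)

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss, the model the measured attenuation trends follow."""
    return 20 * np.log10(distance_km) + 20 * np.log10(freq_mhz) + 32.44

# Check a measured sample against the Shannon bound, and compare the bands:
print(shannon_mbps(bandwidth_mhz=10, sinr_db=18))     # Mbps ceiling at this SINR
print(fspl_db(2.0, 1800) - fspl_db(2.0, 800))         # ~7 dB extra loss at 1800 MHz
```

    The ~7 dB band difference at equal distance is consistent with the steeper attenuation slope observed for LTE 1800 versus LTE 800.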