32 research outputs found

    Congestion mitigation in LTE base stations using radio resource allocation techniques with TCP end to end transport

    As of 2019, Long Term Evolution (LTE) is the chosen standard for most mobile and fixed wireless data communication. The next generation of standards, known as 5G, will encompass the Internet of Things (IoT), which will add more wireless devices to the network. Because the number of wireless subscriptions is growing exponentially, data traffic is also expected to grow exponentially in the next few years. Most of these devices will use the Transmission Control Protocol (TCP), a network protocol for delivering internet data to users. Owing to its reliable payload delivery and congestion management, TCP is the most common transport protocol in use. However, TCP's ability to combat network congestion has limitations, especially in wireless networks: because of the last-mile radio interface, wireless networks are less reliable than fixed-line networks for data delivery. LTE uses various error correction techniques for reliable data delivery over the air interface. These techniques cause other issues, such as excessive latency and queuing in the base station, leading to degraded throughput for users and congestion in the network. Traditional methods of dealing with congestion, such as tail-drop, can be inefficient and cumbersome, so adequate congestion mitigation mechanisms are required. The LTE standard pre-empts network congestion through a mechanism known as the Discard Timer. Other algorithms, such as Random Early Detection (RED), are also used for network congestion mitigation. However, these mechanisms rely on configured parameters and only work well within certain regions of operation; if the parameters are not set correctly, the TCP links can experience congestion collapse. In this thesis, the limitations of existing LTE congestion mitigation mechanisms such as the Discard Timer and RED are explored. A different mechanism is developed to analyse the effects of using control theory for congestion mitigation. Finally, congestion mitigation in LTE networks is addressed using radio resource allocation techniques, with non-cooperative game theory as the underlying mathematical framework. In doing so, two key end-to-end performance measures of congestion for the game-theoretic models were identified: the total end-to-end delay and the overall throughput of each individual TCP link. An end-to-end wireless simulator, with an LTE radio access network and a TCP-based backbone to the end server, was developed in MATLAB and used as a baseline for testing each of the congestion mitigation mechanisms. This thesis also provides a comparison and performance evaluation of the congestion mitigation models developed using existing techniques (such as the Discard Timer and RED), control theory and game theory.
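    The RED mechanism the thesis evaluates is easy to miss from prose alone. Below is a minimal Python sketch of the classic RED drop decision (EWMA queue-size estimate plus two thresholds, following Floyd and Jacobson); it is offered only as an illustration of the mechanism, not the thesis's own simulator (which was written in MATLAB), and all parameter values are illustrative assumptions.

```python
import random

class REDQueue:
    """Minimal sketch of the Random Early Detection (RED) drop decision.

    Parameter values are illustrative assumptions, not those of the thesis.
    """

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th = min_th      # below this average queue size, never drop
        self.max_th = max_th      # above this average queue size, always drop
        self.max_p = max_p        # drop probability reached at max_th
        self.weight = weight      # EWMA weight for the average queue size
        self.avg = 0.0            # smoothed queue-length estimate
        self.queue = []

    def enqueue(self, packet):
        # Update the exponentially weighted moving average of the queue size.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            drop = False
        elif self.avg >= self.max_th:
            drop = True
        else:
            # Drop probability grows linearly between the two thresholds,
            # which is what lets RED signal congestion before the buffer fills.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            drop = random.random() < p
        if not drop:
            self.queue.append(packet)
        return not drop   # True if the packet was accepted
```

    Misconfigured thresholds reproduce the failure mode described above: with min_th set too high RED degenerates to tail-drop, and with max_p set too aggressively it drops enough packets to starve the TCP links.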

    Enabling Technologies for Ultra-Reliable and Low Latency Communications: From PHY and MAC Layer Perspectives

    Future 5th generation networks are expected to enable three key services: enhanced mobile broadband, massive machine-type communications and ultra-reliable and low latency communications (URLLC). As per the 3rd Generation Partnership Project (3GPP) URLLC requirements, the reliability of one transmission of a 32-byte packet is expected to be at least 99.999% and the latency at most 1 ms. This unprecedented level of reliability and latency will yield various new applications, such as smart grids, industrial automation and intelligent transport systems. In this survey we present potential future URLLC applications and summarize the corresponding reliability and latency requirements. We provide a comprehensive discussion of physical (PHY) and medium access control (MAC) layer techniques that enable URLLC, addressing both licensed and unlicensed bands. This paper evaluates the relevant PHY and MAC techniques for their ability to improve reliability and reduce latency. We identify that enabling long-term evolution to coexist in the unlicensed spectrum is also a potential enabler of URLLC in the unlicensed band, and provide numerical evaluations. Lastly, this paper discusses potential future research directions and challenges in achieving the URLLC requirements.
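    The 32-byte / 99.999% / 1 ms figures can be sanity-checked with the finite-blocklength normal approximation of Polyanskiy, Poor and Verdú, which the URLLC literature commonly uses. The sketch below is a back-of-the-envelope aid, not taken from the paper; the SNR and blocklength values in the usage line are assumptions.

```python
from math import log2, sqrt, e
from statistics import NormalDist

def achievable_bits(n, snr_db, eps):
    """Normal approximation to the maximal number of information bits that
    n channel uses of a complex AWGN channel can carry with block error
    probability eps: n*C - sqrt(n*V)*Q^{-1}(eps) + 0.5*log2(n)."""
    snr = 10 ** (snr_db / 10)
    c = log2(1 + snr)                                      # capacity, bits/use
    v = (snr * (snr + 2) / (snr + 1) ** 2) * log2(e) ** 2  # channel dispersion
    qinv = -NormalDist().inv_cdf(eps)                      # Q^{-1}(eps)
    return n * c - sqrt(n * v) * qinv + 0.5 * log2(n)

# Can a 32-byte (256-bit) packet be carried in 500 channel uses at 5 dB SNR
# with 99.999% reliability? (SNR and blocklength are illustrative values.)
print(achievable_bits(500, 5.0, 1e-5) >= 256)   # True under these assumptions
```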

    Performance Evaluation of LTE and LTE advanced standards for next generation mobile networks

    This thesis analyses the 3GPP LTE and LTE-Advanced standards for the next generation of cellular mobile networks. The OptiMOS algorithm, which the base station can employ to serve VoIP connections efficiently, is described in Chapter 8. The Relay link scheduling algorithm, aimed at optimizing LTE-Advanced networks in the presence of relay nodes, is described in Chapter 9. This work was submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Engineering at the Department of Information Engineering of the University of Pisa, Italy.

    URLLC for 5G and Beyond: Requirements, Enabling Incumbent Technologies and Network Intelligence

    The tactile internet (TI) is believed to be the prospective advancement of the internet of things (IoT), comprising human-to-machine and machine-to-machine communication. TI focuses on enabling real-time interactive techniques with a portfolio of engineering, social, and commercial use cases. For this purpose, the prospective 5th generation (5G) technology focuses on achieving ultra-reliable low latency communication (URLLC) services. TI applications require an extraordinary degree of reliability and latency. The 3rd Generation Partnership Project (3GPP) defines that URLLC is expected to provide 99.999% reliability for a single transmission of a 32-byte packet with a latency of less than one millisecond. 3GPP proposes to include an adjustable orthogonal frequency division multiplexing (OFDM) technique, called 5G new radio (5G NR), as a new radio access technology (RAT). With the emergence of a novel physical-layer RAT, the need arises to design prospective next-generation technologies, especially with a focus on network intelligence. In such situations, machine learning (ML) techniques are expected to be essential in designing intelligent network resource allocation protocols that meet 5G NR URLLC requirements. Therefore, in this survey, we present the possibility of using federated reinforcement learning (FRL), one of the ML techniques, for 5G NR URLLC requirements and summarize the corresponding achievements for URLLC. We provide a comprehensive discussion of MAC layer channel access mechanisms that enable URLLC in 5G NR for TI. Besides, we identify seven critical future use cases of FRL as potential enablers for URLLC in 5G NR.
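    The survey proposes FRL without an algorithm listing, so the sketch below shows only the generic shape of federated tabular Q-learning: local temporal-difference updates on each agent, followed by FedAvg-style averaging of the Q-tables. The environment, hyperparameters and table sizes are invented for illustration.

```python
import numpy as np

def local_q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step on an agent's local table."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def federated_average(tables):
    """FedAvg-style aggregation: replace every local Q-table with the mean.
    In an FRL deployment only the tables, not raw experience, leave the agents."""
    mean = np.mean(tables, axis=0)
    return [mean.copy() for _ in tables]

# Illustrative loop: 3 agents, 4 states, 2 actions, random toy environment.
rng = np.random.default_rng(0)
tables = [np.zeros((4, 2)) for _ in range(3)]
for round_ in range(10):                 # communication rounds
    for Q in tables:                     # local training phase
        for _ in range(50):
            s, a = rng.integers(4), rng.integers(2)
            r, s_next = rng.random(), rng.integers(4)
            local_q_update(Q, s, a, r, s_next)
    tables = federated_average(tables)   # aggregation phase
```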

    Towards reliable communication in LTE-A connected heterogeneous machine to machine network

    Machine-to-machine (M2M) communication is an emerging technology that enables heterogeneous devices to communicate with each other without human intervention, thus forming the so-called Internet of Things (IoT). Wireless cellular networks (WCNs) play a significant role in the successful deployment of M2M communication. In particular, the ongoing massive deployment of Long Term Evolution Advanced (LTE-A) makes it possible to establish machine-type communication (MTC) in most urban and remote areas, and by using the LTE-A backhaul network, seamless communication can be established between MTC devices and applications. However, extensive network coverage alone does not ensure a successful implementation of M2M communication in LTE-A, and several challenges remain. Energy-efficient, reliable transmission is perhaps the most compelling demand of various M2M applications. Among the factors affecting the reliability of M2M communication are high end-to-end delay and high bit error rate. The objective of this thesis is to provide reliable M2M communication in the LTE-A network. To alleviate signalling congestion on the air interface and enable efficient data aggregation, we consider a cluster-based architecture in which the MTC devices are grouped into a number of clusters and traffic is forwarded through special nodes called cluster heads (CHs) to the base station (BS) using single- or multi-hop transmissions. In many deployment scenarios, some machines are allowed to move and change their location in the deployment area with very low mobility. In practice, the performance of data transmission often degrades as the distance between neighboring CHs increases, and the CH then needs to be reselected. However, frequent re-selection of CHs adversely affects routing and the reconfiguration of resource allocation associated with CH-dependent protocols. In addition, the link quality of CH-CH and CH-BS links is often affected by dynamic environmental factors such as heat and humidity, obstacles and RF interference. Since a CH aggregates the traffic from all cluster members, failure of the CH means that the whole cluster fails. Many solutions have been proposed to combat the error-prone wireless channel, such as automatic repeat request (ARQ) and multipath routing. Although these techniques improve communication reliability, they compromise communication efficiency: in the former scheme, the transmitter retransmits the whole packet even when part of the packet has been received correctly, and in the latter, the receiver may receive the same information over multiple paths, so both techniques are bandwidth- and energy-inefficient. In addition, with retransmission, the overall end-to-end delay may exceed the maximum allowable delay budget. Based on these observations, we identify the CH-to-CH channel as one of the bottlenecks in providing reliable communication in a cluster-based multihop M2M network and present a full solution to support fountain-coded cooperative communications. Our solution covers many aspects, from relay selection to cooperative formation, to meet the user's QoS requirements. In the first part of the thesis, we design a rateless-coded incremental relay selection (RCIRS) algorithm based on greedy techniques to guarantee the required data rate at minimum cost. After that, we develop fountain-coded cooperative communication protocols to facilitate data transmission between two neighboring CHs.
In the second part, we propose joint network and fountain coding schemes for reliable communication. By coupling channel coding and network coding in the physical layer, joint network and fountain coding schemes efficiently exploit the redundancy of both codes and effectively combat the detrimental effects of fading in wireless channels. In the proposed scheme, after correctly decoding the information from different sources, a relay node applies network and fountain coding to the received signals and then transmits to the destination in a single transmission. The proposed schemes therefore exploit diversity and coding gain to improve system performance. In the third part, we focus on reliable uplink transmission between CHs and the BS, where CHs transmit to the BS directly or with the help of LTE-A relay nodes (RNs). We investigate both type-I and type-II enhanced LTE-A networks and propose a set of joint network and fountain coding schemes to enhance link robustness. Finally, the proposed solutions are evaluated through extensive numerical simulations, and the numerical results are presented in comparison with related work found in the literature.
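    As a point of reference for the fountain-coding building block, here is a minimal LT-code encoder using the ideal soliton degree distribution. The thesis's actual schemes (joint network-and-fountain coding at relays) are more elaborate; this sketch only illustrates the rateless encoding step, and the 8 toy source blocks are an assumption.

```python
import random
from functools import reduce

def ideal_soliton(k):
    """Ideal soliton distribution: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d >= 2."""
    return [1 / k] + [1 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode_symbol(blocks, rng=random):
    """Produce one LT-coded symbol: the XOR of a randomly chosen set of source
    blocks. The code is rateless: symbols are generated until decoding succeeds."""
    k = len(blocks)
    d = rng.choices(range(1, k + 1), weights=ideal_soliton(k))[0]
    chosen = rng.sample(range(k), d)
    payload = reduce(lambda a, b: a ^ b, (blocks[i] for i in chosen))
    return chosen, payload   # the receiver also needs the neighbor set (or seed)

# Illustrative use: 8 source blocks, each an int standing in for a data block.
blocks = [0x1A, 0x2B, 0x3C, 0x4D, 0x5E, 0x6F, 0x70, 0x81]
for _ in range(3):
    print(lt_encode_symbol(blocks))
```

    This is what makes rateless codes attractive against ARQ in the scenario above: instead of retransmitting a whole packet, the sender simply emits fresh coded symbols until the receiver has enough to decode.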

    Packet Scheduling Algorithms in LTE/LTE-A cellular Networks: Multi-agent Q-learning Approach

    Spectrum utilization is vital for mobile operators: it ensures efficient use of spectrum bands, especially when obtaining their licenses is highly expensive. Long Term Evolution (LTE) and LTE-Advanced (LTE-A) spectrum band licenses were auctioned by the Federal Communications Commission (FCC) to mobile operators for hundreds of millions of dollars. In the first part of this dissertation, we study, analyze, and compare the QoS performance of QoS-aware/channel-aware packet scheduling algorithms with carrier aggregation (CA) over LTE and LTE-A heterogeneous cellular networks. This included a detailed study of the LTE/LTE-A cellular network and its features, and the modification of an open-source LTE simulator in order to perform these QoS performance tests. In the second part of this dissertation, we aim to address spectrum underutilization by proposing, implementing, and testing two novel multi-agent Q-learning-based packet scheduling algorithms for LTE cellular networks: the Collaborative-Competitive scheduling algorithm and the Competitive-Competitive scheduling algorithm. These algorithms schedule licensed users over the available radio resources and unlicensed users over spectrum holes. In conclusion, our results show that the spectrum band can be utilized by deploying efficient packet scheduling algorithms for licensed users, and can be further utilized by allowing unlicensed users to be scheduled on spectrum holes whenever they occur.
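    The abstract does not specify the Collaborative-Competitive or Competitive-Competitive algorithms, so the sketch below shows only the generic ingredient they share: tabular epsilon-greedy Q-learning agents choosing which user to schedule on a resource block, with a second agent restricted to spectrum holes. The states, rewards and spectrum-hole model are toy assumptions, not the dissertation's formulation.

```python
import numpy as np

class QScheduler:
    """Tabular epsilon-greedy Q-learning agent that picks which user to
    schedule on a resource block. A generic sketch only."""

    def __init__(self, n_states, n_users, eps=0.1, alpha=0.1, gamma=0.9):
        self.Q = np.zeros((n_states, n_users))
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, state, rng):
        if rng.random() < self.eps:                  # explore
            return int(rng.integers(self.Q.shape[1]))
        return int(self.Q[state].argmax())           # exploit

    def learn(self, s, a, reward, s_next):
        td = reward + self.gamma * self.Q[s_next].max() - self.Q[s, a]
        self.Q[s, a] += self.alpha * td

# Two agents: one schedules licensed users on every RB, the other schedules
# unlicensed users only when the RB happens to be a spectrum hole.
rng = np.random.default_rng(1)
licensed, unlicensed = QScheduler(4, 3), QScheduler(4, 3)
state = 0
for tti in range(1000):
    hole = rng.random() < 0.3              # toy model of a spectrum hole
    agent = unlicensed if hole else licensed
    user = agent.act(state, rng)
    reward = rng.random()                  # stand-in for achieved throughput
    next_state = int(rng.integers(4))
    agent.learn(state, user, reward, next_state)
    state = next_state
```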

    Performance analysis of 4G wireless networks using system level simulator

    PhD in Electrical Engineering. In the last decade, mobile wireless communications have witnessed explosive growth in user penetration and widespread deployment around the globe. In particular, a research topic of great relevance in telecommunications nowadays is the design and implementation of 4th generation (4G) mobile communication systems. 4G networks will be characterized by the support of multiple radio access technologies in a core network fully compliant with the Internet Protocol (the all-IP paradigm). Such networks will sustain the stringent quality of service (QoS) requirements and the high data rates expected by the kind of multimedia applications (e.g. YouTube and Skype) that will be available in the near future. 4G wireless communication systems will therefore be of paramount importance to the development of the information society. As 4G wireless services continue to grow, they will put more and more pressure on spectrum availability. There is worldwide recognition that current methods of spectrum management have reached their limit and are no longer optimal, so new paradigms must be sought. Studies show that most of the assigned spectrum is under-utilized; the problem in most cases is inefficient spectrum management rather than spectrum shortage. Current trends toward a more liberalized approach to spectrum management are tightly linked to what is commonly termed Cognitive Radio (CR). Furthermore, conventional deployments of 4G wireless systems (one BS per cell, with mobiles deployed around it) are known to have problems in providing fairness (users closer to the BS benefit more than cell-edge users) and in covering zones affected by shadowing, so the use of relays has been proposed as a solution. To evaluate and analyse the performance of 4G wireless systems, software tools are normally used. Such tools have matured considerably in recent years, and their ability to provide a high-level evaluation of proposed algorithms and protocols is now more important than ever. System level simulation (SLS) tools provide a fundamental and flexible way to test all the envisioned algorithms and protocols under realistic conditions, without having to deal with the problems of live networks or reduced-scope prototypes. Furthermore, these tools allow network designers to rapidly collect a wide range of performance metrics that are useful for the analysis and optimization of different algorithms. This dissertation proposes the design and implementation of a conventional system level simulator (SLS), which is then enhanced for the 4G wireless technologies Cognitive Radio (IEEE 802.22) and relays (IEEE 802.16j). The SLS is then used for the analysis of the proposed algorithms and protocols.
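    To make the SLS idea concrete, here is a toy snapshot-mode downlink drop in Python: users are placed uniformly in a cell, a 3GPP-style urban-macro path loss is applied, and the resulting SINR is mapped to spectral efficiency with the Shannon bound. The dissertation's simulator is far richer; every constant below is an assumption for illustration only.

```python
import numpy as np

def snapshot_sls(n_users=100, radius=500.0, tx_dbm=43.0, noise_dbm=-104.0,
                 n_interferers=6, seed=0):
    """One Monte Carlo snapshot of a toy downlink system-level simulation."""
    rng = np.random.default_rng(seed)
    # Uniform user drop in a disc around the serving BS (sqrt for uniform area).
    r = radius * np.sqrt(rng.random(n_users))
    # 3GPP-style urban-macro path loss, distance in km, 35 m minimum distance.
    pl_db = 128.1 + 37.6 * np.log10(np.maximum(r, 35.0) / 1000.0)
    rx_dbm = tx_dbm - pl_db
    # Crude interference model: a fixed ring of co-channel cells at 2*radius.
    interf_dbm = tx_dbm - (128.1 + 37.6 * np.log10(2 * radius / 1000.0))
    interf_mw = n_interferers * 10 ** (interf_dbm / 10)
    sinr = 10 ** (rx_dbm / 10) / (interf_mw + 10 ** (noise_dbm / 10))
    return np.log2(1 + sinr)        # bits/s/Hz per user

se = snapshot_sls()
print(f"median spectral efficiency: {np.median(se):.2f} bit/s/Hz")
```

    Even this toy drop exhibits the fairness problem the abstract mentions: cell-edge users sit at the low tail of the spectral-efficiency distribution, which is what relay deployment aims to lift.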

    Terminal LTE flexível

    Master's in Electronic and Telecommunications Engineering. Mobile networks are constantly evolving. 4G is the current generation of broadband cellular network technology and is represented by the Long Term Evolution (LTE) standard, defined by the 3rd Generation Partnership Project (3GPP). There is currently high demand for LTE, with the number of mobile devices requiring a high-speed Internet connection increasing exponentially. This may overcrowd the spectrum on existing deployments, so the signal needs to be reinforced and coverage improved at specific sites, such as large conferences, festivals and sports events.
On the other hand, it would be an important advantage if users could continue to use their equipment and terminals in situations where cellular networks are not usually available, such as on board a cruise ship, at sporadic events in remote locations, or in catastrophe scenarios in which the telecommunication infrastructure has been damaged and the rapid deployment of a temporary network can save lives. In all of these situations, the availability of flexible and easily deployable cellular base stations and user terminals operating on standard or custom bands would be very desirable. Thus, there is a clear motivation for the development of a fully reconfigurable cellular infrastructure solution that fulfills these requirements. A possible approach is an open-source, low-cost and low-maintenance Software-Defined Radio (SDR) software platform that implements the LTE standard and runs on General Purpose Processors (GPPs), making it possible to build an entire network while only spending money on the hardware itself: computers and Radio-Frequency (RF) front-ends. After comparison and analysis of several open-source LTE SDR platforms, EURECOM's OpenAirInterface (OAI) was chosen, providing a 3GPP standard-compliant implementation of Release 8.6 (with a subset of Release 10 functionalities). The main goal of this dissertation is the implementation of a flexible open-source LTE User Equipment (UE) software radio platform on a compact and low-power Single Board Computer (SBC), integrated with an RF hardware front-end, the Universal Software Radio Peripheral (USRP). It supports real-time Time Division Duplex (TDD) and Frequency Division Duplex (FDD) LTE modes and the reconfiguration of several parameters, namely the carrier frequency, the bandwidth and the number of LTE Resource Blocks (RBs) used. It can also share its LTE mobile data with nearby users, similarly to a Wi-Fi hotspot. The implementation is described through its several development steps, including the porting of the UE from a regular computer to an SBC. The performance of the network is then analysed based on measured throughput results.
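    Two of the reconfigurable parameters mentioned above, bandwidth and RB count, are tied together by the LTE standard (3GPP TS 36.101). The helper below is hypothetical, not part of OAI's API; it simply encodes that standard mapping.

```python
# Standard LTE channel bandwidth (MHz) -> number of resource blocks,
# per 3GPP TS 36.101.
LTE_RBS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def num_resource_blocks(bandwidth_mhz: float) -> int:
    """Return the LTE resource-block count for a given channel bandwidth.
    Hypothetical helper for illustration; raises on non-standard values."""
    try:
        return LTE_RBS[bandwidth_mhz]
    except KeyError:
        raise ValueError(f"{bandwidth_mhz} MHz is not a standard LTE bandwidth")

print(num_resource_blocks(10))  # 50 RBs, e.g. when reconfiguring the UE
```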

    JTIT

    Quarterly journal.

    Final report on the evaluation of RRM/CRRM algorithms

    Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15, evolved and refined in an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14. Preprint.