25 research outputs found

    Data analytics for mobile traffic in 5G networks using machine learning techniques

    Get PDF
    This thesis collects the research work I pursued as a Ph.D. candidate at the Universitat Politecnica de Catalunya (UPC). Most of the work was carried out in the Mobile Networks Department of the Centre Tecnologic de Telecomunicacions de Catalunya (CTTC). The main topic of my research is the study of mobile network traffic through the analysis of operational network datasets using machine learning techniques. Understanding actual network deployments first is fundamental for next-generation (5G) networks, in order to improve performance and the users' Quality of Service (QoS). The work starts from the collection of a novel type of dataset, using an over-the-air monitoring tool that extracts control information from the radio-link channel without compromising users' identities. The subsequent analysis comprises a statistical characterization of the traffic and the derivation of prediction models for network traffic. A wide group of algorithms is implemented and compared in order to identify the best-performing ones. Moreover, the thesis addresses a set of applications that will be prerogatives of future mobile networks. These include the detection of urban anomalies, user classification based on the demanded network services, and the design of a proactive wake-up scheme for energy-efficient devices.
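The thesis compares a wide group of prediction algorithms on operator traffic. As a hedged illustration of how such a comparison might be organised, the sketch below evaluates three simple baseline predictors on synthetic hourly traffic; the traffic model, the predictors and the error metric are all assumptions for illustration, not the thesis's actual data or pipeline:

```python
import math
import random

random.seed(0)

# Synthetic hourly traffic: daily sinusoidal profile plus noise
# (an illustrative stand-in for real over-the-air traces).
traffic = [100 + 50 * math.sin(2 * math.pi * (h % 24) / 24) + random.gauss(0, 5)
           for h in range(24 * 14)]  # two weeks of hourly samples

def mae(pred, true):
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

# Three baseline predictors, evaluated on the second week.
test = range(24 * 7, 24 * 14)
naive    = [traffic[h - 1] for h in test]             # last hour
seasonal = [traffic[h - 24] for h in test]            # same hour yesterday
movavg   = [sum(traffic[h - 3:h]) / 3 for h in test]  # 3-hour mean
actual   = [traffic[h] for h in test]

scores = {"naive": mae(naive, actual),
          "seasonal-naive": mae(seasonal, actual),
          "moving-average": mae(movavg, actual)}
best = min(scores, key=scores.get)
print(best, round(scores[best], 2))
```

On traffic with a strong daily cycle, the seasonal baseline wins; a model-comparison study of the kind described above would add learned predictors against such baselines.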

    Efficient Service for Next Generation Network Slicing Architecture and Mobile Traffic Analysis Using Machine Learning Technique

    Get PDF
    The tremendous growth of mobile devices, IoT devices, applications and many other services has placed high demand on mobile and wireless network infrastructures. Research and development of 5G mobile networks seeks ways to support the huge volume of traffic, the extraction of fine-grained analytics and the agile management of mobile network elements, so as to maximize the user experience. Accomplishing these tasks is very challenging as mobile networks grow in complexity, driven by the high volume of data, devices and applications. Advanced machine learning techniques are one solution, helping to cope with the large amounts of data and the algorithm-driven applications. This work mainly focuses on extensive analysis of mobile traffic for improving performance, key performance indicators and quality of service from the operations perspective. The work includes the collection of datasets and log files using different kinds of tools at different network layers, and the implementation of machine learning techniques to analyze the datasets and predict mobile traffic activity. A wide range of algorithms was implemented and compared in order to identify the best-performing ones. Moreover, this thesis also discusses the network slicing architecture, its use cases and how to use network slicing efficiently to meet distinct demands.
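Before any of the prediction algorithms mentioned above can be applied, logged traffic measurements have to be turned into supervised learning samples. A common generic construction (an assumption here, not necessarily the thesis's exact feature set) is a sliding window, where the previous `w` measurements predict the next one:

```python
# Build supervised samples from a traffic time series with a sliding window:
# each sample's features are the previous `w` values; the target is the next.
def sliding_window(series, w):
    X, y = [], []
    for i in range(w, len(series)):
        X.append(series[i - w:i])
        y.append(series[i])
    return X, y

series = [3, 1, 4, 1, 5, 9, 2, 6]
X, y = sliding_window(series, 3)
print(X[0], y[0])   # first sample: window [3, 1, 4] predicts 1
```

Any regression algorithm can then be trained on `(X, y)` pairs, which is what makes side-by-side comparison of many algorithms straightforward.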

    Resource Allocation in Heterogeneous Networks

    Get PDF

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    Get PDF
    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as enhanced Mobile Broadband (eMBB), massive Machine Type Communications (mMTC) and Ultra-Reliable and Low Latency Communications (URLLC), the mMTC brings the unique technical challenge of supporting a huge number of MTC devices in cellular networks, which is the main focus of this paper. The related challenges include Quality of Service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with the highlights on the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely, LTE-M and Narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with the QoS support along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions towards addressing RAN congestion problem, and then identify potential advantages, challenges and use cases for the applications of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. 
Out of several ML techniques, we focus on the application of the low-complexity Q-learning approach in the mMTC scenario, along with recent advances towards enhancing its learning performance and convergence. Finally, we discuss some open research challenges and promising future research directions.
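The low-complexity Q-learning approach highlighted above can be sketched in a toy form: each MTC device keeps one Q-value per RACH slot, transmits in its currently best slot, and learns from collision feedback. All parameters (device/slot counts, learning rate, reward values) are illustrative assumptions, not the survey's specific scheme:

```python
import random
random.seed(1)

N_DEVICES, N_SLOTS, ROUNDS = 8, 8, 3000
ALPHA = 0.1   # learning rate

# One Q-value per (device, RACH slot). Reward +1 for a collision-free
# transmission, -1 for a collision -- devices gradually spread out over
# distinct slots without any coordination.
Q = [[0.0] * N_SLOTS for _ in range(N_DEVICES)]

def choose(q, eps):
    if random.random() < eps:                       # explore
        return random.randrange(N_SLOTS)
    return max(range(N_SLOTS), key=lambda s: q[s])  # exploit

for r in range(ROUNDS):
    eps = max(0.01, 1.0 - r / 1000)                 # decaying exploration
    picks = [choose(Q[d], eps) for d in range(N_DEVICES)]
    for d, s in enumerate(picks):
        reward = 1.0 if picks.count(s) == 1 else -1.0   # collision feedback
        Q[d][s] += ALPHA * (reward - Q[d][s])

final = [max(range(N_SLOTS), key=lambda s: Q[d][s]) for d in range(N_DEVICES)]
print("distinct slots learned:", len(set(final)))
```

With as many slots as devices, the devices typically converge towards a near-orthogonal allocation, which is the intuition behind using Q-learning to relieve RAN congestion.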

    Monitoring and testing in LTE networks: from experimental analysis to operational optimisation

    Get PDF
    The advent of LTE and LTE-Advanced, and their integration with the existing cellular technologies, GSM and UMTS, has forced mobile radio network operators to perform meticulous test campaigns and to acquire the right know-how to detect potential issues before the activation of new services. In this new network scenario, traffic characterisation and monitoring, as well as the configuration and on-air reliability of network equipment, are of paramount relevance in order to prevent possible pitfalls during the deployment of new services and to ensure the best possible user experience.
Based on these observations, this research project offers a comprehensive study that goes from experimental analysis to operational optimisation. The starting point of our work has been monitoring the traffic of an already deployed eNodeB with three cells, operating in the 1800 MHz band. Through subsequent measurement campaigns, it was possible to follow the evolution of the 4G network from the beginning of its deployment in 2012 until its full maturity in 2015. The data collected during the first year showed poor use of the LTE network, mainly due to the limited penetration of new 4G smartphones. In 2015, however, we observed a clear and decisive increase in the number of terminals using LTE, with aggregate statistics (e.g. market share for smartphone operating systems, or the percentage of video traffic) that reflect the national trends. This important outcome testifies to the maturity of LTE technology, and allows us to consider our monitored eNodeB as a valuable vantage point for traffic analysis. Hand in hand with the evolution of the infrastructure, mobile phones too have undergone a surprising evolution over the past two decades, from simple devices with voice-only services towards smartphones offering novel services such as mobile Internet, geolocation and maps, multimedia services, and many more. Monitoring real traffic has allowed us to study user behaviour and identify the most used services. To this aim, various software libraries for traffic analysis have been developed. In particular, we developed a C/C++ library that analyses Control Plane and User Plane traffic and provides coarse- and fine-grained statistics at flow level. Another framework/tool has been exclusively dedicated to the topic of traffic classification. Among the plethora of existing tools for traffic classification we provide our own solution, developed from scratch. The project, which is available on GitHub, is named MOSEC, an acronym for MOdular SErvice Classifier.
Its modularity stems from the possibility of implementing multiple plug-ins: each one processes the packet according to its own logic, and may or may not return a packet/flow classification. A final decision strategy then classifies the various flows, based on the classifications of each plug-in. Unlike previous approaches, keeping multiple classifiers together allows us to mitigate the deficiencies of each classifier (e.g. DPI (Deep Packet Inspection) does not work when packets are encrypted, and DNS (Domain Name System) queries do not have to be sent if name resolution is cached in device memory) and to exploit their full capabilities whenever feasible. We validated the accuracy of MOSEC using a labelled trace synthetically created by colleagues from UPC BarcelonaTech. The results show excellent TCP-HTTP/HTTPS traffic classification capabilities, higher on average than those of other classification tools (nDPI, PACE, Layer-7); on the other hand, there are some shortcomings with regard to the classification of UDP traffic. The characteristics of User Plane traffic have a direct impact on the energy consumed by handset devices, and an indirect impact on the Control Plane traffic that is generated. Therefore, knowledge of the statistical properties of the various flows allows us to deal with a cross-layer optimisation problem: reducing the power consumption of the terminals by varying some control plane parameters configurable on the eNodeB. It is well known that battery life is one of the major limitations in the use of new smartphones. In particular, the advent of new services and applications capable of working in the background, without direct user interaction, has introduced new issues related to battery lifetime and to the signalling traffic necessary to acquire/release radio resources.
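The plug-in combination with a final decision strategy that MOSEC uses can be sketched in miniature. The plug-ins and rules below are invented for illustration (they are not MOSEC's actual plug-ins, which operate on real packets in C++); the point is the architecture, where each plug-in may return a verdict or abstain, and a decision strategy merges them:

```python
# A toy "modular service classifier": each plug-in inspects a flow record
# and returns a label or None (abstain); a priority-ordered decision
# strategy merges the verdicts. Plug-in logic here is purely hypothetical.
def sni_plugin(flow):
    sni = flow.get("sni", "")
    return "VIDEO" if "video" in sni else None

def port_plugin(flow):
    return {443: "HTTPS", 80: "HTTP", 53: "DNS"}.get(flow.get("dst_port"))

PLUGINS = [sni_plugin, port_plugin]   # ordered by priority

def classify(flow):
    # Decision strategy: first non-abstaining plug-in wins.
    for plugin in PLUGINS:
        label = plugin(flow)
        if label is not None:
            return label
    return "UNKNOWN"

print(classify({"dst_port": 443, "sni": "video.example.com"}))  # VIDEO
print(classify({"dst_port": 53}))                               # DNS
print(classify({"dst_port": 1234}))                             # UNKNOWN
```

Combining abstaining classifiers this way is what lets one plug-in's weakness (e.g. DPI on encrypted traffic) be covered by another.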
Based on these observations, we conducted a thorough study of the DRX (Discontinuous Reception) mechanism, used in LTE to save smartphone energy when no packet is sent or received. The DRX configuration set and the RRC Inactivity Timer greatly affect the energy consumed by the various devices. Depending on whether radio resources are allocated or not, the user equipment is in the RRC Connected or RRC Idle state, respectively. To evaluate the energy consumption of smartphones, an algorithm simulates the transitions between all the possible states in which a UE can be, and maps a power value to each of these states. The transition from one state to another is governed by different timeouts that are reset every time a packet is sent or received. Using the real traffic traces, we associated a state machine to each UE to assess its energy consumption on the basis of the sent and received packets. We repeated these simulations using values of the Inactivity Timer different from the one currently configured on the monitored eNodeB, looking for a good trade-off between energy savings and increased signalling traffic. The results highlighted that the Inactivity Timer originally set on the eNodeB was too high and caused excessive energy consumption on the terminals. Reducing its value down to 10 seconds achieves energy savings of up to 50% (depending on the underlying traffic profile) without considerably increasing the control traffic. The results of the study mentioned above, however, consider neither the stress level which the eNodeB is subject to, given the rise of signalling traffic that could occur, nor the increase of the collision probability during the RACH procedure, needed to re-establish the radio bearer (or RRC connection) between the terminal and the eNodeB.
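The timer-driven state machine described above can be illustrated with a deliberately simplified two-state model: the UE is Connected after any packet until the inactivity timer expires, then drops to Idle, and energy is time-in-state times the state's power draw. The power figures, the packet trace and the timer values below are illustrative assumptions, not the thesis's measured parameters (the real model also has DRX sub-states):

```python
# Toy RRC energy model: CONNECTED after each packet until the inactivity
# timer expires, IDLE otherwise. Power values are assumed, not measured.
P_CONNECTED, P_IDLE = 1.0, 0.01   # watts (illustrative)

def energy(packet_times, inactivity_timer, horizon):
    e, prev, state_until = 0.0, 0.0, 0.0
    for t in sorted(packet_times) + [horizon]:
        # connected window left over from the previous packet
        connected = max(0.0, min(state_until, t) - prev)
        e += connected * P_CONNECTED + (t - prev - connected) * P_IDLE
        prev, state_until = t, t + inactivity_timer
    return e   # joules over [0, horizon]

trace = [0, 5, 70, 71, 200]           # packet arrival times (s), assumed
long_t  = energy(trace, 61.0, 300)    # a long timer setting
short_t = energy(trace, 10.0, 300)    # the 10 s setting studied in the thesis
print(f"saving: {100 * (1 - short_t / long_t):.0f}%")
```

The achievable saving depends heavily on how bursty the trace is, which is why the thesis reports "up to 50%" across real traffic profiles rather than a single number.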
Evaluating the performance of hardware and software systems for the fourth-generation mobile network, as well as identifying any possible weakness in the architecture, is a complex job. A possible case study is precisely to assess the robustness of the base station when it receives many RRC connection requests, as an effect of a decrease of the Inactivity Timer. In this regard, within the Testing LAB of Telecom Italia, we used IxLoad, a product developed by Ixia, as a load generator to test the robustness of one eNodeB. The tests consisted in producing different loads of RRC requests on the radio interface, similar to those that would be produced by decreasing the Inactivity Timer to certain values. The statistical properties of the signalling traffic were derived from the analysis of real traffic traces. The main outcomes have shown that, even in the face of a high load of RRC requests, only a small fraction (less than 1% in the most unfavourable case) of the procedures fail. Therefore, lowering the Inactivity Timer even to values below 10 seconds is not an issue for the base station. Finally, it remains to be evaluated how such a surge of RRC requests impacts user performance. If a user under coverage in the RRC Idle state is paged for an incoming packet, or needs to send an uplink packet, a state transition from RRC Idle to RRC Connected is needed. At this point, the UE initiates the random access procedure by sending the random access channel preamble (RACH Preamble). When two or more users attempt simultaneously to access the RACH channel using the same preamble, the eNodeB may not be able to decode the preamble. If the two signals interfere constructively, both users receive the same resources for transmitting the RRC Request message; at this point, the eNodeB can detect the collision and will not send any acknowledgment, forcing both users to restart the procedure from the beginning.
We have proposed an analytical model to calculate the collision probability as a function of the number of users and the offered traffic load, when the interarrival time between successive requests is modelled with hyper-exponential distributions. In addition, we investigated the performance of Machine-to-Machine (M2M) and Human-to-Human (H2H) communications, evaluating, as the number of allotted preambles varies, the collision probability on the RACH channel, the probability of correct transmission considering both the backoff time and the maximum number of allowed retransmissions, and the average time required to establish a radio bearer with the access network. The results, considered as a whole, have made it possible to express guidelines to properly split the number of preambles between M2M and H2H communications.
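A minimal reference point for such collision analysis is the standard single-shot model: if n users each pick uniformly among m preambles, a tagged user collides with probability 1 - (1 - 1/m)^(n-1). The sketch below checks this closed form against a Monte Carlo simulation; it omits the hyper-exponential interarrival times and backoff dynamics that the thesis's full model captures, and the chosen n and m are illustrative:

```python
import random
random.seed(2)

def p_collision(n, m):
    # Closed form: the tagged user collides iff at least one of the other
    # n-1 users picks the same preamble out of m.
    return 1 - (1 - 1 / m) ** (n - 1)

def simulate(n, m, trials=100_000):
    hits = 0
    for _ in range(trials):
        tagged = random.randrange(m)
        if any(random.randrange(m) == tagged for _ in range(n - 1)):
            hits += 1
    return hits / trials

n, m = 10, 54   # 54 contention-based preambles is a common LTE setting
analytic, empirical = p_collision(n, m), simulate(n, m)
print(round(analytic, 4), round(empirical, 4))
```

Splitting the m preambles into M2M and H2H pools, as the guidelines above suggest, amounts to evaluating this probability separately for each pool with its own n and m.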

    Sustainable scheduling policies for radio access networks based on LTE technology

    Get PDF
    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. In LTE access networks, Radio Resource Management (RRM) is one of the most important modules, responsible for handling the overall management of radio resources. The packet scheduler is the particular sub-module that assigns the existing radio resources to each user in order to deliver the requested services in the most efficient manner. Data packets are scheduled dynamically at every Transmission Time Interval (TTI), a time window used to take the users' requests and to respond to them accordingly. The scheduling procedure is conducted by using scheduling rules which select different users to be scheduled at each TTI based on some priority metrics. Various scheduling rules exist, and they behave differently by balancing the scheduler performance in the direction imposed by one of the following objectives: increasing the system throughput, maintaining user fairness, and respecting the Guaranteed Bit Rate (GBR), Head of Line (HoL) packet delay, packet loss rate and queue stability requirements. Most of the static scheduling rules follow sequential multi-objective optimization, in the sense that when the first targeted objective is satisfied, other objectives can be prioritized. When the targeted scheduling objective(s) can be satisfied at each TTI, the LTE scheduler is considered optimal or feasible. So, the scheduling performance depends on the exploited rule being focused on particular objectives. This study aims to increase the percentage of feasible TTIs for a given downlink transmission by applying a mixture of scheduling rules, instead of using one discipline adopted across the entire scheduling session.
Two types of optimization problems are proposed in this sense: Dynamic Scheduling Rule based Sequential Multi-Objective Optimization (DSR-SMOO), when the applied scheduling rules address the same objective, and Dynamic Scheduling Rule based Concurrent Multi-Objective Optimization (DSR-CMOO), if the pool of rules addresses different scheduling objectives. The best way of solving such complex optimization problems is to adapt and refine scheduling policies that are able to call different rules at each TTI based on the best matching scheduler conditions (states). The idea is to develop a set of non-linear functions which map the scheduler state at each TTI into optimal probability distributions for selecting the best scheduling rule. Due to the multi-dimensional and continuous characteristics of the scheduler state space, the scheduling functions have to be approximated. Moreover, the function approximations are learned through interaction with the RRM environment. Reinforcement Learning (RL) algorithms are used in this sense to evaluate and refine the scheduling policies for the considered DSR-SMOO/CMOO optimization problems. Neural networks are used to train the non-linear mapping functions based on the interaction among the intelligent controller, the LTE packet scheduler and the RRM environment. In order to enhance convergence to the feasible state and to reduce the dimension of the scheduler state space, meta-heuristic approaches are used for channel state aggregation. Simulation results show that the proposed aggregation scheme is able to outperform other heuristic methods.
When the channel state aggregation scheme is exploited, the proposed DSR-SMOO/CMOO problems focusing on different objectives, solved by using various RL approaches, are able to: increase the mean percentage of feasible TTIs, minimize the number of TTIs in which the RL approaches punish the actions taken TTI-by-TTI, and minimize the variation of the performance indicators when different simulations are launched in parallel. This way, the obtained scheduling policies, being focused on the multi-objective criteria, are sustainable. Keywords: LTE, packet scheduling, scheduling rules, multi-objective optimization, reinforcement learning, channel, aggregation, scheduling policies, sustainable
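The core idea, a learner that picks a scheduling rule per scheduling interval and is rewarded when the outcome is feasible, can be sketched with a simple bandit rather than the thesis's full neural-network RL controller. The two rules, the fairness-based feasibility test and all parameters below are simplifying assumptions, not the DSR-SMOO/CMOO formulation itself:

```python
import random
random.seed(3)

N_USERS = 4

def jain(x):  # Jain's fairness index in (0, 1]
    return sum(x) ** 2 / (len(x) * sum(v * v for v in x))

def max_tput(rates, thr):   # maximise instantaneous throughput
    return max(range(N_USERS), key=lambda u: rates[u])

def prop_fair(rates, thr):  # classic proportional-fair metric
    return max(range(N_USERS), key=lambda u: rates[u] / thr[u])

def run_rule(rule, ttis=200):
    """Run one rule over a scheduling window; return final fairness."""
    thr = [1e-6] * N_USERS                 # smoothed per-user throughput
    for _ in range(ttis):
        # users have unequal average channel quality
        rates = [random.uniform(0.1, u + 1.0) for u in range(N_USERS)]
        u = rule(rates, thr)
        thr = [0.99 * t for t in thr]
        thr[u] += 0.01 * rates[u]
    return jain(thr)

RULES = [max_tput, prop_fair]
Q = [0.0, 0.0]                             # value estimate per rule
for episode in range(300):
    a = random.randrange(2) if random.random() < 0.1 else Q.index(max(Q))
    feasible = run_rule(RULES[a]) > 0.7    # "feasible": fairness target met
    Q[a] += 0.1 * ((1.0 if feasible else 0.0) - Q[a])

print("preferred rule:", RULES[Q.index(max(Q))].__name__)
```

With a fairness-driven feasibility criterion the learner settles on the proportional-fair rule; under a throughput-driven criterion it would settle differently, which is exactly why mixing rules per TTI can beat any single static discipline.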

    Measurement and Optimization of LTE Performance

    Get PDF
    The 4G Long Term Evolution (LTE) mobile system is the fourth-generation communication system adopted worldwide to provide high-speed data connections and high-quality voice calls. Given its recent deployment by mobile service providers, unlike GSM and UMTS, LTE can still be considered to be in its early stages, and therefore many topics still raise great interest among the international scientific research community: network performance assessment, network optimization, selective scheduling, interference management and coexistence with other communication systems in the unlicensed band, and methods to evaluate human exposure to electromagnetic radiation are, as a matter of fact, still open issues. In this work, techniques adopted to increase LTE radio performance are investigated. One of the most widespread solutions proposed by the standard is to implement MIMO techniques, and within a few years, to overcome the scarcity of spectrum, LTE network operators will offload data traffic by accessing the unlicensed 5 GHz band. Our research deals with an evaluation of the 3GPP standard in a real test bed scenario to assess network behaviour and performance.

    Resource management in future mobile networks: from millimetre-wave backhauls to airborne access networks

    Get PDF
    The next generation of mobile networks will connect vast numbers of devices and support services with diverse requirements. Enabling technologies such as millimetre-wave (mm-wave) backhauling and network slicing allow for increased wireless capacities and logical partitioning of physical deployments, yet introduce a number of challenges. These include, among others, the precise and rapid allocation of network resources among applications, elucidating the interactions between new mobile networking technology and widely used protocols, and the agile control of mobile infrastructure, to provide users with reliable wireless connectivity in extreme scenarios. This thesis presents several original contributions that address these challenges. In particular, I will first describe the design and evaluation of an airtime allocation and scheduling mechanism devised specifically for mm-wave backhauls, explicitly addressing inter-flow fairness and capturing the unique characteristics of mm-wave communications. Simulation results will demonstrate 5x throughput gains and a 5-fold improvement in fairness over recent mm-wave scheduling solutions. Second, I will introduce a utility optimisation framework targeting virtually sliced mm-wave backhauls that are shared by a number of applications with distinct requirements. Based on this framework, I will present a deep learning solution that can be trained within minutes, following which it computes rate allocations that match those obtained with state-of-the-art global optimisation algorithms. The proposed solution outperforms a baseline greedy approach by up to 62%, in terms of network utility, while running orders of magnitude faster. Third, the thesis investigates the behaviour of the Transmission Control Protocol (TCP) in Long-Term Evolution (LTE) networks and discusses the implications of employing Radio Link Control (RLC) acknowledgements under different link qualities on the performance of transport protocols.
Fourth, I will introduce a reinforcement learning approach to optimising the performance of airborne cellular networks serving users in emergency settings, demonstrating rapid convergence (approx. 2.5 hours on a desktop machine) and a 5 dB improvement in the median Signal-to-Interference-plus-Noise Ratio (SINR) perceived by users over a heuristic-based benchmark solution. Finally, the thesis discusses promising future research directions that follow from the results obtained throughout this PhD project.
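The utility optimisation framework itself is not reproduced in this listing, but the idea it rests on, comparing a utility-maximising rate allocation against a greedy baseline on a shared backhaul, can be illustrated with a minimal sketch. The slice weights, demands, and capacity below are hypothetical, and a closed-form proportional-fair split stands in for the thesis's global optimiser:

```python
import math

def proportional_fair_allocation(weights, capacity):
    """Maximise sum_i w_i * log(r_i) subject to sum_i r_i = capacity.
    The closed-form optimum is r_i = capacity * w_i / sum(weights)."""
    total = sum(weights)
    return [capacity * w / total for w in weights]

def greedy_allocation(demands, capacity):
    """Baseline: serve slices in order, granting each its full demand
    until the backhaul capacity is exhausted."""
    rates, left = [], capacity
    for d in demands:
        grant = min(d, left)
        rates.append(grant)
        left -= grant
    return rates

def network_utility(weights, rates):
    # Zero rates are skipped for display; strictly, a starved slice
    # drives the log-utility to minus infinity.
    return sum(w * math.log(r) for w, r in zip(weights, rates) if r > 0)

weights = [1.0, 2.0, 1.0]        # hypothetical slice priorities
demands = [800.0, 600.0, 400.0]  # hypothetical slice demands, Mb/s
cap = 1000.0                     # hypothetical shared backhaul capacity, Mb/s

pf = proportional_fair_allocation(weights, cap)  # [250.0, 500.0, 250.0]
gr = greedy_allocation(demands, cap)             # [800.0, 200.0, 0.0]
```

The greedy baseline exhausts the capacity on the first flows and starves the last slice, which is precisely the behaviour a utility-driven allocator avoids.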
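The reinforcement learning formulation used for the airborne network is much richer than can be shown here, but its core loop can be sketched with a toy tabular Q-learning agent that repositions a single aerial cell over a grid. The grid size, user locations, and distance-based reward (a crude stand-in for median user SINR) are all hypothetical:

```python
import random

random.seed(0)
GRID = 5                                   # hypothetical 5x5 service area
USERS = [(0, 0), (4, 0), (2, 4)]           # hypothetical user locations
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # N, S, E, W, hover

def reward(pos):
    # Path loss grows with distance, so positions with a small median
    # UE distance serve as a proxy for a high median SINR.
    d = sorted(((pos[0] - x) ** 2 + (pos[1] - y) ** 2) ** 0.5 for x, y in USERS)
    return -d[len(d) // 2]

def step(pos, a):
    dx, dy = ACTIONS[a]
    return (min(GRID - 1, max(0, pos[0] + dx)),
            min(GRID - 1, max(0, pos[1] + dy)))

Q = {((x, y), a): 0.0 for x in range(GRID) for y in range(GRID)
     for a in range(len(ACTIONS))}
alpha, gamma, eps = 0.5, 0.9, 0.2

pos = (0, 0)
for _ in range(5000):                      # epsilon-greedy tabular Q-learning
    a = (random.randrange(len(ACTIONS)) if random.random() < eps
         else max(range(len(ACTIONS)), key=lambda k: Q[(pos, k)]))
    nxt = step(pos, a)
    target = reward(nxt) + gamma * max(Q[(nxt, k)] for k in range(len(ACTIONS)))
    Q[(pos, a)] += alpha * (target - Q[(pos, a)])
    pos = nxt

pos = (0, 0)                               # greedy rollout of the learned policy
for _ in range(2 * GRID):
    pos = step(pos, max(range(len(ACTIONS)), key=lambda k: Q[(pos, k)]))
```

After training, a greedy rollout from the corner moves the aerial cell toward the users, improving the proxy reward over the starting position.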

    Allocation of Communication and Computation Resources in Mobile Networks

    Get PDF
    The convergence of communication and computing in mobile networks has led to the introduction of Multi-Access Edge Computing (MEC). The MEC combines communication and computing resources at the edge of the mobile network and provides an option to optimize the mobile network in real time. This is possible due to the close proximity of the computation resources in terms of communication delay, in comparison to Mobile Cloud Computing (MCC). The optimization of the mobile network requires information about the mobile network and the User Equipment (UE). Collecting such information, however, consumes a significant amount of communication resources. The finite communication resources, along with the ever-increasing number of UEs and other devices such as sensors and vehicles, pose an obstacle to collecting the required information. Therefore, it is necessary to provide solutions that enable the collection of the required mobile network information from the UEs for the purposes of mobile network optimization. In this thesis, a solution enabling communication of a large number of devices, exploiting Device-to-Device (D2D) communication for data relaying, is proposed. To motivate the UEs to relay data of other UEs, we propose a resource allocation algorithm that leads to a natural cooperation of the UEs. To show that relaying is beneficial not only in terms of an increased number of connected UEs, we also analyse the energy consumed by D2D communication.
To further increase the number of served UEs, we exploit the recent concept of flying base stations (FlyBSs) and develop a joint algorithm for positioning the FlyBS and associating the UEs with it, so as to increase the UEs' satisfaction with the provided data rates. The MEC can be exploited not only for processing the collected data to optimize the mobile network, but also by the mobile users themselves. The mobile users can exploit the MEC for computation offloading, i.e., transferring computation from their UEs to the MEC. However, due to the inherent mobility of the UEs, it is necessary to determine the communication and computation resource allocation so as to satisfy the UEs' requirements. Therefore, we first propose a solution for selecting the communication path between the UEs and the MEC (communication resource allocation). Then, we also design an algorithm for joint communication and computation resource allocation. The proposed solutions lead to a reduction in the computation offloading delay by tens of percent.
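The thesis's energy analysis of D2D relaying is not reproduced here, but the intuition can be shown with a back-of-the-envelope comparison: a cell-edge UE with a poor uplink may spend far less energy handing its data to a nearby relay over a short, high-rate D2D link. All link rates and transmit powers below are hypothetical:

```python
def tx_energy(bits, rate_bps, tx_power_w):
    """Energy spent transmitting `bits` at `rate_bps` with power `tx_power_w`:
    transmission time (bits / rate) multiplied by transmit power."""
    return bits * tx_power_w / rate_bps

bits = 1e6                                 # hypothetical payload, 1 Mbit
direct = tx_energy(bits, 1e6, 0.2)         # poor cell-edge uplink to the BS
d2d_hop = tx_energy(bits, 20e6, 0.01)      # short, high-rate, low-power D2D hop
relay_hop = tx_energy(bits, 10e6, 0.2)     # the relay has a better BS channel
relayed_total = d2d_hop + relay_hop        # UE energy + relay energy combined
```

Under these numbers, the two-hop relayed transfer consumes an order of magnitude less total energy than the direct cell-edge uplink, which is the effect the analysis quantifies.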
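The joint communication and computation allocation is far more involved than a single formula, but the delay model that motivates the path selection can be sketched as transmission delay plus execution delay, minimised over candidate compute sites. The path names, rates, CPU speeds, and backhaul latency below are hypothetical:

```python
def offload_delay(task_bits, task_cycles, uplink_bps, cpu_hz):
    """End-to-end offloading delay: time to upload the task plus time to
    execute it (result return and queueing omitted for brevity)."""
    return task_bits / uplink_bps + task_cycles / cpu_hz

# Hypothetical options for a UE with an 8 Mbit, 4 Gcycle task.
paths = {
    "local":      offload_delay(0, 4e9, float("inf"), 1.5e9),  # run on the UE
    "mec_direct": offload_delay(8e6, 4e9, 50e6, 10e9),         # direct to MEC
    "mec_relay":  offload_delay(8e6, 4e9, 20e6, 10e9),         # via D2D relay
    "cloud":      offload_delay(8e6, 4e9, 50e6, 40e9) + 0.5,   # + backhaul delay
}
best = min(paths, key=paths.get)           # path with the lowest total delay
```

With these numbers the direct MEC path wins: the remote cloud computes faster but pays a large backhaul delay, and local execution is compute-bound, which is the trade-off the proposed allocation navigates.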