    Device-to-Device Communication in 5G Cellular Networks

    Owing to the unprecedented and continuous growth in the number of connected users and networked devices, next-generation 5G cellular networks are envisaged to support an enormous number of simultaneously connected users and devices with access to numerous services and applications, by providing highly improved data rates, higher capacity, lower end-to-end latency and improved spectral efficiency at lower power consumption. Device-to-Device (D2D) communication underlaying cellular networks has been proposed as one of the key components of 5G, as a means of providing efficient spectrum reuse for improved spectral efficiency and of exploiting the proximity between devices for reduced latency, improved user throughput and reduced power consumption. Although D2D communication underlaying cellular networks offers considerable potential, it raises design issues and technical challenges not present in the conventional cellular architecture that must be addressed for a proper implementation of the technology. These include new device discovery procedures, physical layer architecture and radio resource management schemes. This thesis explores the potential of D2D communication as an underlay to 5G cellular networks and focuses on efficient interference management solutions through mode selection, resource allocation and power control schemes. In this work, a joint admission control, resource allocation and power control scheme was implemented for D2D communication underlaying 5G cellular networks. The performance of the system was evaluated and compared with similar schemes.
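As a rough illustration of the kind of joint admission and power-control check such a scheme performs, the sketch below admits a hypothetical D2D pair onto a cellular user's resource block only if minimum SINR targets for both links can still be met; all gains, powers and thresholds are assumed for illustration and are not taken from the thesis.

```python
# Hypothetical illustration of a joint admission and power-control check for a
# D2D pair reusing a cellular user's (CU) resource block. All gains, powers and
# SINR targets below are assumed values, not results from the thesis.

def sinr(p_signal, g_signal, p_interf, g_interf, noise):
    """Received SINR of one link given a single interfering transmission."""
    return (p_signal * g_signal) / (p_interf * g_interf + noise)

def admit_d2d(p_cu, p_d2d_max, g_cu_bs, g_d2d, g_cu_to_d2d_rx, g_d2d_tx_to_bs,
              noise, sinr_cu_min, sinr_d2d_min):
    """Back off the D2D transmit power until the cellular link meets its SINR
    target; admit the pair only if the D2D link also meets its own target."""
    p = p_d2d_max
    while p > 1e-6:
        if sinr(p_cu, g_cu_bs, p, g_d2d_tx_to_bs, noise) >= sinr_cu_min:
            break
        p *= 0.8                      # reduce D2D power by 20% and retry
    else:
        return False, 0.0             # cellular link cannot be protected
    ok = sinr(p, g_d2d, p_cu, g_cu_to_d2d_rx, noise) >= sinr_d2d_min
    return ok, (p if ok else 0.0)

# Example with made-up linear-scale link gains and SINR targets:
admitted, p_d2d = admit_d2d(p_cu=0.2, p_d2d_max=0.1,
                            g_cu_bs=1e-6, g_d2d=1e-4,
                            g_cu_to_d2d_rx=1e-8, g_d2d_tx_to_bs=1e-8,
                            noise=1e-9, sinr_cu_min=4.0, sinr_d2d_min=2.0)
print("admitted:", admitted, "D2D power:", p_d2d)
```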

    Improved planning and resource management in next generation green mobile communication networks

    In the coming years, mobile communication networks will experience a disruptive reinvention through the deployment of post-5th Generation (5G) mobile networks. Profound impacts are expected on network planning processes, maintenance and operations, on mobile services, on subscribers, with major changes in their data consumption and generation behaviours, and on the devices themselves, with a myriad of different equipment communicating over such networks. Post-5G will be characterized by a profound transformation of several aspects: processes, technology, economics and society, but also environmental aspects, with energy efficiency and carbon neutrality playing an important role. It will represent a network of networks, where different types of access networks coexist, with an increasing diversity of devices, massive use of cloud computing and subscribers with unprecedented data-consumption behaviours, all at throughput and quality-of-service levels unseen in previous generations. The present research work uses the latest 5G New Radio (NR) release as a baseline, with future post-5G NR networks in focus. Two approaches were followed: i) method re-engineering, to propose new mechanisms that overcome existing or foreseeable limitations, and ii) concept design and innovation, to propose innovative methods or mechanisms that enhance the design, planning, operation, maintenance and optimization of 5G networks. Four main research areas were addressed: optimization and enhancement of future 5G NR networks, the usage of edge virtualized functions, subscriber behaviour regarding the generation of data, and a carbon sequestration model aiming to achieve carbon neutrality. Several contributions have been made and demonstrated, through models and methodologies that, in each of the research areas, provide significant improvements from the planning phase to the operational phase, always focusing on optimizing resource management. All the contributions are backward compatible with 5G NR and can also be applied to what is starting to be foreseen as future mobile networks. From the subscriber's perspective, with the ultimate goal of providing the best possible quality of experience while still considering the mobile network operator's (MNO) perspective, the different proposed approaches resulted in optimization methods for the numerous problems identified throughout the work; individually and in aggregate, they improve and enhance future mobile networks as a whole. An answer to the main question was therefore provided: how to further optimize a next-generation network, developed with optimization in mind, making it even more efficient while simultaneously becoming carbon neutral. The developed model that lets MNOs achieve carbon neutrality through CO2 sequestration, together with the subscriber behaviour model, topics that have so far received little attention, are two of the main contributions of this thesis and of utmost importance for post-5G networks.
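The carbon-neutrality goal mentioned above ultimately reduces to emissions accounting; the snippet below is a purely illustrative back-of-the-envelope estimate of the sequestration an operator's radio sites might need, with every figure (energy per site, grid carbon intensity, per-tree sequestration rate) assumed rather than taken from the thesis.

```python
# Illustrative carbon-offset accounting for a mobile network operator.
# Every number below is an assumption made for the sake of the example.

SITES = 10_000                 # number of radio sites
ENERGY_PER_SITE_KWH = 25_000   # assumed annual energy use per site (kWh)
GRID_INTENSITY = 0.25          # assumed grid carbon intensity (kg CO2 / kWh)
SEQUESTRATION_PER_TREE = 22.0  # assumed kg CO2 sequestered per tree per year

annual_emissions_kg = SITES * ENERGY_PER_SITE_KWH * GRID_INTENSITY
trees_needed = annual_emissions_kg / SEQUESTRATION_PER_TREE

print(f"Annual emissions: {annual_emissions_kg / 1e6:.1f} kt CO2")
print(f"Trees required to offset them: {trees_needed:,.0f}")
```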

    Cost based optimization for strategic mobile radio access network planning using metaheuristics

    The evolution of mobile communications over the last few decades has been driven by two main factors: the emergence of new applications and user needs, and technological progress. The services offered to mobile terminals have evolved from the classic voice and short message (SMS) services to more attractive ones, quickly adopted by end users, such as video telephony, video streaming, online gaming and mobile broadband internet access (MBAS). All of these new services became a reality thanks to technological advances such as new techniques for accessing the shared medium, new coding and modulation schemes, and multiple-antenna (MIMO) transmission and reception systems. An important aspect of this evolution was the liberalisation of the sector in the early 1990s, in which the regulatory role played by the national regulatory authorities (NRAs) has been fundamental. One of the main problems addressed by each nation's NRA is the determination of the costs of wholesale services, that is, services between mobile operators, most notably the cost of call termination or interconnection. The interconnection service makes it possible for users of different operators to communicate, and gives all users access to the full set of services, including those not provided by a particular operator, through the use of another operator's network. The main objective of this thesis is the minimisation of investment costs in network equipment, which in turn affects the setting of interconnection tariffs, as will be seen throughout this work. This objective is pursued in two parts: first, the development of a set of algorithms for the optimal dimensioning of a radio access network (RAN) for a mobile communication system; second, the design and application of optimisation algorithms for the optimal distribution of services over the set of existing mobile technologies (OSDP). The network design module provides four distinct algorithms responsible for dimensioning and planning the mobile access network. These algorithms are applied in a multi-technology environment, considering second- (2G), third- (3G) and fourth-generation (4G) systems; multi-user, taking into account different user profiles with their respective traffic loads; and multi-service, including voice, low-rate data services (64-144 kbps) and mobile broadband internet access. The second part of the thesis distributes the set of services optimally over the technologies to be deployed. The aim of this part is to make efficient use of the existing technologies, reducing investment costs in network equipment. This is possible thanks to the technological differences between the mobile systems, which make second-generation systems suitable for providing voice and short-message services, while third-generation networks perform better for data services. Finally, the mobile broadband service is native to the latest-generation networks, such as High Speed Packet Access (HSPA) and 4G.
Both modules have been applied to an extensive set of experiments for techno-economic analyses, such as the study of the performance of HSPA and 4G technologies for the provision of the mobile broadband service, and the analysis of realistic 4G deployment scenarios that will take place from next year onwards, coinciding with the auction of spectrum in the 800 MHz band. A study was also carried out on the deployment of 4G networks in the 800 MHz, 1800 MHz and 2600 MHz bands, comparing the investment costs obtained after optimisation. In all cases, the improvement in investment costs obtained after applying both modules has been demonstrated, enabling a reduction in the estimated costs of service provision. The studies in this thesis focus on Spain; however, all the implemented algorithms are applicable to any other European country, as evidenced by the fact that the network design algorithms have been used in several regulatory projects.
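To make the idea of cost-driven service-to-technology distribution (the OSDP problem) concrete, here is a minimal greedy sketch under invented per-site costs, capacities and service-to-technology compatibilities; the thesis tackles a far richer dimensioning model with metaheuristics rather than this toy rule.

```python
# Toy greedy distribution of service demand over 2G/3G/4G technologies to
# minimise equipment (site) cost. Capacities, costs and the allowed technology
# sets are invented for illustration only.
import math

# assumed cost per site and capacity per site (Mbps of busy-hour traffic)
TECH = {
    "2G": {"cost": 20_000,  "capacity": 4.0,  "services": {"voice", "sms"}},
    "3G": {"cost": 80_000,  "capacity": 10.0, "services": {"voice", "sms", "data"}},
    "4G": {"cost": 120_000, "capacity": 50.0, "services": {"data", "mbb"}},
}

demand = {"voice": 40.0, "sms": 1.0, "data": 120.0, "mbb": 400.0}  # Mbps, assumed

def cheapest_assignment(demand, tech):
    """Assign each service to the technology with the lowest cost per Mbps
    among those able to carry it, then count the sites needed per technology."""
    load = {t: 0.0 for t in tech}
    for service, mbps in demand.items():
        candidates = [t for t, spec in tech.items() if service in spec["services"]]
        best = min(candidates, key=lambda t: tech[t]["cost"] / tech[t]["capacity"])
        load[best] += mbps
    sites = {t: math.ceil(load[t] / tech[t]["capacity"]) for t in tech}
    total_cost = sum(sites[t] * tech[t]["cost"] for t in tech)
    return sites, total_cost

print(cheapest_assignment(demand, TECH))
```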

    Monitoring and testing in LTE networks: from experimental analysis to operational optimisation

    The advent of LTE and LTE-Advanced, and their integration with the existing cellular technologies GSM and UMTS, has forced mobile radio network operators to carry out meticulous test campaigns and acquire the right know-how to detect potential issues before activating new services. In this new network scenario, traffic characterisation and monitoring, as well as the configuration and on-air reliability of network equipment, are of paramount relevance in order to prevent possible pitfalls during the deployment of new services and to ensure the best possible user experience. Based on these observations, this research project offers a comprehensive study that goes from experimental analysis to operational optimization. The starting point of our work was monitoring the traffic of an already deployed eNodeB with three cells, operating in the 1800 MHz band. Through successive measurement campaigns it was possible to follow the evolution of the 4G network from the beginning of its deployment in 2012 until its full maturity in 2015. The data collected during the first year showed poor use of the LTE network, mainly due to the limited penetration of new 4G smartphones. In 2015, however, we observed a clear and decisive increase in the number of terminals using LTE, with aggregate statistics (e.g. market share of smartphone operating systems, or the percentage of video traffic) reflecting national and international trends. This important outcome testifies to the maturity of LTE technology and allows us to consider our monitored eNodeB a valuable vantage point for traffic analysis.
Hand in hand with the evolution of the infrastructure, mobile phones themselves have undergone a remarkable evolution over the past two decades, from simple devices with only voice services towards smartphones offering novel services such as mobile Internet, geolocation and maps, multimedia services, and many more. Monitoring real traffic has allowed us to study user behaviour and identify the most used services. To this aim, various software libraries for traffic analysis have been developed. In particular, we developed a C/C++ library that analyses Control Plane and User Plane traffic and provides coarse- and fine-grained statistics at flow level. Another framework/tool has been dedicated exclusively to traffic classification. Among the plethora of existing tools for traffic classification we provide our own solution, developed from scratch. The project, which is available on github, is named MOSEC, an acronym for MOdular SErvice Classifier. The modularity is given by the possibility of implementing multiple plug-ins, each of which processes the packet according to its own logic and may or may not return a packet/flow classification. A final decision strategy then classifies the various flows based on the verdicts of the individual plug-ins. Unlike previous approaches, keeping multiple classifiers together mitigates the deficiencies of each classifier (e.g. DPI does not work when packets are encrypted, and DNS queries need not be sent if name resolution is cached in device memory) and exploits their full capabilities when feasible. We validated the accuracy of MOSEC using a labelled trace synthetically created by colleagues from UPC BarcelonaTech. The results show excellent TCP-HTTP/HTTPS traffic classification capabilities, higher on average than those of other classification tools (nDPI, PACE, Layer-7), with some shortcomings in the classification of UDP traffic. The characteristics of User Plane traffic have a direct impact on the energy consumed by handset devices and an indirect impact on the Control Plane traffic that is generated. Therefore, knowledge of the statistical properties of the various flows allows us to tackle a cross-layer optimization problem: reducing the power consumption of the terminals by varying some control-plane parameters configurable on the eNodeB. It is well known that battery life is one of the major limitations of modern smartphones. In particular, the emergence of new services and applications capable of working in the background, without direct user interaction, has introduced new issues related to battery lifetime and to the signaling traffic necessary to acquire/release radio resources. Based on these observations, we conducted a thorough study of the DRX (Discontinuous Reception) mechanism, exploited by LTE to save smartphone energy when no packet is sent or received. The DRX configuration and the RRC Inactivity Timer greatly affect the energy consumed by the various devices. Depending on whether radio resources are allocated or not, the user equipment (UE) is in the RRC Connected or RRC Idle state, respectively. To evaluate the energy consumption of smartphones, an algorithm simulates the transitions between all the possible states in which a UE can be and maps a power value to each of these states.
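The plug-in design described above can be pictured with a small mock-up; the real MOSEC is a C++14 project, and the plug-ins, flow fields and decision strategy below are invented purely to illustrate the idea of independent per-plug-in verdicts merged by a final decision step.

```python
# Toy mock-up in the spirit of MOSEC's modular design (the real tool is C++14):
# each plug-in inspects a flow and may return a label or None, and a final
# decision strategy merges the per-plug-in verdicts. Everything here is
# illustrative, not MOSEC's actual interface.
from collections import Counter

class PortPlugin:
    """Naive port-based guess."""
    PORTS = {80: "http", 443: "https", 53: "dns"}
    def classify(self, flow):
        return self.PORTS.get(flow.get("dst_port"))

class SniPlugin:
    """Label TLS flows whose captured server name matches a known service."""
    def classify(self, flow):
        sni = flow.get("tls_sni", "")
        return "video" if sni.endswith("googlevideo.com") else None

def final_decision(verdicts):
    """Majority vote over the non-None verdicts (one possible strategy)."""
    votes = Counter(v for v in verdicts if v is not None)
    return votes.most_common(1)[0][0] if votes else "unknown"

plugins = [PortPlugin(), SniPlugin()]
flow = {"dst_port": 443, "tls_sni": "api.example.org"}
print(final_decision([p.classify(flow) for p in plugins]))   # -> https
```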
The transition from one state to another is governed by different timeouts that are reset every time a packet is sent or received. Using the real traffic traces, we associated a state machine with each UE to assess its energy consumption on the basis of the packets sent and received. We then repeated the simulations using values of the inactivity timer that appeared more suitable than the one currently configured on the monitored eNodeB, looking for a good trade-off between energy savings and increased signaling traffic. The results highlighted that the Inactivity Timer originally set on the eNodeB was too high and caused excessive energy consumption on the terminals. Reducing its value to 10 seconds yields energy savings of up to 50% (depending on the underlying traffic profile) without considerably increasing the control traffic. The results of the study mentioned above, however, consider neither the stress the eNodeB is subjected to by the resulting rise in signaling traffic, nor the increase in collision probability during the RACH procedure, which is needed to re-establish the radio bearer (or RRC connection) between the terminal and the eNodeB. Evaluating the performance of hardware and software systems for the fourth-generation mobile network, as well as identifying any possible weakness in the architecture, is a complex job. A possible case study is precisely to assess the robustness of the base station when it receives many RRC connection requests as an effect of a decrease in the inactivity timer. In this regard, within the Testing LAB of Telecom Italia, we used IxLoad, a product developed by Ixia, as a load generator to test the robustness of one eNodeB. The tests consisted in producing different loads of RRC requests on the radio interface, similar to those that would be produced by decreasing the inactivity timer to certain values. The statistical properties of the signaling traffic were derived from the analysis of real traffic traces. The main outcomes show that, even in the face of a high load of RRC requests, only a small fraction of procedures (less than 1% in the most unfavorable case) fails. Therefore, lowering the inactivity timer even to values below 10 seconds is not an issue for the base station. Finally, it remains to be evaluated how such a surge of RRC requests impacts user performance. If a user under coverage in RRC Idle is paged for an incoming packet or needs to send an uplink packet, a state transition from RRC Idle to RRC Connected is required. At this point, the UE initiates the random access procedure by sending a random access channel preamble (RACH preamble). When two or more users attempt simultaneously to access the RACH channel using the same preamble, the eNodeB may not be able to decode the preamble. If the two signals interfere constructively, both users receive the same resources for transmitting the RRC Request message; at this point the eNodeB can detect the collision and will not send any acknowledgment, forcing both users to restart the procedure from the beginning. We have proposed an analytical model to calculate the collision probability as a function of the number of users and the offered traffic load, when the interarrival time between successive requests is modeled with hyper-exponential distributions.
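A toy version of the per-UE energy bookkeeping described here could look as follows; the power levels, the single CONNECTED/IDLE split and the packet trace are all assumed placeholders (the thesis model includes the DRX sub-states and measured power figures), but the sketch shows why a shorter inactivity timer saves energy.

```python
# Toy per-UE energy estimator driven by packet timestamps: the UE is assumed to
# stay in a high-power CONNECTED state until the inactivity timer expires, then
# drop to a low-power IDLE state. Power values and the trace are placeholders.

POWER_W = {"CONNECTED": 1.2, "IDLE": 0.02}   # assumed average power per state

def energy_joules(packet_times, inactivity_timer=10.0, horizon=None):
    """Accumulate energy over the observation window given the sorted
    timestamps (seconds) of packets sent or received by the UE."""
    if not packet_times:
        return 0.0
    horizon = horizon if horizon is not None else packet_times[-1] + inactivity_timer
    energy, t = 0.0, packet_times[0]
    for nxt in packet_times[1:] + [horizon]:
        gap = nxt - t
        connected = min(gap, inactivity_timer)       # time before timer expiry
        energy += connected * POWER_W["CONNECTED"]
        energy += max(gap - inactivity_timer, 0.0) * POWER_W["IDLE"]
        t = nxt
    return energy

trace = [0.0, 2.0, 3.5, 40.0, 41.0]   # made-up packet timestamps for one UE
for timer in (60.0, 10.0):
    print(f"Inactivity timer {timer:>4.0f} s -> {energy_joules(trace, timer):6.1f} J")
```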
In addition, we investigated the performance of Machine-to-Machine (M2M) and Human-to-Human (H2H) communications, evaluating, as the number of available preambles varies, the RACH collision probability, the probability of correct transmission considering both the backoff time and the maximum number of allowed retransmissions, and the average time required to establish a radio bearer with the access network. The results, considered as a whole, made it possible to formulate guidelines for properly splitting the preambles between M2M and H2H communications.
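For intuition, the preamble-collision probability in a single random-access opportunity can be approximated with elementary combinatorics; this simplified model, with independent uniform preamble choices and an assumed 54 contention-based preambles, is not the hyper-exponential analytical model of the thesis, but it conveys the trend.

```python
# Simplified RACH collision model: N contending users each pick one of P
# preambles uniformly at random in the same random-access opportunity; a tagged
# user collides if any other user picks the same preamble. P = 54 assumes the
# common split of 54 contention-based preambles out of LTE's 64. This is a
# back-of-the-envelope check, not the thesis' hyper-exponential model.

def collision_probability(n_users: int, n_preambles: int = 54) -> float:
    return 1.0 - (1.0 - 1.0 / n_preambles) ** (n_users - 1)

for n in (2, 10, 30, 60):
    print(f"{n:>3} contending users -> P(collision) = {collision_probability(n):.3f}")
```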

    An Innovative RAN Architecture for Emerging Heterogeneous Networks: The Road to the 5G Era

    The global demand for mobile-broadband data services has experienced phenomenal growth over the last few years, driven by the rapid proliferation of smart devices such as smartphones and tablets. This growth is expected to continue unabated, as mobile data traffic is predicted to grow anywhere from 20 to 50 times over the next 5 years. Exacerbating the problem is that such an unprecedented surge in smartphone usage, which is characterized by frequent short on/off connections and mobility, generates a heavy signaling traffic load in the network (signaling storms). This consumes a disproportionate amount of network resources, compromising network throughput and efficiency, and in extreme cases can cause Third-Generation (3G) or 4G (Long-Term Evolution (LTE) and LTE-Advanced (LTE-A)) cellular networks to crash. As the conventional approaches of improving spectral efficiency and/or allocating additional spectrum are fast approaching their theoretical limits, there is a growing consensus that current 3G and 4G (LTE/LTE-A) cellular radio access technologies (RATs) won't be able to meet the anticipated growth in mobile traffic demand. To address these challenges, the wireless industry and standardization bodies have initiated a roadmap for the transition from 4G to 5G cellular technology, with a key objective of increasing capacity by 1000x by 2020. Even though the technology hasn't been invented yet, the hype around 5G networks has begun to bubble. The emerging consensus is that 5G is not a single technology, but rather a synergistic collection of interworking technical innovations and solutions that collectively address the challenge of traffic growth. The core ingredients widely considered the key enabling technologies of the envisioned 5G era, listed in order of importance, are: 1) heterogeneous networks (HetNets); 2) flexible backhauling; 3) efficient traffic offload techniques; and 4) Self-Organizing Networks (SONs). The anticipated solutions delivered by efficient interworking/integration of these enabling technologies are not simply about throwing more resources and/or spectrum at the challenge. The envisioned solution requires radically different cellular RAN and mobile core architectures that efficiently and cost-effectively deploy and manage radio resources, as well as offload mobile traffic from the overloaded core network. The main objective of this thesis is to address the key techno-economic challenges facing the transition from current Fourth-Generation (4G) cellular technology to the 5G era, by proposing a novel, high-risk, revolutionary direction for the design and implementation of the envisioned 5G cellular networks. The ultimate goal is to explore the potential and viability of cost-effectively meeting the 1000x capacity challenge while continuing to provide an adequate mobile broadband experience to users.
Specifically, this work proposes and devises a novel PON-based HetNet mobile backhaul RAN architecture that: 1) holistically addresses the key techno-economic hurdles facing the implementation of the envisioned 5G cellular technology, specifically the backhauling and signaling challenges; and 2) enables, for the first time to the best of our knowledge, support for efficient, ground-breaking mobile data and signaling offload techniques, which significantly enhance the performance of both the HetNet-based RAN and LTE-A's core network (Evolved Packet Core (EPC) per the 3GPP standard), ensure that core network equipment is used more productively, and moderate the evolving 5G signaling growth and optimize its impact. To address the backhauling challenge, we propose a cost-effective fiber-based small cell backhaul infrastructure, which leverages the existing fibered and powered facilities associated with a PON-based fiber-to-the-Node/Home (FTTN/FTTH) residential access network. Owing to the sharing of existing valuable fiber assets, the proposed PON-based backhaul architecture, in which the small cells are collocated with existing FTTN remote terminals (optical network units (ONUs)), is much more economical than conventional point-to-point (PTP) fiber backhaul designs. A fully distributed ring-based EPON architecture is utilized here as the fiber-based HetNet backhaul. The techno-economic merits of the proposed PON-based FTTx access HetNet RAN architecture versus a traditional 4G LTE-A RAN are thoroughly examined and quantified. Specifically, we quantify the techno-economic merits of the proposed PON-based HetNet backhaul by comparing its performance against a conventional fiber-based PTP backhaul architecture as a benchmark. It is shown that the purposely selected ring-based PON architecture, along with the supporting distributed control plane, enables the proposed PON-based FTTx RAN architecture to support several salient networking features that collectively and significantly enhance the overall performance of both the HetNet-based RAN and the 4G LTE-A core (EPC) compared to the typical fiber-based PTP backhaul architecture, in terms of handoff capability, signaling overhead, overall network throughput and latency, and QoS support. It is also shown that the proposed HetNet-based RAN architecture is not only capable of providing the typical macro-cell offloading gain (RAN gain) but can also provide a ground-breaking EPC offloading gain. The simulation results indicate that the overall capacity of the proposed HetNet scales with the number of deployed small cells, thanks to LTE-A's advanced interference management techniques. For example, if there are 10 deployed outdoor small cells for every macrocell in the network, the overall capacity gain will be approximately 10-11x over a macro-only network. To reach the 1000x capacity goal, numerous small cells including 3G, 4G and WiFi (femtos, picos, metros, relays, remote radio heads, distributed antenna systems) need to be deployed indoors and outdoors, at all possible venues (residences and enterprises).
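The 1000x target discussed above is essentially a product of independent gains; the snippet below is a hedged back-of-the-envelope decomposition in which only the roughly 10x small-cell densification figure echoes the abstract, while the spectrum and spectral-efficiency factors are assumed for illustration.

```python
# Back-of-the-envelope decomposition of a capacity target into multiplicative
# factors. Only the ~10x densification gain echoes the abstract; the spectrum
# and spectral-efficiency factors are assumed purely for illustration.

factors = {
    "small-cell densification (10 cells/macro)": 10.0,  # per the abstract
    "additional spectrum (assumed)": 10.0,
    "spectral-efficiency improvements (assumed)": 10.0,
}

total = 1.0
for name, gain in factors.items():
    total *= gain
    print(f"{name:<45} x{gain:>4.0f}  cumulative x{total:>6.0f}")

print(f"Target met? {'yes' if total >= 1000 else 'no'} (goal: x1000)")
```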

    Planning and optimisation of 4G/5G mobile networks and beyond

    As mobile networks continue to evolve, two major problems have always existed that greatly affect the quality of service that users experience: (1) efficient resource management for users at the edge of the network and those in a network coverage hole, and (2) network coverage planning that improves the quality of service for users while keeping the cost of deployment very low. In this study, two novel algorithms, the Collaborative Resource Allocation Algorithm and the Memetic-Bee-Swarm Site Location-Allocation Algorithm, are proposed to solve these problems. The Collaborative Resource Allocation Algorithm (CRAA) is inspired by the lending and welfare systems of political economy and is developed as a market game. The CRAA allows cell-edge users and users with less than the required Signal-to-Interference-plus-Noise Ratio (SINR) to collaborate through coalition formation and transmit at a satisfactory quality of service; the resulting payoff is distributed using the Shapley value, computed via Owen's Multilinear Extension function. The Memetic-Bee-Swarm Site Location-Allocation Algorithm (MBSSLAA) combines the behaviour of the memetic algorithm and the bee swarm algorithm for site location. A series of system-level simulations and numerical evaluations was run to evaluate the performance of the algorithms. The results show that the CRAA outperforms two popular Long Term Evolution-Advanced algorithms when assessed on throughput, spectral efficiency and fairness. Results from the simulation of the MBSSLAA using realistic network design parameter values show significantly higher performance for users in the coverage region of interest and underline the importance of ultra-dense small-cell networks for future telecommunications services supporting the Internet of Things. Compared with existing solutions in the literature, the proposed algorithms give higher performance on these problems. On the performance scale, the CRAA achieved an average improvement of 30% in throughput and spectral efficiency for the users of the network. The results also show that the MBSSLAA can reduce the number of small cells in an ultra-dense small-cell network while providing the requisite high data coverage, and that this can be achieved while maintaining high SINR values and throughput for the users, giving them a satisfactory level of quality of service, which is a significant requirement in the Fifth Generation network specifications.
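Since the CRAA distributes coalition payoffs via the Shapley value, a compact illustration of the underlying computation may help; this is a brute-force permutation-based Shapley calculation over an invented three-user throughput game, not the Owen Multilinear Extension route used in the thesis.

```python
# Direct Shapley-value computation for a toy 3-user coalition game where the
# characteristic function v() gives the throughput (Mbps) a coalition can
# achieve. The game values are invented for illustration.
from itertools import permutations

v = {                      # assumed coalition throughputs (Mbps)
    (): 0.0,
    ("A",): 4.0, ("B",): 3.0, ("C",): 1.0,
    ("A", "B"): 9.0, ("A", "C"): 6.5, ("B", "C"): 5.0,
    ("A", "B", "C"): 12.0,
}

def value(coalition):
    return v[tuple(sorted(coalition))]

def shapley(players):
    """Average marginal contribution of each player over all join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            phi[p] += value(coalition + [p]) - value(coalition)
            coalition.append(p)
    return {p: phi[p] / len(orders) for p in players}

print(shapley(["A", "B", "C"]))   # per-user share of the 12 Mbps total
```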

    Access network selection schemes for multiple calls in next generation wireless networks

    There is an increasing demand for internet services by mobile subscribers over wireless access networks with limited radio resources and capacity constraints. A viable solution to this capacity crunch is the deployment of heterogeneous networks. However, in this wireless environment, the choice of the most appropriate Radio Access Technology (RAT) that can sustain or meet the quality of service (QoS) requirements of users' applications requires careful planning and cost-efficient radio resource management methods. Previous research on access network selection has focused on selecting a suitable RAT for a user's single call request. With the present demand for multiple calls over wireless access networks, where each call has different QoS requirements and the available networks exhibit dynamic channel conditions, choosing a suitable RAT capable of providing the "Always Best Connected" (ABC) experience for the user becomes a challenge. In this thesis, the problem of selecting a suitable RAT capable of meeting the QoS requirements of multiple call requests by mobile users in access networks is investigated. In addressing this problem, we propose the use of COmplex PRoportional ASsessment (COPRAS) and consensus-based Multi-Attribute Group Decision Making (MAGDM) techniques as novel and viable RAT selection methods for a grouped multiple call. The performance of the proposed COPRAS multi-attribute decision-making approach to RAT selection for a grouped call has been evaluated through simulations in different network scenarios. The results show that the COPRAS method, which is simple and flexible, is more efficient in selecting an appropriate RAT for grouped multiple calls. The COPRAS method reduces handoff frequency and is computationally inexpensive compared with other methods such as the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), Simple Additive Weighting (SAW) and Multiplicative Exponent Weighting (MEW). The application of the proposed consensus-based algorithm to the selection of a suitable RAT for grouped multiple calls comprising voice, video streaming and file downloading has been intensively investigated. This algorithm aggregates the QoS requirements of the individual applications into a collective QoS for the group of calls. This novel approach to RAT selection for a grouped call measures and compares the consensus degree of the collective solution and the individual solutions against a predefined threshold value. Using the methods of coincidence among preferences and coincidence among solutions, with a predefined consensus threshold of 0.9, we evaluated the performance of the consensus-based RAT selection scheme through simulations under different network scenarios. The obtained results show that both methods of coincidence can select the most suitable RAT for a group of multiple calls. However, the method of coincidence among solutions achieves better accuracy, is less complex, and requires fewer iterations to reach the predefined consensus threshold. A utility-based RAT selection method for parallel traffic streaming in an overlapped heterogeneous wireless network has also been developed. The RAT selection method was modeled with constraints on terminal battery power, service cost and network congestion, to select a specified number of RATs that optimizes the terminal interface utility.
The results show an optimum RAT selection strategy that maximizes the terminal utility and selects the best RAT combinations for the user's parallel streaming of voice, video and file download.
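COPRAS itself is a standard multi-attribute ranking procedure; the self-contained sketch below applies it to three candidate RATs with invented attribute values and weights (not the criteria or data used in the thesis), ranking alternatives by their utility degree.

```python
# Minimal COPRAS (COmplex PRoportional ASsessment) ranking of candidate RATs.
# Attribute values and weights are invented; "benefit" attributes are better
# when larger, "cost" attributes are better when smaller.

RATS = ["LTE", "UMTS", "WLAN"]
#              bandwidth  delay  cost  (assumed units: Mbps, ms, monetary)
MATRIX = {"LTE":  [50.0, 30.0, 0.8],
          "UMTS": [10.0, 60.0, 0.5],
          "WLAN": [30.0, 40.0, 0.2]}
WEIGHTS = [0.5, 0.3, 0.2]
BENEFIT = [True, False, False]   # bandwidth is a benefit; delay and cost are costs

def copras(matrix, weights, benefit):
    names = list(matrix)
    cols = list(zip(*matrix.values()))
    sums = [sum(c) for c in cols]
    # sum-normalise and weight each column
    norm = {n: [weights[j] * matrix[n][j] / sums[j] for j in range(len(weights))]
            for n in names}
    s_plus = {n: sum(v for v, b in zip(norm[n], benefit) if b) for n in names}
    s_minus = {n: sum(v for v, b in zip(norm[n], benefit) if not b) for n in names}
    total_minus = sum(s_minus.values())
    inv_sum = sum(1.0 / s_minus[n] for n in names)
    q = {n: s_plus[n] + total_minus / (s_minus[n] * inv_sum) for n in names}
    q_max = max(q.values())
    return sorted(((n, 100.0 * q[n] / q_max) for n in names), key=lambda x: -x[1])

for rat, utility in copras(MATRIX, WEIGHTS, BENEFIT):
    print(f"{rat:>5}: utility degree {utility:5.1f}%")
```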

    Resource Allocation in LTE Advanced for QoS and Energy Efficiency

    Long Term Evolution (LTE) and LTE-Advanced (LTE-A) are establishing themselves as the new standard of 4G cellular networks in Europe and in several other parts of the world. Their success will largely depend on their ability to support Quality of Service for different types of users at reasonable costs. The quality of service will depend on how effectively the cell bandwidth is shared among the users. The cost will depend, among many other factors, on how effectively we exploit the cell capacity: being able to exploit bandwidth efficiently postpones the time when network upgrades are required. On the other hand, operating costs also depend on the energy efficiency of the cellular network, which should avoid wasting power when few users are connected. As for bandwidth efficiency, the recent LTE/LTE-A standards introduced MIMO (Multiple Input Multiple Output) transmission modes, which allow both reliability and efficiency to be increased; MIMO can increase the throughput significantly. In a MIMO system, the selection of the MIMO transmission mode (whether Transmission Diversity, Spatial Multiplexing, or Multi-User MIMO) plays a key role in determining the achievable rate and the error probability experienced by the users. MIMO-unaware scheduling policies, which neglect the transmission mode selection problem, do not perform well under MIMO. In the current literature, few MIMO-aware LTE-A scheduling policies have been designed, and despite being proposed for LTE-A, these solutions do not take into account some constraints inherent to LTE-A, hence leading to infeasible allocations. In this work, we propose a new framework for Transmission Mode Selection and Frequency-Domain Packet Scheduling that is compliant with the constraints of the LTE-A standard. The resource allocation framework accommodates real-time requirements and fairness on demand, while the bulk of the resources are allocated in an opportunistic fashion, i.e. so as to maximize the cell throughput. Our results show that our proposal provides real-time connections with the desired level of QoS without unduly sacrificing the cell throughput. As far as energy efficiency is concerned, we studied the problem of minimizing the RF power used by the eNodeB while maintaining the same level of service for the users. We devised a provisioning framework that exploits the Multicast/Broadcast over a Single Frequency Network (MBSFN) mechanism to deactivate the eNodeB in some Transmission Time Intervals (TTIs), and computes the minimum-power activation required to guarantee a given level of service. Our results show that the provisioning framework is stable and allows significant savings with respect to an always-on policy, with marginal impact on the latency experienced by the users.
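As a minimal sketch of the opportunistic part of such a frequency-domain packet scheduler, the toy below assigns each resource block to the user with the highest estimated rate on it, using made-up per-user, per-RB rates; the thesis framework additionally handles MIMO transmission-mode selection, QoS and the LTE-A allocation constraints that this toy ignores.

```python
# Toy opportunistic frequency-domain packet scheduler: each resource block (RB)
# in the TTI goes to the user with the highest estimated achievable rate on it.
# Rates are made-up numbers; a real scheduler also honours LTE-A allocation
# constraints, QoS targets and MIMO transmission-mode selection.

# assumed achievable rate (bits per RB) per user on each of 6 RBs
rates = {
    "UE1": [120, 300,  80, 150, 200,  90],
    "UE2": [200, 100, 250,  60, 180, 220],
    "UE3": [ 90, 150, 300, 310,  70, 100],
}

def opportunistic_schedule(rates):
    """Return {rb_index: (user, rate)} maximising per-RB throughput."""
    n_rb = len(next(iter(rates.values())))
    allocation = {}
    for rb in range(n_rb):
        user = max(rates, key=lambda u: rates[u][rb])
        allocation[rb] = (user, rates[user][rb])
    return allocation

alloc = opportunistic_schedule(rates)
for rb, (user, rate) in alloc.items():
    print(f"RB {rb}: {user} ({rate} bits)")
print("Cell throughput this TTI:", sum(r for _, r in alloc.values()), "bits")
```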