
    An open source multi-slice cell capacity framework

    Special issue featuring the best papers of 2021.

    5G is the new 3GPP technology designed to meet a wide range of requirements. On the one hand, it must be able to support high bit rates and ultra-low-latency services; on the other hand, it should be able to connect a massive number of devices with loose bandwidth and delay requirements. Network Slicing is a key paradigm in 5G, and future 6G networks will inherit it for the concurrent provisioning of diverse quality of service. As scheduling has always been a sensitive, vendor-specific topic and there are few free and complete simulation tools supporting all 5G features, in this paper we present Py5cheSim: a flexible, open-source simulator based on Python and specifically oriented to simulating cell capacity in 3GPP 5G networks and beyond. To the best of our knowledge, Py5cheSim is the first simulator that supports Network Slicing at the Radio Access Network level. It offers an environment that allows new scheduling algorithms to be developed in a researcher-friendly way, without the need for detailed knowledge of the core of the tool. The present work describes its design and implementation choices, the validation process, the results and different use cases.

    Project FVF-2021-128, DICYT. Fondo Carlos Vaz Ferreira, 2021 call, Dirección Nacional de Innovación, Ciencia y Tecnología, Ministerio de Educación y Cultura, Uruguay. Project FMV_1_2019_1_155700, "Inteligencia Artificial aplicada a redes 5G" (Artificial Intelligence applied to 5G networks), Agencia Nacional de Investigación e Innovación, Uruguay.
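    The plug-in scheduler idea described above can be pictured with a short Python sketch. The class names and the assign() interface below are illustrative assumptions, not Py5cheSim's actual API: the point is only that a new scheduling algorithm reduces to a small object that maps a slice's resource-block budget onto its active users each TTI.

```python
# Minimal sketch of a pluggable per-slice scheduler (hypothetical interface,
# not Py5cheSim's actual API). A scheduler receives the slice's PRB budget and
# its active users for the current TTI and returns a {ue_id: num_prbs} grant map.

from dataclasses import dataclass


@dataclass
class UE:
    ue_id: int
    buffer_bytes: int  # pending downlink data


@dataclass
class RoundRobinScheduler:
    """Spreads the slice's PRBs evenly over users that have data to send."""
    last_served: int = -1

    def assign(self, ues: list[UE], prb_budget: int) -> dict[int, int]:
        backlogged = [u for u in ues if u.buffer_bytes > 0]
        if not backlogged or prb_budget <= 0:
            return {}
        grants = {u.ue_id: prb_budget // len(backlogged) for u in backlogged}
        # Hand leftover PRBs to the users following the last one served.
        leftover = prb_budget - sum(grants.values())
        order = sorted(backlogged, key=lambda u: u.ue_id)
        start = next((i for i, u in enumerate(order) if u.ue_id > self.last_served), 0)
        for k in range(leftover):
            u = order[(start + k) % len(order)]
            grants[u.ue_id] += 1
            self.last_served = u.ue_id
        return grants


# One TTI: 10 PRBs shared by the three backlogged users.
sched = RoundRobinScheduler()
print(sched.assign([UE(1, 1200), UE(2, 300), UE(3, 0), UE(4, 5000)], prb_budget=10))
```

    A different algorithm (proportional fair, delay-aware, learning-based) would only swap the body of assign() in such a design.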

    Scheduling in 5G networks: Developing a 5G cell capacity simulator

    The fifth generation of mobile communications (5G) is becoming a reality thanks to the new 3GPP (3rd Generation Partnership Project) technology designed to meet a wide range of requirements. On the one hand, it must be able to support high bit rates and ultra-low-latency services; on the other hand, it should be able to connect a massive number of devices with loose bandwidth and delay requirements. Such diversity in service requirements demands a high degree of flexibility in the radio interface design. As LTE (Long Term Evolution) was originally designed with the evolution of Mobile Broadband (MBB) services in mind, it does not provide enough flexibility to optimally multiplex the different types of services envisioned by 5G: no single radio interface configuration fits all the different service requirements. As a consequence, 5G networks are being designed to support different radio interface configurations and mechanisms to multiplex these differently configured services over the same available spectrum. This concept is known as Network Slicing, a key 5G feature which must be supported end to end in the network (Radio Access, Transport and Core Network). In this way, 5G Radio Access Networks (RAN) add the problem of allocating resources to different services on top of the traditional problem of allocating resources to users. In this context, since the standard leaves the scheduling of both users and services to vendor implementation, an extensive field of research is open. Different simulation tools have been developed for research purposes over the last years. However, since not many of them are free and easy to use, and none of the available ones supports Network Slicing at the RAN level, this work presents a new simulator as its main contribution. Py5cheSim is a simple, flexible and open-source simulator based on Python and specially oriented to testing different scheduling algorithms for the different types of 5G services through a simple implementation of the RAN Slicing feature. Its architecture allows new scheduling algorithms to be developed and integrated in an easy and straightforward way. Furthermore, the use of Python provides enough versatility to even use Machine Learning tools for the development of new scheduling algorithms. The present work introduces the main 5G RAN design concepts that were taken as a baseline to develop the simulation tool. It also describes its design and implementation choices, followed by the executed validation tests and their main results. Additionally, this work presents a few use case examples to show the developed tool's potential, providing a first analysis of traditional scheduling algorithms for the new types of services envisioned by the technology. Finally, it draws conclusions about the tool's contribution and the example results, along with possible research lines and improvements for future versions.
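    Since RAN Slicing adds a service-level allocation on top of the traditional per-user one, the problem can be pictured as a two-level scheduler: the cell's PRBs are first partitioned among slices, and each slice then schedules its own users. The sketch below is a generic illustration of that split under assumed slice weights; it does not reproduce the thesis' implementation.

```python
# Illustrative inter-slice PRB split (largest-remainder rounding). The slice
# names and weights are assumptions for the example, not values from the thesis.

def split_prbs_among_slices(total_prbs: int, weights: dict[str, float]) -> dict[str, int]:
    """Weighted split of the cell's PRBs across slices."""
    total_w = sum(weights.values())
    raw = {s: total_prbs * w / total_w for s, w in weights.items()}
    grants = {s: int(r) for s, r in raw.items()}
    leftover = total_prbs - sum(grants.values())
    # Hand remaining PRBs to the slices with the largest fractional remainders.
    for s in sorted(raw, key=lambda name: raw[name] - grants[name], reverse=True)[:leftover]:
        grants[s] += 1
    return grants


# Example: eMBB, URLLC and mMTC slices sharing a 100-PRB cell.
slice_prbs = split_prbs_among_slices(100, {"eMBB": 0.6, "URLLC": 0.3, "mMTC": 0.1})
print(slice_prbs)  # {'eMBB': 60, 'URLLC': 30, 'mMTC': 10}
# Each slice would then run its own intra-slice scheduler (round robin,
# proportional fair, ...) over its share of the PRBs.
```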

    End-to-End Simulation of 5G mmWave Networks

    Due to its potential for multi-gigabit and low latency wireless links, millimeter wave (mmWave) technology is expected to play a central role in 5th generation cellular systems. While there has been considerable progress in understanding the mmWave physical layer, innovations will be required at all layers of the protocol stack, in both the access and the core network. Discrete-event network simulation is essential for end-to-end, cross-layer research and development. This paper provides a tutorial on a recently developed full-stack mmWave module integrated into the widely used open-source ns-3 simulator. The module includes a number of detailed statistical channel models as well as the ability to incorporate real measurements or ray-tracing data. The Physical (PHY) and Medium Access Control (MAC) layers are modular and highly customizable, making it easy to integrate algorithms or compare Orthogonal Frequency Division Multiplexing (OFDM) numerologies, for example. The module is interfaced with the core network of the ns-3 Long Term Evolution (LTE) module for full-stack simulations of end-to-end connectivity, and advanced architectural features, such as dual-connectivity, are also available. To facilitate the understanding of the module, and verify its correct functioning, we provide several examples that show the performance of the custom mmWave stack as well as custom congestion control algorithms designed specifically for efficient utilization of the mmWave channel.

    Comment: 25 pages, 16 figures, submitted to IEEE Communications Surveys and Tutorials (revised Jan. 2018).
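    One of the use cases mentioned above is comparing OFDM numerologies. As a point of reference, 3GPP NR (TS 38.211) ties subcarrier spacing and slot duration to a single numerology index mu; the short snippet below merely tabulates those standard relations and is not part of the ns-3 mmWave module.

```python
# 5G NR numerology relations (3GPP TS 38.211): subcarrier spacing is
# 15 kHz * 2^mu and a 14-symbol slot shrinks accordingly.

for mu in range(5):                      # mu = 0..4 covers 15 kHz up to 240 kHz
    scs_khz = 15 * 2 ** mu               # subcarrier spacing
    slot_ms = 1.0 / 2 ** mu              # 2^mu slots per 1 ms subframe
    symbol_us = slot_ms * 1000 / 14      # normal cyclic prefix: 14 symbols per slot
    print(f"mu={mu}: {scs_khz:>3} kHz SCS, slot {slot_ms:.4f} ms, symbol ~{symbol_us:.1f} us")
```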

    Investigation of delay jitter of heterogeneous traffic in broadband networks

    Scope and Methodology of Study: A critical challenge for both wired and wireless networking vendors and carrier companies is to accurately estimate the quality of service (QoS) that will be provided, based on the network architecture, router/switch topology, and protocol applied. This thesis therefore focuses on the theoretical analysis of QoS parameters, in terms of inter-arrival jitter, in differentiated services networks, using analytic/mathematical modeling techniques and queueing theory; the analytic model is expressed as a set of equations that can be solved to yield the desired delay jitter parameter. In wireless networks with homogeneous traffic, the effect of the ARQ priority control scheme on delay jitter is evaluated for two cases: 1) the ARQ traffic has priority over the original transmission traffic; and 2) the ARQ traffic has no priority over the original transmission traffic. In wired broadband networks with heterogeneous traffic, the jitter analysis is conducted and an algorithm to control its effect is developed.

    Findings and Conclusions: First, the results show that high-priority packets always maintain the minimum inter-arrival jitter, which is not affected even under heavy load. Second, Gaussian traffic modeling with the MVA approach is applied to the queue length analysis, and the jitter analysis in heterogeneous broadband networks is then investigated; for wireless networks with homogeneous traffic, a binomial distribution is used for the queue length analysis, which is sufficient and relatively simple compared to the heterogeneous case. Third, a service discipline called tagged-stream adaptive distortion-reducing peak-output-rate enforcing is developed to keep the delay jitter in heterogeneous broadband networks from growing without bound. Finally, the analysis shows that differentiated services are not only viable but also effective in controlling delay jitter. The analytic models serve as guidelines to assist network system designers in controlling the QoS requested by customers in terms of delay jitter.
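    The first finding, that high-priority traffic keeps its delay jitter low even under heavy load, can be reproduced qualitatively with a toy strict-priority queue simulation. The sketch below uses arbitrary arrival and service rates and is only an illustration of the priority effect, not the thesis' analytic model.

```python
# Toy non-preemptive strict-priority M/M/1 queue with two traffic classes.
# All rates are arbitrary; priority 0 is the high-priority class.
import heapq
import random
import statistics

random.seed(1)

def simulate(lam_hi=0.3, lam_lo=0.8, mu=1.25, n=20000):
    arrivals = []                                    # (arrival time, priority)
    for prio, lam in ((0, lam_hi), (1, lam_lo)):
        t = 0.0
        for _ in range(n):
            t += random.expovariate(lam)
            arrivals.append((t, prio))
    arrivals.sort()

    queue, delays = [], {0: [], 1: []}
    server_free_at, i = 0.0, 0
    while i < len(arrivals) or queue:
        # Admit every packet that arrived before the server becomes free.
        while i < len(arrivals) and (arrivals[i][0] <= server_free_at or not queue):
            t, prio = arrivals[i]
            heapq.heappush(queue, (prio, t))
            i += 1
        prio, t = heapq.heappop(queue)               # highest-priority waiting packet
        start = max(server_free_at, t)
        server_free_at = start + random.expovariate(mu)
        delays[prio].append(server_free_at - t)      # sojourn time

    for prio, label in ((0, "high"), (1, "low")):
        print(f"{label:>4} priority: mean delay {statistics.mean(delays[prio]):.2f}, "
              f"delay jitter (std) {statistics.stdev(delays[prio]):.2f}")

simulate()
```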

    Cross-layer RaCM design for vertically integrated wireless networks

    Includes bibliographical references (p. 70-74).

    Wireless local and metropolitan area network (WLAN/WMAN) technologies, more specifically IEEE 802.11 (wireless fidelity, WiFi) and IEEE 802.16 (worldwide interoperability for microwave access, WiMAX), are well suited to enterprise networking, since wireless offers the advantage of rapid deployment in places that are difficult to wire. However, these networking standards are relatively young compared with their mature, high-speed, low-latency fixed-line counterparts, and it is more challenging for the network provider to supply the quality of service (QoS) needed to support the variety of existing multimedia services over wireless technology. Wireless communication is also inherently unreliable, which makes provisioning the agreed QoS even more challenging. Weighing these advantages and disadvantages, wireless networks prove well suited to connecting rural areas to the Internet or as a networking solution for areas that are difficult to wire. The focus of this study pertains specifically to IEEE 802.16 and the part it plays in an IEEE vertically integrated wireless Internet (WIN): IEEE 802.16 is a wireless broadband backhaul technology, capable of connecting local area networks (LANs), wireless or fixed-line, to the Internet via a high-speed fixed-line link.

    Performance issues in optical burst/packet switching

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-01524-3_8

    This chapter summarises the activities on optical packet switching (OPS) and optical burst switching (OBS) carried out by the COST 291 partners over the last four years. It consists of an introduction, five sections with contributions on five different specific topics, and a final section dedicated to the conclusions. Each section contains an introductory state-of-the-art description of the specific topic and at least one contribution on that topic. The conclusions give some points on the current situation of the OPS/OBS paradigms.

    Final report on the evaluation of RRM/CRRM algorithms

    Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15, evolved and refined in an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14.

    Preprint

    Radio Resource Management Optimization For Next Generation Wireless Networks

    The prominent versatility of today's mobile broadband services and the rapid advancements in the cellular phone industry have led to a tremendous expansion in the wireless market volume. Despite continuous progress in radio-access technologies to cope with that expansion, many challenges remain to be addressed by both the research and industrial sectors. One of them is the efficient allocation and management of wireless network resources when using the latest cellular radio technologies (e.g., 4G). The importance of the problem stems from the scarcity of wireless spectral resources, the large number of users sharing these resources, the dynamic behavior of the generated traffic, and the stochastic nature of wireless channels. These limitations are further tightened by the provider's commitment to high quality-of-service (QoS) levels, especially data rate, delay and delay jitter, in addition to the system's spectral and energy efficiency. In this dissertation, we address this problem by presenting novel cross-layer resource allocation schemes that balance efficient utilization of the available resources against QoS requirements, using various optimization techniques. The main objective of this dissertation is to propose a new predictive resource allocation methodology using an agile ray tracing (RT) channel prediction approach. It is divided into two parts. The first part deals with the theoretical and implementation aspects of the ray tracing prediction model and its validation. In the second part, a novel RT-based scheduling system within the evolving cloud radio access network (C-RAN) architecture is proposed. The impact of the proposed model on addressing the limitations of long term evolution (LTE) networks is then rigorously investigated in the form of optimization problems. The main contributions of this dissertation encompass the design of several heuristic solutions based on our novel RT-based scheduling model, developed to meet the aforementioned objectives while considering the co-existing limitations in the context of LTE networks. Both analytical and numerical methods are used within this thesis framework, and theoretical results are validated with numerical simulations. The obtained results demonstrate the effectiveness of the proposed solutions in meeting the objectives subject to the stated limitations and constraints, compared to other published works.
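    To make the notion of prediction-driven scheduling concrete, the sketch below runs one proportional-fair allocation pass over a hypothetical matrix of predicted per-resource-block rates, standing in for rates derived from ray-tracing channel predictions. It is a generic channel-aware scheduler under assumed parameters, not the dissertation's RT-based algorithm.

```python
# Generic channel-aware proportional-fair allocation pass. The predicted_rate
# matrix is a hypothetical stand-in for per-RB rates derived from ray-tracing
# channel predictions; this is not the dissertation's RT-based algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_rbs = 4, 12
predicted_rate = rng.uniform(0.2, 2.0, size=(n_users, n_rbs))  # assumed Mbit/s per RB
avg_throughput = np.full(n_users, 0.5)                          # long-term averages

allocation = np.empty(n_rbs, dtype=int)
beta = 0.1                                                      # averaging factor
for rb in range(n_rbs):
    # PF metric: predicted instantaneous rate over long-term average throughput.
    user = int(np.argmax(predicted_rate[:, rb] / avg_throughput))
    allocation[rb] = user
    # Exponential moving average update; only the served user gains throughput.
    avg_throughput *= (1 - beta)
    avg_throughput[user] += beta * predicted_rate[user, rb]

print("RB -> user:", allocation.tolist())
```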

    Renegotiation based dynamic bandwidth allocation for self-similar VBR traffic

    The provision of QoS to application traffic depends heavily on how different traffic types are categorized and classified, and on how the prioritization of these applications is managed. Bandwidth is the scarcest network resource; therefore, a method is needed that distributes the available bandwidth in a network among different applications in such a way that each class or type of traffic receives its required QoS. In this dissertation, a new renegotiation-based dynamic resource allocation method for variable bit rate (VBR) traffic is presented. First, the pros and cons of available off-line methods used to estimate the self-similarity level (represented by the Hurst parameter) of a VBR traffic trace are empirically investigated, and criteria to select measurement parameters for online resource management are developed. It is shown that wavelet analysis based methods are the strongest tools for estimating the Hurst parameter, given their low computational complexity compared to the variance-time method and the R/S pox plot. Therefore, the temporal energy distribution of the traffic arrival counting process among different frequency sub-bands is used as a traffic descriptor, and a robust traffic rate predictor is developed using the Haar wavelet analysis. The empirical results show that the new on-line dynamic bandwidth allocation scheme for VBR traffic is superior, in terms of high utilization and low queueing delay, to traditional dynamic bandwidth allocation methods based on adaptive algorithms such as Least Mean Square, Recursive Least Square, and Mean Square Error. A method is also developed to minimize the number of bandwidth renegotiations, in order to decrease signaling costs on traffic schedulers (e.g., WFQ) and networks (e.g., ATM). It is also quantified that the introduced renegotiation-based bandwidth management scheme decreases the heavy-tailedness of queue size distributions, which is an inherent impact of traffic self-similarity. The new design increases the utilization levels achieved in the literature, meets given queue-size constraints, and minimizes the number of renegotiations simultaneously. This renegotiation-based design is online and can readily be embedded into QoS management blocks, edge routers, Digital Subscriber Line Access Multiplexers (DSLAMs) and rate-adaptive DSL modems.
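    The wavelet-energy approach favoured above can be sketched compactly: for a long-range-dependent arrival process, the log2 energy of the Haar detail coefficients grows roughly linearly with the decomposition level, with slope 2H - 1, so a regression over levels yields the Hurst parameter H. The following is a minimal illustrative estimator, not the dissertation's exact procedure.

```python
# Minimal wavelet-energy Hurst estimator (Abry-Veitch style), for illustration
# only; not the dissertation's exact procedure. log2(energy of Haar detail
# coefficients at level j) ~ (2H - 1) * j + c for long-range-dependent traffic.
import numpy as np


def hurst_wavelet(x, max_level=10):
    approx = np.asarray(x, dtype=float)
    levels, log_energy = [], []
    for j in range(1, max_level + 1):
        n = len(approx) // 2 * 2
        pairs = approx[:n].reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)   # Haar detail coefficients
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)   # Haar approximation
        if len(detail) < 8:                                 # too few coefficients left
            break
        levels.append(j)
        log_energy.append(np.log2(np.mean(detail ** 2)))
    slope = np.polyfit(levels, log_energy, 1)[0]
    return (slope + 1) / 2


# Sanity check on synthetic data: white noise should give H close to 0.5.
rng = np.random.default_rng(42)
print(f"H ~ {hurst_wavelet(rng.normal(size=2 ** 14)):.2f}")
```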