988 research outputs found

    Cross-layer RaCM design for vertically integrated wireless networks

    Wireless local and metropolitan area network (WLAN/WMAN) technologies, more specifically IEEE 802.11 (wireless fidelity, WiFi) and IEEE 802.16 (worldwide interoperability for microwave access, WiMAX), are well suited to enterprise networking, since wireless offers the advantage of rapid deployment in places that are difficult to wire. However, these networking standards are relatively young compared with their traditional, mature, high-speed, low-latency fixed-line counterparts. It is therefore more challenging for the network provider to supply the quality of service (QoS) needed to support the variety of existing multimedia services over wireless technology. Wireless communication is also inherently unreliable, making the provisioning of agreed QoS even more challenging. Weighing these advantages and disadvantages, wireless networks prove well suited to connecting rural areas to the Internet or as a networking solution for areas that are difficult to wire. The focus of this study is IEEE 802.16 and the part it plays in an IEEE vertically integrated wireless Internet (WIN): IEEE 802.16 is a wireless broadband backhaul technology capable of connecting local area networks (LANs), wireless or fixed-line, to the Internet via a high-speed fixed-line link.

    Radio Resource Management Optimization For Next Generation Wireless Networks

    The prominent versatility of today’s mobile broadband services and the rapid advancements in the cellular phone industry have led to a tremendous expansion of the wireless market. Despite the continuous progress in radio-access technologies to cope with that expansion, many challenges remain to be addressed by both the research and industrial sectors. One of them is the efficient allocation and management of wireless network resources when using the latest cellular radio technologies (e.g., 4G). The importance of the problem stems from the scarcity of wireless spectral resources, the large number of users sharing these resources, the dynamic behavior of the generated traffic, and the stochastic nature of wireless channels. These limitations are further tightened by the provider’s commitment to high quality-of-service (QoS) levels, especially data rate, delay, and delay jitter, as well as by the system’s spectral and energy efficiency. In this dissertation, we strive to solve this problem by presenting novel cross-layer resource allocation schemes that balance efficient utilization of the available resources against QoS requirements using various optimization techniques. The main objective of this dissertation is to propose a new predictive resource allocation methodology using an agile ray tracing (RT) channel prediction approach. The dissertation is divided into two parts. The first part deals with the theoretical and implementation aspects of the ray tracing prediction model and its validation. In the second part, a novel RT-based scheduling system within the evolving cloud radio access network (C-RAN) architecture is proposed. The impact of the proposed model on addressing the limitations of long term evolution (LTE) networks is then rigorously investigated in the form of optimization problems. The main contributions of this dissertation encompass the design of several heuristic solutions based on our novel RT-based scheduling model, developed to meet the aforementioned objectives while considering the co-existing limitations in the context of LTE networks. Both analytical and numerical methods are used within this framework, and theoretical results are validated with numerical simulations. The obtained results demonstrate the effectiveness of the proposed solutions in meeting the objectives, subject to the limitations and constraints, compared to other published works.
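    The dissertation's RT-based schemes are not reproduced here; as a minimal sketch of what predictive resource allocation means in practice, the hypothetical Python snippet below assigns LTE-style resource blocks using a proportional-fair metric computed from predicted per-user rates (random placeholders stand in for the ray-tracing predictions). The function and variable names are illustrative assumptions, not the dissertation's actual scheme.

```python
import random

def predictive_pf_schedule(users, num_rbs, predicted_rate, avg_rate):
    """Assign each resource block (RB) to the user maximizing the
    proportional-fair metric predicted_rate / average_rate.
    predicted_rate[u][rb] would come from the RT channel predictor;
    here it is a random placeholder."""
    allocation = {u: [] for u in users}
    for rb in range(num_rbs):
        best = max(users, key=lambda u: predicted_rate[u][rb] / avg_rate[u])
        allocation[best].append(rb)
    # update the moving-average rates (simple exponential smoothing)
    for u in users:
        served = sum(predicted_rate[u][rb] for rb in allocation[u])
        avg_rate[u] = 0.9 * avg_rate[u] + 0.1 * served
    return allocation

# toy usage with random "predicted" rates standing in for ray-tracing output
users = ["ue1", "ue2", "ue3"]
pred = {u: [random.uniform(1.0, 10.0) for _ in range(25)] for u in users}
avg = {u: 1.0 for u in users}
print(predictive_pf_schedule(users, 25, pred, avg))
```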

    Scheduling in 5G networks: Developing a 5G cell capacity simulator.

    The fifth generation of mobile communications (5G) is becoming a reality thanks to the new 3GPP (3rd Generation Partnership Project) technology designed to meet a wide range of requirements. On the one hand, it must be able to support high bit rates and ultra-low-latency services; on the other hand, it should be able to connect a massive number of devices with loose bandwidth and delay requirements. Such diversity in service requirements demands a high degree of flexibility in radio interface design. As LTE (Long Term Evolution) technology was originally designed with Mobile Broadband (MBB) service evolution in mind, it does not provide enough flexibility to optimally multiplex the different types of services envisioned by 5G. This is because there is no single radio interface configuration able to fit all the different service requirements. As a consequence, 5G networks are being designed to support different radio interface configurations and mechanisms to multiplex these different services, with different configurations, in the same available spectrum. This concept is known as Network Slicing, and it is a key 5G feature that needs to be supported end to end in the network (Radio Access, Transport and Core Network). In this way, 5G Radio Access Networks (RANs) add the problem of allocating resources to different services on top of the traditional problem of allocating resources to users. In this context, since the standard leaves the scheduling of both users and services to vendor implementation, an extensive field of research is open. Different simulation tools have been developed for research purposes over the last years; however, few of them are free and easy to use, and none of the available ones supports Network Slicing at the RAN level, so this work presents a new simulator as its main contribution. Py5cheSim is a simple, flexible, open-source simulator based on Python and specially oriented to testing different scheduling algorithms for the different types of 5G services through a simple implementation of the RAN Slicing feature. Its architecture allows new scheduling algorithms to be developed and integrated in an easy and straightforward way. Furthermore, the use of Python provides enough versatility to even use Machine Learning tools for the development of new scheduling algorithms. The present work introduces the main 5G RAN design concepts that were taken as a baseline to develop the simulation tool. It also describes the design and implementation choices, followed by the validation tests executed and their main results. Additionally, this work presents a few use-case examples to show the developed tool’s potential, providing a preliminary analysis of traditional scheduling algorithms for the new types of services envisioned by the technology. Finally, it concludes on the contribution of the developed tool and the example results, along with possible research lines and improvements for future versions.
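    Py5cheSim's own API is not shown here; purely as an illustration of the two-level allocation the abstract describes (resources split first among slices, then among the users of each slice), the hypothetical Python sketch below performs a proportional inter-slice split of PRBs followed by per-slice round-robin user scheduling. The slice names, shares, and function names are assumptions for illustration only, not the simulator's interface.

```python
from itertools import cycle

def slice_then_user_schedule(slices, total_prbs):
    """Two-level RAN-slicing scheduler sketch:
    1) split the PRBs among slices in proportion to their configured share;
    2) distribute each slice's PRBs to its users round-robin."""
    allocation = {}
    for name, cfg in slices.items():
        slice_prbs = int(total_prbs * cfg["share"])
        users = cfg["users"]
        alloc = {u: 0 for u in users}
        for _, u in zip(range(slice_prbs), cycle(users)):
            alloc[u] += 1
        allocation[name] = alloc
    return allocation

# toy configuration: an eMBB slice and an mMTC slice sharing 100 PRBs
slices = {
    "eMBB": {"share": 0.7, "users": ["ue1", "ue2"]},
    "mMTC": {"share": 0.3, "users": ["ue3", "ue4", "ue5"]},
}
print(slice_then_user_schedule(slices, 100))
```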

    Final report on the evaluation of RRM/CRRM algorithms

    Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15, evolved and refined through an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14. Preprint.

    End-to-End Simulation of 5G mmWave Networks

    Due to its potential for multi-gigabit and low-latency wireless links, millimeter wave (mmWave) technology is expected to play a central role in 5th generation cellular systems. While there has been considerable progress in understanding the mmWave physical layer, innovations will be required at all layers of the protocol stack, in both the access and the core network. Discrete-event network simulation is essential for end-to-end, cross-layer research and development. This paper provides a tutorial on a recently developed full-stack mmWave module integrated into the widely used open-source ns-3 simulator. The module includes a number of detailed statistical channel models as well as the ability to incorporate real measurements or ray-tracing data. The Physical (PHY) and Medium Access Control (MAC) layers are modular and highly customizable, making it easy to integrate algorithms or compare Orthogonal Frequency Division Multiplexing (OFDM) numerologies, for example. The module is interfaced with the core network of the ns-3 Long Term Evolution (LTE) module for full-stack simulations of end-to-end connectivity, and advanced architectural features, such as dual connectivity, are also available. To facilitate the understanding of the module, and to verify its correct functioning, we provide several examples that show the performance of the custom mmWave stack as well as custom congestion control algorithms designed specifically for efficient utilization of the mmWave channel. Comment: 25 pages, 16 figures, submitted to IEEE Communications Surveys and Tutorials (revised Jan. 2018).
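    As a small aside on the OFDM numerologies the module lets one compare (this is standard 5G NR background, not code from the ns-3 mmWave module itself), the scalable numerology ties subcarrier spacing and slot duration to the index mu: SCS = 15·2^mu kHz and slot duration = 1/2^mu ms. The short Python helper below makes the relation concrete.

```python
def nr_numerology(mu):
    """5G NR scalable numerology: subcarrier spacing (kHz), slot
    duration (ms) and slots per 1 ms subframe for numerology index mu."""
    scs_khz = 15 * 2 ** mu           # 15, 30, 60, 120, 240 kHz
    slot_ms = 1.0 / 2 ** mu          # 1, 0.5, 0.25, ... ms
    slots_per_subframe = 2 ** mu
    return scs_khz, slot_ms, slots_per_subframe

for mu in range(5):
    scs, slot, n = nr_numerology(mu)
    print(f"mu={mu}: SCS={scs} kHz, slot={slot} ms, {n} slots/subframe")
```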

    Investigation of delay jitter of heterogeneous traffic in broadband networks

    Scope and Methodology of Study: A critical challenge for both wired and wireless networking vendors and carrier companies is to accurately estimate the quality of service (QoS) that will be provided based on the network architecture, router/switch topology, and protocol applied. This thesis therefore focuses on the theoretical analysis of QoS parameters, in terms of inter-arrival jitter, in differentiated services networks by deploying analytic/mathematical modeling techniques and queueing theory, where the analytic model is expressed as a set of equations that can be solved to yield the desired delay jitter parameter. In wireless networks with homogeneous traffic, the effect of the ARQ priority control scheme on delay jitter is evaluated for two cases: 1) the ARQ traffic has priority over the original transmission traffic; and 2) the ARQ traffic has no priority over the original transmission traffic. In wired broadband networks with heterogeneous traffic, the jitter analysis is conducted and an algorithm to control its effect is also developed. Findings and Conclusions: First, the results show that high-priority packets always maintain the minimum inter-arrival jitter, which is not affected even under heavy load. Second, Gaussian traffic modeling with the MVA approach is applied to conduct the queue-length analysis, and the jitter analysis in heterogeneous broadband networks is then investigated; for wireless networks with homogeneous traffic, a binomial distribution is used for the queue-length analysis, which is sufficient and relatively easy compared to the heterogeneous case. Third, a service discipline called tagged-stream adaptive distortion-reducing peak-output-rate enforcing is developed to control delay jitter and prevent it from increasing without bound in heterogeneous broadband networks. Finally, through the analysis provided, differentiated services are shown to be not only viable but also effective in controlling delay jitter. The analytic models serve as guidelines to assist network system designers in controlling the QoS requested by customers in terms of delay jitter.
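    The thesis's analysis is purely analytical; to make the inter-arrival jitter notion concrete, the hypothetical Python sketch below simulates a single server with two priority classes and reports each class's jitter as the standard deviation of its inter-departure times. The Poisson/exponential traffic and the non-preemptive priority discipline are illustrative assumptions, not the exact models analyzed in the thesis.

```python
import random
import statistics

def priority_jitter(n=20000, lam=(0.3, 0.5), mu=1.0, seed=1):
    """Single-server, non-preemptive, two-class priority queue
    (class 0 = high priority).  Per-class jitter is reported as the
    standard deviation of inter-departure times."""
    random.seed(seed)
    arrivals = []                                  # (arrival time, class)
    for c in (0, 1):
        t = 0.0
        for _ in range(n):
            t += random.expovariate(lam[c])
            arrivals.append((t, c))
    arrivals.sort()
    queue = ([], [])                               # waiting jobs per class
    departures = ([], [])
    free_at, i = 0.0, 0
    while i < len(arrivals) or queue[0] or queue[1]:
        # admit every arrival occurring before the server is next free;
        # if the server is idle, jump forward to the next arrival
        while i < len(arrivals) and (arrivals[i][0] <= free_at
                                     or not (queue[0] or queue[1])):
            at, c = arrivals[i]
            queue[c].append(at)
            free_at = max(free_at, at)
            i += 1
        c = 0 if queue[0] else 1                   # serve high priority first
        queue[c].pop(0)
        free_at += random.expovariate(mu)          # service completion time
        departures[c].append(free_at)
    return [statistics.stdev(b - a for a, b in zip(d, d[1:]))
            for d in departures]

print(priority_jitter())  # high-priority class shows the smaller jitter
```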

    ASIdE: Using Autocorrelation-Based Size Estimation for Scheduling Bursty Workloads.

    Temporal dependence in workloads creates peak congestion that can make service unavailable and reduce system performance. To improve system performability under conditions of temporal dependence, a server should quickly process bursts of requests that may carry large service demands. In this paper, we propose and evaluate ASIdE, an Autocorrelation-based SIze Estimation scheme that selectively delays requests which contribute to the workload's temporal dependence. ASIdE implicitly approximates the shortest-job-first (SJF) scheduling policy, but without any prior knowledge of job service times. Extensive experiments show that (1) ASIdE achieves good service time estimates from the temporal dependence structure of the workload to implicitly approximate the behavior of SJF; and (2) ASIdE successfully counteracts peak congestion in the workload and improves system performability under a wide variety of settings. Specifically, we show that system capacity under ASIdE is largely increased compared to the first-come-first-served (FCFS) scheduling policy and is highly competitive with SJF. © 2012 IEEE
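    ASIdE itself is not reimplemented here; as a toy illustration of the FCFS-versus-SJF gap that ASIdE aims to close without knowing job sizes, the hypothetical Python sketch below simulates both policies on a workload with occasional very large jobs and compares mean response times. The workload parameters and function names are assumptions for illustration only.

```python
import random

def mean_response_time(jobs, policy):
    """Single-server, non-preemptive simulation.  jobs: list of
    (arrival time, size).  'fcfs' serves in arrival order; 'sjf'
    serves the smallest queued job at each completion."""
    pending = sorted(jobs)
    queue, t, total = [], 0.0, 0.0
    while pending or queue:
        if not queue:                         # server idle: jump to next arrival
            t = max(t, pending[0][0])
        while pending and pending[0][0] <= t:
            queue.append(pending.pop(0))
        job = min(queue, key=lambda j: j[1]) if policy == "sjf" else queue[0]
        queue.remove(job)
        t += job[1]                           # run the job to completion
        total += t - job[0]                   # response time = finish - arrival
    return total / len(jobs)

# toy workload: mostly small jobs, occasionally a very large one
random.seed(0)
jobs, t = [], 0.0
for _ in range(5000):
    t += random.expovariate(0.45)             # arrival rate below service capacity
    size = random.expovariate(1.0) if random.random() < 0.9 else random.expovariate(0.1)
    jobs.append((t, size))
for policy in ("fcfs", "sjf"):
    print(policy, round(mean_response_time(jobs, policy), 2))
```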

    Deployment of Beyond 4G Wireless Communication Networks with Carrier Aggregation

    With the growing demand for a new blend of applications, users’ dependency on the Internet is increasing day by day. Mobile Internet users are paying more attention to their own experience, especially in terms of communication reliability, high data rates, and service stability on the move. This increase in demand is saturating the existing radio frequency bands. To address these challenges, researchers are seeking the best approach; Carrier Aggregation (CA) is one of the newest innovations that seems able to fulfil future spectrum demands, and it is one of the most important features of Long Term Evolution-Advanced (LTE-Advanced). To meet the upcoming International Mobile Telecommunications-Advanced (IMT-Advanced) requirement of a 1 Gb/s peak data rate, the CA scheme was introduced by 3GPP to sustain high data rates over aggregated bandwidths of up to 100 MHz. Technical issues including the aggregation structure, its implementation, deployment scenarios, control signaling techniques, and the challenges of the CA technique in LTE-Advanced, with consideration of backward compatibility, are highlighted. Performance evaluation in macrocellular scenarios through a simulation approach shows the benefits of applying CA and low-complexity multi-band schedulers for service quality and system capacity enhancement. The Enhanced multi-band scheduler is less complex than the General multi-band scheduler and performs better for cell radii longer than 1800 m (and a PLR threshold of 2%). This work is funded by FCT/MCTES through national funds and, when applicable, co-funded by EU funds under the project UIDB/EEA/50008/2020, COST CA 15104 IRACON, ORCIP and CONQUEST (CMU/ECE/0030/2017); the TeamUp5G project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie project number 813391.
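    As a back-of-the-envelope illustration of why carrier aggregation matters for the 1 Gb/s IMT-Advanced target (not a reproduction of the paper's multi-band schedulers), the hypothetical Python sketch below sums a Shannon-bound rate estimate over five 20 MHz component carriers; the SNR values are arbitrary assumptions.

```python
import math

def aggregated_capacity(carriers_mhz, snr_db):
    """Shannon-bound estimate of the aggregate rate (Mbit/s) a UE could
    reach when carrier aggregation bonds several component carriers,
    each experiencing its own SNR."""
    total = 0.0
    for bw, snr in zip(carriers_mhz, snr_db):
        total += bw * math.log2(1 + 10 ** (snr / 10))   # Mbit/s per carrier
    return total

# five 20 MHz component carriers: the LTE-Advanced 100 MHz maximum
print(round(aggregated_capacity([20] * 5, [15, 12, 18, 10, 14]), 1), "Mbit/s")
```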