
    Network Flow Optimization Using Reinforcement Learning


    Routing optimization algorithms in integrated fronthaul/backhaul networks supporting multitenancy

    This thesis aims to help in the definition and design of the fifth generation of telecommunications networks (5G) by modelling the different features that characterize them through several mathematical models. Overall, the aim of these models is to perform a wide optimization of the network elements, leveraging their newly acquired capabilities in order to improve the efficiency of future deployments both for users and for operators. The timeline of this thesis corresponds to the timeline of the research and definition of 5G networks, and the work was thus carried out in parallel with, and in the context of, several European H2020 projects. Hence, the different parts of the work presented in this document match and provide a solution to different challenges that appeared during the definition of 5G and within the scope of those projects, considering the feedback and problems from the point of view of end users, operators and providers.

    The first challenge considered focuses on the core network, in particular on how to integrate fronthaul and backhaul traffic over the same transport stratum. The proposed solution is an optimization framework for routing and resource placement, developed under delay, capacity and path constraints, that maximizes the degree of Distributed Unit (DU) deployment while minimizing the supporting Central Unit (CU) pools. The framework and the heuristics developed to reduce its computational complexity are validated and applied to both small-scale and large-scale (production-level) networks. This makes them useful to network operators both for network planning and for dynamically adjusting network operation over their (virtualized) infrastructure.

    Moving closer to the user side, the second challenge focuses on the allocation of services in cloud/edge environments. In particular, the problem tackled consists of selecting the best location for each Virtual Network Function (VNF) composing a service in cloud robotics environments, which impose strict delay bounds and reliability constraints. Robots, vehicles and other end devices provide significant capabilities, such as actuators, sensors and local computation, that are essential for some services. On the negative side, these devices are continuously on the move and might lose network connectivity or run out of battery, which further challenges service delivery in this dynamic environment. The analysis and the proposed solution therefore tackle mobility and battery restrictions. The temporal aspects and the conflicting goals of reliable, low-latency service deployment over a volatile network, where mobile compute nodes act as an extension of the cloud and edge computing infrastructure, also need to be accounted for. The problem is formulated as a cost-minimizing VNF placement optimization, and an efficient heuristic is proposed. The algorithms are extensively evaluated from various aspects by simulation on detailed real-world scenarios.

    Finally, the last challenge analyzed focuses on supporting edge-based services, in particular Machine Learning (ML) in distributed Internet of Things (IoT) scenarios. The traditional approach to distributed ML is to adapt the learning algorithms to the network, e.g., reducing updates to curb overhead. Networks based on an intelligent edge, instead, make it possible to follow the opposite approach, i.e., to define the logical network topology around the learning task to be performed, so as to meet the desired learning performance. The proposed solution includes a system model that captures these aspects in the context of supervised ML, accounting for both learning nodes (which perform computations) and information nodes (which provide data). The problem is formulated so as to select (i) which learning and information nodes should cooperate to complete the learning task, and (ii) the number of iterations to perform, in order to minimize the learning cost while meeting the target prediction error and execution time. The solution also includes a heuristic algorithm, evaluated on a real-world network topology for both classification and regression tasks, that closely matches the optimum and outperforms state-of-the-art alternatives.

    International Mention in the doctoral degree. This work has been supported by IMDEA Networks Institute. Programa de Doctorado en Ingeniería Telemática, Universidad Carlos III de Madrid. Chair: Pablo Serrano Yáñez-Mingot. Secretary: Andrés García Saavedra. Committee member: Luca Valcarenghi.
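    The abstract does not spell out the placement formulation, but the flavor of a cost-minimizing VNF placement under delay and reliability constraints can be sketched as a toy exhaustive search. Everything below (node types, costs, delays, availabilities, and the three-VNF service chain) is invented for illustration, and the mobility and battery constraints studied in the thesis are omitted.

```python
from itertools import product

# Toy node inventory: (hosting cost, processing delay in ms, availability).
# Mobile robot nodes are cheap and close but less reliable than edge/cloud.
nodes = {
    "robot": (1.0, 2.0, 0.90),
    "edge":  (3.0, 5.0, 0.99),
    "cloud": (2.0, 20.0, 0.999),
}

CHAIN = ["perception", "planning", "control"]  # VNFs composing the service
DELAY_BOUND_MS = 30.0                          # end-to-end delay budget
RELIABILITY_TARGET = 0.97                      # required service availability

def evaluate(placement):
    """Cost, delay, and reliability of mapping each VNF in CHAIN to a node."""
    cost = sum(nodes[n][0] for n in placement)
    delay = sum(nodes[n][1] for n in placement)
    rel = 1.0
    for n in set(placement):  # a node failing takes down all VNFs it hosts
        rel *= nodes[n][2]
    return cost, delay, rel

best = None
for placement in product(nodes, repeat=len(CHAIN)):  # exhaustive search
    cost, delay, rel = evaluate(placement)
    if delay <= DELAY_BOUND_MS and rel >= RELIABILITY_TARGET:
        if best is None or cost < best[0]:
            best = (cost, placement)

print(best)  # (cost, placement tuple), or None if no placement is feasible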

    Optimization of locations of diffusion spots in indoor optical wireless local area networks

    In this paper, we present a novel optimization of the locations of the diffusion spots in indoor optical wireless local area networks, based on the central force optimization (CFO) scheme. Uniformity of the users' performance is addressed by using the CFO algorithm with different objective-function configurations that maximize the signal-to-noise ratio while minimizing the delay spread. We also investigate the effect of varying the objective-function weights on system and per-user performance as part of the adaptation process. The results show that the proposed configuration-based optimization procedure offers a 65% improvement in the standard deviation of the individual receivers' performance.
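    As a concrete illustration of the weighted-objective idea described above, the sketch below combines SNR maximization and delay-spread minimization into a single cost and drives a much-simplified central-force update over the spot coordinates. The channel evaluators snr_db() and delay_spread_ns(), the room geometry, the weights, and the update rule are all stand-in assumptions, not the paper's channel model or exact CFO variant.

```python
import numpy as np

def snr_db(spots, rx):
    """Toy SNR: improves as the nearest diffusion spot gets closer to rx."""
    d = np.linalg.norm(spots - rx, axis=1).min()
    return 60.0 - 10.0 * d

def delay_spread_ns(spots, rx):
    """Toy delay spread: grows with the spread of spot-to-receiver distances."""
    d = np.linalg.norm(spots - rx, axis=1)
    return float(d.std())

def objective(flat, receivers, w_snr, w_ds):
    """Weighted cost: maximize SNR, minimize delay spread (lower is better)."""
    spots = flat.reshape(-1, 2)
    terms = [-w_snr * snr_db(spots, rx) + w_ds * delay_spread_ns(spots, rx)
             for rx in receivers]
    return float(np.mean(terms))

def cfo_optimize(receivers, n_spots=4, n_probes=12, iters=200,
                 lo=0.0, hi=5.0, G=2.0, alpha=1.0, beta=2.0,
                 w_snr=0.7, w_ds=0.3, seed=0):
    """Much-simplified CFO: probes attract each other toward better fitness."""
    rng = np.random.default_rng(seed)
    R = rng.uniform(lo, hi, size=(n_probes, n_spots * 2))  # probe positions
    for _ in range(iters):
        # Probe "masses" = fitness (negated cost, so bigger is better).
        M = np.array([-objective(r, receivers, w_snr, w_ds) for r in R])
        A = np.zeros_like(R)
        for j in range(n_probes):           # gravity-style accelerations
            for k in range(n_probes):
                if k != j and M[k] > M[j]:  # only fitter probes attract
                    d = R[k] - R[j]
                    dist = np.linalg.norm(d) + 1e-9
                    A[j] += G * (M[k] - M[j]) ** alpha * d / dist ** beta
        R = np.clip(R + 0.5 * A, lo, hi)    # move probes, stay in the room
    M = np.array([-objective(r, receivers, w_snr, w_ds) for r in R])
    return R[np.argmax(M)].reshape(-1, 2)   # best spot layout found

receivers = np.array([[1.0, 1.0], [4.0, 1.0], [2.5, 4.0]])
print(cfo_optimize(receivers))
```

    Sweeping w_snr and w_ds in this sketch mirrors, in spirit, the paper's study of how varying the objective-function weights affects system and per-user performance.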

    An efficient design space exploration framework to optimize power-efficient heterogeneous many-core multi-threading embedded processor architectures

    By the middle of this decade, uniprocessor architecture performance had hit a roadblock due to a combination of factors, such as excessive power dissipation at high operating frequencies, growing memory access latencies, diminishing returns on deeper instruction pipelines, and a saturation of the available instruction-level parallelism in applications. An attractive and viable alternative embraced by all the processor vendors was multi-core architectures, where throughput is improved by using micro-architectural features such as multiple processor cores, interconnects and low-latency shared caches integrated on a single chip. The individual cores are often simpler than their uniprocessor counterparts, use hardware multi-threading to exploit thread-level parallelism and latency hiding, and typically achieve better performance-power figures. The overwhelming success of multi-core microprocessors in both high-performance and embedded computing platforms motivated chip architects to scale multi-core processors dramatically to many-cores, which will include hundreds of cores on-chip to further improve throughput.

    With such complex large-scale architectures, however, several key design issues need to be addressed. First, a wide range of micro-architectural parameters, such as L1 caches, load/store queues, shared cache structures and interconnection topologies, together with the non-linear interactions between them, define a vast non-linear multivariate micro-architectural design space for many-core processors; the traditional method of using extensive in-loop simulation to explore the design space is simply not practical. Second, to accurately evaluate the performance (measured in cycles per instruction, CPI) of a candidate design, the contention at the shared cache must be accounted for in addition to the cycle-by-cycle behavior of the large number of cores, which superlinearly increases the number of simulation cycles per iteration of the design exploration. Third, single-thread performance does not scale linearly with the number of hardware threads per core and the number of cores, due to the memory-wall effect. This means that at every step of the design process designers must ensure that single-thread performance is not unacceptably slowed down while increasing overall throughput.

    While all these factors affect design decisions in both high-performance and embedded many-core processors, the design of embedded processors required for complex embedded applications, such as networking, smart power grids, battlefield decision-making, consumer electronics and biomedical devices to name a few, is fundamentally different from its high-performance counterpart because of the need to consider (i) low power and (ii) real-time operation. This implies that the design objective for embedded many-core processors cannot be simply to maximize performance, but to improve it in such a way that overall power dissipation is minimized and all real-time constraints are met. This necessitates additional power estimation models right at the design stage to accurately measure the cost and reliability of all candidate designs during the exploration phase.

    In this dissertation, a statistical machine learning (SML) based design exploration framework is presented which employs an execution-driven cycle-accurate simulator to accurately measure the power and performance of embedded many-core processors. The embedded many-core processor domain considered is Network Processors (NePs), used to process network IP packets. Future-generation NePs, required to operate at terabit-per-second network speeds, capture all the aspects of a complex embedded application: shared data structures, a large volume of compute-intensive and data-intensive real-time-bound tasks, and a high level of task (packet) level parallelism. Statistical machine learning is used to efficiently model the performance and power of candidate designs across wide ranges of micro-architectural parameters. The method inherently minimizes the number of in-loop simulations in the exploration framework and also efficiently captures the non-linear interactions between the micro-architectural design parameters. To ensure scalability, the design space is partitioned into (i) core-level micro-architectural parameters, used to optimize single-core architectures subject to the real-time constraints, and (ii) shared-memory-level micro-architectural parameters, used to explore the shared interconnection network and shared cache memory architectures, so that overall optimality is achieved. The cost function of the exploration algorithm is the total power dissipation, which is minimized subject to the real-time throughput constraint (as determined from the terabit optical network router line-speed) required by the IP packet processing embedded application.
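    To make the surrogate-modelling idea concrete, the sketch below trains cheap regression models on a small sample of expensive simulations and uses them to screen the full design space, so that only shortlisted designs go back through the simulator. simulate_design(), the parameter grid, the CPI bound, and the choice of random forests are invented stand-ins for the dissertation's cycle-accurate simulator, real-time constraints, and actual SML models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical stand-in for the execution-driven cycle-accurate simulator:
# maps a micro-architectural configuration to (power in watts, CPI).
def simulate_design(cfg):
    cores, threads, l1_kb, l2_mb = cfg
    cpi = 1.0 + 0.05 * threads + 8.0 / l1_kb + 2.0 / l2_mb
    power = 0.4 * cores * threads + 0.02 * l1_kb + 0.5 * l2_mb
    return power, cpi

# Enumerate the design space: cores x threads/core x L1 size x shared cache.
grid = np.array([(c, t, l1, l2)
                 for c in (16, 32, 64, 128)
                 for t in (1, 2, 4, 8)
                 for l1 in (8, 16, 32, 64)
                 for l2 in (2, 4, 8, 16)], dtype=float)

# Step 1: run the expensive simulator on a small random sample only.
rng = np.random.default_rng(0)
train_idx = rng.choice(len(grid), size=40, replace=False)
sims = np.array([simulate_design(cfg) for cfg in grid[train_idx]])

# Step 2: fit cheap surrogate models for power and CPI.
power_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(
    grid[train_idx], sims[:, 0])
cpi_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(
    grid[train_idx], sims[:, 1])

# Step 3: screen the full space with the surrogates, keeping designs that
# meet the real-time throughput constraint (a CPI bound) at minimum power.
CPI_BOUND = 1.6  # stand-in for the line-speed-derived real-time constraint
pred_power = power_model.predict(grid)
pred_cpi = cpi_model.predict(grid)
mask = pred_cpi <= CPI_BOUND
best = grid[mask][np.argmin(pred_power[mask])]

# Step 4: confirm the shortlisted design with one more in-loop simulation.
print("candidate design (cores, threads, L1 KB, L2 MB):", best)
print("simulated (power W, CPI):", simulate_design(best))
```

    The point of the sketch is the workflow, not the numbers: a handful of in-loop simulations train the surrogates, the surrogates filter hundreds of candidate designs against the real-time CPI constraint, and the simulator is only re-entered to validate the winner.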