764 research outputs found

    Study and application of spectral monitoring techniques for optical network optimization

    Get PDF
    One of the possible ways to address the constantly increasing amount of heterogeneous and variable internet traffic is the evolution of current optical networks towards a more flexible, open, and disaggregated paradigm. In such scenarios, the role played by Optical Performance Monitoring (OPM) is fundamental. In fact, OPM makes it possible to balance the performance and specification mismatches resulting from the adoption of disaggregation, and provides the control plane with the necessary feedback to grant optical networks an adequate level of automation. Therefore, new flexible and cost-effective OPM solutions are needed, as well as novel techniques to extract the desired information from the monitored data and to process and apply it. In this dissertation, we focus on three aspects related to OPM. We first study a monitoring data plane scheme to acquire high-resolution signal optical spectra in a non-intrusive way. In particular, we propose a coherent-detection-based Optical Spectrum Analyzer (OSA) enhanced with specific Digital Signal Processing (DSP) to detect spectral slices of the considered optical signals. Then, we identify two main placement strategies for such monitoring solutions, enhancing them with two spectral processing techniques to estimate signal- and optical-filter-related parameters. Specifically, we propose a way to estimate the Amplified Spontaneous Emission (ASE) noise, or the related Optical Signal-to-Noise Ratio (OSNR), using optical spectra acquired at the egress ports of the network nodes, and to estimate the filter central frequency and 3/6 dB bandwidth using spectra captured at the ingress ports. To do so, we leverage Machine Learning (ML) algorithms and the function fitting principle, depending on the considered scenario. We validate both monitoring strategies and their related processing techniques through simulations and experiments. The obtained results confirm the validity of the two proposed estimation approaches. In particular, we are able to estimate the in-band OSNR/ASE noise in an egress monitor placement scenario with a Maximum Absolute Error (MAE) lower than 0.4 dB. Moreover, we are able to estimate the filter central frequency and 3/6 dB bandwidth in an ingress optical monitor placement scenario with a MAE lower than 0.5 GHz and 0.98 GHz, respectively. Based on such evaluations, we also compare the two placement scenarios and provide guidelines on their implementation. According to the analysis of specific figures of merit, such as the estimation of the Signal-to-Noise Ratio (SNR) penalty introduced by an optical filter, we identify the ingress monitoring strategy as the most promising: compared to scenarios where no monitoring strategy is adopted, it reduced the SNR penalty estimation by 92%. Finally, we identify a potential application for the monitored information. Specifically, we propose a solution for the optimization of the subchannel spectral spacing in a superchannel. Leveraging convex optimization methods, we implement a closed control loop for the dynamic reconfiguration of the subchannel central frequencies to optimize specific Quality of Transmission (QoT)-related metrics. This solution is based on the information monitored at the superchannel receiver side. In particular, while keeping all the subchannels feasible, we consider the maximization of the total superchannel capacity and the maximization of the minimum subchannel SNR.
We validate the proposed approach using simulations, assuming scenarios with different subchannel numbers, signal characteristics, and starting frequency values. The obtained results confirm the effectiveness of our solution. Specifically, compared with the equally spaced subchannel scenario, we are able to improve the total and the minimum subchannel SNR values of a four-subchannel superchannel by 1.45 dB and 1.19 dB, respectively.
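    The dissertation names function fitting as one of the two estimation tools for the ingress placement scenario. As a rough illustration only, the sketch below fits an assumed super-Gaussian filter model to a synthetic monitored spectrum to recover the filter central frequency and 3/6 dB bandwidth; the filter shape, parameter names, and data are assumptions, not the author's actual method.

```python
# Hedged sketch: function fitting on an acquired optical spectrum to
# estimate filter parameters. The super-Gaussian shape is an assumed model.
import numpy as np
from scipy.optimize import curve_fit

def super_gaussian(f, f0, bw_3db, order, peak):
    """Assumed filter transfer function in dB; f and f0 in GHz."""
    return peak - 3.0 * np.abs(2.0 * (f - f0) / bw_3db) ** (2.0 * order)

freq = np.linspace(-50, 50, 1001)                         # GHz grid
truth = super_gaussian(freq, 2.0, 37.5, 3.0, 0.0)         # "real" filter
measured = truth + np.random.normal(0.0, 0.2, freq.size)  # monitor noise

p0 = [0.0, 40.0, 2.0, 0.0]          # initial guess: f0, 3 dB BW, order, peak
popt, _ = curve_fit(super_gaussian, freq, measured, p0=p0)
f0_est, bw3_est, order_est = popt[0], popt[1], popt[2]

# The 6 dB bandwidth follows analytically from the fitted shape:
# 3 * x**(2n) = 6  =>  x = 2**(1 / (2n)).
bw6_est = bw3_est * 2.0 ** (1.0 / (2.0 * order_est))
print(f"f0 ~ {f0_est:.2f} GHz, 3 dB BW ~ {bw3_est:.2f} GHz, "
      f"6 dB BW ~ {bw6_est:.2f} GHz")
```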

    Deep learning applied to 2D video data for the estimation of clamp reaction forces acting on running prosthetic feet and experimental validation after bench and track tests

    Get PDF
    Carbon fiber Running Specific Prostheses (RSPs) have allowed athletes with lower extremity amputations to recover their functional capability of running. RSPs are designed to replicate the spring-like nature of biological legs: they are passive components that mimic the tendons' storage and release of elastic potential energy during ground contact. Knowledge of the loads acting on the prosthesis is crucial for evaluating athletes' running technique, preventing injuries, and designing Running Prosthetic Feet (RPF). The aim of the present work is to investigate a method to estimate the forces acting on an RPF based on its geometrical configuration. Firstly, the use of kinematic data acquired with 2D videos was assessed, to understand whether they can be a good approximation to the gold standard represented by motion capture (MOCAP). This was done by evaluating steps acquired during two running sessions (OS1 and OS3) with elite paralympic athletes. Then, the problem was formulated using a deep learning approach, training a neural network on data collected from in vitro bench tests carried out on a hydraulic test bench. Two models were built: the first one was trained on data from standard procedures and validated on two steps of OS1; then, in order to improve the performance of the prototype, a second model was built and trained with data from newly studied procedures. It was then validated on three steps from OS3.
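    The mapping from geometric configuration to reaction forces is, at its core, a supervised regression problem. The following is a minimal, hypothetical sketch of that setup with synthetic stand-in data (the real work trains on bench-test recordings and validates on running steps); the feature and output shapes are assumptions.

```python
# Hedged sketch: regress bench-measured forces from geometric features
# extracted from video keypoints. Data here is synthetic stand-in only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))     # stand-in: 6 geometric features per frame
W = rng.normal(size=(6, 2))
y = X @ W + 0.05 * rng.normal(size=(2000, 2))  # stand-in (Fx, Fy) bench loads

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)                             # "bench test" training phase
print("held-out R^2:", net.score(X_te, y_te))   # validation on unseen steps
```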

    Contributions to energy-aware demand-response systems using SDN and NFV for fog computing

    Get PDF
    Ever-increasing energy consumption, the depletion of non-renewable resources, the climate impact associated with energy generation, and finite energy-production capacity are important concerns worldwide that drive the urgent creation of new energy management and consumption schemes. In this regard, by leveraging the massive connectivity provided by emerging communications such as 5G systems, this thesis proposes a long-term sustainable Demand-Response solution for the adaptive and efficient management of available energy consumption for Internet of Things (IoT) infrastructures, in which energy utilization is optimized based on the available supply. In the proposed approach, energy management focuses on consumer devices (e.g., appliances such as a light bulb or a screen). By proposing that each consumer device be part of an IoT infrastructure, it is feasible to control its respective consumption. The proposal includes an architecture that uses Network Functions Virtualization (NFV) and Software Defined Networking technologies as enablers to promote the primary use of energy from renewable sources. Associated with the architecture, this thesis presents a novel consumption model conditioned on availability, in which consumers are part of the management process. To efficiently use the energy from renewable and non-renewable sources, several management strategies are proposed herein, such as the prioritization of the energy supply, workload scheduling using time-shifting capabilities, and quality degradation to decrease the power demanded by consumers if needed. The adaptive energy management solution is modeled as an Integer Linear Program, and its complexity has been identified as NP-hard. To verify the improvements in energy utilization, an optimal algorithmic solution based on a brute-force search has been implemented and evaluated. Because of the hardness of the adaptive energy management problem and the non-polynomial growth of its optimal solution, which limits it to energy management for a small number of energy demands (e.g., 10 energy demands) and small values of the management mechanisms, several faster suboptimal algorithmic strategies have been proposed and implemented. In this context, at the first stage, we implemented three heuristic strategies: a greedy strategy (GreedyTs), a genetic-algorithm-based solution (GATs), and a dynamic programming approach (DPTs). Then, we incorporated into both the optimal and heuristic strategies a prepartitioning method in which the total set of analyzed services is divided into subsets of smaller size and complexity that are solved iteratively. As a result of the adaptive energy management work in this thesis, we present eight strategies, one optimal and seven heuristic, that, when deployed in communications infrastructures such as the NFV domain, seek the best possible scheduling of demands, leading to efficient energy utilization. The performance of the algorithmic strategies has been validated through extensive simulations in several scenarios, demonstrating improvements in energy consumption and the processing of energy demands. Additionally, the simulation results revealed that the heuristic approaches produce high-quality solutions close to the optimal while executing between two and seven orders of magnitude faster, with applicability to scenarios with thousands and even hundreds of thousands of energy demands.
This thesis also explores possible application scenarios of both the proposed architecture for adaptive energy management and the algorithmic strategies. In this regard, we present some examples, including adaptive energy management for in-home systems and 5G network slicing, energy-aware management solutions for unmanned aerial vehicles, also known as drones, and applicability to the efficient allocation of spectrum in flex-grid optical networks. Finally, this thesis presents open research problems and discusses other application scenarios and future work.
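    The abstract names a greedy strategy (GreedyTs) with time-shifting and quality degradation among the heuristics. The sketch below is an illustrative guess at such a greedy scheduler, not the thesis implementation: every data structure and decision rule here is an assumption.

```python
# Hedged sketch of a greedy time-shifting scheduler in the spirit of
# GreedyTs: each demand is slotted where the forecast renewable surplus is
# largest within its admissible window, or rejected if it would have to be
# degraded below its acceptable fraction.
from dataclasses import dataclass

@dataclass
class Demand:
    energy: float        # kWh requested
    window: range        # admissible time slots (time-shifting capability)
    min_fraction: float  # lowest acceptable fraction (quality degradation)

renewable = [3.0, 5.0, 1.0, 4.0, 2.0]  # forecast surplus per slot (kWh)

def greedy_ts(demands, supply):
    plan = []
    for d in sorted(demands, key=lambda d: d.energy, reverse=True):
        slot = max(d.window, key=lambda t: supply[t])  # best renewable slot
        served = min(d.energy, supply[slot])
        if served < d.min_fraction * d.energy:
            plan.append((d, slot, 0.0))                # rejected
            continue
        supply[slot] -= served
        plan.append((d, slot, served))
    return plan

demands = [Demand(4.0, range(0, 3), 0.5), Demand(2.0, range(2, 5), 0.8)]
for d, slot, served in greedy_ts(demands, renewable.copy()):
    print(f"slot {slot}: served {served:.1f}/{d.energy:.1f} kWh")
```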

    Dual-Stage Planning for Elastic Optical Networks Integrating Machine-Learning-Assisted QoT Estimation

    Get PDF
    Following the emergence of Elastic Optical Networks (EONs), Machine Learning (ML) has been intensively investigated as a promising methodology to address complex network management tasks, including, e.g., Quality of Transmission (QoT) estimation, fault management, and automatic adjustment of transmission parameters. Though several ML-based solutions for specific tasks have been proposed, how to integrate the outcome of such ML approaches inside Routing and Spectrum Assignment (RSA) models (which address the fundamental planning problem in EONs) is still an open research problem. In this study, we propose a dual-stage iterative RSA optimization framework that incorporates the QoT estimations provided by an ML regressor, used to define lightpaths' reach constraints, into a Mixed Integer Linear Programming (MILP) formulation. The first stage minimizes the overall spectrum occupation, whereas the second stage maximizes the minimum inter-channel spacing between neighboring channels without increasing the overall spectrum occupation obtained in the previous stage. During the second stage, additional interference constraints are generated, and these constraints are then added to the MILP at the next iteration round to exclude those lightpath combinations that would exhibit unacceptable QoT. Our illustrative numerical results on realistic EON instances show that the proposed ML-assisted framework achieves spectrum occupation savings of up to 52.4% (around 33% on average) in comparison to a traditional MILP-based RSA framework that uses conservative reach constraints based on margined analytical models.
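    A schematic of the dual-stage iterative loop might look as follows; the two MILP stages are deliberately stubbed out and the QoT regressor is trained on synthetic data, so this only shows the control flow of the framework, not its actual models or formulation.

```python
# Hedged schematic of the dual-stage iterative framework: an ML regressor
# estimates lightpath QoT, the MILP stages are stubs, and interference
# constraints accumulate across iterations. All names are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))      # e.g. length, hops, spacing (normalized)
osnr = 20 - 8 * X[:, 0] - 3 * X[:, 1] + 4 * X[:, 2] + rng.normal(0, 0.3, 500)
qot = GradientBoostingRegressor().fit(X, osnr)   # stands in for the QoT model

def solve_stage1(constraints):
    """Stub: MILP minimizing total spectrum occupation."""
    return [{"length": 0.7, "hops": 0.5, "spacing": 0.2}]   # toy lightpaths

def solve_stage2(solution, constraints):
    """Stub: MILP maximizing minimum inter-channel spacing at fixed occupation."""
    for lp in solution:
        lp["spacing"] = min(1.0, lp["spacing"] + 0.3)
    return solution

constraints, OSNR_MIN = [], 13.0
for it in range(10):
    sol = solve_stage2(solve_stage1(constraints), constraints)
    feats = np.array([[lp["length"], lp["hops"], lp["spacing"]] for lp in sol])
    bad = [lp for lp, q in zip(sol, qot.predict(feats)) if q < OSNR_MIN]
    if not bad:
        break                    # all lightpaths meet the estimated QoT
    constraints.extend(bad)      # forbid these combinations next round
print(f"converged after {it + 1} iteration(s)")
```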

    Deep learning for tomographic reconstruction with limited data

    Get PDF
    Tomography is a powerful technique to non-destructively determine the interior structure of an object. Usually, a series of projection images (e.g., X-ray images) is acquired from a range of different positions. From these projection images, a reconstruction of the object's interior is computed. Many advanced applications require fast acquisition, effectively limiting the number of projection images and imposing a level of noise on these images. These limitations result in artifacts (deficiencies) in the reconstructed images. Recently, deep neural networks have emerged as a powerful technique to remove these limited-data artifacts from reconstructed images, often outperforming conventional state-of-the-art techniques. To perform this task, the networks are typically trained on a dataset of paired low-quality and high-quality images of similar objects. The need for such paired training data is a major obstacle to their use in many practical applications. In this thesis, we explore techniques to employ deep learning in advanced experiments where measuring additional objects is not possible. Financial support was provided by the Netherlands Organisation for Scientific Research (NWO), programme 639.073.506.
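    For context, the standard supervised setup that this thesis seeks to move beyond can be sketched in a few lines: a small network is trained on paired low-quality/high-quality images (synthetic stand-ins below) to remove reconstruction artifacts. The architecture and data are illustrative assumptions.

```python
# Hedged sketch of the usual paired-data training setup for limited-data
# artifact removal; frames are random stand-ins, not reconstructions.
import torch
import torch.nn as nn

denoiser = nn.Sequential(                  # toy artifact-removal network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

clean = torch.rand(8, 1, 64, 64)               # "high-quality" stand-ins
noisy = clean + 0.1 * torch.randn_like(clean)  # "limited-data" counterparts

for step in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(denoiser(noisy), clean)
    loss.backward()
    opt.step()
print("final training MSE:", loss.item())
```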

    Machine learning-based automated segmentation with a feedback loop for 3D synchrotron micro-CT

    Get PDF
    The development of third-generation synchrotron light sources laid the foundation for investigating the 3D structure of opaque samples at micrometre resolution and beyond. This led to the development of X-ray synchrotron micro-computed tomography, which fostered the creation of imaging facilities for studying samples of many kinds, e.g., model organisms, in order to better understand the physiology of complex living systems. The development of modern control systems and robotics enabled the full automation of X-ray imaging experiments and the calibration of the experimental setup's parameters during operation. Advances in digital detector systems brought improvements in resolution, dynamic range, sensitivity, and other essential properties. These improvements led to a considerable increase in the throughput of the imaging process, but on the other hand the experiments began to generate substantially larger amounts of data, up to tens of terabytes, which were subsequently processed manually. These technical advances thus paved the way for more efficient high-throughput experiments studying large numbers of samples and producing datasets of better quality. There is therefore a strong need in the scientific community for an efficient, automated workflow for X-ray data analysis that can handle such a data load and deliver valuable insights to domain experts. Existing solutions for such a workflow are not directly applicable to high-throughput experiments, since they were developed for ad hoc scenarios in medical imaging; they are therefore neither optimized for high-throughput data streams nor able to exploit the hierarchical nature of samples. The main contribution of the present work is a new automated analysis workflow suited to the efficient processing of heterogeneous X-ray datasets of a hierarchical nature. The developed workflow is based on improved methods for data preprocessing, registration, localization, and segmentation. Every workflow stage that includes a training phase can be automatically fine-tuned to find the best hyperparameters for the specific dataset. For the analysis of fiber structures in samples, a new, highly parallelizable 3D orientation analysis method was developed, based on a novel concept of emitted rays, which enables more precise morphological analysis. All developed methods were thoroughly validated on synthetic datasets to quantitatively assess their applicability under different imaging conditions. The workflow was shown to be capable of processing a series of datasets of a similar kind. In addition, efficient CPU/GPU implementations of the developed workflow and methods are presented and made available to the community as modules for the Python language. The developed automated analysis workflow was successfully applied to micro-CT datasets obtained in high-throughput X-ray experiments in developmental biology and materials science.
In particular, this workflow was applied to the analysis of the medaka fish datasets, enabling automated segmentation and subsequent morphological analysis of the brain, liver, head nephrons, and heart. Furthermore, the developed 3D orientation analysis method was employed in the morphological analysis of polymer scaffold datasets in order to steer a fabrication process towards desirable properties.
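    One element of the workflow is that every trainable stage can be automatically fine-tuned per dataset. A toy version of such a hyperparameter search is sketched below; the segmentation stage and its quality score are placeholders, not the thesis code.

```python
# Hedged sketch: per-dataset hyperparameter fine-tuning of a trainable
# workflow stage. train_and_score is a placeholder for training a
# segmentation stage and returning a validation metric (e.g. Dice).
import itertools
import numpy as np

def train_and_score(smoothing, threshold, data):
    """Placeholder: pretend (0.8, 0.5) is the best setting for this data."""
    ideal = (0.8, 0.5)
    return 1.0 - abs(smoothing - ideal[0]) - abs(threshold - ideal[1])

data = np.zeros((4, 64, 64))               # stand-in micro-CT volume
grid = itertools.product([0.2, 0.5, 0.8], [0.3, 0.5, 0.7])
best = max(grid, key=lambda p: train_and_score(*p, data))
print("best (smoothing, threshold):", best)
```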

    Virtualisation and resource allocation in MEC-Enabled metro optical networks

    Get PDF
    The appearance of new network services and the ever-increasing network traffic and number of connected devices will push the evolution of current communication networks towards the Future Internet. In the area of optical networks, wavelength routed optical networks (WRONs) are evolving to elastic optical networks (EONs) in which, thanks to the use of OFDM or Nyquist WDM, it is possible to create super-channels with custom-size bandwidth. The basic element in these networks is the lightpath, i.e., an all-optical circuit between two network nodes. The establishment of lightpaths requires the selection of the route that they will follow and the portion of the spectrum to be used in order to carry the requested traffic from the source to the destination node. That problem is known as the routing and spectrum assignment (RSA) problem, and new algorithms must be proposed to address it. Some early studies on elastic optical networks considered gridless scenarios, in which a slice of spectrum of variable size is assigned to a request. However, the most common approach to spectrum allocation is to divide the spectrum into slots of fixed width and allocate multiple, consecutive spectrum slots to each lightpath, depending on the requested bandwidth. Moreover, EONs also allow more flexible routing and spectrum assignment techniques, like the split-spectrum approach, in which the request is divided into multiple "sub-lightpaths". In this thesis, four RSA algorithms are proposed, combining two different levels of flexibility with the well-known k-shortest paths and first fit heuristics. After comparing the performance of those methods, a novel spectrum assignment technique, Best Gap, is proposed to overcome the inefficiencies that emerge when combining the first fit heuristic with highly flexible networks. A simulation study is presented to demonstrate that, thanks to the use of Best Gap, EONs can exploit the network flexibility and reduce the blocking ratio. On the other hand, operators must face profound architectural changes to increase the adaptability and flexibility of networks and ease their management. Thanks to the use of network function virtualisation (NFV), the necessary network functions that must be applied to offer a service can be deployed as virtual appliances hosted by commodity servers, which can be located in data centres, network nodes or even end-user premises. The appearance of new computation and networking paradigms, like multi-access edge computing (MEC), may facilitate the adaptation of communication networks to the new demands. Furthermore, the use of MEC technology will enable the possibility of installing those virtual network functions (VNFs) not only at data centres (DCs) and central offices (COs), the traditional hosts of VNFs, but also at the edge nodes of the network. Since data processing is performed closer to the end-user, the latency associated with each service connection request can be reduced. MEC nodes will usually be connected to one another, and to the DCs and COs, by optical networks. In such a scenario, deploying a network service requires completing two phases: the VNF-placement, i.e., deciding the number and location of VNFs, and the VNF-chaining, i.e., connecting the VNFs that the traffic associated with a service must traverse in order to establish the connection.
In the chaining process, not only the existence of VNFs with available processing capacity, but also the availability of network resources must be taken into account to avoid the rejection of the connection request. Taking into consideration that the backhaul of this scenario will usually be based on WRONs or EONs, it is necessary to design the virtual topology (i.e., the set of lightpaths established in the network) in order to transport the traffic from one node to another. The process of designing the virtual topology includes deciding the number of connections or lightpaths, allocating them a route and spectral resources, and finally grooming the traffic into the created lightpaths. Lastly, a failure in the equipment of a node in an NFV environment can cause the disruption of the service chains (SCs) traversing the node. This can cause the loss of huge amounts of data and affect thousands of end-users. In consequence, it is key to provide the network with fault-management techniques able to guarantee the resilience of the established connections when a node fails. For the mentioned reasons, it is necessary to design orchestration algorithms which solve the VNF-placement, chaining and network resource allocation problems in 5G networks with optical backhaul. Moreover, some versions of those algorithms must also implement protection techniques to guarantee the system's resilience in case of failure. This thesis makes contributions along that line. Firstly, a genetic algorithm is proposed to solve the VNF-placement and VNF-chaining problems in a 5G network with an optical backhaul based on a star topology: GASM (genetic algorithm for effective service mapping). Then, we propose a modification of that algorithm so it can be applied to dynamic scenarios in which the reconfiguration of the planning is allowed. Furthermore, we enhanced the modified algorithm to include a learning step, with the objective of improving its performance. In this thesis, we also propose an algorithm to solve not only the VNF-placement and VNF-chaining problems but also the design of the virtual topology, considering that a WRON is deployed as the backhaul network connecting MEC nodes and COs. Moreover, a version including individual VNF protection against node failure has also been proposed, and the effect of using shared/dedicated and end-to-end SC/individual VNF protection schemes is also analysed. Finally, a new algorithm that solves the VNF-placement and chaining problems and the virtual topology design implementing a new chaining technique is also proposed, together with its corresponding versions implementing individual VNF protection. Furthermore, since the method works with any type of WDM mesh topology, a techno-economic study is presented to compare the effect of using different network topologies on both network performance and cost.
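    For reference, the well-known first fit heuristic that the thesis combines with k-shortest paths (and that Best Gap is designed to improve upon) can be sketched as follows; Best Gap itself is not reproduced here.

```python
# First-fit spectrum assignment: allocate the lowest-indexed run of
# consecutive free slots able to host the request, or block the demand.
def first_fit(spectrum, demand_slots):
    """spectrum: list of bools (True = occupied); returns start index or None."""
    run_start, run_len = 0, 0
    for i, occupied in enumerate(spectrum):
        if occupied:
            run_start, run_len = i + 1, 0
            continue
        run_len += 1
        if run_len == demand_slots:
            for j in range(run_start, run_start + demand_slots):
                spectrum[j] = True   # allocate the slots
            return run_start
    return None                      # blocked: no gap large enough

link = [True, False, False, True, False, False, False, True]
print(first_fit(link, 3))  # -> 4 (first gap of >= 3 consecutive free slots)
```

First fit always picks the lowest-indexed feasible gap, which is precisely the behaviour that can fragment the spectrum in highly flexible scenarios and that motivates a gap-aware alternative.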

    TAP-Vid: A Benchmark for Tracking Any Point in a Video

    Full text link
    Generic motion understanding from video involves not only tracking objects, but also perceiving how their surfaces deform and move. This information is useful to make inferences about 3D shape, physical properties and object interactions. While the problem of tracking arbitrary physical points on surfaces over longer video clips has received some attention, no dataset or benchmark for evaluation existed, until now. In this paper, we first formalize the problem, naming it tracking any point (TAP). We introduce a companion benchmark, TAP-Vid, which is composed of both real-world videos with accurate human annotations of point tracks, and synthetic videos with perfect ground-truth point tracks. Central to the construction of our benchmark is a novel semi-automatic crowdsourced pipeline which uses optical flow estimates to compensate for easier, short-term motion like camera shake, allowing annotators to focus on harder sections of video. We validate our pipeline on synthetic data and propose a simple end-to-end point tracking model TAP-Net, showing that it outperforms all prior methods on our benchmark when trained on synthetic data. Comment: Published in NeurIPS Datasets and Benchmarks track, 2022.
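    The flow-compensation idea behind the annotation pipeline can be illustrated roughly as follows: dense optical flow between consecutive frames propagates an annotated point forward, so annotators only need to correct the residual on harder sections. The frames, parameters, and single-step propagation below are assumptions, not the paper's actual pipeline.

```python
# Hedged sketch: propagate a labelled point with dense optical flow to
# compensate easy short-term motion. Frames are synthetic stand-ins.
import numpy as np
import cv2

prev = np.random.randint(0, 255, (240, 320), dtype=np.uint8)  # frame t
nxt = np.roll(prev, shift=(2, 3), axis=(0, 1))  # shifted 2 px down, 3 right

flow = cv2.calcOpticalFlowFarneback(
    prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0)  # H x W x 2 field of (dx, dy)

point = np.array([160.0, 120.0])           # annotated (x, y) in frame t
dx, dy = flow[int(point[1]), int(point[0])]
propagated = point + np.array([dx, dy])    # proposal for frame t+1
print("propagated point:", propagated)     # annotator corrects any residual
```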

    Towards Zero Touch Next Generation Network Management

    Get PDF
    The current trend in user services places an ever-growing demand for higher data rates, near-real-time latencies, and near-perfect quality of service. To meet such demands, fundamental changes were made to the front-haul, mid-haul, and backbone networking segments servicing them. One of the main changes made was virtualizing the networking components to allow for faster deployment and reconfiguration when needed. However, adopting such technologies poses several challenges. The first is improving the performance and efficiency of these systems by properly orchestrating the services onto the ideal edge device. A second challenge is ensuring that the backbone optical networking maximizes and maintains throughput levels under more dynamically varying conditions. A third challenge is addressing the limitations of placement techniques in O-RAN. In this thesis, we propose using various optimization modeling and machine learning techniques in three segments of network systems, lowering the need for human intervention and targeting zero-touch networking. In particular, the first part of the thesis applies optimization modeling, heuristics, and segmentation to improve the locally driven orchestration techniques used to place demands on edge devices, to ensure efficient and resilient placement decisions. The second part of the thesis proposes using reinforcement learning (RL) techniques on a nodal basis to address the dynamic nature of demands within an optical networking paradigm. The RL techniques ensure blocking rates are kept to a minimum by tailoring the agents' behavior based on each node's demand intake throughout the day. The third part of the thesis proposes using transfer-learning-augmented reinforcement learning to drive a network-slicing-based solution in O-RAN to address the stringent and divergent demands of 5G applications. The main contributions of the thesis consist of three broad parts. The first is developing optimal and heuristic orchestration algorithms that improve demands' performance and reliability in an edge computing environment. The second is using reinforcement learning to determine the appropriate spectral placement for demands within isolated optical paths, ensuring lower fragmentation and better throughput utilization. The third is developing a heuristic-controlled, transfer-learning-augmented reinforcement learning network-slicing solution in an O-RAN environment, ensuring improved reliability while maintaining lower complexity than traditional placement techniques.
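    As a toy illustration of the RL-based spectral placement idea, the sketch below reduces the problem to a stateless bandit-style learner that comes to prefer spectral regions with lower blocking; the state space, reward, and blocking model of the actual thesis are certainly richer, and everything here is an assumption.

```python
# Hedged sketch: epsilon-greedy Q-learning over spectral regions, with a
# penalty on blocking. The per-region blocking probabilities are invented.
import numpy as np

rng = np.random.default_rng(2)
N_REGIONS, EPISODES = 4, 2000
Q = np.zeros(N_REGIONS)                 # stateless bandit-style Q-values
occupancy_bias = np.array([0.8, 0.5, 0.3, 0.6])  # how busy each region is

eps, alpha = 0.1, 0.05
for _ in range(EPISODES):
    a = rng.integers(N_REGIONS) if rng.random() < eps else int(np.argmax(Q))
    blocked = rng.random() < occupancy_bias[a]   # busier region blocks more
    reward = -1.0 if blocked else 1.0
    Q[a] += alpha * (reward - Q[a])     # incremental value update

print("learned placement preference:", int(np.argmax(Q)), Q.round(2))
```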