
    Online Resource Allocation in Dynamic Optical Networks

    Conventional optical transport networks have enabled the provisioning of high-speed connectivity in the form of long-term installed, constant bit-rate connections. The setup times of such connections are in the order of weeks, given that in most cases manual installation is required. Once installed, connections remain active for months or years. The advent of grid computing and cloud-based services brings new connectivity requirements which cannot be met by present-day optical transport networks. This has raised awareness of the need for a changeover to dynamic optical networks that enable the provisioning of bandwidth on demand (BoD) in the optical domain. These networks will have to serve connections with different bit-rate requirements, with random interarrival times and durations, and with stringent setup latencies. Ongoing research has shown that grid computing and cloud-based services may in some cases request connections with holding times ranging from seconds to hours, and with setup latencies that must be in the order of milliseconds. To provide BoD, dynamic optical networks must perform connection setup, maintenance and teardown without manual labour. For that, software-configurable networks are needed that are deployed with enough capacity to automatically establish connections. Recently, network architectures have been proposed for this purpose that embrace flex-grid wavelength division multiplexing, reconfigurable optical add/drop multiplexers (ROADMs), and bandwidth-variable, tunable transponders as the main technology drivers. To exploit the benefits of these technologies, online resource allocation methods are necessary to ensure that during network operation the installed capacity is efficiently assigned to connections.
As connections may arrive and depart randomly, the traffic matrix is unknown, and hence each connection request submitted to the network has to be processed independently. This implies that resource allocation must be tackled as an online optimization problem which, for each connection request and depending on the network state, decides whether the request is admitted or rejected. If admitted, a further decision is made on which resources are assigned to the connection. The decisions are calculated such that, in the long run, a desired performance objective is optimized. To achieve its goal, resource allocation implements control functions for routing and spectrum allocation (RSA), connection admission control (CAC), and grade of service (GoS) control. In this dissertation we tackle the problem of online resource allocation in dynamic optical networks. For that, the theory of Markov decision processes (MDP) is applied to formulate resource allocation as an online optimization problem. An MDP-based formulation has two relevant advantages. First, the problem can be solved to optimize an arbitrarily defined performance objective (e.g. minimization of blocking probability or maximization of economic revenue). Second, it can provide GoS control for groups of connections with different statistical properties. To solve the optimization problem, a fast, adaptive and state-dependent online algorithm is proposed to calculate a resource allocation policy. The calculation is performed recursively during network operation, and uses algorithms for RSA and CAC. The resulting policy is a course of action that instructs the network how to process each connection request. Furthermore, an implementation of the method is proposed that uses a 3-way handshake protocol for connection setup, and an analytical performance evaluation model is derived to estimate the connection setup latency. Our study is complemented by an evaluation of the capital expenditures (CAPEX) of dynamic optical networks.
The main cost drivers are identified, and the relationship between cost and network performance is analyzed. The performance of the methods proposed in this thesis, including the accuracy of the analytical evaluation of the connection setup latency, was evaluated by extensive simulations. The contributions of the thesis provide a novel approach that meets the requirements envisioned for resource allocation in dynamic optical networks.
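The MDP formulation described above can be illustrated with a deliberately small model. The sketch below is not the thesis's algorithm: it applies plain value iteration to admission control on a single link with two connection classes, and every capacity, rate, and revenue figure is an illustrative assumption.

```python
# Hypothetical toy model: one link, two connection classes. Value
# iteration on the uniformized MDP yields a state-dependent CAC policy.

C = 8                        # spectrum slots on the link (assumed)
b = {1: 1, 2: 4}             # slots per connection of each class (assumed)
lam = {1: 0.6, 2: 0.2}       # arrival rates (assumed)
mu = {1: 1.0, 2: 0.5}        # per-connection departure rates (assumed)
rev = {1: 1.0, 2: 5.0}       # revenue earned on admission (assumed)

states = [(n1, n2) for n1 in range(C + 1) for n2 in range(C + 1)
          if n1 * b[1] + n2 * b[2] <= C]
Lam = sum(lam.values()) + C * max(mu.values())   # uniformization rate
gamma = 0.999                                    # discount factor

V = {s: 0.0 for s in states}
for _ in range(2000):
    newV = {}
    for n1, n2 in states:
        n = {1: n1, 2: n2}
        q, rate = 0.0, 0.0
        for k in (1, 2):
            # arrival of class k: take the better of rejecting or admitting
            nxt = (n1 + (k == 1), n2 + (k == 2))
            gain = rev[k] + V[nxt] if nxt in V else float("-inf")
            q += lam[k] * max(V[(n1, n2)], gain)
            rate += lam[k]
            # departure of an active class-k connection
            prv = (n1 - (k == 1), n2 - (k == 2))
            if n[k]:
                q += n[k] * mu[k] * V[prv]
                rate += n[k] * mu[k]
        q += (Lam - rate) * V[(n1, n2)]          # fictitious self-loop
        newV[(n1, n2)] = gamma * q / Lam
    V = newV

def admit(state, k):
    """Policy read-off: admit class k in `state` iff admission pays off."""
    nxt = (state[0] + (k == 1), state[1] + (k == 2))
    return nxt in V and rev[k] + V[nxt] >= V[state]
```

The read-off function is the state-dependent CAC decision: an arrival is accepted only when the immediate revenue outweighs the opportunity cost of occupying the slots.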

    Artificial intelligence (AI) methods in optical networks: A comprehensive survey

    Artificial intelligence (AI) is an extensive scientific discipline which enables computer systems to solve problems by emulating complex biological processes such as learning, reasoning and self-correction. This paper presents a comprehensive review of the application of AI techniques for improving the performance of optical communication systems and networks. The use of AI-based techniques is first studied in applications related to optical transmission, ranging from the characterization and operation of network components to performance monitoring, mitigation of nonlinearities, and quality of transmission estimation. Then, applications related to optical network control and management are also reviewed, including topics like optical network planning and operation in both transport and access networks. Finally, the paper also presents a summary of opportunities and challenges in optical networking where AI is expected to play a key role in the near future. Funding: Ministerio de Economía, Industria y Competitividad (Project EC2014-53071-C3-2-P, TEC2015-71932-REDT).

    DeepDefrag: A deep reinforcement learning framework for spectrum defragmentation

    Exponential growth of bandwidth demand, spurred by emerging network services with diverse characteristics and stringent performance requirements, drives the need for dynamic operation of optical networks, efficient use of spectral resources, and automation. One of the main challenges of dynamic, resource-efficient Elastic Optical Networks (EONs) is spectrum fragmentation. Fragmented, stranded spectrum slots lead to poor resource utilization and increase the blocking probability of incoming service requests. Conventional approaches to Spectrum Defragmentation (SD) apply various criteria to decide when, and which portion of, the spectrum to defragment. However, these policies often address only a subset of defragmentation-related tasks, are not adaptable, and have limited automation potential. To address these issues, we propose DeepDefrag, a novel framework based on reinforcement learning that addresses the main aspects of the SD process: determining when to perform defragmentation, which connections to reconfigure, and which part of the spectrum to reallocate them to. DeepDefrag outperforms the well-known Older-First First-Fit (OF-FF) defragmentation heuristic, achieving lower blocking probability with smaller defragmentation overhead.
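The OF-FF baseline mentioned above can be sketched compactly: connections are re-placed oldest-first, each into the lowest-index block of contiguous free slots. The single-link model below is a hedged illustration, not the paper's implementation; the connection tuple layout and slot counts are assumptions.

```python
# Sketch of Older-First First-Fit (OF-FF) defragmentation on one link.
# A connection is (setup_time, demand_slots, start_slot); we assume the
# total demand fits in the link, otherwise next() raises StopIteration.

def of_ff_defragment(connections, num_slots):
    """Re-place connections oldest-first, each at the first contiguous fit."""
    order = sorted(connections, key=lambda c: c[0])   # oldest first
    occupied = [False] * num_slots
    result = []
    for setup_time, demand, _ in order:
        # first fit: lowest start index whose block of `demand` slots is free
        start = next(s for s in range(num_slots - demand + 1)
                     if not any(occupied[s:s + demand]))
        for s in range(start, start + demand):
            occupied[s] = True
        result.append((setup_time, demand, start))
    return result
```

After the pass, all connections sit packed at the low end of the spectrum, which is exactly the compaction that frees contiguous blocks for future requests.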

    Stochastic Analysis and Learning-based Algorithms for Resource Provisioning in Optical Networks

    The unprecedented growth in Internet traffic has driven innovations in the provisioning of optical resources according to bandwidth demands, so that resource utilization and spectrum efficiency are maximized. With the advent of next-generation flexible optical transponders and switches, the flexible-grid-based elastic optical network (EON) is foreseen as an alternative to the widely deployed fixed-grid-based wavelength division multiplexing networks. At the same time, flexible resource provisioning also raises new challenges for EONs. One such challenge is spectrum fragmentation. As network traffic varies over time, the spectrum gets fragmented due to the setting up and tearing down of non-uniform bandwidth requests over aligned (i.e., continuous) and adjacent (i.e., contiguous) spectrum slices, which leads to non-optimal spectrum allocation and generally results in higher blocking probability and lower spectrum utilization in EONs. To address this issue, the allocation and reallocation of optical resources must be modeled accurately, and managed efficiently and intelligently. The modeling of routing and spectrum allocation in EONs under the spectrum contiguity and spectrum continuity constraints is well investigated, but existing models do not consider the fragmentation resulting from these constraints and non-uniform bandwidth demands. This thesis addresses this issue and considers both constraints to compute exact blocking probabilities in EONs with and without spectrum conversion, and with spectrum reallocation (known as defragmentation), for the first time using a Markovian approach. As the exact network models are not scalable with respect to network size and capacity, this thesis proposes load-independent and load-dependent approximate models to compute approximate blocking probabilities in EONs.
Results show that connection blocking due to fragmentation can be reduced by using spectrum conversion or a defragmentation approach, but it cannot be eliminated in a mesh network topology. This thesis also deals with the important task of network resource provisioning in EONs. To this end, it first presents algorithmic solutions to efficiently allocate and reallocate spectrum resources using the fragmentation factor along the spectral, time, and spatial dimensions. Furthermore, this thesis highlights the role of machine learning techniques in alleviating issues in static provisioning of optical resources, and presents two use cases: handling time-varying traffic in optical data center networks, and reducing energy consumption and allocating spectrum proportionately to traffic classes in fiber-wireless networks.
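For context on the blocking analysis above: when every request occupies a single slot, the contiguity constraint vanishes and the exact blocking probability reduces to the classical Erlang-B loss formula, which the fragmentation-aware Markov models generalize. A minimal sketch of the standard Erlang-B recursion (a textbook baseline, not the thesis's model):

```python
def erlang_b(c, a):
    """Blocking probability of an M/M/c/c loss system:
    c servers (slots), offered load a in Erlangs.
    Uses the numerically stable recursion
    B(m, a) = a*B(m-1, a) / (m + a*B(m-1, a)), B(0, a) = 1."""
    b = 1.0
    for m in range(1, c + 1):
        b = a * b / (m + a * b)
    return b
```

With contiguity and continuity constraints, blocking is strictly higher than this baseline for the same load, which is precisely the fragmentation penalty the thesis quantifies.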

    Thirty-second Annual Symposium of Trinity College Undergraduate Research

    2019 annual volume of abstracts for science research projects conducted by students at Trinity College

    Designing smart garments for rehabilitation


    Machine Learning for Unmanned Aerial System (UAS) Networking

    Fueled by the advancement of 5G New Radio (5G NR), rapid development has occurred in many fields. Compared with conventional approaches, beamforming and network slicing give 5G NR a tenfold reduction in latency, and tenfold increases in connection density and experienced throughput, over 4G Long Term Evolution (4G LTE). These advantages pave the way for the large-scale evolution of Cyber-Physical Systems (CPS). Reduced consumption, advances in control engineering, and the simplification of Unmanned Aircraft Systems (UAS) make large-scale deployment of UAS networking feasible. UAS networks can carry out multiple complex missions simultaneously. However, the limitations of conventional approaches make it challenging to balance massive management against efficient networking at scale. With 5G NR and machine learning, the contributions of this dissertation can be summarized as follows. I proposed a novel Optimized Ad-hoc On-demand Distance Vector (OAODV) routing protocol that improves the throughput of intra-UAS networking while reducing system overhead. To improve security, I proposed a blockchain scheme that mitigates malicious base stations in cellular-connected UAS networking, and a proof-of-traffic (PoT) mechanism that improves the efficiency of blockchain for large-scale UAS networking. Inspired by the biological cell paradigm, I proposed cell-wall routing protocols for heterogeneous UAS networking. With 5G NR, interconnections between UAS networks strengthen the throughput and elasticity of UAS networking. With machine learning, routing scheduling for intra- and inter-UAS networking enhances throughput at scale. Inter-UAS networking achieves globally max-min throughput via edge coloring; I leveraged upper and lower bounds to accelerate the edge-coloring optimization.
This dissertation paves a way for UAS networking at the intersection of CPS and machine learning. The UAS networking can achieve outstanding performance in a decentralized architecture. Concurrently, this dissertation gives insights into UAS networking on a large scale. These results are fundamental to integrating UAS into the National Airspace System (NAS), and critical to aviation in both manned and unmanned domains. The dissertation provides novel approaches for the promotion of large-scale UAS networking; the proposed approaches extend the state of the art of UAS networking in a decentralized architecture, and together they contribute to the establishment of UAS networking with CPS.
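The edge-coloring view of link scheduling mentioned above can be illustrated with a short sketch (not the dissertation's algorithm): each color class is a set of links without a shared endpoint, i.e. links that can transmit simultaneously. By Vizing's theorem the optimal number of colors lies between the maximum degree Δ and Δ+1, which is the kind of upper/lower bound the abstract refers to; the greedy pass below uses at most 2Δ−1 colors.

```python
# Greedy edge coloring for conflict-free link scheduling (illustrative).
# An edge is (u, v); two edges conflict if they share an endpoint.

def greedy_edge_coloring(edges):
    """Assign each edge the smallest color unused by its neighbors.
    Returns a dict mapping edge -> color index (a time/frequency slot)."""
    coloring = {}
    for u, v in edges:
        used = {coloring[e] for e in coloring if u in e or v in e}
        color = 0
        while color in used:
            color += 1
        coloring[(u, v)] = color
    return coloring
```

On a triangle (an odd cycle with Δ = 2) three colors are unavoidable, while a simple path of three edges needs only Δ = 2 colors.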

    Split Federated Learning for 6G Enabled-Networks: Requirements, Challenges and Future Directions

    Sixth-generation (6G) networks are anticipated to intelligently support a wide range of smart services and innovative applications. Such a context urges heavy usage of Machine Learning (ML) techniques, particularly Deep Learning (DL), to foster innovation and ease the deployment of intelligent network functions and operations able to fulfill the various requirements of the envisioned 6G services. Specifically, collaborative ML/DL consists of deploying a set of distributed agents that collaboratively train learning models without sharing their data, thus improving data privacy and reducing the time/communication overhead. This work provides a comprehensive study of how collaborative learning can be effectively deployed over 6G wireless networks. In particular, our study focuses on Split Federated Learning (SFL), a recently emerged technique that promises better performance than existing collaborative learning approaches. We first provide an overview of three emerging collaborative learning paradigms, including federated learning, split learning, and split federated learning, as well as an overview of 6G networks, their main vision, and the timeline of key developments. We then highlight the need for split federated learning in the upcoming 6G networks across every aspect, including 6G technologies (e.g., intelligent physical layer, intelligent edge computing, zero-touch network management, intelligent resource management) and 6G use cases (e.g., smart grid 2.0, Industry 5.0, connected and autonomous systems). Furthermore, we review existing datasets along with frameworks that can help in implementing SFL for 6G networks. We finally identify key technical challenges, open issues, and future research directions related to SFL-enabled 6G networks.
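The collaborative-learning idea described above (train together, never share raw data) can be sketched with a minimal federated-averaging loop in plain Python. This is a hedged illustration only: the data, learning rate, and model (a single linear parameter) are assumptions, and SFL additionally splits the model between client and server, which this sketch omits.

```python
# FedAvg-style sketch: clients fit a shared linear model y = w*x on
# private (x, y) pairs and exchange only the parameter w, never data.

def local_step(w, data, lr=0.02, epochs=100):
    """One client's gradient descent on its private data (assumed values)."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(clients, rounds=10):
    """Server loop: broadcast w, collect local updates, average them."""
    w = 0.0                                          # shared global parameter
    for _ in range(rounds):
        updates = [local_step(w, d) for d in clients]  # parallel in practice
        w = sum(updates) / len(updates)              # model averaging
    return w

# two clients whose private datasets both follow y = 2x (assumed)
clients = [[(1, 2), (2, 4)], [(3, 6), (4, 8)]]
```

Both clients converge toward the same underlying model even though neither ever sees the other's samples, which is the privacy argument made in the abstract.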

    Autonomous Traffic Engineering using Deep Reinforcement Learning

    The evolution of communication technologies in the past few decades has led to a huge increase in the complexity and overall size of telecommunication networks. This phenomenon has increased the need for innovation in the field of Traffic Engineering (TE), as existing solutions are not flexible enough to adapt to these changes. With the appearance of 5G technologies, the urgency to revolutionize the field is higher than ever, and the softwarization and virtualization of the infrastructure bring new possibilities for TE optimization, namely the use of Artificial Intelligence (AI) based methods for traffic management. Recent advances in AI have provided model-free optimization methods, with algorithms such as Deep Reinforcement Learning (DRL) that can be used to optimize traffic distributions in complex and hard-to-model network scenarios. This thesis provides a DRL-based solution for TE in which an agent makes routing decisions based on the current state of the network, with the goal of balancing the load between the network paths. A DRL agent is developed and trained in two scenarios where the traffic already present in the network is generated either randomly or according to a systematic pattern. A simulation environment was developed to train and evaluate the DRL agent.
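The load-balancing objective described above can be caricatured with a much simpler stand-in: tabular Q-learning (not deep RL) on a hypothetical two-path toy network. Everything here, the state discretization, the reward, and all the hyperparameters, is an illustrative assumption, not the thesis's agent.

```python
# Toy RL traffic engineering: route unit flows over two paths so that the
# load difference stays small. Tabular, myopic Q-learning for clarity.
import random

random.seed(0)

PATHS = 2
Q = {}                               # (state, action) -> estimated value
alpha, eps = 0.1, 0.1                # learning rate, exploration rate

def state(load):
    return load[0] - load[1]         # state = signed path imbalance

def pick(load):
    """Greedy action: send the next flow over the path with the best Q."""
    s = state(load)
    qs = [Q.get((s, a), 0.0) for a in range(PATHS)]
    return qs.index(max(qs))

for _ in range(2000):                # training episodes
    load = [0, 0]
    for _ in range(10):              # route 10 unit flows per episode
        s = state(load)
        a = random.randrange(PATHS) if random.random() < eps else pick(load)
        load[a] += 1
        r = -abs(load[0] - load[1])  # reward favors balanced paths
        # myopic (one-step) Q update -- sufficient for this toy problem
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r - Q.get((s, a), 0.0))
```

After training, rolling out the greedy policy from an empty network keeps the two paths within one flow of each other, which is the balancing behavior the thesis trains its DRL agent to learn in far richer network state spaces.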