7,930 research outputs found

    Multilayer Networks

    In most natural and engineered systems, a set of entities interact with each other in complicated patterns that can encompass multiple types of relationships, change in time, and include other types of complications. Such systems include multiple subsystems and layers of connectivity, and it is important to take such "multilayer" features into account to try to improve our understanding of complex systems. Consequently, it is necessary to generalize "traditional" network theory by developing (and validating) a framework and associated tools to study multilayer systems in a comprehensive fashion. The origins of such efforts date back several decades and arose in multiple disciplines, and now the study of multilayer networks has become one of the most important directions in network science. In this paper, we discuss the history of multilayer networks (and related concepts) and review the exploding body of work on such networks. To unify the disparate terminology in the large body of recent work, we discuss a general framework for multilayer networks, construct a dictionary of terminology to relate the numerous existing concepts to each other, and provide a thorough discussion that compares, contrasts, and translates between related notions such as multilayer networks, multiplex networks, interdependent networks, networks of networks, and many others. We also survey and discuss existing data sets that can be represented as multilayer networks. We review attempts to generalize single-layer-network diagnostics to multilayer networks. We also discuss the rapidly expanding research on multilayer-network models and notions like community structure, connected components, tensor decompositions, and various types of dynamical processes on multilayer networks. We conclude with a summary and an outlook.
    Comment: Working paper; 59 pages, 8 figures
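As a concrete illustration of the multilayer framework discussed above, a multiplex network can be viewed as a common node set with per-layer edge lists, plus inter-layer couplings that connect each node to its own replica in the other layers. The following sketch (node and layer names are illustrative, not from the paper) enumerates the edges of the resulting supra-graph:

```python
# Minimal sketch of a multiplex network: the same node set appears in every
# layer, and inter-layer edges couple each node to its replica in other layers.
# Node names and layer names here are illustrative assumptions.

from itertools import combinations

nodes = ["a", "b", "c"]
layers = {
    "social":    [("a", "b"), ("b", "c")],   # intra-layer edges, layer 1
    "transport": [("a", "c")],               # intra-layer edges, layer 2
}

def supra_edges(nodes, layers):
    """List the edges of the supra-graph: intra-layer edges plus the
    node-replica couplings that make the network 'multiplex'."""
    edges = []
    layer_names = list(layers)
    # Intra-layer edges: (node, layer) -- (node, layer) within one layer
    for name, layer_edges in layers.items():
        for u, v in layer_edges:
            edges.append(((u, name), (v, name)))
    # Inter-layer couplings: each node linked to its replica in every other layer
    for n in nodes:
        for l1, l2 in combinations(layer_names, 2):
            edges.append(((n, l1), (n, l2)))
    return edges

edges = supra_edges(nodes, layers)
# 3 intra-layer edges + 3 nodes x 1 layer pair = 6 supra-edges
print(len(edges))  # 6
```

Diagnostics such as components or community structure can then be computed on this supra-graph rather than on each layer in isolation.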

    Autonomous and reliable operation of multilayer optical networks

    This Ph.D. thesis focuses on the reliable autonomous operation of multilayer optical networks. The first objective focuses on the reliability of the optical network and proposes methods for health analysis related to Quality of Transmission (QoT) degradation. Such degradation is produced by soft failures in optical devices and fibers in the core and metro segments of operators' transport networks. Here, we compare the QoT estimated by a GNPy-based QoT tool with that measured in the optical transponder. We show that changes in the input parameters of the QoT model representing optical devices can explain the observed deviations and performance degradation of such devices. We use reverse engineering to estimate the values of those parameters that explain the observed QoT. We show by simulation that soft failures can be detected, localized, and identified long before they affect the network. Finally, to validate our approach, we experimentally verify the high accuracy of the estimated modeling parameters. The second objective focuses on multilayer optical networks, where lightpaths are used to connect packet nodes, thus creating virtual links (vLinks). Specifically, we study how lightpaths can be managed to provide enough capacity to the packet layer without detrimental effects on its Quality of Service (QoS), such as added delay or packet loss, while minimizing energy consumption. Such management must be as autonomous as possible to minimize human intervention. We study the autonomous operation of optical connections based on digital subcarrier multiplexing (DSCM) and propose several solutions for the autonomous operation of DSCM systems. In particular, the combination of two modules, running in the optical node and in the optical transponder, activates and deactivates subcarriers to adapt the capacity of the optical connection to the upper-layer packet traffic.
The module running in the optical node is part of our Intent-Based Networking (IBN) solution and implements prediction to anticipate traffic changes. Our comprehensive study demonstrates the feasibility of DSCM autonomous operation and shows large cost savings in terms of energy consumption. In addition, our study provides a guideline to help vendors and operators adopt the proposed solutions. The final objective targets the automation of packet-layer connections (PkCs). Automating the capacity required by PkCs can bring further cost reductions to network operators, as it can limit the resources used at the optical layer. However, such automation requires careful design to avoid any QoS degradation, which would impact Service Level Agreements (SLAs) when the packet flow is related to a customer connection. We study autonomous packet-flow capacity management. We apply Reinforcement Learning (RL) techniques and propose a management lifecycle consisting of three phases: 1) a self-tuned threshold-based approach for operating the connection until enough data is collected, which enables understanding of the traffic characteristics; 2) RL operation based on models pre-trained with generic traffic profiles; and 3) RL operation based on models trained on the observed traffic. We show that RL algorithms provide poor performance until they learn optimal policies, as well as when the traffic characteristics change over time. The proposed lifecycle provides remarkable performance from the start of the connection and shows robustness when facing changes in traffic. The contribution is twofold: 1) we propose an RL-based solution that shows superior performance with respect to the prediction-based one; and 2) because vLinks support packet connections, we propose coordination between the intents of both layers, where the actions taken by the individual PkCs are used by the vLink intent.
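The subcarrier activation/deactivation idea can be sketched as a simple capacity controller with hysteresis. All constants (per-subcarrier capacity, transponder limit, deactivation margin) are illustrative assumptions, not the thesis design:

```python
# Hedged sketch of DSCM capacity adaptation: activate/deactivate subcarriers
# to track the packet-layer traffic, with headroom so small fluctuations do
# not toggle subcarriers. All numeric constants are illustrative assumptions.

SUBCARRIER_GBPS = 25.0   # assumed capacity per subcarrier
MAX_SUBCARRIERS = 8      # assumed DSCM transponder limit
MARGIN = 0.2             # keep 20% headroom before deactivating

def required_subcarriers(traffic_gbps, active):
    """Return how many subcarriers should be active for the offered traffic."""
    needed = -(-traffic_gbps // SUBCARRIER_GBPS)  # ceiling division
    needed = int(max(1, min(MAX_SUBCARRIERS, needed)))
    # Only deactivate when traffic drops well below the next step down
    if needed < active and traffic_gbps > (needed * SUBCARRIER_GBPS) * (1 - MARGIN):
        return active
    return max(needed, 1)

print(required_subcarriers(60.0, 2))  # 3: 60 Gb/s needs three 25 Gb/s subcarriers
print(required_subcarriers(45.0, 3))  # 3: hysteresis keeps the third subcarrier
```

In the thesis the decision logic is split between a node-side IBN module (prediction) and a transponder-side module (actuation); this sketch only shows the sizing rule itself.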
The results show noticeable performance gains compared to independent vLink operation.
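The three-phase management lifecycle summarized above can be sketched as a phase selector plus the phase-1 threshold rule. Phase boundaries, the sample budget, and the safety margin are assumptions for illustration:

```python
# Illustrative sketch of the three-phase lifecycle: (1) self-tuned threshold
# control while traffic data is collected, (2) RL with a model pre-trained on
# generic profiles, (3) RL with a model trained on the observed traffic.
# The thresholds and names below are assumptions, not the thesis values.

def select_phase(samples_collected, generic_model_ready, tuned_model_ready,
                 min_samples=1000):
    """Pick the capacity-management phase for a packet connection."""
    if tuned_model_ready:
        return "rl-observed-traffic"      # phase 3
    if samples_collected >= min_samples and generic_model_ready:
        return "rl-generic-pretrained"    # phase 2
    return "threshold"                    # phase 1: operate safely, collect data

def threshold_capacity(observed_peak, margin=0.25):
    """Phase-1 rule: allocate the observed peak plus a safety margin,
    re-tuned as new peaks are observed."""
    return observed_peak * (1 + margin)

print(select_phase(100, False, False))   # threshold
print(select_phase(2000, True, False))   # rl-generic-pretrained
print(threshold_capacity(80.0))          # 100.0
```

The point of the lifecycle is that phase 1 avoids the poor performance of an untrained RL policy, while phases 2 and 3 progressively specialize the policy to the connection's actual traffic.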

    Autonomic disaggregated multilayer networking

    Focused on reducing capital expenditures by opening the data plane to multiple vendors without impacting performance, node disaggregation is attracting the interest of network operators. Although the software-defined networking (SDN) paradigm is key for the control of such networks, the increased complexity of multilayer networks strictly requires monitoring/telemetry and data analytics capabilities to assist in creating and operating self-managed (autonomic) networks. Such autonomicity greatly reduces operational expenditures, while improving network performance. In this context, a monitoring and data analytics (MDA) architecture consisting of centralized data storage with data analytics capabilities, together with a generic node agent for monitoring/telemetry supporting disaggregation, is presented. A YANG data model that allows one to clearly separate responsibilities for monitoring configuration from node configuration is also proposed. The MDA architecture and YANG data models are experimentally demonstrated through three different use cases: i) virtual link creation supported by an optical connection, where monitoring is automatically activated; ii) multilayer self-configuration after bit error rate (BER) degradation detection, where a modulation format adaptation is recommended to the SDN controller to minimize errors (this entails reducing the capacity of both the virtual link and the supported multiprotocol label switching-transport profile (MPLS-TP) paths); and iii) optical-layer self-healing, including failure localization at the optical layer to find the cause of the BER degradation. A combination of active and passive monitoring procedures allows one to localize the cause of the failure, leading to lightpath rerouting recommendations toward the SDN controller that avoid the failing element(s).
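Use case (ii) can be sketched as a simple recommendation rule: when the measured pre-FEC BER crosses a threshold, the MDA module recommends stepping down to a more robust (lower-capacity) modulation format. The threshold value and the format ladder below are illustrative assumptions:

```python
# Hedged sketch of BER-triggered modulation adaptation. The alarm level and
# the (format, relative capacity) ladder are illustrative, not the paper's.

FORMATS = [("16QAM", 1.0), ("8QAM", 0.75), ("QPSK", 0.5)]  # most capacity first
BER_THRESHOLD = 1e-3  # assumed pre-FEC BER alarm level

def recommend(current_format, measured_ber):
    """Return the format to recommend to the SDN controller, or the current
    one if the measured BER is acceptable or no more robust format exists."""
    names = [f for f, _ in FORMATS]
    i = names.index(current_format)
    if measured_ber > BER_THRESHOLD and i + 1 < len(FORMATS):
        return names[i + 1]   # step down to a more robust format
    return current_format

print(recommend("16QAM", 5e-3))  # 8QAM: BER too high, reduce capacity
print(recommend("16QAM", 1e-5))  # 16QAM: BER acceptable, keep format
```

In the demonstrated use case, accepting the recommendation also implies shrinking the capacity of the virtual link and its supported MPLS-TP paths, which is why the decision is escalated to the SDN controller rather than taken locally.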

    Measuring reliability of aspect-oriented software using a combination of artificial neural network and imperialist competitive algorithm

    Aspect-oriented software engineering provides new ways to produce and deliver products and ultimately leads to reliable software. Reliability is an important issue contributing to the quality of software. Thus, software engineers need proven mechanisms to determine the extent of software reliability. In this paper, a method for measuring reliability is proposed which takes advantage of a Multilayer Perceptron Artificial Neural Network (MLPANN). Furthermore, an Imperialist Competitive Algorithm (ICA) is used to optimize the weights and improve network performance. Finally, relying on the Root Mean Square Error (RMSE), the proposed approach is compared to a hybrid Genetic Algorithm-Artificial Neural Network (GA-ANN) method. The results show that the proposed approach exhibits a lower error.
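The comparison metric named above, RMSE, is straightforward to compute; the sketch below shows it applied to hypothetical reliability predictions (the sample values are illustrative, not from the paper):

```python
# Root Mean Square Error between predicted and observed reliability values,
# the metric used to compare the MLPANN+ICA and GA-ANN approaches.
# The numeric samples below are illustrative assumptions.

import math

def rmse(predicted, observed):
    """RMSE over paired predictions and observations."""
    assert len(predicted) == len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(predicted))

ica_err = rmse([0.91, 0.85, 0.78], [0.90, 0.88, 0.80])  # hypothetical MLPANN+ICA
ga_err  = rmse([0.94, 0.80, 0.75], [0.90, 0.88, 0.80])  # hypothetical GA-ANN
print(ica_err < ga_err)  # True on these illustrative samples
```

A lower RMSE means the trained network's reliability estimates track the observed values more closely, which is the paper's criterion for preferring the ICA-optimized weights.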

    Distributed collaborative knowledge management for optical networks

    Network automation has long been envisioned. In fact, the Telecommunications Management Network (TMN), defined by the International Telecommunication Union (ITU), is a hierarchy of management layers (network element, network, service, and business management), where high-level operational goals propagate from upper to lower layers. The network management architecture has evolved with the development of the Software Defined Networking (SDN) concept, which brings programmability to simplify configuration (it breaks down high-level service abstractions into lower-level device abstractions), orchestrates operation, and automatically reacts to changes or events. Besides, the development and deployment of solutions based on Artificial Intelligence (AI) and Machine Learning (ML) for making decisions (control loop) based on the collected monitoring data enables network automation, which aims at reducing operational costs. AI/ML approaches usually require large datasets for training purposes, which are difficult to obtain. The lack of data can be compensated with a collective self-learning approach. In this thesis, we go beyond the aforementioned traditional control loop to achieve an efficient knowledge management (KM) process that enhances network intelligence while bringing down complexity. We propose a general architecture to support the KM process based on four main pillars, which enable creating, sharing, assimilating, and using knowledge. Next, two alternative strategies, based on model inaccuracies and on model combination, are proposed. To highlight the capacity of KM to adapt to different applications, two use cases are considered to implement KM in purely centralized and in distributed optical network architectures. Along with them, various policies are considered for evaluating KM in data- and model-based strategies. The results aim to minimize the amount of data that needs to be shared and to reduce the convergence error.
We apply KM to multilayer networks and propose the PILOT methodology for modeling connectivity services in a sandbox domain. PILOT uses active probes deployed in Central Offices (COs) to obtain real measurements, which are used to tune a simulation scenario that reproduces the real deployment with high accuracy. A simulator is eventually used to generate large amounts of realistic synthetic data for ML training and validation. We also apply the KM process to a more complex network system consisting of several domains, where intra-domain controllers assist a broker plane in accurately estimating inter-domain delays. In addition, the broker identifies and corrects intra-domain model inaccuracies and computes an accurate compound model. Such models can be used to estimate quality of service (QoS) and end-to-end delay accurately. Finally, we investigate the application of KM in the context of Intent-Based Networking (IBN). Knowledge, in terms of traffic models and/or traffic perturbations, is transferred among agents in a hierarchical architecture. This architecture can support autonomous network operation, such as capacity management.
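The broker-plane idea above, composing intra-domain delay models into an end-to-end estimate and correcting a domain's model when observations deviate, can be sketched as follows. The domain names, delay values, and the correction step are illustrative assumptions:

```python
# Hedged sketch of the compound delay model: intra-domain controllers expose
# per-domain delay estimators, and the broker sums them with inter-domain
# link delays to estimate end-to-end delay. All numbers are illustrative.

def end_to_end_delay(path_domains, domain_models, link_delays):
    """Compound model: sum per-domain estimates plus inter-domain link delays."""
    delay = sum(domain_models[d]() for d in path_domains)
    delay += sum(link_delays)
    return delay

# Each intra-domain model is a callable returning its current delay estimate (ms)
models = {"A": lambda: 2.0, "B": lambda: 3.5, "C": lambda: 1.5}

est = end_to_end_delay(["A", "B", "C"], models, link_delays=[0.5, 0.5])
print(est)  # 8.0 ms

# Broker-side correction: if domain B's observed delay is 4.0 ms, the broker
# replaces the inaccurate intra-domain model with one anchored to observation.
models["B"] = lambda: 4.0
print(end_to_end_delay(["A", "B", "C"], models, [0.5, 0.5]))  # 8.5 ms
```

Keeping the per-domain models separate lets each controller refine its own estimator locally, while the broker only needs the composed values for end-to-end QoS estimation.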