
    Cooperative Learning for Disaggregated Delay Modeling in Multidomain Networks

    Accurate delay estimation is one of the enablers of future network connectivity services, as it allows the application layer to anticipate network performance. If such connectivity services require isolation (slicing), delay estimation should not be limited to the maximum value defined in the Service Level Agreement, but should provide a finer-grained description of the expected delay in the form of, e.g., a continuous function of the load. Obtaining accurate end-to-end (e2e) delay models is even more challenging in a multi-operator scenario, where e2e connectivity services are provisioned across heterogeneous networks of multiple Autonomous Systems (Multi-AS, or simply domains). In this work, we propose a collaborative environment where each domain's Software Defined Networking (SDN) controller models the intra-domain delay components of inter-domain paths and shares those models with a broker system that provides the e2e connectivity services. The broker, in turn, models the delay of inter-domain links based on e2e monitoring and the received intra-domain models. Exhaustive simulation results show that composing e2e models as the sum of intra-domain network and inter-domain link delay models provides many benefits and better performance than models obtained from e2e measurements alone.
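The compositional idea above can be illustrated with a minimal sketch: the e2e delay model is the sum of per-domain and inter-domain-link delay models, each a function of the load. The M/M/1-style model shape, capacities, and function names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: compose an end-to-end delay model as the sum of
# intra-domain and inter-domain link delay models, each delay(load).

def make_mm1_delay(capacity):
    """Return a delay-vs-load function shaped like an M/M/1 queue (assumed shape)."""
    def delay(load):
        load = min(load, 0.999 * capacity)  # avoid blow-up at saturation
        return 1.0 / (capacity - load)
    return delay

def compose_e2e_delay(intra_models, link_models):
    """e2e_delay(load) = sum of all intra-domain and link delay models."""
    def e2e(load):
        return sum(m(load) for m in intra_models + link_models)
    return e2e

# Two domains plus one inter-domain link; capacities in arbitrary units.
intra = [make_mm1_delay(10.0), make_mm1_delay(8.0)]
links = [make_mm1_delay(12.0)]
e2e = compose_e2e_delay(intra, links)
print(round(e2e(4.0), 4))  # → 0.5417
```

Because each component model is learned where its data lives (domain controllers for intra-domain parts, the broker for links), the sum can be refined per component without re-measuring the whole path.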

    Distributed collaborative knowledge management for optical networks

    Network automation has long been envisioned. In fact, the Telecommunications Management Network (TMN), defined by the International Telecommunication Union (ITU), is a hierarchy of management layers (network element, network, service, and business management), where high-level operational goals propagate from upper to lower layers. The network management architecture has evolved with the development of the Software Defined Networking (SDN) concept, which brings programmability to simplify configuration (it breaks down high-level service abstractions into lower-level device abstractions), orchestrates operation, and automatically reacts to changes or events. In addition, the development and deployment of solutions based on Artificial Intelligence (AI) and Machine Learning (ML) that make decisions (control loop) from the collected monitoring data enables network automation, which aims at reducing operational costs. AI/ML approaches usually require large datasets for training, which are difficult to obtain. The lack of data can be compensated with a collective self-learning approach. In this PhD thesis, we go beyond the aforementioned traditional control loop to achieve an efficient knowledge management (KM) process that enhances network intelligence while bringing down complexity. We propose a general architecture to support the KM process based on four main pillars, which enable creating, sharing, assimilating, and using knowledge. Next, two alternative strategies are proposed, based on model inaccuracies and on model combination. To highlight the capacity of KM to adapt to different applications, two use cases are considered that implement KM in a purely centralized and in a distributed optical network architecture. Along with them, various policies are considered for evaluating KM under data- and model-based strategies. The results aim at minimizing the amount of data that needs to be shared and at reducing the convergence error.
We apply KM to multilayer networks and propose the PILOT methodology for modeling connectivity services in a sandbox domain. PILOT uses active probes deployed in Central Offices (COs) to obtain real measurements, which are used to tune a simulation scenario that reproduces the real deployment with high accuracy. The simulator is eventually used to generate large amounts of realistic synthetic data for ML training and validation. We also apply the KM process to a more complex network system consisting of several domains, where intra-domain controllers assist a broker plane in estimating accurate inter-domain delays. In addition, the broker identifies and corrects intra-domain model inaccuracies and computes an accurate compound model. Such models can be used for quality of service (QoS) assurance and accurate end-to-end delay estimation. Finally, we investigate the application of KM in the context of Intent-Based Networking (IBN). Knowledge, in the form of traffic models and/or traffic perturbations, is transferred among agents in a hierarchical architecture. This architecture can support autonomous network operation, such as capacity management.
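The model-inaccuracy strategy mentioned above can be sketched in a few lines: a node shares its locally trained model only when a peer's model is too inaccurate on local data, which bounds how much needs to be exchanged. This is an illustrative toy, with an assumed error metric and threshold, not the thesis implementation.

```python
# Toy sketch (assumed logic): share knowledge only when the peer model's
# error on local monitoring data exceeds a threshold.

def mean_abs_error(model, samples):
    """Mean absolute prediction error of model over (x, y) samples."""
    return sum(abs(model(x) - y) for x, y in samples) / len(samples)

def maybe_share(local_model, peer_model, local_samples, threshold=0.1):
    """Return the local model for sharing if the peer model is too inaccurate,
    else None (the peer model is good enough; save bandwidth)."""
    if mean_abs_error(peer_model, local_samples) > threshold:
        return local_model
    return None

samples = [(x, 2.0 * x) for x in range(1, 6)]  # local "monitoring data"
local = lambda x: 2.0 * x                       # accurate local model
stale = lambda x: 1.5 * x                       # inaccurate peer model
print(maybe_share(local, stale, samples) is local)  # stale peer -> share
```

A data-based strategy would instead exchange (a subset of) the raw samples; the model-based one shown here trades that volume for occasional model transfers.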

    Multidomain Demand Modeling in Design for Market Systems

    Consumers make choices based not only on functional product attributes (e.g., fuel economy) but also on non-functional attributes (e.g., vehicle form). Consequently, ignoring non-functional product attributes in demand modeling can lead to product designs that are less attractive to consumers. This dissertation focuses on two major non-functional product attributes: (i) aesthetic product form as a perceptual product attribute and (ii) services as external product attributes. A limitation of conventional discrete choice analysis is that it handles functional and non-functional attributes within a single demand model. An aesthetic product form is generated by a potentially huge number of geometric variables; thus, it cannot be quantified simply, and it is difficult to integrate with functional attributes. Similarly, when considering services, it is challenging to incorporate the relationship (or channel) between product and service attributes (or multiple providers) into a single demand model. This dissertation proposes a multidomain demand modeling approach that integrates functional and non-functional attributes, whose values are decided by different design domains, into a single demand model. We employ consumer choice models from Marketing, systems design optimization from Engineering, machine learning algorithms and human-computer interaction from Computer Science, and location network models from Operations Research within a design optimization framework. This work addresses three demand models: (i) a demand model for engineering and industrial design, (ii) a demand model for engineering and service design, and (iii) a demand model for engineering and operations design. The benefits of this unified approach are demonstrated through three corresponding design applications: gasoline vehicle design, electric vehicle and charging station location design, and tablet and e-book service design.
The contribution of this research is in helping resolve trade-offs between conflicting design domain decisions by integrating disparate attributes into a multidomain demand model. This work consequently extends the scope of Design for Market Systems from product design to business model design by considering external product attributes. PhD dissertation, Design Science, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/110471/1/nwkang_1.pd
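The discrete choice machinery behind such demand models can be sketched with a multinomial logit: utility is a weighted sum of functional and non-functional attributes, and market shares follow from the softmax of utilities. The attribute names ("fuel_economy", "form_appeal") and weights are illustrative assumptions, not the dissertation's estimated model.

```python
import math

# Hedged sketch of a multinomial logit demand model mixing a functional
# attribute with a non-functional "form appeal" score (assumed attributes).

def choice_shares(products, weights):
    """Return logit market shares from a list of product attribute dicts."""
    utils = [sum(weights[a] * p[a] for a in weights) for p in products]
    denom = sum(math.exp(u) for u in utils)
    return [math.exp(u) / denom for u in utils]

products = [
    {"fuel_economy": 3.0, "form_appeal": 1.0},  # efficient but plain
    {"fuel_economy": 2.0, "form_appeal": 2.5},  # thirstier but stylish
]
weights = {"fuel_economy": 0.8, "form_appeal": 0.6}
shares = choice_shares(products, weights)
print([round(s, 3) for s in shares])
```

The multidomain approach in the dissertation effectively lets a different design domain supply each utility term (engineering for `fuel_economy`, industrial design for `form_appeal`) while the demand model stays single and consistent.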

    Software Defined Applications in Cellular and Optical Networks

    Small wireless cells have the potential to overcome bottlenecks in wireless access through the sharing of spectrum resources. A novel access backhaul network architecture based on a Smart Gateway (Sm-GW) between the small cell base stations (e.g., LTE eNBs) and the conventional backhaul gateways (e.g., LTE Serving/Packet Gateways, S/P-GWs) has been introduced to address the bottleneck. The Sm-GW flexibly schedules uplink transmissions for the eNBs. Based on software defined networking (SDN), a management mechanism has been proposed that allows multiple operators to flexibly inter-operate via multiple Sm-GWs with a multitude of small cells. This dissertation also comprehensively surveys the studies that examine the SDN paradigm in optical networks. Along with the PHY functional split improvements, the performance of the Distributed Converged Cable Access Platform (DCCAP) in cable architectures, especially for the Remote-PHY and Remote-MACPHY nodes, has been evaluated. For the PHY functional split, in addition to the re-use of infrastructure with a common FFT module for multiple technologies, a novel cross-functional split interaction has been proposed that caches the repetitive QAM symbols across time at the remote node to reduce the transmission rate requirement of the fronthaul link. Doctoral dissertation, Electrical Engineering, 201
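Flexible uplink scheduling of the kind the Sm-GW performs can be sketched as demand-capped proportional sharing: each eNB gets an equal share of the backhaul uplink, capped at its own demand, with leftover capacity redistributed. This is a generic water-filling illustration, not the dissertation's scheduler.

```python
# Illustrative sketch (assumed policy): divide shared backhaul uplink
# capacity among eNBs, capping each grant at that eNB's demand and
# redistributing unused capacity to still-backlogged eNBs.

def schedule_uplink(demands, capacity):
    """Return per-eNB grants summing to at most `capacity`."""
    grants = [0.0] * len(demands)
    remaining = capacity
    active = set(range(len(demands)))
    while active and remaining > 1e-9:
        share = remaining / len(active)      # equal split of what's left
        satisfied = set()
        for i in list(active):
            grant = min(share, demands[i] - grants[i])
            grants[i] += grant
            remaining -= grant
            if grants[i] >= demands[i] - 1e-9:
                satisfied.add(i)             # demand met; frees capacity
        active -= satisfied
        if not satisfied:                    # all still backlogged: done
            break
    return grants

print(schedule_uplink([2.0, 10.0, 4.0], 12.0))  # → [2.0, 6.0, 4.0]
```

The light eNBs (2 and 4 units) are fully served, and the heavy one absorbs the remainder, which is the kind of statistical multiplexing gain a shared Sm-GW makes possible.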

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments, including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers, is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. (Major revision, to appear in SIAM Review.)

    Enhancing e-Infrastructures with Advanced Technical Computing: Parallel MATLAB® on the Grid

    MATLAB® is widely used within the engineering and scientific fields as the language and environment for technical computing, while collaborative Grid computing on e-Infrastructures is used by scientific communities to deliver a faster time to solution. MATLAB allows users to express parallelism in their applications and then execute code on multiprocessor environments such as large-scale e-Infrastructures. This paper demonstrates the integration of MATLAB and Grid technology with a representative implementation that uses gLite middleware to run parallel programs. Experimental results highlight the increases in productivity and performance that users obtain with MATLAB parallel computing on Grids.
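The parallelism pattern the paper exploits, independent loop iterations farmed out to a pool of workers (MATLAB's `parfor`), can be sketched in Python; this is a rough analogue under assumed names, not gLite or MATLAB code.

```python
from concurrent.futures import ThreadPoolExecutor

# Rough analogue of MATLAB's parfor: map a function over independent
# iterations using a worker pool; results come back in input order.

def simulate(param):
    """Stand-in for one independent iteration of a technical computation."""
    return param * param

def parallel_map(func, params, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, params))

print(parallel_map(simulate, range(6)))  # → [0, 1, 4, 9, 16, 25]
```

On a Grid, the same pattern is scaled up by submitting each chunk of iterations as a job to remote compute elements rather than to local threads.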

    Enabling Hybrid Architectures and Mesh Network Topologies to Support the Global Multi-Domain Community

    The turn of the new decade also represents the dawn of a new shift in domain operations. Concepts such as “Space Dial Tone,” reliable global access to the internet, on-demand Earth observation, and remote sensing, while still not fully realized, are no longer purely imaginative. These concepts are in high demand and are coupled with the goals of Global Multi-Domain Operations (MDO). Small satellites (smallsats) have emerged as functionally reliable platforms, driving the development of next-generation satellite constellations. To achieve the potential of tomorrow’s technology, these constellations must embrace space mission architectures based on interoperable, open-system constructs such as hybrid architectures and mesh network topologies. This paper presents the full timeline for realization of multi-node, disparate (sovereign, coalition, commercial, etc.) multi-domain (Space, Air, Maritime, Land, and Cyber) systems to support future space mission architectures. It identifies and discusses the underlying technologies needed to bring new “system-of-systems” concepts to operational capability. Technologies to be discussed include: message-agnostic physical/protocol “Bridges”; Machine-to-Machine (M2M) data sharing enabled through Electronic Data Sheet (EDS) standards; and new concepts related to Artificial Intelligence (AI) enabled human decision making. Tying these technologies together effectively will positively impact the smallsat market and fundamentally change mission architectures in the near future.