61 research outputs found

    A Shapley-value Mechanism for Bandwidth On Demand between Datacenters

    Get PDF
    Postprint

    Design and implementation of an intelligent end-to-end network QoS system

    Full text link
    N/A

    On the dynamics of valley times and its application to bulk-transfer scheduling

    Full text link
    Periods of low load have been used for the scheduling of non-interactive tasks since the early stages of computing. Nowadays, the scheduling of bulk transfers—i.e., large-volume transfers without precise timing, such as database distribution, resource replication or backups—stands out among such tasks, given its direct effect on both the performance and billing of networks. Visual inspection of the traffic-demand curves of diverse points of presence (PoPs), whether a network, a link, an Internet service provider or an Internet exchange point, shows that periods of low bandwidth demand occur in the early morning and display a noticeable convex shape. This observation led us to study and model the time at which demand reaches its minimum, which we have named the valley time of a PoP, as an approximation to the ideal moment to carry out bulk transfers. After studying and modeling single-PoP scenarios both temporally and spatially, seeking homogeneity in the phenomenon, and extending the analysis to multi-PoP scenarios or paths (a meta-PoP constructed as the aggregation of several single PoPs), we propose a predictor system for the valley time. This tool works as an oracle for scheduling bulk transfers, with different versions according to the time scale and the desired trade-off between precision and complexity. The evaluation of the system, named VTP, has proven its usefulness, with errors below an hour in estimating when valley times occur and bandwidth errors of around 10% between the predicted and actual valley traffic. This work has been partially supported by the European Commission under the H2020 project METRO-HAUL (Project ID: 761727).
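
    As a toy illustration of the valley-time concept (not the authors' VTP predictor), the sketch below smooths one day of a PoP's traffic-demand curve and returns the minute of day at which demand is lowest; the sampling rate, smoothing window and synthetic demand curve are assumptions.

```python
import numpy as np

def valley_time(demand_mbps, minutes_per_sample=5, window=12):
    """Return (minute_of_day, smoothed_level) at which a daily demand curve
    reaches its minimum (the 'valley time' of a PoP)."""
    demand = np.asarray(demand_mbps, dtype=float)
    kernel = np.ones(window) / window
    # Append the first samples again so the moving average wraps around midnight
    extended = np.concatenate([demand, demand[:window]])
    smoothed = np.convolve(extended, kernel, mode="valid")[: len(demand)]
    idx = int(np.argmin(smoothed))
    return idx * minutes_per_sample, smoothed[idx]

# Synthetic convex night-time demand with its minimum around 04:00 (288 samples at 5 min)
t = np.arange(288) * 5 / 60.0
demand = 600 + 250 * np.cos(2 * np.pi * (t - 16) / 24) + np.random.normal(0, 10, 288)
minute, level = valley_time(demand)
print(f"valley at {minute // 60:02.0f}:{minute % 60:02.0f}, ~{level:.0f} Mb/s")
```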

    Orchestrating datacenters and networks to facilitate the telecom cloud

    Get PDF
    In the Internet of services, information technology (IT) infrastructure providers play a critical role in making services accessible to end users. IT infrastructure providers host platforms and services in their datacenters (DCs). The cloud initiative has been accompanied by the introduction of new computing paradigms, such as Infrastructure as a Service (IaaS) and Software as a Service (SaaS), which have dramatically reduced the time and costs required to develop and deploy a service. However, transport networks are crucial both for making services accessible to users and for operating DCs. Transport networks are currently configured as large static fat pipes based on capacity over-provisioning, aiming to guarantee traffic demand and the other parameters committed in Service Level Agreement (SLA) contracts. Nevertheless, such over-dimensioning adds high operational costs for DC operators and service providers, so new mechanisms are required to reconfigure and adapt the transport network and reduce the amount of over-provisioned bandwidth. Although a cloud-ready transport network architecture has been introduced to handle dynamic cloud and network interaction, and Elastic Optical Networks (EONs) can facilitate elastic network operation, orchestration between the cloud and the interconnection network is ultimately required to coordinate resources in both strata in a coherent manner. In addition, the explosion of Internet Protocol (IP)-based services that require not only dynamic cloud and network interaction but also additional service-specific SLA parameters, together with the expected benefits of Network Functions Virtualization (NFV), opens the opportunity for telecom operators to exploit the cloud-ready transport network and their existing infrastructure to efficiently satisfy the network requirements of those services. In the telecom cloud, a pay-per-use model can be offered to support services requiring resources from the transport network and its infrastructure. In this thesis, we study the connectivity requirements of representative cloud-based services and explore connectivity models, architectures and orchestration schemes to satisfy them, with the aim of facilitating the telecom cloud. The main objective of this thesis is to demonstrate, by means of analytical models and simulation, the viability of orchestrating DCs and networks to facilitate the telecom cloud. To achieve this goal, we first study the connectivity requirements of DC interconnection and of services in a number of scenarios that require connectivity from the transport network; specifically, we focus on DC federations, live-TV distribution, and 5G mobile networks. Next, we study different connectivity schemes, algorithms, and architectures aimed at satisfying those connectivity requirements. In particular, we study polling-based models for dynamic inter-DC connectivity and propose a novel notification-based connectivity scheme in which inter-DC connectivity can be delegated to the network operator. Additionally, we explore virtual network topology provisioning models to support services that require service-specific SLA parameters in the telecom cloud. Finally, we focus on DC and network orchestration to simultaneously fulfill SLA contracts for a set of customers requiring connectivity from the transport network. Postprint (published version)
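
    The contrast between the polling-based and notification-based inter-DC connectivity models mentioned above can be sketched as follows; all interfaces here (ConnectivityRequest, request/status/delegate, and the callback) are hypothetical placeholders, not the thesis' actual models.

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class ConnectivityRequest:            # hypothetical inter-DC connectivity request
    src_dc: str
    dst_dc: str
    bandwidth_gbps: float

class PollingDCOrchestrator:
    """Polling model: the DC orchestrator repeatedly asks the network for status."""
    def __init__(self, network):
        self.network = network        # assumed network-controller client

    def ensure(self, req: ConnectivityRequest, poll_interval_s: float = 5.0) -> str:
        conn_id = self.network.request(req)
        while self.network.status(conn_id) != "ESTABLISHED":
            time.sleep(poll_interval_s)            # control loop stays in the DC stratum
        return conn_id

class NotificationDCOrchestrator:
    """Notification model: connectivity management is delegated to the network
    operator, which calls back once the connection (or a re-optimization) is ready."""
    def __init__(self, network):
        self.network = network

    def ensure(self, req: ConnectivityRequest, on_ready: Callable[[str], None]) -> None:
        self.network.delegate(req, callback=on_ready)   # control loop moves to the network stratum
```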

    On Improving Efficiency of Data-Intensive Applications in Geo-Distributed Environments

    Get PDF
    Distributed systems are widely adopted nowadays for processing data-intensive workloads, since they greatly accelerate large-scale data processing with scalable parallelism and improved data locality. Traditional distributed systems initially targeted computing clusters but have since evolved to data centers with multiple clusters. These systems are mostly built on top of homogeneous, tightly integrated resources connected by high-speed local-area networks (LANs), and typically require data to be ingested into a central data center for processing. Today, with enormous volumes of data continuously generated at geographically distributed locations, direct adoption of such systems is prohibitively inefficient due to limited system scalability and the high cost of centralizing the geo-distributed data over wide-area networks (WANs). Instead, the trend is to build geo-distributed systems in which data processing jobs are performed on geo-distributed, heterogeneous resources in proximity to the data at widely distributed locations. However, the critical challenges and mechanisms for efficient execution of data-intensive applications in such geo-distributed environments remain largely unclear. The goal of this dissertation is to identify those challenges and mechanisms, by extensively applying the research principles and methodology of conventional distributed systems to the geo-distributed environment, and by developing new techniques to tackle the challenges and run data-intensive applications efficiently at scale. The contributions of this dissertation are threefold. First, the dissertation shows that the high level of resource heterogeneity exhibited in the geo-distributed environment undermines the scalability of geo-distributed systems. Virtualization-based resource abstraction mechanisms are introduced to abstract the hardware, network, and OS resources throughout the system, mitigating the underlying resource heterogeneity and enhancing system scalability. Second, the dissertation reveals the overwhelming performance and monetary cost incurred by unrestrained data sharing over WANs in geo-distributed systems. Network optimization approaches, including linear-programming-based global optimization, greedy bin-packing heuristics, and TCP enhancements, are developed to optimize network resource utilization and avoid unnecessary expenses for data sharing over WANs. Lastly, the dissertation highlights the importance of data locality for data-intensive applications running in the geo-distributed environment, and novel data caching and locality-aware scheduling techniques are devised to improve it. Doctor of Philosophy.
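
    As an illustration of the kind of greedy bin-packing heuristic mentioned above (a generic first-fit-decreasing sketch, not the dissertation's specific algorithm), bulk inter-site transfers can be packed into per-link capacity budgets as follows:

```python
def greedy_first_fit_decreasing(transfers_gb, link_budget_gb):
    """First-fit-decreasing bin packing: assign bulk transfers to WAN link
    budgets (one 'bin' per link per scheduling round) so that the number of
    budgets used, and hence the WAN cost, stays small.

    transfers_gb   -- sizes of the geo-distributed data transfers
    link_budget_gb -- capacity budget of one link in the round
    Returns a list of bins, each a list of transfer sizes.
    """
    bins = []
    for size in sorted(transfers_gb, reverse=True):
        for b in bins:                      # first bin with enough residual space
            if sum(b) + size <= link_budget_gb:
                b.append(size)
                break
        else:                               # no bin fits: open a new one
            bins.append([size])
    return bins

# Example: pack 6 transfers into as few 100 GB link budgets as possible
print(greedy_first_fit_decreasing([70, 45, 40, 30, 20, 10], 100))
# -> [[70, 30], [45, 40, 10], [20]]
```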

    Coded-MPMC: One-to-Many Transfer Using Multipath Multicast With Sender Coding

    Get PDF
    Fast and efficient one-to-many transfers are essential to meet the growing need for duplicating, migrating, or sharing bulk data among servers in a datacenter and across geographically distributed datacenters. Some existing works utilize multiple multicast trees for a one-to-many transfer request to increase network link utilization and transfer throughput. However, since those schemes do not fully utilize the max-flow value of transmission from a single sender to each recipient, there is room for each recipient to retrieve data more quickly. Therefore, assuming fully controlled networks with full-duplex links, we pose the problem of finding a set of multicast flows, with an allocation of block-wise transmissions, by which each of multiple recipients with diverse max-flow values from the sender can utilize its own max-flow value. Based on that, assuming a sender-side coding capability on file blocks, we design a schedule of block transmissions over multiple phases by which each recipient can achieve the lower bound on its file retrieval completion time, i.e., the file size divided by its own max-flow value. This paper presents coded Multipath Multicast (Coded-MPMC) for one-to-many transfers, with heuristic procedures to find a desired set of multicast flows on which block transmissions are scheduled. Through extensive simulations on large-scale real-world network topologies and different types of randomly generated synthetic topologies, the proposed method is shown to design a desired schedule efficiently. A preliminary implementation on OpenFlow is also reported to show the fundamental feasibility of Coded-MPMC.
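
    The per-recipient lower bound stated above, the file size divided by the recipient's max-flow value from the sender, can be computed with a standard max-flow routine; the toy topology, capacities, and file size below are assumptions, not data from the paper.

```python
import networkx as nx

def retrieval_lower_bounds(capacity_edges, sender, recipients, file_size_gb):
    """Lower bound on each recipient's file retrieval completion time (seconds):
    file size divided by the max-flow value from the sender to that recipient."""
    g = nx.DiGraph()
    for u, v, cap_gbps in capacity_edges:
        g.add_edge(u, v, capacity=cap_gbps)
    bounds = {}
    for r in recipients:
        max_flow_gbps = nx.maximum_flow_value(g, sender, r, capacity="capacity")
        bounds[r] = file_size_gb * 8 / max_flow_gbps   # GB -> Gb, then divide by Gb/s
    return bounds

# Toy directed topology (each direction of a full-duplex link is its own edge), 100 GB file
edges = [("s", "a", 40), ("s", "b", 10), ("a", "r1", 25), ("b", "r1", 10),
         ("a", "r2", 10), ("b", "r2", 5)]
print(retrieval_lower_bounds(edges, "s", ["r1", "r2"], file_size_gb=100))
# r1 has max-flow 35 Gb/s, r2 has 15 Gb/s, so their bounds differ accordingly
```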

    A Survey on the Contributions of Software-Defined Networking to Traffic Engineering

    Get PDF
    Since the appearance of OpenFlow back in 2008, software-defined networking (SDN) has gained momentum. Although there are some discrepancies among the standards-developing organizations working on SDN about what SDN is and how it is defined, they all outline traffic engineering (TE) as a key application. One of the most common objectives of TE is congestion minimization, for which techniques such as traffic splitting among multiple paths or advance reservation systems are used. In such a scenario, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols are categorized using the SDN architecture proposed by the Open Networking Foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to show how the type of interface at which a protocol operates influences TE. In addition, the impact of the SDN protocols on TE is evaluated by comparing them with the path computation element (PCE)-based architecture. The PCE-based architecture was selected to measure the impact of SDN on TE because it is the most recent TE architecture to date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously will result in more powerful and enhanced TE solutions, since they benefit TE in complementary ways. This work was supported by the European Commission through the Horizon 2020 Research and Innovation Programme (GN4) under Grant 691567, and by the Spanish Ministry of Economy and Competitiveness under the Secure Deployment of Services Over SDN and NFV-based Networks (S&NSEC) project under Grant TEC2013-47960-C4-3-.
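
    As a minimal illustration of the congestion-minimization techniques mentioned above (a generic sketch, not a protocol-specific solution from the survey), a demand can be split across candidate paths in proportion to each path's spare bottleneck capacity:

```python
def split_demand(demand_gbps, paths):
    """Split one traffic demand across candidate paths in proportion to the
    spare capacity of each path's bottleneck link, a simple heuristic for
    keeping the maximum link utilization low.

    paths -- list of (path_name, bottleneck_spare_capacity_gbps)
    Returns {path_name: allocated_gbps}.
    """
    total_spare = sum(spare for _, spare in paths)
    if total_spare == 0:
        raise ValueError("no spare capacity on any candidate path")
    return {name: demand_gbps * spare / total_spare for name, spare in paths}

# Example: a 12 Gb/s demand over three disjoint paths with 6, 3 and 3 Gb/s spare
print(split_demand(12, [("p1", 6), ("p2", 3), ("p3", 3)]))
# -> {'p1': 6.0, 'p2': 3.0, 'p3': 3.0}
```

    Note that if the demand exceeds the total spare capacity, this proportional rule still overloads every path equally; a production TE solution would reject or reroute the excess instead.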