
    Torii: Multipath Distributed Ethernet Fabric Protocol for Data Centers with Zero-Loss Path Repair

    This paper describes and evaluates Torii, a layer-two data center network fabric protocol. Torii is fully distributed, scalable, fault-tolerant, and self-configuring. It is based on multiple tree-based, topological MAC addresses that are used for table-free forwarding over multiple equal-cost paths, and it can reroute frames around failed links on the fly without needing a central fabric manager for any function. To the best of our knowledge, it is the first protocol that does not require the exchange of periodic messages to operate under normal conditions and to recover from link failures: Torii exchanges messages just once. Moreover, Torii is compatible with a wide range of data center topologies. Simulation results show an excellent distribution of traffic load and latencies similar to those of shortest-path protocols.
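    To illustrate the idea of table-free forwarding on tree-derived topological addresses, the sketch below assumes a hypothetical hierarchical address of the form (tier digit, pod digit, switch digit, ...); this layout and the up/down port choice are illustrative assumptions, not the actual Torii address format.

```python
# Minimal sketch (not the actual Torii format): forwarding on a hierarchical,
# tree-derived address. A frame is sent toward the root until the local
# switch's address is a prefix of the destination address, then descends
# toward the destination using the next address digit -- no forwarding table.
from typing import Tuple

Addr = Tuple[int, ...]  # hypothetical topological address, most-significant digit first

def common_prefix_len(a: Addr, b: Addr) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(local: Addr, dst: Addr) -> str:
    """Return 'up' while the destination lies outside this switch's subtree,
    otherwise the downward child index given by dst's next address digit."""
    shared = common_prefix_len(local, dst)
    if shared < len(local):           # dst is outside our subtree: go toward the root
        return "up"                   # any upward (equal-cost) port may be chosen
    return f"down:{dst[len(local)]}"  # descend using dst's next digit

if __name__ == "__main__":
    print(next_hop((0, 2), (0, 3, 1)))   # destination in another subtree -> 'up'
    print(next_hop((0, 3), (0, 3, 1)))   # inside the subtree -> 'down:1'
```

    With several upward ports available, a flow hash could pick among them, which is one way the multiple equal-cost paths mentioned in the abstract might be exploited.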

    Experimental results on the use of genetic algorithms for scaling virtualized network functions

    © 2015 IEEE. Network Function Virtualization (NFV) is bringing closer the possibility of truly migrating enterprise data centers into the cloud. However, for a Cloud Service Provider to offer such services, important questions include how and when to scale resources out/in to satisfy dynamic traffic and application demands. In previous work [1], we proposed a platform called Network Function Center (NFC) to study research issues related to NFV and Network Functions (NFs). In an NFC, we assume NFs are implemented on virtual machines that can be deployed on any server in the network. In this paper we present further experiments on the use of Genetic Algorithms (GAs) for scaling NFs out/in as traffic changes dynamically. We combined data from previous empirical analyses [2], [3] to generate NF chains and daily traffic patterns, and ran simulations of resource-allocation decision making. We implemented different fitness functions with the GA and compared their performance when scaling out/in over time.
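    The following is a minimal GA sketch for this kind of scale-out/in decision. The chromosome encoding (instance counts per NF in a chain), the capacity constant, and the fitness weights are illustrative assumptions; the NFC platform's actual encoding and the fitness functions compared in the paper may differ.

```python
# Hedged sketch of a GA that decides how many instances of each NF to run as
# traffic changes. Encoding and fitness weights are illustrative assumptions.
import random

NF_CHAIN = ["firewall", "nat", "ids"]   # hypothetical service chain
CAPACITY_PER_INSTANCE = 100             # Mbps one instance handles (assumed)
MAX_INSTANCES = 16

def fitness(genome, demand_mbps):
    # Heavily penalize unserved traffic, plus a smaller cost per running instance.
    unserved = sum(max(0.0, demand_mbps - g * CAPACITY_PER_INSTANCE) for g in genome)
    running = sum(genome)
    return -(10.0 * unserved + 1.0 * running)

def evolve(demand_mbps, pop_size=40, generations=100):
    pop = [[random.randint(1, MAX_INSTANCES) for _ in NF_CHAIN] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, demand_mbps), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(NF_CHAIN))
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < 0.2:             # mutation: nudge one gene
                i = random.randrange(len(NF_CHAIN))
                child[i] = max(1, min(MAX_INSTANCES, child[i] + random.choice([-1, 1])))
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda g: fitness(g, demand_mbps))

if __name__ == "__main__":
    print(evolve(demand_mbps=750))  # e.g. a scale-out decision for a traffic spike
```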

    Congestion control, energy efficiency and virtual machine placement for data centers

    Data centers, facilities with communications network equipment and servers for data processing and/or storage, are prevalent and essential to provide a myriad of services and applications for various private, non-profit, and government systems, and they also form the foundation of cloud computing, which is transforming the technological landscape of the Internet. With the rapid deployment of modern high-speed, low-latency, large-scale data centers, many issues have emerged, such as data center architecture design, congestion control, energy efficiency, virtual machine placement, and load balancing. The objective of this thesis is multi-fold. First, an enhanced Quantized Congestion Notification (QCN) algorithm, called fair QCN (FQCN), is proposed to improve the fairness of rate allocation among multiple flows sharing one bottleneck link in data center networks. A detailed analysis of FQCN and simulation results are provided to validate fair share rate allocation while maintaining queue length stability. Furthermore, the effects of congestion notification algorithms, including QCN, AF-QCN and FQCN, are investigated with respect to TCP throughput collapse. The results show that FQCN can significantly enhance TCP throughput performance and achieves better TCP throughput than QCN and AF-QCN in a TCP Incast setting. Second, a unified congestion detection, notification and control system for data center networks is designed to efficiently resolve network congestion in a uniform solution and to ensure convergence to statistical fairness with “no state” switches. The architecture of the proposed system is described in detail, and the FQCN algorithm is implemented in the proposed framework. Simulation results of FQCN within this framework validate the robustness and efficiency of the proposed congestion control system. Third, a two-level power optimization model, Hierarchical EneRgy Optimization (HERO), is established to reduce the power consumption of data center networks by switching off network switches and links while still guaranteeing full connectivity and maximizing link utilization. The power-saving performance of HERO is evaluated by simulations with different traffic patterns; the results show that HERO can reduce the power consumption of data center networks effectively with reduced complexity. Last, several heterogeneity-aware, dominant-resource-assisted heuristic algorithms, namely dominant residual resource aware first-fit decreasing (DRR-FFD), individual DRR-FFD (iDRR-FFD) and dominant residual resource based bin fill (DRR-BinFill), are proposed for virtual machine (VM) consolidation. The proposed heuristics exploit the heterogeneity of the VMs’ requirements for different resources by capturing the differences among VMs’ demands, and the heterogeneity of the physical machines’ resource capacities by capturing the differences among physical machines’ resources. Their performance is evaluated with different classes of synthetic workloads under different VM requirement heterogeneity conditions, and the simulation results demonstrate that the proposed heuristics achieve consolidation performance similar to that of dimension-aware heuristics at almost the same computational cost as single-dimensional heuristics.
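    As a rough illustration of the last contribution, the sketch below implements a dominant-resource-aware first-fit-decreasing placement in the spirit of DRR-FFD: VMs are sorted by their largest normalized demand and placed on the first machine with enough residual capacity in every dimension. The exact sort key, tie-breaking, and the iDRR-FFD/DRR-BinFill variants in the thesis are not reproduced here.

```python
# Sketch of a dominant-resource-aware FFD placement (in the spirit of DRR-FFD,
# not the thesis's exact algorithm): order VMs by dominant normalized demand,
# then first-fit them onto physical machines with per-dimension capacity checks.
from typing import Dict, List

Resources = Dict[str, float]   # e.g. {"cpu": cores, "mem": GiB}

def dominant_share(demand: Resources, capacity: Resources) -> float:
    return max(demand[r] / capacity[r] for r in demand)

def fits(demand: Resources, residual: Resources) -> bool:
    return all(residual[r] >= demand[r] for r in demand)

def consolidate(vms: List[Resources], pm_capacity: Resources) -> List[List[int]]:
    """Return a list of bins (physical machines), each a list of VM indices."""
    order = sorted(range(len(vms)),
                   key=lambda i: dominant_share(vms[i], pm_capacity),
                   reverse=True)
    bins: List[List[int]] = []
    residuals: List[Resources] = []
    for i in order:
        for b, res in enumerate(residuals):
            if fits(vms[i], res):                # first machine with room in all dims
                bins[b].append(i)
                for r in vms[i]:
                    res[r] -= vms[i][r]
                break
        else:                                    # no machine fits: open a new one
            bins.append([i])
            residuals.append({r: pm_capacity[r] - vms[i][r] for r in pm_capacity})
    return bins

if __name__ == "__main__":
    cap = {"cpu": 16, "mem": 64}
    vms = [{"cpu": 8, "mem": 8}, {"cpu": 2, "mem": 48}, {"cpu": 4, "mem": 4}]
    print(consolidate(vms, cap))   # all three VMs fit on one machine here
```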

    Designing Scalable Networks for Future Large Datacenters

    Modern datacenters require a network with high cross-section bandwidth, fine-grained security, support for virtualization, and simple management that can scale to hundreds of thousands of hosts at low cost. This thesis first presents the firmware for Rain Man, a novel datacenter network architecture that meets these requirements, and then performs a general scalability study of the design space. The firmware for Rain Man, a scalable Software-Defined Networking architecture, employs novel algorithms and uses previously unused forwarding hardware, allowing Rain Man to scale at high performance to networks of forty thousand hosts on arbitrary network topologies. In the general scalability study of the design space of SDN architectures, this thesis identifies three architectural dimensions common among the networks: source versus hop-by-hop routing, the granularity at which flows are routed, and arbitrary versus restrictive routing. It finds that a source-routed, host-pair-granularity network with arbitrary routes is the most scalable.
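    A back-of-the-envelope sketch of why the source-routed design tends to win on switch state is given below. The formulas assume every host pair communicates and each path crosses a fixed average number of switches; these are simplifying assumptions for illustration, not measurements from the thesis.

```python
# Illustrative state comparison: hop-by-hop, host-pair-granularity routing puts
# per-flow entries in the switches along each path, while source routing carries
# the path in the packet and leaves switches with roughly port/host-local state.
def hop_by_hop_entries_per_switch(hosts: int, switches: int, avg_hops: int) -> float:
    flows = hosts * (hosts - 1)            # one flow per ordered host pair (assumed)
    return flows * avg_hops / switches     # entries spread over the switches

def source_routed_entries_per_switch(hosts: int, switches: int) -> float:
    return hosts / switches                # e.g. one entry per locally attached host

if __name__ == "__main__":
    hosts, switches, avg_hops = 40_000, 2_000, 5
    print(f"hop-by-hop   : {hop_by_hop_entries_per_switch(hosts, switches, avg_hops):,.0f} entries/switch")
    print(f"source-routed: {source_routed_entries_per_switch(hosts, switches):,.0f} entries/switch")
```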

    ExCCC-DCN: A Highly Scalable, Cost-Effective and Energy-Efficient Data Center Structure

    This is the author accepted manuscript; the final version is available from the publisher via the DOI in this record. Over the past decade, many data centers have been constructed around the world due to the explosive growth in data volume and type. Cost and energy consumption have become the most important challenges in building those data centers. Data centers today use commodity computers and switches instead of high-end servers and interconnects for cost-effectiveness. In this paper, we propose a new type of interconnection network called Exchanged Cube-Connected Cycles (ExCCC). The ExCCC network extends the Exchanged Hypercube (EH) network by replacing each node with a cycle. The EH network is based on link removal from a Hypercube network, which makes it more cost-effective as it scales up. After analyzing the topological properties of ExCCC, we employ commodity switches to construct a new class of data center network models, namely ExCCC-DCN, by leveraging the advantages of the ExCCC architecture. The analysis and experimental results demonstrate that the proposed ExCCC-DCN models significantly outperform four state-of-the-art data center network models in terms of total cost, power consumption, scalability, and other static characteristics, achieving low cost, low energy consumption, high network throughput, and high scalability simultaneously. This work is supported by the National Natural Science Foundation (NSF) of China under Grants No. 61572232 and No. 61272073, the key program of the Natural Science Foundation of Guangdong Province (No. S2013020012865), and the Fundamental Research Funds for the Central Universities.
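    To make the node/edge structure concrete, the sketch below builds the plain cube-connected-cycles base that ExCCC starts from: each n-bit hypercube node becomes an n-node cycle, and cycle position i links to position i of the node whose i-th bit is flipped. The Exchanged-Hypercube link removal that actually distinguishes ExCCC is not reproduced here.

```python
# Minimal sketch of the cube-connected-cycles structure underlying ExCCC.
# Nodes are (w, i): w is an n-bit label, i a position on w's local cycle.
from itertools import product

def cube_connected_cycles(n: int):
    nodes = [(w, i) for w in product((0, 1), repeat=n) for i in range(n)]
    edges = set()
    for w, i in nodes:
        edges.add(frozenset({(w, i), (w, (i + 1) % n)}))             # cycle edge
        flipped = tuple(b ^ 1 if k == i else b for k, b in enumerate(w))
        edges.add(frozenset({(w, i), (flipped, i)}))                  # cube edge
    return nodes, edges

if __name__ == "__main__":
    nodes, edges = cube_connected_cycles(3)
    print(len(nodes), "nodes,", len(edges), "links")   # 24 nodes, 36 links for n=3
```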

    Optimal Networks from Error Correcting Codes

    To address growth challenges facing large data centers and supercomputing clusters, a new construction is presented for scalable, high-throughput, low-latency networks. The resulting networks require 1.5-5 times fewer switches and 2-6 times fewer cables, and have 1.2-2 times lower latency and correspondingly lower congestion and packet losses than the best present or proposed networks providing the same number of ports at the same total bisection. These advantage ratios increase with network size. The key new ingredient is the exact equivalence discovered between the problem of maximizing network bisection for large classes of practically interesting Cayley graphs and the problem of maximizing codeword distance for linear error-correcting codes. The resulting translation recipe converts existing optimal error-correcting codes into optimal-throughput networks. Comment: 14 pages, accepted at the ANCS 2013 conference.
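    The code-side quantity in that equivalence, the minimum codeword distance of a linear code, is easy to compute for small codes. The sketch below does so for the binary [7,4] Hamming code by enumerating all codewords from one common systematic generator matrix; this only illustrates the quantity the paper maps to bisection, not the paper's translation recipe itself.

```python
# Compute the minimum distance of the [7,4] Hamming code by enumeration.
# For a linear code, minimum distance = minimum weight of a nonzero codeword.
from itertools import product

G = [  # one common systematic generator matrix for the [7,4] Hamming code
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    # Codeword = msg * G over GF(2); iterate over the columns of G.
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

def minimum_distance():
    weights = (sum(encode(msg)) for msg in product((0, 1), repeat=4) if any(msg))
    return min(weights)

if __name__ == "__main__":
    print("minimum distance:", minimum_distance())   # 3 for the Hamming code
```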

    Energy-aware service provisioning in P2P-assisted cloud ecosystems

    Joint doctoral thesis (cotutela) between Universitat Politècnica de Catalunya and Instituto Tecnico de Lisboa. Energy has emerged as a first-class computing resource in modern systems. This trend has led to a strong focus on reducing the energy consumption of data centers, coupled with growing awareness of their adverse environmental impact, and hence on energy management for server-class systems. In this work, we address energy-aware service provisioning in P2P-assisted cloud ecosystems, leveraging economics-inspired mechanisms. Toward this goal, we address a number of challenges. To frame an energy-aware service provisioning mechanism in the P2P-assisted cloud, we first need to compare the energy consumption of each individual service in the P2P-cloud and in data centers. However, while decreasing the energy consumption of cloud services, we may violate performance requirements. Therefore, we formulate a performance-aware energy analysis metric, conceptualized across the service provisioning stack, and leverage this metric to derive an energy analysis framework. We then sketch a framework to analyze energy effectiveness in P2P-cloud and data center platforms in order to choose the right service platform according to its performance and energy characteristics. This framework maps energy from the hardware-oblivious top level of the stack down to the particular hardware setting at the bottom layer. Finally, we introduce an economics-inspired mechanism to increase energy effectiveness in the P2P-assisted cloud platform, moving toward greener ICT and ICT for a greener ecosystem.
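    As a purely hypothetical sketch of the platform-selection idea (comparing per-service energy under a performance constraint), the snippet below estimates joules per request on each candidate platform and picks the cheapest one that meets a latency budget. All parameter names and numbers are illustrative assumptions, not values or metrics from the thesis.

```python
# Hypothetical platform choice: lowest energy per request among the platforms
# that satisfy the latency budget. Numbers below are made up for illustration.
from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    power_watts: float        # average power drawn while serving the workload
    throughput_rps: float     # requests per second sustained at that power
    latency_ms: float         # typical response latency

    def joules_per_request(self) -> float:
        return self.power_watts / self.throughput_rps

def choose_platform(platforms, latency_budget_ms):
    feasible = [p for p in platforms if p.latency_ms <= latency_budget_ms]
    if not feasible:
        return None                      # no platform meets the performance target
    return min(feasible, key=Platform.joules_per_request)

if __name__ == "__main__":
    candidates = [
        Platform("p2p-cloud", power_watts=60.0, throughput_rps=200.0, latency_ms=80.0),
        Platform("datacenter", power_watts=350.0, throughput_rps=1500.0, latency_ms=25.0),
    ]
    best = choose_platform(candidates, latency_budget_ms=100.0)
    print(best.name, f"{best.joules_per_request():.3f} J/request")
```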