
    On the benefits of resource disaggregation for virtual data centre provisioning in optical data centres

    Virtual Data Centre (VDC) allocation requires the provisioning of both computing and network resources. Their joint provisioning allows for optimal utilization of the physical Data Centre (DC) infrastructure resources. However, traditional DCs can suffer from computing resource underutilization due to the rigid capacity configurations of the server units, resulting in high computing resource fragmentation across the DC servers. To overcome these limitations, the disaggregated DC paradigm has recently been introduced. Thanks to resource disaggregation, it is possible to allocate the exact amount of resources needed to provision a VDC instance. In this paper, we focus on the static planning of a shared, optically interconnected, disaggregated DC infrastructure that must support a known set of VDC instances deployed on top of it. To this end, we provide optimal and sub-optimal techniques to determine the capacity (in terms of both computing and network resources) required to support the expected set of VDC demands. Next, we quantitatively evaluate the benefits yielded by the disaggregated DC paradigm against traditional DC architectures, considering various VDC profiles and Data Centre Network (DCN) topologies.
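    The core argument here, that disaggregation removes the fragmentation caused by rigid per-server capacities, can be illustrated with a small packing experiment. The sketch below is not the paper's optimization model; it is a hypothetical first-fit comparison in Python, with invented server and VDC sizes, showing how per-box limits strand resources that a shared pool could still use.

```python
# Hypothetical illustration: resource fragmentation in rigid servers vs. a
# disaggregated pool. All numbers are invented for the example.

SERVER_CPU, SERVER_MEM = 16, 64          # rigid server unit: 16 cores, 64 GB
VDC_DEMANDS = [(10, 20), (10, 20), (4, 50), (4, 50), (6, 30)]  # (cores, GB) per VDC node

def servers_needed_first_fit(demands):
    """Pack demands into identical servers; each demand must fit entirely in one server."""
    servers = []  # list of [free_cpu, free_mem]
    for cpu, mem in demands:
        for srv in servers:
            if srv[0] >= cpu and srv[1] >= mem:
                srv[0] -= cpu
                srv[1] -= mem
                break
        else:
            servers.append([SERVER_CPU - cpu, SERVER_MEM - mem])
    return len(servers)

def pooled_capacity(demands):
    """Disaggregated pool: required capacity is simply the sum of demands, no per-box limits."""
    return sum(c for c, _ in demands), sum(m for _, m in demands)

n_srv = servers_needed_first_fit(VDC_DEMANDS)
print(f"rigid servers: {n_srv} boxes -> {n_srv * SERVER_CPU} cores / {n_srv * SERVER_MEM} GB provisioned")
print("disaggregated pool: %d cores / %d GB provisioned" % pooled_capacity(VDC_DEMANDS))
```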

    Diluting the Scalability Boundaries: Exploring the Use of Disaggregated Architectures for High-Level Network Data Analysis

    Traditional data centers are designed with a rigid architecture of fit-for-purpose servers that provision resources beyond the average workload in order to deal with occasional peaks of data. Heterogeneous data centers are pushing towards more cost-efficient architectures with better resource provisioning. In this paper we study the feasibility of using disaggregated architectures for intensive data applications, in contrast to the monolithic approach of server-oriented architectures. In particular, we have tested a proactive network analysis system in which the workload demands are highly variable. In the context of the dReDBox disaggregated architecture, the results show that the overhead caused by using remote memory resources is significant, between 66% and 80%, but we have also observed that memory usage is one order of magnitude higher in the stress case than under average workloads. Therefore, dimensioning memory for the worst case in conventional systems results in a notable waste of resources. Finally, we found that, for the selected use case, parallelism is limited by memory. Using a disaggregated architecture therefore allows for increased parallelism, which, at the same time, mitigates the overhead caused by remote memory.
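    A rough back-of-the-envelope model helps make this trade-off concrete. The sketch below uses illustrative figures loosely in the spirit of the abstract (roughly a 10x peak-to-average memory ratio and a 66-80% remote-memory slowdown); the node count, working-set sizes and concurrency assumption are invented, and this is not the dReDBox evaluation itself.

```python
# Back-of-the-envelope comparison (illustrative numbers, not measurements):
# conventional servers must be dimensioned for the per-node peak, while a
# disaggregated memory pool only needs headroom for the aggregate peak.

NODES = 32
AVG_MEM_GB = 8           # typical per-node working set (assumed)
PEAK_MEM_GB = 80         # stress case: ~one order of magnitude above average
CONCURRENT_PEAKS = 3     # assumed number of nodes in a stress phase at once

conventional = NODES * PEAK_MEM_GB                                   # every box sized for the worst case
pooled = NODES * AVG_MEM_GB + CONCURRENT_PEAKS * (PEAK_MEM_GB - AVG_MEM_GB)

print(f"conventional provisioning: {conventional} GB")
print(f"pooled provisioning:       {pooled} GB "
      f"({100 * (1 - pooled / conventional):.0f}% less)")

# The price is remote-memory latency: if runtime grows by 80%, roughly 1.8x
# more parallel workers (enabled by the freed memory) restore throughput.
overhead = 0.8
print(f"naive estimate: ~{1 + overhead:.1f}x parallelism offsets an {overhead:.0%} overhead")
```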

    An Adaptable Optimal Network Topology Model for Efficient Data Centre Design in Storage Area Networks

    In this research, we look at how different network topologies affect the energy consumption of modular data centre (DC) setups. We use a combined-input directed approach to assess the benefits of rack-scale and pod-scale disaggregation across a variety of electrical, optoelectronic, and composite network architectures in comparison to a conventional DC. When the optical transport architecture is implemented and the appropriate resource components are distributed, the findings reveal that disaggregation at the rack level is adequate, even compared to a pod-scale DC. Composable DCs can operate at peak efficiency thanks to the optical network topology. Logical separation of conventional DC servers across an optical network architecture is also investigated in this article. Compared to physical disaggregation at rack scale, logical decomposition of the servers inside each rack offers a small decrease in overall DC energy usage thanks to better allocation of resource needs. This allows for a flexible, composable architecture that can accommodate performance-based in-memory applications. Moreover, we look at the state of the fundamental model and its use in both static and dynamic data centres. According to our findings, typical DCs become more energy efficient as workload modularity increases, although excessive resource use still exists. By enabling optimal resource use and energy savings, disaggregation and micro-services were able to reduce the typical DC's power consumption by up to 30%. Furthermore, we offer a heuristic that reproduces the mixed-integer model's output trends for energy-efficient allocation of workloads in modularised DCs.
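    The energy argument can be made concrete with a toy power model. The sketch below assumes a simple linear model (idle plus utilisation-proportional draw per module); the module counts, wattages and utilisations are placeholders, not the paper's results, and only illustrate why powering resources at module granularity trims the idle share of a DC's consumption.

```python
# Toy linear power model with placeholder figures; not the paper's model or results.
import math

CPU_IDLE_W, CPU_PEAK_W = 50.0, 150.0     # per CPU module
MEM_IDLE_W, MEM_PEAK_W = 10.0, 30.0      # per memory module

# Ten workloads, each using 30% of a CPU module and 20% of a memory module.
workloads = [(0.3, 0.2)] * 10

def module_power(idle, peak, util):
    """Linear model: idle draw plus utilisation-proportional dynamic draw."""
    return idle + (peak - idle) * util

# Conventional DC: one server (one CPU + one memory module) switched on per workload.
conventional = sum(module_power(CPU_IDLE_W, CPU_PEAK_W, c) +
                   module_power(MEM_IDLE_W, MEM_PEAK_W, m) for c, m in workloads)

# Composable DC: demands are consolidated onto the minimum number of modules,
# so idle power is paid only for the modules that are actually switched on.
cpu_util_total = sum(c for c, _ in workloads)
mem_util_total = sum(m for _, m in workloads)
cpu_mods = math.ceil(cpu_util_total)
mem_mods = math.ceil(mem_util_total)
composable = (cpu_mods * CPU_IDLE_W + (CPU_PEAK_W - CPU_IDLE_W) * cpu_util_total +
              mem_mods * MEM_IDLE_W + (MEM_PEAK_W - MEM_IDLE_W) * mem_util_total)

print(f"conventional: {conventional:.0f} W, composable: {composable:.0f} W, "
      f"saving {100 * (1 - composable / conventional):.0f}%")
```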

    Disaggregated Servers for Future Energy Efficient Data Centres

    The popularity of the Internet and the demand for 24/7 service uptime are driving system performance and reliability requirements to levels that today's data centres can no longer support. This thesis examines the traditional monolithic conventional server (CS) design and compares it to a new design paradigm known as the disaggregated server (DS). The DS design arranges data centre resources in physical pools, such as processing, memory and IO module pools, rather than packing each subset into a single server. In this work, we study energy-efficient resource provisioning and virtual machine (VM) allocation in DS-based data centres compared to CS-based data centres. First, we developed a mixed integer linear programming (MILP) model to optimise VM allocation for the DS-based data centre. Our results indicate that considering pooled resources yields up to 62% total saving in power consumption compared to the CS approach. Due to the MILP's high computational complexity, we developed an energy-efficient, fast and scalable resource provisioning heuristic (EERP-DS), based on the MILP insights, with power efficiency comparable to the MILP. Second, we extended the resource provisioning and VM allocation MILP to include the power consumption of the data centre communication fabric. The results show that, even with the communication fabric included, the DS approach still yields considerable power savings compared to the CS approach, up to 48%. Third, we developed an energy-efficient resource provisioning heuristic for DS with communication fabric (EERP-DSCF). EERP-DSCF achieves results comparable to the second MILP and allows us to extend the number of served VMs, where the MILP's scalability for a large number of VMs is challenging. Finally, we present our new design for a photonic DS-based data centre architecture, supplemented with a complete description of the architecture components, communication patterns and recommendations for addressing the design and implementation challenges.
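    The thesis's MILP is not reproduced here, but the flavour of such a formulation can be sketched with an off-the-shelf solver. The toy model below, written with the PuLP library, assigns VMs to CPU and memory modules while minimising the power of the modules that are switched on; the VM demands, module capacities and wattages are invented, and the real model is considerably richer (IO pools, fabric, etc.).

```python
# Toy MILP in the spirit of energy-efficient VM allocation onto disaggregated
# resource pools.  All demands, capacities and power figures are invented.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

vms = {"vm1": (4, 16), "vm2": (8, 8), "vm3": (2, 24)}      # (cores, GB) per VM
cpu_mods = {"c1": 8, "c2": 8, "c3": 8}                      # cores per CPU module
mem_mods = {"m1": 32, "m2": 32}                             # GB per memory module
CPU_MOD_W, MEM_MOD_W = 120, 25                              # power when a module is on

prob = LpProblem("ds_vm_allocation", LpMinimize)
xc = LpVariable.dicts("cpu_assign", [(v, c) for v in vms for c in cpu_mods], cat=LpBinary)
xm = LpVariable.dicts("mem_assign", [(v, m) for v in vms for m in mem_mods], cat=LpBinary)
yc = LpVariable.dicts("cpu_on", cpu_mods, cat=LpBinary)     # 1 if the CPU module is powered
ym = LpVariable.dicts("mem_on", mem_mods, cat=LpBinary)     # 1 if the memory module is powered

# Objective: total power of the modules that end up switched on.
prob += lpSum(CPU_MOD_W * yc[c] for c in cpu_mods) + lpSum(MEM_MOD_W * ym[m] for m in mem_mods)

for v in vms:   # each VM gets exactly one CPU module and one memory module
    prob += lpSum(xc[(v, c)] for c in cpu_mods) == 1
    prob += lpSum(xm[(v, m)] for m in mem_mods) == 1
for c, cap in cpu_mods.items():   # capacity respected, and only powered modules may host VMs
    prob += lpSum(vms[v][0] * xc[(v, c)] for v in vms) <= cap * yc[c]
for m, cap in mem_mods.items():
    prob += lpSum(vms[v][1] * xm[(v, m)] for v in vms) <= cap * ym[m]

prob.solve(PULP_CBC_CMD(msg=False))
print("total power:", prob.objective.value(), "W")
for (v, c), var in xc.items():
    if var.value() == 1:
        print(v, "-> CPU module", c)
```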

    On the complexity of configuration and orchestration for enabling disaggregated server provisioning in optical composable data centers

    Due to the limitations of traditional data center (DC) architectures, the concept of infrastructure disaggregation has been proposed: DC resources are separated into multiple blades that can be exploited independently. As a result, composable DC (CDC) infrastructures are achieved, enhancing the modularity of resource provisioning. However, disaggregation introduces additional challenges that need to be carefully analyzed. One relates to the potential increase in the complexity of the orchestration and infrastructure configuration that must be performed when provisioning resources to support services. This aspect is highly influenced by the distribution of resources in the physical infrastructure. As such, when analyzing the performance of a CDC, it becomes essential to also study the related operational complexity of the resource orchestration and configuration phases. Furthermore, the requirements of several tenant services may impose heterogeneous deployments over the shared physical infrastructure, in the form of either disaggregated single-server or multi-server distributions. The associated orchestration/configuration cost is again highly influenced by the data plane architecture of the CDC. With these aspects in mind, in this paper we provide a methodology for analyzing the complexity of resource orchestration for a service deployment and the associated configuration cost in optical CDCs, considering various service deployment setups. A selected set of CDC architectures found in the literature is employed to quantitatively illustrate how the data plane design and service deployment strategies affect the complexity of infrastructure configuration and resource orchestration. This work has been supported by the Spanish Government through project TRAINER-B (PID2020-118011GB-C22) with FEDER contribution.
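    As a rough illustration of why deployment spread drives configuration cost, the sketch below counts the operations an orchestrator might issue per deployment (one configuration per blade plus one optical cross-connection per pair of blades that must exchange traffic) for a single-server versus a multi-server style placement. This cost model and its numbers are hypothetical, not the methodology or the architectures analysed in the paper.

```python
# Hypothetical operation-counting model: configuring a deployment means
# configuring each allocated blade plus one optical cross-connection per pair
# of blades that need to exchange traffic.  Counts are illustrative only.
from itertools import combinations

def config_operations(blades_per_server, servers):
    """Count blade configurations + inter-blade connections for one deployment.

    `servers` is a list of logical (disaggregated) servers; each is composed of
    `blades_per_server` blades, and blades within a server are fully meshed.
    """
    blade_ops = blades_per_server * len(servers)
    intra = len(list(combinations(range(blades_per_server), 2))) * len(servers)
    inter = len(list(combinations(range(len(servers)), 2)))   # one logical link per server pair
    return blade_ops + intra + inter

# Same resource footprint (6 blades) deployed two ways:
single_server = config_operations(blades_per_server=6, servers=[0])
multi_server = config_operations(blades_per_server=2, servers=[0, 1, 2])
print("single disaggregated server :", single_server, "operations")
print("three smaller servers       :", multi_server, "operations")
```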

    Development of a secure monitoring framework for optical disaggregated data centres

    Data center (DC) infrastructures are a key piece of today's telecom and cloud service delivery, enabling the access and storage of enormous quantities of information as well as the execution of complex applications and services. This is being accentuated with the advent of 5G and beyond architectures, since a significant portion of the network and service functions are being deployed as specialized virtual elements inside dedicated DC infrastructures. As such, the development of new architectures to better exploit DC resources becomes of paramount importance.

    The mismatch between the variability of resources required by running applications and the fixed amount of resources in server units severely limits resource utilization in today's Data Centers (DCs). The Disaggregated DC (DDC) paradigm was recently introduced to address these limitations. The main idea behind DDCs is to divide the various computational resources into independent hardware modules/blades, which are mounted in racks, bringing greater modularity and allowing operators to optimize their deployments for improved efficiency and performance, thus offering high resource allocation flexibility. Moreover, to efficiently exploit the hardware blades and establish the connections across them according to upper-layer requirements, a flexible control and management framework is required. In this regard, following current industrial trends, the Software Defined Networking (SDN) paradigm is one of the leading technologies for the control of DC infrastructures, allowing the establishment of high-speed, low-latency optical connections between hardware components in DDCs in response to the demands of higher-level services and applications. With these concepts in mind, the primary objective of this thesis is to design and implement the control of a DDC infrastructure layer that is founded on SDN principles and makes use of optical technologies for the intra-DC network fabric, highlighting the importance of quality control and monitoring. Thanks to several SDN agents, it becomes possible to gather statistics and metrics from the multiple infrastructure elements (computational blades and network equipment), allowing DC operators to monitor the infrastructure and make informed decisions on how to utilize its resources to the greatest extent feasible. Indeed, quality assurance operations are of capital importance in modern DC infrastructures, so it becomes essential to guarantee a secure communication channel for gathering infrastructure metrics/statistics and enforcing (re-)configurations, closing the full loop. The security layer secures this communication channel through encryption and provides authentication for both the server and the client.
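    In the same spirit, a secured monitoring channel between an SDN agent and the controller can be sketched with standard tooling. The snippet below uses the `requests` library with mutual TLS against a hypothetical metrics endpoint; the URL, certificate paths and payload format are placeholders and not the interfaces of the framework described in the thesis.

```python
# Minimal sketch of polling metrics from an SDN agent over a mutually
# authenticated TLS channel.  The URL, certificate paths and JSON schema are
# hypothetical placeholders.
import requests

AGENT_URL = "https://blade-agent.example.local:8443/metrics"   # hypothetical endpoint
CA_BUNDLE = "certs/dc-ca.pem"            # CA that signed the agent's server certificate
CLIENT_CERT = ("certs/controller.pem",   # controller's client certificate...
               "certs/controller.key")   # ...and private key (client authentication)

def poll_blade_metrics():
    """Fetch one round of metrics; TLS verifies the agent, the client cert authenticates us."""
    resp = requests.get(AGENT_URL, cert=CLIENT_CERT, verify=CA_BUNDLE, timeout=5)
    resp.raise_for_status()
    return resp.json()                    # e.g. {"cpu_util": 0.42, "mem_used_gb": 12.3}

if __name__ == "__main__":
    print("collected metrics:", poll_blade_metrics())
```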

    Future Energy Efficient Data Centers With Disaggregated Servers

    The popularity of the Internet and the demand for 24/7 service uptime are driving system performance and reliability requirements to levels that today's data centers can no longer support. This paper examines the traditional monolithic conventional server (CS) design and compares it to a new design paradigm: the disaggregated server (DS) data center design. The DS design arranges data center resources in physical pools, such as processing, memory, and IO module pools, rather than packing each subset of such resources into a single server box. In this paper, we study energy-efficient resource provisioning and virtual machine (VM) allocation in DS-based data centers compared to CS-based data centers. First, we present our new design for the photonic DS-based data center architecture, supplemented with a complete description of the architectural components. Second, we develop a mixed integer linear programming (MILP) model to optimize VM allocation for the DS-based data center, including the power consumption of the data center communication fabric. Our results indicate that, in DS data centers, the optimum allocation of pooled resources and their communication power yields up to 42% average savings in total power consumption compared with the CS approach. Due to the MILP's high computational complexity, we developed an energy-efficient resource provisioning heuristic for DS with communication fabric (EERP-DSCF), based on the MILP model insights, with power efficiency comparable to the MILP model. With EERP-DSCF, we can extend the number of served VMs, where the MILP model's scalability for a large number of VMs is challenging. Furthermore, we assess the energy efficiency of the DS design under stringent conditions by increasing the CPU-to-memory traffic and by including high non-communication power consumption, to determine the conditions under which the DS and CS designs become comparable in power consumption. Finally, we present a complete analysis of the communication patterns in our new DS design and some recommendations for the design and implementation challenges.
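    The role of the communication fabric in the power balance can be illustrated with a small greedy placement, loosely in the spirit of (but much simpler than) the EERP-DSCF idea. The rack layout, demands, power coefficient and the rule of co-locating a VM's CPU and memory in the same rack are assumptions made for the example, not the paper's algorithm.

```python
# Illustrative greedy placement: keep a VM's CPU and memory in the same rack
# when possible; otherwise split them across racks and account for the fabric
# power of the CPU-memory traffic.  All figures are invented for the example.

FABRIC_W_PER_GBPS = 0.5                  # assumed power cost of inter-rack traffic

racks = [
    {"cpu_free": 16, "mem_free": 32},    # rack 0
    {"cpu_free": 4,  "mem_free": 64},    # rack 1
]
vms = [   # (cores, mem_gb, cpu_mem_traffic_gbps)
    (8, 24, 10.0),
    (8, 40, 6.0),
    (4, 8, 4.0),
]

fabric_power = 0.0
for cores, mem, traffic in vms:
    rack = next((r for r in racks if r["cpu_free"] >= cores and r["mem_free"] >= mem), None)
    if rack is not None:                 # co-located: traffic stays inside the rack
        rack["cpu_free"] -= cores
        rack["mem_free"] -= mem
    else:                                # split: CPU-memory traffic crosses the fabric
        cpu_rack = max(racks, key=lambda r: r["cpu_free"])
        mem_rack = max(racks, key=lambda r: r["mem_free"])
        cpu_rack["cpu_free"] -= cores
        mem_rack["mem_free"] -= mem
        fabric_power += traffic * FABRIC_W_PER_GBPS

print(f"fabric power due to split CPU/memory placements: {fabric_power:.1f} W")
```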