149 research outputs found

    A Network Topology for Composable Infrastructures

    This paper proposes a passive optical backplane as a new network topology for composable computing infrastructures. The topology provides a high-capacity, low-latency and flexible fabric that interconnects disaggregated resource components, and is dedicated to the inter-resource communication of composed logical hosts to ensure effective performance. We formulate a mixed integer linear programming (MILP) model that dynamically creates logical networks over the physical topology to support intra-logical-host communication. The MILP performs energy-efficient logical network instantiation given each application's resource demand. The topology can achieve 1 Tbps capacity per resource node given an appropriate wavelength transmission data rate and number of wavelengths per node.
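
    To give a flavour of the kind of optimisation involved, the sketch below formulates a toy version of an energy-aware instantiation problem in Python with PuLP: resource demands of composed logical hosts are placed on disaggregated resource nodes while minimising the number of powered-on nodes. The node names, capacities and demands are illustrative assumptions, not the MILP formulated in the paper.

        # Toy sketch (not the paper's MILP): place each logical host's demand on
        # disaggregated resource nodes while minimising powered-on nodes, a simple
        # proxy for energy-efficient logical network instantiation.
        import pulp

        nodes = {"cpu0": 64, "cpu1": 64, "mem0": 512, "mem1": 512}   # illustrative capacities
        node_type = {"cpu0": "cpu", "cpu1": "cpu", "mem0": "mem", "mem1": "mem"}
        demands = {"hostA": {"cpu": 16, "mem": 128}, "hostB": {"cpu": 48, "mem": 256}}

        prob = pulp.LpProblem("logical_host_instantiation", pulp.LpMinimize)
        x = {(h, r, n): pulp.LpVariable(f"x_{h}_{r}_{n}", lowBound=0)
             for h, reqs in demands.items() for r in reqs for n in nodes
             if node_type[n] == r}
        on = pulp.LpVariable.dicts("on", nodes, cat="Binary")

        prob += pulp.lpSum(on.values())                  # objective: powered-on nodes
        for h, reqs in demands.items():                  # every demand fully placed
            for r, amount in reqs.items():
                prob += pulp.lpSum(x[h, r, n] for n in nodes if node_type[n] == r) == amount
        for n, cap in nodes.items():                     # capacity only on powered-on nodes
            prob += pulp.lpSum(v for (h, r, m), v in x.items() if m == n) <= cap * on[n]

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print({n: int(on[n].value()) for n in nodes})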

    Energy Efficient Placement of Workloads in Composable Data Center Networks

    This paper studies the energy efficiency of composable data centre (DC) infrastructures over different network topologies. Using a mixed integer linear programming (MILP) model, we compare the performance of disaggregation at rack scale and pod scale over selected electrical, optical and hybrid network topologies relative to a traditional DC. The results show that, relative to a pod-scale DC, physical disaggregation at rack scale is sufficient for optimal efficiency when the optical network topology is adopted and resource components are allocated suitably. The optical network topology also enables optimal energy efficiency in composable DCs. The paper also studies logical disaggregation of traditional DC servers over an optical network topology. Relative to physical disaggregation at rack scale, logical disaggregation of server resources within each rack yields a marginal reduction in the total DC power consumption (TDPC) due to improved placement of resource demands, enabling an adaptable composable infrastructure that can support both memory-access-latency-sensitive and latency-insensitive workloads. We also study the adoption of a micro-service architecture in both traditional and composable DCs. Our results show that increasing the modularity of workloads improves the energy efficiency of traditional DCs, but disproportionate utilization of DC resources persists. A combination of disaggregation and micro-services achieved up to a 23% reduction in the TDPC of the traditional DC by enabling optimal resource utilization and energy efficiency. Finally, we propose a heuristic for energy-efficient placement of workloads in composable DCs that replicates the trends produced by the MILP model formulated in this paper.
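
    A minimal sketch of what such a placement heuristic could look like is given below: a first-fit-decreasing consolidation of workload demands onto rack-scale resource pools, so that unused pools can be powered down. The rack capacities, workload demands and the heuristic itself are illustrative assumptions rather than the heuristic proposed in the paper.

        # Toy energy-aware placement sketch (not the paper's heuristic): consolidate
        # (cpu, mem) demands, largest first, onto the first rack with enough headroom.
        from dataclasses import dataclass

        @dataclass
        class Rack:
            name: str
            cpu_free: float
            mem_free: float

        def place(workloads, racks):
            """Place (cpu, mem) demands, largest first, onto the first rack that fits."""
            placement = {}
            for wl, (cpu, mem) in sorted(workloads.items(),
                                         key=lambda kv: kv[1][0] + kv[1][1],
                                         reverse=True):
                for rack in racks:
                    if rack.cpu_free >= cpu and rack.mem_free >= mem:
                        rack.cpu_free -= cpu
                        rack.mem_free -= mem
                        placement[wl] = rack.name
                        break
                else:
                    placement[wl] = None   # would require powering on another rack
            return placement

        racks = [Rack("rack0", 128, 1024), Rack("rack1", 128, 1024)]
        workloads = {"w1": (32, 256), "w2": (96, 512), "w3": (16, 128)}
        print(place(workloads, racks))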

    Development of a secure monitoring framework for optical disaggregated data centres

    Data center (DC) infrastructures are a key piece of today's telecom and cloud service delivery, enabling the access and storage of enormous quantities of information as well as the execution of complex applications and services. This is being accentuated by the advent of 5G and beyond architectures, since a significant portion of network and service functions are being deployed as specialized virtual elements inside dedicated DC infrastructures. As such, the development of new architectures that better exploit DC resources becomes of paramount importance. The mismatch between the variability of resources required by running applications and the fixed amount of resources in server units severely limits resource utilization in today's DCs. The Disaggregated DC (DDC) paradigm was recently introduced to address these limitations. The main idea behind DDCs is to divide the various computational resources into independent hardware modules/blades, which are mounted in racks, bringing greater modularity and allowing operators to optimize their deployments for improved efficiency and performance, thus offering high resource allocation flexibility. Moreover, to efficiently exploit the hardware blades and establish connections across them according to upper-layer requirements, a flexible control and management framework is required. In this regard, following current industrial trends, the Software Defined Networking (SDN) paradigm is one of the leading technologies for the control of DC infrastructures, allowing for the establishment of high-speed, low-latency optical connections between hardware components in DDCs in response to the demands of higher-level services and applications. With these concepts in mind, the primary objective of this thesis is to design and implement the control of a DDC infrastructure layer founded on SDN principles that makes use of optical technologies for the intra-DC network fabric, highlighting the importance of quality control and monitoring. Thanks to several SDN agents, it becomes possible to gather statistics and metrics from the multiple infrastructure elements (computational blades and network equipment), allowing DC operators to monitor the infrastructure and make informed decisions on how to utilize its resources to the greatest extent feasible. Indeed, quality assurance operations are of capital importance in modern DC infrastructures; it is therefore essential to guarantee a secure communication channel for gathering infrastructure metrics/statistics and enforcing (re-)configurations, closing the full control loop. The security layer secures this communication channel through encryption and provides authentication for both the server and the client.
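
    In its simplest form, the secured telemetry path described above could resemble the following sketch, in which an SDN agent reports blade metrics to the monitoring service over mutually authenticated TLS. The hostname, port, endpoint and certificate paths are placeholders and do not reflect the framework's actual interface.

        # Hypothetical sketch of a mutually authenticated TLS telemetry channel:
        # an SDN agent pushes blade metrics to a monitoring endpoint. All names,
        # paths and the URL scheme below are placeholder assumptions.
        import json, ssl, http.client

        context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca.pem")
        context.load_cert_chain(certfile="agent.pem", keyfile="agent.key")  # client auth

        metrics = {"blade": "mem-03", "util": 0.71, "temp_c": 54.2}

        conn = http.client.HTTPSConnection("monitor.example.net", 8443, context=context)
        conn.request("POST", "/metrics", body=json.dumps(metrics),
                     headers={"Content-Type": "application/json"})
        resp = conn.getresponse()
        print(resp.status, resp.reason)
        conn.close()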

    MONet: Heterogeneous Memory over Optical Network for Large-Scale Data Centre Resource Disaggregation

    The Memory over Optical Network (MONet) system is a disaggregated data centre architecture in which serial (HMC) and parallel (DDR4) memory resources can be accessed over optically switched interconnects within and between racks. An FPGA/ASIC-based custom hardware IP (ReMAT) supports heterogeneous memory pools, accommodates optical-to-electrical conversion for remote access, performs the required serial/parallel conversion and hosts the necessary local memory controller. The optically interconnected HMC-based (serial I/O) memory card is accessed by a memory controller embedded in the compute card, simplifying the hardware near the memory modules. This substantially reduces the latency, cost, power consumption and space overheads. We characterize CPU-memory performance by experimentally demonstrating the impact of distance, number of switching hops, transceivers, channel bonding and bit rate per transceiver on bit-error rate, power consumption, additional latency, sustained remote memory bandwidth/throughput (using the industry-standard STREAM benchmark) and cloud workload performance (such as operations per second, average added latency and retired instructions per second on memcached with YCSB cloud workloads). MONet pushes the CPU-memory operational limit from a few centimetres to tens of metres, yet applications can experience as little as a 10% performance penalty (at 36 m) compared to a direct-attached equivalent. Using the proposed parallel topology, a system can support up to 100,000 disaggregated cards.
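
    For intuition on the distance results, the short calculation below estimates the fibre propagation delay alone; the refractive index is a typical assumed value, and transceiver, serialisation and switching overheads are ignored. It is not taken from the paper's measurements.

        # Back-of-the-envelope propagation delay for remote memory access over fibre.
        C = 299_792_458          # speed of light in vacuum, m/s
        N_FIBRE = 1.468          # typical refractive index of silica fibre (assumption)

        def round_trip_ns(distance_m):
            one_way_s = distance_m * N_FIBRE / C
            return 2 * one_way_s * 1e9

        for d in (1, 10, 36, 100):
            print(f"{d:>4} m -> {round_trip_ns(d):6.0f} ns round trip")
        # At 36 m the fibre alone adds roughly 0.35 microseconds per round trip,
        # on top of which transceiver and switching latencies apply.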

    Photonics based perfect secrecy cryptography: toward fully classical implementations

    Funding: A.D.F. acknowledges support from UK EPSRC (EP/L017008/1). Developing unbreakable cryptography is a longstanding question and a global challenge in the internet era. Photonics technologies are at the frontline of research, aiming at providing the ultimate system capable of ending the cybercrime industry by changing the way information is treated and protected now and in the long run. This perspective discusses some of the current challenges as well as the opportunities that classical and quantum systems open in the field of cryptography as both a science and an engineering discipline.

    Optimisation for Optical Data Centre Switching and Networking with Artificial Intelligence

    Cloud and cluster computing platforms have become standard across almost every domain of business, and their scale quickly approaches O(10^6) servers in a single warehouse. However, the tier-based opto-electronically packet-switched network infrastructure that is standard across these systems gives rise to several scalability bottlenecks, including resource fragmentation and high energy requirements. Experimental results show that optical circuit switched networks pose a promising alternative that could avoid these bottlenecks; however, optimality challenges are encountered at realistic commercial scales. Where exhaustive optimisation techniques are not applicable to problems at the scale of Cloud-scale computer networks, and expert-designed heuristics are performance-limited and typically biased in their design, artificial intelligence can discover more scalable and better-performing optimisation strategies. This thesis demonstrates these benefits through experimental and theoretical work spanning the component, system and commercial optimisation problems which stand in the way of practical Cloud-scale computer network systems. Firstly, optical components are optimised to gate in ≈500 ps and are demonstrated in a proof-of-concept switching architecture for optical data centres with better wavelength and component scalability than previous demonstrations. Secondly, network-aware resource allocation schemes for optically composable data centres are learnt end-to-end with deep reinforcement learning and graph neural networks, requiring 3× fewer networking resources to achieve the same resource efficiency as conventional methods. Finally, a deep reinforcement learning based method for optimising PID-control parameters is presented which generates tailored parameters for unseen devices in O(10^-3) s. This method is demonstrated on a market-leading optical switching product based on piezoelectric actuation, where switching speed is improved by more than 20% with no compromise to optical loss, and the manufacturing yield of actuators is improved. The method was licensed to and integrated within the manufacturing pipeline of this company; as such, crucial public and private infrastructure utilising these products will benefit from this work.
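
    As a point of reference for the last contribution, the sketch below implements a plain discrete PID loop and marks where per-device gains produced by such a learned policy would plug in. The plant model and gain values are made up for illustration and do not represent the actual piezoelectric actuator or the thesis's tuning method.

        # Plain discrete PID controller; kp/ki/kd are the per-device gains that a
        # learned policy of the kind described above would generate.
        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_meas = None

            def step(self, setpoint, measurement):
                err = setpoint - measurement
                self.integral += err * self.dt
                # derivative on the measurement avoids a large kick on the first sample
                deriv = 0.0 if self.prev_meas is None else (measurement - self.prev_meas) / self.dt
                self.prev_meas = measurement
                return self.kp * err + self.ki * self.integral - self.kd * deriv

        # Toy first-order plant standing in for an actuator; purely illustrative.
        pos, dt = 0.0, 0.01
        ctrl = PID(kp=1.0, ki=0.5, kd=0.02, dt=dt)    # gains a learned policy would output
        for _ in range(2000):
            u = ctrl.step(setpoint=1.0, measurement=pos)
            pos += (u - pos) * 0.1                    # simple lag toward the drive signal
        print(f"final position: {pos:.3f} (setpoint 1.0)")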

    Design Space Exploration and Resource Management of Multi/Many-Core Systems

    The increasing demand for processing a greater number of applications and related data on computing platforms has resulted in reliance on multi-/many-core chips, as they facilitate parallel processing. However, these platforms need to be energy-efficient and reliable, and they need to perform secure computations in the interest of the whole community. This book provides perspectives on the aforementioned aspects from leading researchers in terms of state-of-the-art contributions and upcoming trends.

    Hardware accelerator design for data centers

    As the size of available data increases, it is becoming inefficient to scale the computational power of traditional systems. To overcome this problem, customized application-specific accelerators are becoming integral parts of modern system-on-chip (SoC) architectures. In this paper, we summarize existing hardware accelerators for data centers and discuss techniques to implement and embed them alongside existing SoCs. © 2015 IEEE