
    Neural network-assisted decision-making for adaptive routing strategy in optical datacenter networks

    To improve the blocking probability (BP) performance and enhance resource utilization, adaptive routing in optical datacenter networks (DCNs) requires a correct decision on which routing strategy is best suited to the network configuration and traffic dynamics. A neural network (NN)-assisted decision-making scheme is proposed to find the optimal routing strategy in optical DCNs by predicting the BP performance of various candidate routing strategies. The features of the optical DCN architecture (i.e., the rack number N, connection degree D, spectral slot number S and optical transceiver number M) and of the traffic pattern (i.e., the ratio of requests of various capacities R and the load of arriving requests) are used as the input to the NN to estimate the optimal routing strategy. A case of two-strategy decision in a transparent optical multi-hop interconnected DCN is studied. Three metrics are defined for performance evaluation: (a) the ratio of the load range with a wrong decision over the whole load range of interest (i.e., the decision error E), (b) the maximum BP loss (BPL) and (c) the resource utilization loss (UL) caused by the wrong decision. Numerical results show that the ratio of error-free cases over tested cases always exceeds 83% and that the average values of E, BPL and UL are less than 3.0%, 4.0% and 1.2%, respectively, which indicates the high accuracy of the proposed scheme. The results validate the feasibility of the proposed scheme, which facilitates the autonomous implementation of adaptive routing in optical DCNs.
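As an illustration of the decision-making scheme described above, the following Python sketch is a hypothetical reconstruction, not the authors' code: one small regressor per candidate strategy predicts BP from the architecture and traffic features, and the strategy with the lowest predicted BP is selected. The strategy names, the feature packing and the model size are assumptions.

```python
# Hypothetical sketch of NN-assisted routing-strategy selection; strategy names,
# feature packing and model size are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

CANDIDATE_STRATEGIES = ["strategy_A", "strategy_B"]  # the two-strategy case studied

def make_features(n_racks, degree, n_slots, n_transceivers, capacity_ratio, load):
    """Pack the DCN architecture features (N, D, S, M) and traffic features (R, load)."""
    return np.array([n_racks, degree, n_slots, n_transceivers, capacity_ratio, load], float)

# One BP predictor per candidate strategy, trained on offline simulation samples.
bp_models = {s: MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
             for s in CANDIDATE_STRATEGIES}

def train(samples):
    """samples: {strategy: (X, bp)} with X of shape (n, 6) and simulated BP values."""
    for s, (X, bp) in samples.items():
        bp_models[s].fit(X, bp)

def choose_strategy(features):
    """Return the strategy whose predicted blocking probability is lowest."""
    predicted = {s: float(m.predict(features.reshape(1, -1))[0])
                 for s, m in bp_models.items()}
    return min(predicted, key=predicted.get)
```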

    Energy and Performance Management of Virtual Machines: Provisioning, Placement, and Consolidation

    Cloud computing is a computing paradigm that offers scalable storage and compute resources to users on demand through the Internet. Public cloud providers operate large-scale data centers around the world to handle a large number of user requests. However, data centers consume an immense amount of electrical energy, which leads to high operating costs and carbon emissions. One of the most common and effective methods to reduce energy consumption is Dynamic Virtual Machine Consolidation (DVMC), enabled by virtualization technology. DVMC dynamically consolidates Virtual Machines (VMs) onto the minimum number of active servers and then switches the idle servers into a power-saving mode to save energy. However, maintaining the desired level of Quality of Service (QoS) between data centers and their users is critical for satisfying users' performance expectations. Therefore, the main challenge is to minimize data center energy consumption while maintaining the required QoS. This thesis addresses this challenge by presenting novel DVMC approaches that reduce the energy consumption of data centers and improve resource utilization under workload-independent QoS constraints. These approaches fall into three main categories: heuristic, meta-heuristic and machine learning.

    Our first contribution is a heuristic algorithm for solving the DVMC problem. The algorithm uses a linear regression-based prediction model to detect overloaded servers based on historical utilization data, and then migrates some VMs away from the overloaded servers to avoid further performance degradation. Moreover, the algorithm consolidates VMs onto a smaller number of servers to save energy. The second and third contributions are two novel DVMC algorithms based on Reinforcement Learning (RL). RL is attractive for highly adaptive and autonomous management in dynamic environments. For this reason, we use RL to solve two main sub-problems in VM consolidation: the first is server power-mode detection (sleep or active), and the second is server status detection (overloaded or non-overloaded). The fourth contribution of this thesis is an online optimization meta-heuristic algorithm called Ant Colony System-based Placement Optimization (ACS-PO). ACS is a suitable approach for VM consolidation because it is easy to parallelize, produces near-optimal solutions, and has polynomial worst-case time complexity. The simulation results show that ACS-PO provides substantial improvements over other heuristic algorithms in reducing energy consumption, the number of VM migrations, and performance degradation. Our fifth contribution is a Hierarchical VM management (HiVM) architecture based on a three-tier data center topology, which is very common in data centers. HiVM is able to scale across many thousands of servers with energy efficiency. Our sixth contribution is a Utilization Prediction-aware Best Fit Decreasing (UP-BFD) algorithm. UP-BFD can avoid SLA violations and needless migrations by taking into account the current and predicted future resource requirements for the allocation, consolidation, and placement of VMs. Finally, the seventh and last contribution is a novel Self-Adaptive Resource Management System (SARMS) for data centers. To achieve scalability, SARMS uses a hierarchical architecture that is partially inspired by HiVM. Moreover, SARMS provides self-adaptive resource management by dynamically adjusting the utilization thresholds for each server in the data center.
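For concreteness, here is a minimal Python sketch of the kind of linear regression-based overload detection used in the first (heuristic) contribution; the window size and the 80% threshold are illustrative assumptions rather than the thesis' actual parameters.

```python
# Hypothetical sketch of linear-regression-based overload detection for DVMC.
# Window size and threshold are illustrative assumptions, not the thesis values.
import numpy as np

def predict_next_utilization(history, window=10):
    """Fit a linear trend to the last `window` utilization samples and extrapolate one step."""
    h = np.asarray(history[-window:], dtype=float)
    t = np.arange(len(h))
    slope, intercept = np.polyfit(t, h, 1)
    return slope * len(h) + intercept

def is_overloaded(history, threshold=0.8):
    """Flag a server as overloaded if current and predicted utilization both exceed the threshold."""
    return history[-1] > threshold and predict_next_utilization(history) > threshold

# Example: a steadily rising CPU-utilization trace triggers a migration decision.
trace = [0.55, 0.60, 0.66, 0.71, 0.78, 0.83]
if is_overloaded(trace):
    print("migrate some VMs away from this server")
```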

    Optical Technologies and Control Methods for Scalable Data Centre Networks

    Owing to the increasing adoption of cloud services, video services and associated machine learning applications, the traffic demand inside data centers is increasing exponentially, which calls for an innovative networking infrastructure with high scalability and cost-efficiency. As a promising candidate to provide high-capacity, low-latency, cost-effective and scalable interconnections, optical technologies have been introduced into data center networks (DCNs) for approximately a decade. To further improve DCN performance and meet the increasing traffic demand using photonic technologies, two current trends are (a) increasing the bandwidth density of the transmission links and (b) maximizing IT and network resource utilization through disaggregated topologies and architectures. This PhD thesis therefore focuses on introducing and applying advanced and efficient technologies in these two fields to DCNs to improve their performance. On the one hand, at the link level, since traditional single-mode fiber (SMF) solutions based on wavelength division multiplexing (WDM) over the C+L band may fall short of the capacity, front-panel density, power consumption and cost requirements of high-performance DCNs, a space division multiplexing (SDM) based DCN using homogeneous multi-core fibers (MCFs) is proposed. With the exploited bi-directional model and the proposed spectrum allocation algorithms, the proposed DCN shows great benefits over the SMF solution in terms of network capacity and spatial efficiency. In the meantime, it is found that the inter-core crosstalk (IC-XT) between adjacent cores inside the MCF is dynamic rather than static; therefore, the behaviour of the IC-XT is experimentally investigated under different transmission conditions. On the other hand, an optically disaggregated DCN is developed and, to ensure its performance, different architectures, topologies, and resource routing and allocation algorithms are proposed and compared. Compared to the traditional server-based DCN, the resource utilization, scalability and cost-efficiency are significantly improved.
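As a simplified illustration of spectrum allocation over an MCF link, the sketch below implements a plain first-fit search over cores and spectral slots. It is an assumption-laden toy; the algorithms proposed in the thesis additionally handle bi-directional transmission and inter-core crosstalk (IC-XT).

```python
# Hypothetical first-fit spectrum/core allocation sketch for an MCF link.
# Core and slot counts are illustrative; crosstalk awareness is deliberately omitted.

def first_fit(occupied, n_cores, n_slots, demand):
    """Return (core, start_slot) of the first contiguous block of `demand` free slots, or None."""
    for core in range(n_cores):
        free_run = 0
        for slot in range(n_slots):
            free_run = free_run + 1 if (core, slot) not in occupied else 0
            if free_run == demand:
                start = slot - demand + 1
                occupied.update((core, s) for s in range(start, start + demand))
                return core, start
    return None  # request is blocked

occupied = set()
print(first_fit(occupied, n_cores=7, n_slots=320, demand=4))  # e.g. (0, 0)
```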

    Silicon photonic switching: from building block design to intelligent control

    The rapid growth in data communication technologies is at the heart of enriching digital experiences for people around the world. Encoding high-bandwidth data into the optical domain has drastically changed the bandwidth-distance trade-off imposed by electrical media. Silicon photonics, sharing the technological maturity of the semiconductor industry, is a platform poised to make optical interconnect components more robust, manufacturable, and ubiquitous. One of the most prominent device classes enabled by the silicon photonics platform is photonic switching, which describes the direct routing of optical signal carriers without optical-electrical-optical conversions. While theoretical designs and prototypes of monolithic silicon photonic switch devices have been studied, realizing high-performance and feasible switch systems requires exploration of all design aspects, from basic building blocks to control systems. This thesis provides a holistic collection of studies on silicon photonic switching covering novel switching element designs, multi-stage switch architectures, device calibration, topology scalability, smart routing strategies, and performance-aware control planes. First, component designs for assembling a silicon photonic switch device are presented. Structures that perform 2×2 optical switching functions are introduced. To realize switching granularities in both spatial and spectral domains, a resonator-assisted Mach-Zehnder interferometer design is demonstrated with high performance and design robustness. Next, multi-stage monolithic switching devices with microring resonator-based switching elements are investigated. An 8×8 switch device with dual-microring switching elements is presented with a well-balanced set of performance metrics in extinction ratio, crosstalk suppression, and optical bandwidth. Continued scaling of the switch port count requires both an economic increase in the number of switching elements integrated into a device and the preservation of signal quality through the switch fabric. A highly scalable switch architecture based on a Clos network with microring switch-and-select sub-switches is presented as a solution to reach high switch radices while addressing the key factors of insertion loss, crosstalk, and optical passband to ensure end-to-end switching performance. The thesis then explores calibration techniques to acquire and optimize system-wide control points for integrated silicon switch devices. Applicable to common rearrangeably non-blocking switch topologies, automated procedures are developed to calibrate entire switch devices without the need for built-in power monitors. Using Mach-Zehnder interferometer-based switching elements as a demonstration, calibration techniques for optimal control points are introduced to achieve a balanced push-pull drive scheme and reduced crosstalk in switching operations. Furthermore, smart routing strategies are developed based on optical penalty estimations enabled by expedited lightpath characterization procedures. Leveraging configuration redundancies in the switch fabric, the routing strategies avoid the worst-penalty optical paths and effectively elevate the bottom-line performance of the switch device. Additional work is also presented on enhancing optical system control planes with machine learning techniques to accurately characterize complex systems and identify critical control parameters. Using flexgrid networks as a case study, light-weight machine learning workflows are tailored to devise control strategies for improving spectral power stability during wavelength assignment and defragmentation. This work affirms the efficacy of intelligent control planes to predict system dynamics and drive performance optimizations for optical interconnect systems.
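The penalty-aware routing idea lends itself to a compact sketch: among the redundant paths that a non-blocking fabric offers for an input-output pair, pick the one with the smallest estimated penalty. The per-element penalty table and path labels below are hypothetical placeholders, not measured values.

```python
# Hypothetical sketch of penalty-aware routing: among redundant switch configurations
# that realize an input->output connection, pick the lowest estimated optical penalty.

def estimate_penalty(path, element_penalty_db):
    """Sum per-element penalty estimates (e.g. from expedited lightpath characterization)."""
    return sum(element_penalty_db[e] for e in path)

def route(candidate_paths, element_penalty_db):
    """Return the candidate path with the smallest estimated total penalty."""
    return min(candidate_paths, key=lambda p: estimate_penalty(p, element_penalty_db))

# Example with two redundant paths through labelled switching elements.
penalties = {"A1": 0.4, "B2": 0.9, "C1": 0.3, "A2": 0.5, "B1": 0.2, "C2": 0.6}
paths = [["A1", "B2", "C1"], ["A2", "B1", "C2"]]
print(route(paths, penalties))  # ['A2', 'B1', 'C2'] has the lower total penalty
```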

    Energy Saving and Virtualization Technologies in Switching

    Switching is the key functionality of many devices such as electronic routers and switches, optical routers, Networks on Chip (NoCs) and so on. Basically, switching is responsible for moving data units from one port/location to another (or multiple) port(s)/location(s). In past years, high capacity and low delay were the main concerns when designing high-end switching units. As new demands, requests and technologies emerge, flexibility and low power cost have come to weigh as much as throughput and delay in switching design. On the one hand, highly flexible (i.e., programmable) switching can cope with the variable needs stemming from new applications (e.g., VoIP) and popular user behaviour (e.g., peer-to-peer downloading); on the other hand, reducing the energy and power dissipation of switching not only saves on electricity bills and benefits the ecosystem but also extends component lifetime. Many research efforts have been devoted to increasing switching flexibility and reducing its power cost. In this thesis we exploit virtualization as the main technique to build flexible software routers in the first part; in the second part we turn our attention to energy saving in NoCs (i.e., switching fabrics designed to handle on-chip data transmission) and software routers.

    In the first part of the thesis, we consider virtualization inside Software Routers (SRs). SRs, i.e., routers running on commodity Personal Computers (PCs), have become an appealing alternative to traditional Proprietary Routing Devices (PRDs) for various reasons such as cost (the multi-vendor hardware used by SRs can be cheap, while the equipment needed by PRDs is more expensive and its training cost is higher), openness (SRs can make use of a large number of open-source networking applications, while PRDs are more closed) and flexibility. The forwarding performance provided by SRs has been an obstacle to their deployment in real networks. For this reason, we proposed aggregating multiple routing units into a powerful SR known as the Multistage Software Router (MSR) to overcome the performance limitation of a single SR. Our results show that the throughput increases almost linearly with the number of internal routing devices. However, other features related to flexibility (such as power saving, programmability, router migration or easy management) have previously been investigated less than performance. We noticed that virtualization techniques have become a reality thanks to the rapid development of PC architectures, which are now able to easily support several logical PCs running in parallel on the same hardware. Virtualization provides many flexible features such as hardware/software decoupling, encapsulation of virtual machine state, failure recovery and security, to name a few. Virtualization makes it possible to build multiple SRs inside one physical host and a multistage architecture exploiting only logical devices. By doing so, physical resources can be used more efficiently, energy-saving features (switching devices on and off when needed) can be introduced, and logical resources can be rented on demand instead of being owned. Since virtualization techniques are still difficult to deploy, several challenges need to be faced when trying to integrate them into routers. The main aim of the first part of this thesis is to assess the feasibility of the virtualization approach, to build and test virtualized SRs (VSRs), to implement the MSR exploiting logical, i.e., virtualized, resources, to analyze virtualized routing performance and to propose improvement techniques for the VSR and the virtual MSR (VMSR). More specifically, we considered different virtualization solutions such as VMware, Xen and KVM to build the VSR and VMSR, VMware being a closed-source solution with higher performance and Xen/KVM open-source solutions. First we built and tested each single component of our multistage architecture (i.e., back-end router, load balancer) inside the virtual infrastructure, and then we extended the performance experiments to more complex scenarios in which multiple Back-end Routers (BRs) or Load Balancers (LBs) cooperate to route packets. Our results show that virtualization can introduce a performance penalty of about 40% compared with the hardware-only solution. Keeping this performance limitation in mind, we developed the whole VMSR and, as expected, obtained low throughput with 64-byte packet flows. To increase the VMSR throughput, two directions can be considered: the first is to improve the performance of the single component (i.e., the VSR), and the other is to work from the topology point of view (i.e., the best allocation of the VMs onto the hardware). For the first method, we tuned the VSR inside KVM and closely studied aspects such as the Linux driver, the scheduler and the interconnection methodology, which can impact performance significantly when properly configured; we then proposed two ways of allocating VMs to physical servers to enhance the VMSR performance. Our results show that with good tuning and allocation of VMs, we can minimize the virtualization penalty, obtain reasonable throughput when running SRs inside a virtual infrastructure and easily add flexibility functionalities to SRs.

    In the second part of the thesis, we consider the energy-efficient switching design problem and focus on two main architectures: the NoC and the MSR. As many research works suggest, the energy cost of Information and Communication Technologies (ICT) is constantly increasing. Among the main ICT sectors, a large portion of the energy consumption is contributed by the telecommunication infrastructure and its devices, i.e., routers, switches, cell phones, IPTV set-top boxes, storage home gateways, etc. More in detail, the line cards, links and Systems on Chip (SoCs), including the transmitters/receivers on these various devices, are the main power-consuming units. We first present our work on reducing the power of data transmission in the SoC, which is carried out by the NoC. The NoC is an approach to designing the communication subsystem between different Processing Elements (PEs) in an SoC. PEs can be different elements such as CPUs, memories, digital/analog signal processors, etc. Different PEs perform specific tasks depending on the applications running on the chip, and different tasks need to exchange data with each other; thus flits (chopped packets with limited header information) are generated by the PEs. The flits are injected into the NoC through the proper interface and routed until they reach the destination PEs. Throughout this procedure, the NoC behaves as a packet-switched network. Studies show that, in general, the information processing in the PEs consumes only 60% of the energy, while the remaining 40% is consumed by the NoC. More importantly, following current network design principles, the NoC capacity is dimensioned to handle the peak load, which is a clear opportunity for energy saving when the network load is low. In our work, we exploit the Dynamic Voltage and Frequency Scaling (DVFS) technique, which can jointly decrease or increase the system voltage and frequency when necessary, i.e., decrease the voltage and frequency in low-load scenarios to save energy and reduce power dissipation. More precisely, we study two different NoC architectures for energy saving, namely the single-plane and the multi-plane chip architecture. In both cases we impose the very strict constraint that all the links and transmitters/receivers on the same plane work at the same frequency/voltage to avoid synchronization problems. This is the main difference from many existing works in the literature, which usually assume that different links can work at different frequencies, something that is hard to implement in reality. For the single-plane NoC, we exploit different routing schemes combined with DVFS to reduce the power of the whole chip. Our results have been compared with the optimal values obtained by formally modeling the power saving as a quadratic programming problem. The results suggest that, just by using a simple load-balancing routing algorithm, we can save considerable energy in the single-plane NoC architecture. Furthermore, we noticed that in the single-plane NoC architecture the bottleneck link can limit the effectiveness of DVFS. We then observed that the multi-plane NoC architecture is fairly easy to implement and can help with energy saving. We therefore focus on the multi-plane architecture and find that DVFS is more effective when we concentrate more traffic onto one plane and send the remaining flows to the other planes. We compared load concentration and load balancing under different power models, and all simulation results show that load concentration outperforms load balancing for the multi-plane NoC architecture. Finally, we also present an energy-efficient MSR design technique which allows the MSR to follow the day-night traffic pattern more efficiently with our on-line energy-saving algorithm.
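To make the load-concentration versus load-balancing comparison concrete, the sketch below evaluates a toy per-plane DVFS power model; the frequency levels, the per-plane static power and the cubic dynamic term are assumptions for illustration, not the power models used in the thesis.

```python
# Hypothetical sketch contrasting load concentration with load balancing for a
# multi-plane NoC under per-plane DVFS (all links of a plane must share one V/f
# level). Frequency levels, static power per active plane and the ~f^3 dynamic
# term are illustrative assumptions.

FREQ_LEVELS = [0.25, 0.5, 0.75, 1.0]   # normalized frequencies a plane may run at
STATIC_PER_PLANE = 0.5                  # assumed fixed cost of keeping a plane on

def plane_frequency(max_link_load):
    """The shared plane frequency is dictated by its most loaded (bottleneck) link."""
    return next(f for f in FREQ_LEVELS if f >= max_link_load)

def total_power(per_plane_bottleneck_loads):
    """Idle planes are switched off; active planes pay static plus ~f^3 dynamic power."""
    return sum(STATIC_PER_PLANE + plane_frequency(l) ** 3
               for l in per_plane_bottleneck_loads if l > 0)

load, n_planes = 0.9, 3
balanced = [load / n_planes] * n_planes        # 0.3 on every plane
concentrated = [load, 0.0, 0.0]                # pack the traffic into one plane
print(total_power(balanced), total_power(concentrated))  # 1.875 vs 1.5 here
```

Under this toy model, concentrating traffic lets two planes be switched off entirely, which outweighs running the remaining plane at a higher frequency; this mirrors the qualitative conclusion reported in the abstract.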