
    Exploring Wireless Data Center Networks: Can They Reduce Energy Consumption While Providing Secure Connections?

    Data centers have become the digital backbone of the modern world. To support growing bandwidth demands, data centers consume an increasing amount of power. A significant portion of that power is consumed by information technology (IT) equipment, including servers and networking components. Additionally, the complex cabling in traditional data centers poses design and maintenance challenges and increases the energy cost of the cooling infrastructure by obstructing the flow of chilled air. Hence, to reduce the power consumption of data centers, we propose a wireless server-to-server data center network architecture that uses millimeter-wave links to eliminate the power-hungry switching fabric of traditional fat-tree-based data center networks. The server-to-server wireless data center network (S2S-WiDCN) architecture requires Line-of-Sight (LoS) between servers to establish direct communication links. However, in the presence of interference from internal or external sources, or of an obstruction such as an IT technician, the LoS may be blocked. To address this issue, we also propose a novel obstruction-aware adaptive routing algorithm for S2S-WiDCN. S2S-WiDCN reduces the power consumption of the network portion of the data center but does not affect the power consumption of the servers, which contribute significantly to the total power consumption of the data center. Moreover, servers in data centers are almost always underutilized due to over-provisioning, which contributes heavily to their high power consumption. To address the high power consumption of the servers, we propose a network-aware, bandwidth-constrained server consolidation algorithm, Network-Aware Server Consolidation (NASCon), for wireless data centers that can reduce power consumption by up to 37% while improving network performance. However, as new tasks arrive and existing tasks complete, the consolidated utilization profile of the servers changes, which may adversely affect overall power consumption over time. To overcome this, the NASCon algorithm needs to be executed periodically. We propose a mathematical model to estimate the optimal inter-consolidation time, which the data center resource management unit can use to schedule NASCon consolidation operations in real time and retain the benefits of server consolidation. However, in any data center environment, security is one of the highest design priorities. Hence, for S2S-WiDCN to become a practical and viable solution for data center network design, the security of the network has to be ensured. An S2S-WiDCN data center can be vulnerable to a variety of attacks because it uses wireless links over an unguided channel for communication. As a wireless system, the network has to be secured against threats common to any wireless network, such as eavesdropping, denial-of-service, and jamming attacks. In parallel, other security threats, such as attacks on the control plane and side-channel attacks through traffic analysis, are also possible. We present an extensive study of the scope of these attacks and explore probable solutions. We also propose viable defenses against eavesdropping, denial-of-service, jamming, and control-plane attacks. Finally, to address the traffic analysis attack, we propose a simulated-annealing-based random routing mechanism that can be adopted in place of the default routing in the wireless data center.
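
    The abstract describes NASCon only at a high level; below is a minimal sketch of what a network-aware, bandwidth-constrained consolidation pass could look like. The Server fields, the hop_distance stand-in, and the first-fit-decreasing ordering are illustrative assumptions, not details taken from the work summarized above.

```python
# Sketch of a network-aware, bandwidth-constrained server consolidation pass
# in the spirit of NASCon. Capacities, the hop-distance model, and the greedy
# ordering are assumptions made for illustration only.
from dataclasses import dataclass, field

@dataclass
class Server:
    sid: int
    cpu_cap: float = 1.0     # normalized CPU capacity
    bw_cap: float = 10.0     # assumed per-server wireless bandwidth budget (Gbps)
    cpu_used: float = 0.0
    bw_used: float = 0.0
    tasks: list = field(default_factory=list)

def hop_distance(a: Server, b: Server) -> int:
    """Hypothetical stand-in for the LoS hop count between two servers."""
    return abs(a.sid - b.sid)

def consolidate(tasks, servers):
    """Pack (cpu, bw, peer_sid) tasks onto few, network-close servers."""
    for cpu, bw, peer in sorted(tasks, reverse=True):   # largest tasks first
        candidates = [s for s in servers
                      if s.cpu_used + cpu <= s.cpu_cap and s.bw_used + bw <= s.bw_cap]
        if not candidates:
            continue                      # leave the task unplaced in this sketch
        # Prefer servers that are already on, then those closest to the task's peer.
        best = min(candidates,
                   key=lambda s: (not s.tasks, hop_distance(s, servers[peer])))
        best.cpu_used += cpu
        best.bw_used += bw
        best.tasks.append((cpu, bw, peer))
    return [s for s in servers if s.tasks]   # servers that must stay powered on

servers = [Server(i) for i in range(8)]
tasks = [(0.30, 2.0, 0), (0.50, 1.0, 1), (0.20, 3.0, 0), (0.40, 0.5, 2)]
active = consolidate(tasks, servers)
print(f"{len(active)} of {len(servers)} servers remain powered on")
```

    A real implementation would take hop distances from the wireless LoS topology and re-run the pass at the inter-consolidation interval estimated by the timing model mentioned above.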

    On energy consumption of switch-centric data center networks

    The data center network (DCN) is the core of cloud computing and accounts for 40% of the energy spend of the whole data center (DC) facility when compared with the cooling system, power distribution, and conversion. It is essential to reduce the energy consumption of the DCN to ensure an energy-efficient (green) data center can be achieved. An analysis of DC performance and efficiency is presented, emphasizing the effect of bandwidth provisioning and throughput on the energy proportionality of the two most common switch-centric DCN topologies, three-tier (3T) and fat tree (FT), based on the amount of actual energy that is turned into computing power. The energy consumption of switch-centric DCNs is analyzed through realistic simulations using the GreenCloud simulator. Power-related metrics were derived and adapted for the information technology equipment (ITE) processes within the DCN. These metrics are acknowledged as a subset of the major metrics of power usage effectiveness (PUE) and data center infrastructure efficiency (DCIE) known to DCs. This study suggests that although FT consumes more energy overall, it spends less energy to transmit a single bit of information, outperforming 3T.
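
    For reference, the facility-level metrics named above can be written out directly; the sketch below also computes an illustrative energy-per-bit figure of the kind used to compare the 3T and FT fabrics. The sample numbers are placeholders, not results from the study.

```python
# Facility-level efficiency metrics (PUE, DCIE) and a per-bit energy figure.
# The example values at the bottom are invented for illustration.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

def dcie(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Data Center infrastructure Efficiency: inverse of PUE, as a percentage."""
    return 100.0 * it_equipment_kwh / total_facility_kwh

def energy_per_bit(network_kwh: float, bits_transmitted: float) -> float:
    """Joules spent by the DCN per bit delivered (1 kWh = 3.6e6 J)."""
    return network_kwh * 3.6e6 / bits_transmitted

# Hypothetical comparison: FT draws more network energy in total but moves far
# more traffic, so its energy per bit can still come out below 3T's.
for name, net_kwh, bits in [("3T", 120.0, 4.0e15), ("FT", 150.0, 9.0e15)]:
    print(name, round(energy_per_bit(net_kwh, bits) * 1e9, 1), "nJ/bit")
```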

    Power-Aware Datacenter Networking and Optimization

    Present-day datacenter networks (DCNs) are designed to achieve full bisection bandwidth in order to provide high network throughput and server agility. However, the average utilization of typical DCN infrastructure is below 10% for significant time intervals. As a result, energy is wasted during these periods. In this thesis we analyze the traffic behavior of datacenter networks using traces as well as simulated models. Based on the insight developed, we present techniques to reduce energy waste by making energy use scale linearly with load. The solutions developed are analyzed via simulations, formal analysis, and prototyping. The impact of our work is significant because the energy savings we obtain for the networking infrastructure of DCNs are near optimal. A key finding of our traffic analysis is that network switch ports within the DCN are grossly under-utilized. Therefore, the first solution we study is to modify the routing within the network to force most traffic to the smallest of the switches. This increases the hop count for the traffic but enables powering off many switch ports. The exact extent of energy savings is derived and validated using simulations. An alternative strategy we explore in this context is to replace about half the switches with fewer switches that have higher port density. This has the effect of enabling even greater traffic consolidation, thus allowing even more ports to sleep. Finally, we explore a third approach in which we begin with end-to-end traffic models and incrementally build a DCN topology that is optimized for that model. In other words, the network topology is optimized for the potential use of the datacenter. This approach makes sense because, as other researchers have observed, the traffic in a datacenter is heavily dependent on the primary use of the datacenter. A second line of research we undertake is to merge traffic in the analog domain prior to feeding it to switches. This is accomplished by use of a passive device we call a merge network. Using a merge network enables us to attain linear scaling of energy use with load regardless of the datacenter traffic model. The challenge in using such a device is that layer 2 and layer 3 protocols require a one-to-one mapping of hardware addresses to IP (Internet Protocol) addresses. We overcome this problem by building a software shim layer that hides the fact that traffic is being merged. To validate the idea of a merge network, we built a simple merge network for gigabit optical interfaces and demonstrated correct operation of layer 2 and layer 3 protocols at line speed. We also conducted measurements to study how traffic gets mixed in the merge network prior to being fed to the switch. We also show that the merge network uses only a fraction of a watt of power, which makes it a very attractive solution for energy efficiency. In this research we have developed solutions that enable linear scaling of energy with load in datacenter networks. The different techniques developed have been analyzed via modeling and simulations as well as prototyping. We believe that these solutions can be incorporated into future DCNs with little effort.
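
    As a rough illustration of the port-consolidation idea, the sketch below packs flow loads onto as few uplink ports as capacity allows, so the remaining ports can be put to sleep. The loads, port capacity, and first-fit-decreasing heuristic are assumptions made for illustration, not the routing scheme evaluated in the thesis.

```python
# Greedy first-fit-decreasing packing of flows onto uplink ports; ports that
# receive no load can sleep. All numbers are illustrative assumptions.

def consolidate_ports(flow_loads, port_capacity, num_ports):
    """Return per-port loads after first-fit-decreasing packing."""
    ports = [0.0] * num_ports
    for load in sorted(flow_loads, reverse=True):
        for i, used in enumerate(ports):
            if used + load <= port_capacity:
                ports[i] = used + load
                break
        else:
            raise ValueError("not enough aggregate port capacity for this flow set")
    return ports

flows = [0.12, 0.30, 0.05, 0.22, 0.08, 0.15, 0.04]   # normalized loads, well below capacity
ports = consolidate_ports(flows, port_capacity=1.0, num_ports=8)
asleep = sum(1 for p in ports if p == 0.0)
print(f"{asleep} of {len(ports)} ports can sleep")
```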

    A survey on architectures and energy efficiency in Data Center Networks

    Data Center Networks (DCNs) are attracting growing interest from both academia and industry to keep pace with the exponential growth in cloud computing and enterprise networks. Modern DCNs face two main challenges: scalability and cost-effectiveness. The architecture of a DCN directly impacts its scalability, while its cost is largely driven by its power consumption. In this paper, we conduct a detailed survey of the most recent advances and research activities in DCNs, with a special focus on the architectural evolution of DCNs and their energy efficiency. The paper provides a qualitative categorization of existing DCN architectures into switch-centric and server-centric topologies, as well as their design technologies. Energy efficiency in data centers is discussed in detail with a survey of existing techniques in energy savings, green data centers, and renewable energy approaches. Finally, we outline potential future research directions in DCNs.

    E3MC: Improving Energy Efficiency via Elastic Multi-Controller SDN in Data Center Networks

    Energy consumed by the network constitutes a significant portion of the total power budget in modern data centers. Thus, it is critical to understand the energy consumption and improve the power efficiency of data center networks (DCNs). In doing so, one straightforward and effective way is to make the size of DCNs elastic along with traffic demands, i.e., turning off unnecessary network components to reduce energy consumption. Today, software defined networking (SDN), as one of the most promising solutions for data center management, provides a paradigm to elastically control the resources of DCNs. However, to the best of our knowledge, the features of SDN have not been fully leveraged to improve power savings, especially for large-scale multi-controller DCNs. To address this problem, we propose E3MC, a mechanism to improve the DCN's energy efficiency via elastic multi-controller SDN. In E3MC, energy optimizations for both the forwarding and control planes are considered by utilizing SDN's fine-grained routing and dynamic control mapping. In particular, flow network theory and a bin-packing heuristic are used to deal with the forwarding plane and control plane, respectively. Our simulation results show that E3MC can achieve more efficient power management, especially in highly structured topologies such as Fat-Tree and BCube, saving up to 50% of network energy at an acceptable level of computation cost.
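
    The paper pairs flow-network optimization for the forwarding plane with a bin-packing heuristic for the control plane. The sketch below shows only the control-plane half in simplified form: switches are mapped to as few controllers as possible so surplus controllers can be powered off. The request rates, controller capacity, and first-fit-decreasing order are illustrative assumptions, not values or code from the paper.

```python
# First-fit-decreasing bin packing of switches onto SDN controllers so that
# unused controllers can be switched off. All figures are illustrative.

def map_switches_to_controllers(request_rates, controller_capacity):
    """Return a list of controllers, each a list of (switch, rate) pairs it serves."""
    controllers = []
    # Heaviest switches first, the usual first-fit-decreasing order.
    for sw, rate in sorted(request_rates.items(), key=lambda kv: -kv[1]):
        for ctrl in controllers:
            if sum(r for _, r in ctrl) + rate <= controller_capacity:
                ctrl.append((sw, rate))
                break
        else:
            controllers.append([(sw, rate)])   # open a new (powered-on) controller
    return controllers

rates = {"s1": 40_000, "s2": 25_000, "s3": 60_000, "s4": 10_000, "s5": 35_000}  # req/s
controllers = map_switches_to_controllers(rates, controller_capacity=100_000)
print(f"{len(controllers)} controllers stay on:", controllers)
```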

    Future PON Data Centre Networks

    Significant research efforts have been devoted over the last decade to designing efficient data centre networks. However, major concerns are still raised about the power consumption of data centres and its impact on global warming in the first place and on the electricity bill of data centres in the second place. Passive Optical Network (PON) technology, with its proven performance in residential access networks, can provide energy efficient, high capacity, low cost, scalable, and highly elastic solutions to support connectivity inside modern data centres. Here, we focus on introducing PONs into the architecture of data centres to resolve many issues in current data centre designs, such as the high cost and high power consumption resulting from the large number of access and aggregation switches needed to interconnect hundreds of thousands of servers. PONs can also overcome the problems of switch oversubscription and unbalanced traffic in data centres, as PON architectures and protocols have historically been optimised to deal with these problems and handle bursty traffic efficiently. In this thesis, five novel PON data centre designs are proposed and compared to facilitate intra- and inter-rack communications. In addition to maximising the use of only passive optical devices, other challenges are addressed by these designs, including off-loading the inter-rack traffic from the Optical Line Terminal (OLT) switch to avoid undesired power consumption and delays, facilitating multi-path routing, and reducing or eliminating the need for expensive tuneable lasers. The scalability of the proposed architectures, in terms of efficiently accommodating hundreds of thousands of servers, is discussed. The CAPEX and energy consumption of the proposed architectures are also investigated, and savings compared to conventional architectures, such as Fat-Tree and BCube, are demonstrated. The Routing and Wavelength Assignment (RWA) for intra- and inter-rack communication and the resource provisioning needed to cater for the different applications that can be hosted in the data centre are optimised using Mixed Integer Linear Programming (MILP) models to minimise the power consumption of the PON designs. Furthermore, real-time energy-efficient routing and resource provisioning algorithms are developed. In addition to optimising power consumption, delay is also considered for the delay-sensitive applications that can be hosted in the proposed data centre architectures. To further reduce power consumption and overcome issues related to link oversubscription and multi-path routing, a Software Defined Network (SDN) based design is proposed.
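
    As a toy illustration of the MILP-based approach, the sketch below uses the PuLP library to assign inter-rack demands to wavelengths while minimising the transceiver power left switched on. The demands, wavelength capacity, and per-wavelength power figure are assumptions made for illustration, not the formulation developed in the thesis.

```python
# Toy power-minimising wavelength-assignment MILP written with PuLP.
# All input data below are invented for illustration.
import pulp

demands = {"r1_r2": 3.0, "r1_r3": 5.0, "r2_r3": 2.0, "r3_r4": 4.0}  # Gbps per rack pair
wavelengths = ["w1", "w2", "w3", "w4"]
capacity = 10.0              # Gbps carried per wavelength
power_per_wavelength = 2.5   # W for a transceiver pair kept on (assumed)

prob = pulp.LpProblem("pon_rwa_power", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (demands, wavelengths), cat="Binary")
y = pulp.LpVariable.dicts("active", wavelengths, cat="Binary")

# Objective: total transceiver power of the wavelengths left active.
prob += pulp.lpSum(power_per_wavelength * y[w] for w in wavelengths)

for d in demands:
    # Each demand is carried on exactly one wavelength.
    prob += pulp.lpSum(x[d][w] for w in wavelengths) == 1
for w in wavelengths:
    # Respect wavelength capacity and only route over active wavelengths.
    prob += pulp.lpSum(demands[d] * x[d][w] for d in demands) <= capacity * y[w]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
used = [w for w in wavelengths if y[w].value() == 1]
print("wavelengths powered on:", used)
```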