
    Reducing the operational cost of cloud data centers through renewable energy

    The success of cloud computing services has led to large computing infrastructures that are complex to manage and very costly to operate. In particular, power supply dominates the operational costs of these infrastructures, and several solutions have to be put in place to alleviate these costs and make the whole infrastructure more sustainable. In this paper, we investigate the case of a complex infrastructure composed of data centers (DCs) located in different geographical areas, in which renewable energy generators are installed, co-located with the data centers, to reduce the amount of energy that must be purchased from the power grid. Since renewable energy generators are intermittent, the load management strategies of the infrastructure have to be adapted to the intermittent nature of the sources. In particular, we consider EcoMultiCloud, a multi-objective load management strategy already proposed in the literature, and we adapt it to the presence of renewable energy sources. Cost reduction is thus achieved in the load allocation process, when virtual machines (VMs) are assigned to a data center of the considered infrastructure, by considering both energy cost variations and local renewable energy production. Performance is analyzed for a specific infrastructure composed of four data centers. Results show that, despite being intermittent and highly variable, renewable energy can be effectively exploited in geographical data centers when a smart load allocation strategy is implemented. In addition, the results confirm that EcoMultiCloud is very flexible and well suited to the considered scenario.
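    A minimal sketch of the allocation idea this abstract describes, assuming a simple greedy ranking rather than the actual EcoMultiCloud algorithm: each arriving VM is placed at the data center whose marginal energy cost is lowest, i.e. where local renewable surplus or a low current grid price can absorb the extra load. All class names, prices, and power figures below are illustrative assumptions.

```python
# A minimal sketch of a renewable-aware VM placement rule (an assumption for
# illustration, not the EcoMultiCloud algorithm itself): place each arriving VM
# at the data center where its marginal energy cost is lowest, i.e. where local
# renewable surplus or a low current grid price can absorb the extra load.
# All names, prices, and power figures are invented.

from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    grid_price: float            # current grid energy price at this site ($/kWh)
    renewable_surplus_kw: float  # renewable generation not yet absorbed by load
    free_capacity_vms: int       # how many more VMs this site can host

VM_POWER_KW = 0.2                # assumed average power draw of one VM

def marginal_cost(dc: DataCenter) -> float:
    """$/h added by one more VM: only the power not covered by the local
    renewable surplus has to be bought from the grid at the local price."""
    grid_kw = max(0.0, VM_POWER_KW - dc.renewable_surplus_kw)
    return grid_kw * dc.grid_price

def place_vm(dcs: list[DataCenter]) -> DataCenter:
    candidates = [dc for dc in dcs if dc.free_capacity_vms > 0]
    best = min(candidates, key=marginal_cost)          # cheapest marginal energy
    best.free_capacity_vms -= 1
    best.renewable_surplus_kw = max(0.0, best.renewable_surplus_kw - VM_POWER_KW)
    return best

if __name__ == "__main__":
    sites = [DataCenter("DC-A", 0.12, 0.0, 100),
             DataCenter("DC-B", 0.18, 0.5, 100),   # pricier grid, but renewable surplus
             DataCenter("DC-C", 0.09, 0.0, 0)]     # cheapest grid, but no free capacity
    print("VM placed at", place_vm(sites).name)    # -> DC-B
```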

    Energy Efficient Tapered Data Networks for Big Data Processing in IP/WDM Networks

    Classically, the data produced by Big Data applications is transferred through the access and core networks to be processed in data centers, where the resulting data is stored. In this work we investigate improving the energy efficiency of transporting Big Data by processing the data in nodes of limited processing and storage capacity along its journey through the core network to the data center. The amount of data transported over the core network is significantly reduced each time the data is processed; we therefore refer to such a network as an Energy Efficient Tapered Data Network. The results of a Mixed Integer Linear Programming (MILP) model, developed to optimize the processing of Big Data in Energy Efficient Tapered Data Networks, show a significant reduction in network power consumption of up to 76%.
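    The tapering effect can be illustrated with a toy calculation (an assumption-laden sketch, not the paper's MILP): a chunk of data travels a chain of core hops towards the data center, and processing it at an intermediate node shrinks the volume carried over every remaining hop. Hop counts, energy-per-bit values, and the taper ratio below are invented for illustration.

```python
# Toy illustration of the "tapered" idea described above (an assumed sketch, not
# the paper's MILP): a Big Data chunk travels a chain of core-network hops towards
# the data center, and processing it at an intermediate node shrinks the volume
# carried over every remaining hop. All constants below are invented for illustration.

TAPER = 0.3          # assumed output/input volume ratio after processing
E_TRANSPORT = 20e-9  # assumed transport energy per bit per hop (J/bit)
E_PROCESS = 5e-9     # assumed processing energy per input bit (J/bit)

def chain_energy(volume_bits: float, hops_to_dc: int, process_at_hop: int | None) -> float:
    """Energy to deliver one chunk over `hops_to_dc` hops when it is processed
    after `process_at_hop` hops (None = process only at the data center)."""
    energy, v = 0.0, volume_bits
    for hop in range(hops_to_dc):
        if process_at_hop is not None and hop == process_at_hop:
            energy += E_PROCESS * v   # process at an intermediate node
            v *= TAPER                # tapered: less data travels onwards
        energy += E_TRANSPORT * v     # transport over this hop
    if process_at_hop is None:
        energy += E_PROCESS * v       # processing happens only at the data center
    return energy

if __name__ == "__main__":
    volume, hops = 10e12, 5           # a 10 Tbit chunk over 5 core hops (assumed)
    baseline = chain_energy(volume, hops, None)
    best_hop = min(range(hops), key=lambda h: chain_energy(volume, hops, h))
    tapered = chain_energy(volume, hops, best_hop)
    print(f"process at DC only : {baseline / 1e3:.0f} kJ")
    print(f"process at hop {best_hop}   : {tapered / 1e3:.0f} kJ "
          f"({100 * (1 - tapered / baseline):.0f}% lower)")
```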

    A survey on architectures and energy efficiency in Data Center Networks

    Data Center Networks (DCNs) are attracting growing interest from both academia and industry to keep pace with the exponential growth in cloud computing and enterprise networks. Modern DCNs face two main challenges: scalability and cost-effectiveness. The architecture of a DCN directly impacts its scalability, while its cost is largely driven by its power consumption. In this paper, we conduct a detailed survey of the most recent advances and research activities in DCNs, with a special focus on the architectural evolution of DCNs and their energy efficiency. The paper provides a qualitative categorization of existing DCN architectures into switch-centric and server-centric topologies, as well as their design technologies. Energy efficiency in data centers is discussed in detail, with a survey of existing techniques in energy savings, green data centers, and renewable energy approaches. Finally, we outline potential future research directions in DCNs.

    Cost Optimization and Load Balancing of Intra and Inter Data Center Networks to Facilitate Cloud Services

    Title from PDF of title page, viewed January 3, 2019. Dissertation advisor: Deep Medhi. Vita. Includes bibliographical references (pages 127-137). Thesis (Ph.D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2018.
    For cloud enterprise customers that require services on demand, data centers (DCs) must allocate and partition data center resources in a dynamic fashion. We consider the problem of allocating data center resources for cloud enterprise customers who require guaranteed services on demand. In particular, a request from an enterprise customer is mapped to a virtual network (VN) class that is allocated both bandwidth and compute resources by connecting it from an entry point of a data center to one or more hosts, while there are multiple geographically distributed data centers to choose from. We take a dynamic traffic engineering approach over multiple time periods in which an energy-aware resource reservation model is solved at each review point. In this dissertation, for the energy-aware resource reservation problem, we first present a mixed-integer linear programming (MILP) formulation (for small-scale problems) and a heuristic approach (for large-scale problems). Our heuristic is fast for solving large-scale problems where the MILP problem becomes difficult to solve. Through a comprehensive set of studies, we found that a VN class with a low resource requirement has low blocking even under heavy traffic, while a VN class with a high resource requirement faces high service denial. Furthermore, a VN class with randomly distributed resource requirements has a higher provisioning cost and blocking than a VN class with the same resource requirement for each request, although the average resource requirement is the same for both classes. We also observe that our approach reduces the maximum energy consumption by about one-sixth at the low arrival rate to about one-third at the highest arrival rate, which also depends on how many different CPU frequency levels a server can run at. Allocation of resources in data centers needs to be done in a dynamic fashion for cloud enterprise customers who require virtualized, reservation-oriented services on demand. Due to the spatial diversity of data centers, the cost of using different DCs also varies. In this dissertation, we then propose an allocation scheme to balance the load among these DCs with different costs to minimize the total provisioning cost in a dynamic environment while ensuring that the service level agreements (SLAs) are met. Compared to a benchmark scheme (where all requests are first sent to the cheapest data center), our scheme can decrease the proportional utilization from 24% (for heavy load) to 30% (for normal load) and achieve a significant balance in the cost incurred by individual DCs. Our scheme can also achieve a 7.5% reduction in total provisioning cost under a certain service level agreement (SLA) in exchange for a low increase in blocking. Furthermore, we tested our scheme on 5 DCs to show that our allocation scheme follows the weighted cost proportionally. With increasing dependency on cloud-based services, data centers have become a popular platform to satisfy customers' requests. Many large network providers now have their own geographically distributed DCs for cloud services, or have partnerships with third-party DC providers to route customers' demand.
    When end customers' requests arrive at a Point-of-Presence (PoP) of a large Internet Service Provider, the provider having DCs in multiple geo-locations needs to decide which DC should serve each request, depending on the geo-distance, the cost of resources in that DC, the availability of the requested resource at that DC, and the congestion on the path from the customers' location to that DC. Therefore, an optimal connectivity scheme from the ingress PoP to the egress DC is required among the PoPs and DCs to minimize the cost of establishing paths between a PoP and a DC while ensuring load balancing at both the link level and the DC level. Considering these, we also present a novel mixed-integer linear programming (MILP) model for this problem. We show the efficacy of our model through various performance metrics such as average and maximum link utilization, and the average number of links used per path.
    Contents: Introduction -- Literature review -- Model and heuristic for intra DC cost optimization -- Simulation setup and result analysis for intra DC cost optimization -- Load balancing in geo-distributed data centers -- Optimal connectivity between inter DC networks -- Conclusion and future research -- Appendix A. Intra DC optimization model in AMPL -- Appendix B. Optimal connectivity to inter DC network model in AMPL
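    A minimal sketch of the inter-DC load-balancing idea summarized above (an assumption for illustration, not the dissertation's MILP or heuristic): rather than always sending a request to the cheapest data center, each request goes to the DC that minimizes a blend of normalized provisioning cost and post-allocation utilization, provided it still has spare capacity. The weights, capacities, and blocking rule below are invented.

```python
# A minimal sketch of cost-aware load balancing across geo-distributed DCs (an
# assumed illustration, not the dissertation's MILP or heuristic): instead of
# always choosing the cheapest data center, each request goes to the DC that
# minimizes a blend of normalized provisioning cost and post-allocation
# utilization, provided it still has spare capacity. Names, weights, and
# capacities are invented.

from dataclasses import dataclass
from typing import Optional

ALPHA = 0.5  # assumed weight: 0 = pure cost minimization, 1 = pure load balancing

@dataclass
class DC:
    name: str
    unit_cost: float   # relative cost of one compute unit at this DC
    capacity: float    # total compute units available
    used: float = 0.0  # compute units currently allocated

def score(dc: DC, demand: float, max_cost: float) -> float:
    """Lower is better: normalized cost blended with utilization after allocation."""
    return (1 - ALPHA) * (dc.unit_cost / max_cost) + ALPHA * ((dc.used + demand) / dc.capacity)

def allocate(dcs: list[DC], demand: float) -> Optional[DC]:
    feasible = [dc for dc in dcs if dc.used + demand <= dc.capacity]
    if not feasible:
        return None    # request blocked; a real system would also check the SLA/path
    max_cost = max(dc.unit_cost for dc in dcs)
    best = min(feasible, key=lambda dc: score(dc, demand, max_cost))
    best.used += demand
    return best

if __name__ == "__main__":
    dcs = [DC("cheap", 1.0, 100.0), DC("mid", 1.3, 100.0), DC("pricey", 1.6, 100.0)]
    for _ in range(12):
        allocate(dcs, 20.0)
    # Load spreads across DCs instead of piling onto the cheapest one.
    print({dc.name: dc.used for dc in dcs})
```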