884 research outputs found

    EQUAL: Energy and QoS Aware Resource Allocation Approach for Clouds

    The popularity of cloud computing is increasing by leaps and bounds. To cope with the resource demands of a growing number of cloud users, cloud market players establish large data centers. The huge energy consumption of these data centers, together with the obligation to fulfill the Quality of Service (QoS) requirements of end users, has made resource allocation a challenging task. In this paper, an energy- and QoS-aware resource allocation approach is proposed that employs Antlion optimization to allocate resources to virtual machines (VMs). It can operate in three modes: power aware, performance aware, and balanced. The proposed approach enhances the energy efficiency of the cloud infrastructure by improving resource utilization while fulfilling the QoS requirements of end users. The approach is implemented in CloudSim, and the simulation results show improvements in both QoS and the energy efficiency of the cloud.
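    The mode-switching idea in the abstract can be illustrated with a minimal sketch. This is not the authors' Antlion optimizer; it is a hypothetical mode-weighted fitness function and a greedy host-selection step, with made-up weights, the 80% utilization threshold, and all function names being illustrative assumptions.

```python
# Hypothetical sketch of mode-weighted VM placement (not the paper's
# Antlion algorithm). Weights and the 0.8 QoS threshold are assumptions.

MODES = {"power": (0.8, 0.2), "performance": (0.2, 0.8), "balanced": (0.5, 0.5)}

def fitness(host_util, vm_demand, mode):
    """Lower is better: combines projected energy use and QoS risk."""
    w_energy, w_qos = MODES[mode]
    new_util = host_util + vm_demand
    energy_cost = new_util                 # proxy: power grows with utilization
    qos_risk = max(0.0, new_util - 0.8)    # penalize utilization above 80%
    return w_energy * energy_cost + w_qos * qos_risk * 10

def place_vm(host_utils, vm_demand, mode="balanced"):
    """Pick the feasible host minimizing the mode-weighted fitness."""
    feasible = [h for h, u in enumerate(host_utils) if u + vm_demand <= 1.0]
    return min(feasible, key=lambda h: fitness(host_utils[h], vm_demand, mode))
```

    A metaheuristic such as Antlion optimization would explore many candidate placements against a fitness of this shape rather than choosing greedily per VM.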

    Multi-dimensional optimization for cloud based multi-tier applications

    Emerging trends toward cloud computing and virtualization have opened new avenues to meet the enormous demands for space, resource utilization, and energy efficiency in modern data centers. By hosting many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these applications at very fine granularity. Meanwhile, resource virtualization has gained considerable attention in the design of computer systems and become a key ingredient of cloud computing. It significantly improves aggregate power efficiency and resource utilization by enabling resource consolidation, and it allows infrastructure providers to manage their resources in an agile way under highly dynamic conditions. However, these trends also pose significant challenges to researchers and practitioners seeking agile resource management in consolidated environments. First, they must deal with the very different responsiveness of different applications, while handling dynamic changes in resource demands as applications' workloads change over time. Second, when provisioning resources, they must consider management costs such as power consumption and adaptation overheads (i.e., overheads incurred by dynamically reconfiguring resources). Dynamic provisioning of virtual resources entails an inherent performance-power tradeoff, and indiscriminate adaptations can impose significant overheads on power consumption and end-to-end performance. Hence, to achieve agile resource management, it is important to thoroughly investigate the performance characteristics of deployed applications, precisely account for the costs caused by adaptations, and then balance benefits against costs.
Fundamentally, the research question is how to dynamically provision available resources for all deployed applications to maximize overall utility under time-varying workloads, while considering such management costs. Given the scope of the problem space, this dissertation aims to develop an optimization system that not only meets the performance requirements of deployed applications but also addresses the tradeoffs between performance, power consumption, and adaptation overheads. To this end, this dissertation makes two distinct contributions. First, I show that adaptations applied to cloud infrastructures can cause significant overheads not only on end-to-end response time but also on server power consumption, and that such costs can vary in intensity and time scale with workload, adaptation type, and the performance characteristics of hosted applications. Second, I address multi-dimensional optimization among server power consumption, performance benefit, and the transient costs incurred by various adaptations. Additionally, I incorporate the overhead of the optimization procedure itself into the problem formulation. System optimization approaches typically entail intensive computation and can incur long delays when dealing with the huge search spaces of cloud computing infrastructures, so this type of cost cannot be ignored when adaptation plans are designed. In this multi-dimensional optimization work, a scalable optimization algorithm and a hierarchical adaptation architecture are developed to handle many applications, hosting servers, and various adaptations, and to support adaptation decisions at various time scales.

Ph.D.

Committee Chair: Pu, Calton; Committee Members: Liu, Ling; Liu, Xue; Schlichting, Richard; Schwan, Karsten; Yalamanchili, Sudhaka
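The benefit-versus-cost reasoning described above can be sketched as a net-utility comparison over candidate adaptation plans. This is a toy illustration, not the dissertation's formulation: the utility terms, plan tuples, and function names are all assumptions, and the dissertation's actual optimizer is far richer.

```python
# Toy net-utility model for adaptation decisions (illustrative only).
# A plan is adopted only if its benefit outweighs both the transient
# adaptation cost and the cost of running the optimization itself.

def net_utility(perf_gain, power_saving, adaptation_cost, opt_cost):
    """Benefit minus transient and optimization-procedure overheads."""
    return perf_gain + power_saving - adaptation_cost - opt_cost

def best_plan(plans):
    """plans: list of (name, perf_gain, power_saving, adapt_cost, opt_cost).
    Return the name of the plan with the highest positive net utility,
    or None to keep the current configuration."""
    scored = [(net_utility(*p[1:]), p[0]) for p in plans]
    score, name = max(scored)
    return name if score > 0 else None
```

Keeping the current configuration when every plan's net utility is negative is exactly the point the abstract makes: indiscriminate adaptation can cost more than it gains.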

    Neural Adaptive Admission Control Framework: SLA-driven action termination for real-time application service management

    Although most modern cloud-based enterprise systems and operating systems do not commonly allow configurable or automatic termination of processes, tasks, or actions, it is common practice for systems administrators to manually terminate or stop tasks or actions at any level of the system. This paper investigates the potential of automatic adaptive control with action termination as a method for adapting the system to more appropriate conditions in environments with established goals for both system performance and economics. A machine-learning-driven control mechanism employing neural networks is derived and applied within data-intensive systems. Control policies designed following this approach are evaluated under different load patterns and service-level requirements. The experimental results demonstrate the performance characteristics, benefits, and implications of termination control when applied to different action types with distinct run-time characteristics. An automatic termination approach may be eminently suitable for systems with strict execution-time Service Level Agreements, or for systems running under hard constraints on power supply or other resources. The proposed control mechanisms can be combined with other available toolkits to support the deployment of autonomous controllers in high-dimensional enterprise information systems.
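    The core decision the paper's controller makes, whether a running action should be terminated to protect an SLA, can be sketched with a naive linear projection. This stand-in replaces the paper's neural-network policy with a simple heuristic; the function name, the linear-progress assumption, and the parameters are all illustrative.

```python
# Hypothetical SLA-driven termination check (a heuristic stand-in for the
# paper's neural-network control policy; parameters are assumptions).

def should_terminate(elapsed, progress, sla_deadline):
    """Terminate if a linear projection of total runtime busts the SLA.

    elapsed      -- seconds the action has run so far
    progress     -- fraction of work completed, in [0, 1]
    sla_deadline -- maximum allowed execution time in seconds
    """
    if progress <= 0:
        # No measurable progress: terminate once the deadline is reached.
        return elapsed >= sla_deadline
    projected_total = elapsed / progress
    return projected_total > sla_deadline
```

    A learned policy can improve on this by conditioning on action type and load, which is precisely what the abstract's evaluation across "action types with distinct run-time characteristics" explores.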

    A survey on energy efficiency in information systems

    Concerns about energy and sustainability are growing every day across a wide range of fields. Information Systems (ISs) are also being influenced by the need to reduce pollution and energy consumption, and new fields are emerging around this topic. One of these is Green Information Technology (IT), which addresses energy efficiency with a focus on IT. Researchers have approached this problem from several points of view. The purpose of this paper is to understand the trends and future development of Green IT by analyzing the state of the art and classifying existing approaches, in order to identify which components have an impact on energy efficiency in ISs and how this impact can be reduced. First, we explore guidelines that can help assess the efficiency level of an organization and of an IS. Then, we discuss the measurement and estimation of energy efficiency, identify the components that contribute most to energy waste, and show how energy efficiency can be improved at both the hardware and the software level.

    The University of Maine Information Technology Strategic Plan

    This document presents the University of Maine’s Information Technology (IT) Strategic Plan (the Plan). The Plan was the culmination of a comprehensive IT assessment and planning process, which included input from over 100 University stakeholders representing students, faculty, staff, and senior administration.

    Taming Energy Costs of Large Enterprise Systems Through Adaptive Provisioning

    One of the most pressing concerns in modern datacenter management is the rising cost of operation. Reducing variable expenses, such as energy costs, has therefore become a top priority. However, reducing energy costs in large distributed enterprise systems remains an open research topic: these systems are commonly subjected to highly volatile workloads and characterized by complex performance dependencies. This paper explicitly addresses this challenge and presents a novel approach to Taming Energy Costs of Large Enterprise Systems (Tecless). Our adaptive provisioning methodology combines a low-level technical perspective on distributed systems with a high-level treatment of workload processes. More concretely, Tecless fuses an empirical bottleneck detection model with a statistical workload prediction model. Our methodology forecasts the system load online, which enables on-demand infrastructure adaptation while continuously guaranteeing quality of service. Our analysis shows that predicting future workload enables adaptive provisioning with a power-saving potential of up to 25 percent of the total energy cost.
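    The forecast-then-provision loop the abstract describes can be sketched minimally. This is not Tecless's statistical model: exponential smoothing stands in for its workload predictor, and the headroom factor (a crude proxy for its QoS guarantee and bottleneck analysis), the smoothing constant, and all names are assumptions.

```python
import math

# Minimal forecast-then-provision sketch (a stand-in for Tecless's
# statistical workload model; alpha and headroom are assumed values).

def forecast_next(history, alpha=0.5):
    """One-step exponential-smoothing forecast of the workload."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def servers_needed(predicted_load, capacity_per_server, headroom=0.2):
    """Provision enough servers for the forecast plus a QoS headroom."""
    return max(1, math.ceil(predicted_load * (1 + headroom) / capacity_per_server))
```

    Running this online, re-forecasting each interval and powering servers up or down accordingly, is the shape of the adaptation that yields the paper's reported energy savings.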