
    Performance-oriented Cloud Provisioning: Taxonomy and Survey

    Cloud computing is widely viewed as the technology of today and of the future. Through this paradigm, customers gain access to shared computing resources located in remote data centers hosted by cloud providers (CPs). The technology allows resources such as virtual machines (VMs), physical machines, processors, memory, network, storage and software to be provisioned according to customers' needs. Application providers (APs), who are customers of the CPs, deploy applications on the cloud infrastructure, and these applications are then used by end-users. Meeting fluctuating application workload demands requires dynamic provisioning, and this article provides a detailed literature survey of dynamic provisioning in cloud systems with a focus on application performance. The well-known types of provisioning and their associated problems are explained clearly, with accompanying diagrams, and the provisioning terminology is clarified. A detailed and general classification of cloud provisioning is presented, which views provisioning from multiple perspectives and aids in understanding the process inside-out. Cloud dynamic provisioning is examined in terms of resources, stakeholders, techniques, technologies, algorithms, problems, goals and more.
    Comment: 14 pages, 3 figures, 3 tables
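The dynamic provisioning the survey covers can be illustrated with a minimal threshold-based autoscaler: the VM pool grows when utilization exceeds an upper bound and shrinks when it falls below a lower bound. All names and thresholds here are illustrative assumptions, not taken from the survey.

```python
# Hypothetical threshold-based dynamic provisioner: adjusts the VM pool
# so that average CPU utilization stays within a target band.
def provision(current_vms: int, cpu_utilization: float,
              scale_up_at: float = 0.8, scale_down_at: float = 0.3,
              min_vms: int = 1) -> int:
    """Return the VM count for the next control interval."""
    if cpu_utilization > scale_up_at:
        return current_vms + 1          # demand rising: add a VM
    if cpu_utilization < scale_down_at and current_vms > min_vms:
        return current_vms - 1          # demand falling: release a VM
    return current_vms                  # within band: no change

print(provision(4, 0.9))  # -> 5
```

Real autoscalers add cooldown periods and step sizes on top of this basic control loop, but the core reactive decision is the same.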

    Cloud Servers: Resource Optimization Using Different Energy Saving Techniques

    Researchers are currently contributing to the emerging fields of cloud computing, edge computing, and distributed systems, with a major focus on examining and understanding their performance. Globally leading companies such as Google, Amazon, ONLIVE, Giaki, and eBay are genuinely concerned about the impact of energy consumption. These cloud computing companies operate huge data centers, consisting of virtual machines positioned worldwide, that incur exceptionally high power costs to maintain. The growing energy demands of IT firms have posed many power-expense challenges for cloud computing companies. Energy utilization depends on numerous factors, for example, the service level agreement, the technique for selecting virtual machines, the applied optimization strategies and policies, and the kind of workload. The present paper addresses energy-saving challenges by applying dynamic voltage and frequency scaling (DVFS) techniques to gaming data centers, and evaluates DVFS against non-power-aware and static threshold detection techniques. The findings will help service suppliers meet quality-of-service and quality-of-experience constraints while fulfilling service level agreements. For this purpose, the CloudSim platform is used to simulate a scenario in which game traces serve as the workload. The findings show that well-chosen techniques can help gaming servers reduce energy expenditure while sustaining high quality of service for consumers worldwide. The originality of this research lies in examining which approach performs best (dynamic, static, or non-power-aware).
    The findings confirm that the DVFS method uses less energy, with fewer service level agreement violations and better quality of service and experience, than static threshold consolidation or the non-power-aware technique.
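The intuition behind the DVFS savings reported above can be sketched with a toy energy model. Dynamic CPU power scales roughly with the cube of frequency, so for a fixed amount of work, energy scales roughly with the square of frequency: running a light workload at a lower clock saves energy, provided the longer runtime still meets the deadline. The constant, cycle count, and frequencies below are assumed figures, not the paper's CloudSim configuration.

```python
# Illustrative DVFS energy model (assumption: power ~ k * f^3).
def energy_joules(cycles: float, freq_ghz: float, k: float = 1.0) -> float:
    """Energy = power * time, with time = cycles / frequency."""
    power_w = k * freq_ghz ** 3
    time_s = cycles / (freq_ghz * 1e9)
    return power_w * time_s

work = 2e9                          # assumed cycles per game-session tick
e_high = energy_joules(work, 3.0)   # non-power-aware: always full speed
e_low = energy_joules(work, 1.5)    # DVFS: scale down under light load
assert e_low < e_high               # quadratic savings for fixed work
```

Static threshold schemes approximate this by switching between a few fixed operating points, while a non-power-aware server always pays the full-frequency cost.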

    Dynamic Resource Management in Virtualized Data Centres

    In the last decade, Cloud Computing has become a disruptive force in the computing landscape, changing the way software is designed, deployed and used across the world. Its adoption has been substantial and is expected to continue growing. The growth of this model is supported by the proliferation of large-scale data centres built for the express purpose of hosting cloud workloads. These data centres rely on systems virtualization to host multiple workloads per physical server, increasing their infrastructures' utilization and decreasing their power consumption. However, the owners of cloud workloads expect their applications' demand to be satisfied at all times, and placing too many workloads on one physical server risks violating those service expectations. These and other management goals make managing a cloud-supporting data centre a complex but necessary challenge. In this work, we address several of the management challenges associated with dynamic resource management in virtualized data centres. We investigate the application of First Fit heuristics to the Virtual Machine Relocation problem (that is, the problem of migrating VMs away from stressed or overloaded hosts) and the effect different heuristics have on the data centre's performance metrics. We also investigate how to pursue multiple goals in data centre management, and propose a method that does so by dynamically switching management strategies at runtime according to data centre state. To improve system scalability and decrease network management overhead, we propose architecting the management system as a topology-aware hierarchy of managing elements, which limits the flow of management data across the data centre.
    Finally, we address the challenge of managing multi-VM applications with placement constraints in data centres while still achieving high levels of resource utilization and client satisfaction.