
    Integrated Green Cloud Computing Architecture

    Arbitrary use of cloud computing, whether private or public, can lead to uneconomical energy consumption in data processing, storage, and communication. Green cloud computing solutions therefore aim not only to save energy but also to reduce operational costs and the carbon footprint on the environment. In this paper, an Integrated Green Cloud Architecture (IGCA) is proposed that comprises a client-oriented Green Cloud Middleware to assist managers in overseeing and configuring their overall access to cloud services in the greenest, most energy-efficient way. The decision whether to use local machine processing, a private cloud, or a public cloud is handled by the middleware using predefined system specifications such as the service level agreement (SLA), quality of service (QoS), equipment specifications, and the job description provided by the IT department. An analytical model is used to show that efficient energy consumption is feasible when choosing between local processing and private and public cloud service providers (CSPs).
    Comment: 6 pages, International Conference on Advanced Computer Science Applications and Technologies, ACSAT 201
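    A minimal sketch of the kind of placement decision such a middleware could make, picking the lowest-energy option that still meets the SLA deadline. All names (Option, choose_greenest) and the numbers are hypothetical; the paper's actual model is analytical, not this code.

```python
# Hypothetical sketch of an SLA-constrained, energy-minimizing placement
# choice in the spirit of the IGCA middleware; not the paper's algorithm.
from dataclasses import dataclass

@dataclass
class Option:
    name: str            # "local", "private", or "public"
    energy_wh: float     # estimated energy to run the job (Wh)
    runtime_s: float     # estimated completion time (s)

def choose_greenest(options: list[Option], sla_deadline_s: float) -> Option:
    """Pick the lowest-energy option that still meets the SLA deadline."""
    feasible = [o for o in options if o.runtime_s <= sla_deadline_s]
    if not feasible:
        raise ValueError("no option satisfies the SLA deadline")
    return min(feasible, key=lambda o: o.energy_wh)

if __name__ == "__main__":
    options = [
        Option("local", energy_wh=120.0, runtime_s=3600.0),
        Option("private", energy_wh=90.0, runtime_s=2400.0),
        Option("public", energy_wh=70.0, runtime_s=5400.0),
    ]
    # Public cloud is cheapest in energy but misses the deadline,
    # so the private cloud is chosen.
    print(choose_greenest(options, sla_deadline_s=3000.0).name)  # "private"
```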

    HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show that hybrid environments are the natural path to getting the best of on-premise and cloud resources: steady (and sensitive) workloads can run on on-premise resources, while peak demand can leverage remote resources in a pay-as-you-go manner. Nevertheless, there are plenty of questions to be answered in HPC cloud, ranging from how to extract the best performance from an unknown underlying platform to what services are essential to make its usage easier. Moreover, the discussion of the right pricing and contractual models to fit small and large users is relevant for the sustainability of HPC clouds. This paper brings a survey and taxonomy of efforts in HPC cloud and a vision of what we believe is ahead of us, including a set of research challenges that, once tackled, can help advance businesses and scientific discoveries. This becomes particularly relevant due to the fast-increasing wave of new HPC applications coming from big data and artificial intelligence.
    Comment: 29 pages, 5 figures, published in ACM Computing Surveys (CSUR)
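    A toy illustration of the hybrid pattern the survey describes: steady and sensitive demand stays on-premise, and overflow bursts to the public cloud pay-as-you-go. The capacity figure and dispatch rule are assumptions, not taken from the paper.

```python
# Hypothetical hybrid dispatch rule: keep steady/sensitive load local,
# burst overflow to the cloud. Not a scheme proposed by the survey.
ON_PREM_CAPACITY = 100  # concurrent jobs the local cluster can absorb (assumed)

def dispatch(active_on_prem: int, sensitive: bool) -> str:
    """Route a new job: sensitive jobs and steady load stay on-premise;
    everything beyond local capacity goes to the public cloud."""
    if sensitive or active_on_prem < ON_PREM_CAPACITY:
        return "on-premise"
    return "public-cloud"

assert dispatch(active_on_prem=40, sensitive=False) == "on-premise"
assert dispatch(active_on_prem=100, sensitive=False) == "public-cloud"
assert dispatch(active_on_prem=100, sensitive=True) == "on-premise"
```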

    High-Performance Cloud Computing: A View of Scientific Applications

    Scientific computing often requires the availability of a massive number of computers for performing large-scale experiments. Traditionally, these needs have been addressed with high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model for utilizing the computing infrastructure. Compute resources, storage resources, and applications can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis, and released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers the desired QoS to users. Its flexible, service-based infrastructure supports multiple programming paradigms that let Aneka address a variety of scenarios, from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of an fMRI brain imaging workflow.
    Comment: 13 pages, 9 figures, conference paper
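    A hedged sketch of the pay-per-use lifecycle the abstract describes: resources are acquired on demand and released when no longer needed, with cost accruing only for the lease duration. The Provisioner class is illustrative; it is not Aneka's actual API.

```python
# Illustrative pay-per-use resource lifecycle; not Aneka's real interface.
import time

class Provisioner:
    def __init__(self, cost_per_hour: float):
        self.cost_per_hour = cost_per_hour
        self.leases: dict[str, float] = {}  # node id -> acquisition time

    def acquire(self, node_id: str) -> None:
        """Provision a node on demand and start its lease clock."""
        self.leases[node_id] = time.monotonic()

    def release(self, node_id: str) -> float:
        """Release a node when no longer needed; return the lease cost."""
        hours = (time.monotonic() - self.leases.pop(node_id)) / 3600.0
        return hours * self.cost_per_hour

# Usage: acquire for the duration of an experiment, then release and
# pay only for the hours actually held.
p = Provisioner(cost_per_hour=0.50)
p.acquire("worker-1")
# ... run workload ...
cost = p.release("worker-1")
```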

    Software-Defined Cloud Computing: Architectural Elements and Open Challenges

    The variety of existing cloud services creates a challenge for service providers to enforce reasonable Service Level Agreements (SLAs) stating the Quality of Service (QoS) and the penalties incurred if that QoS is not achieved. To avoid such penalties while the infrastructure operates with minimal energy and resource wastage, constant monitoring and adaptation of the infrastructure are needed. We refer to Software-Defined Cloud Computing, or simply Software-Defined Clouds (SDC), as an approach to automating the process of optimal cloud configuration by extending the virtualization concept to all resources in a data center. An SDC enables easy reconfiguration and adaptation of physical resources in a cloud infrastructure, to better accommodate the demand for QoS, through software that can describe and manage the various aspects comprising the cloud environment. In this paper, we present an architecture for SDCs on data centers with emphasis on mobile cloud applications. We present an evaluation showcasing the potential of SDC in two use cases, QoS-aware bandwidth allocation and bandwidth-aware, energy-efficient VM placement, and discuss the research challenges and opportunities in this emerging area.
    Comment: Keynote paper, 3rd International Conference on Advances in Computing, Communications and Informatics (ICACCI 2014), September 24-27, 2014, Delhi, India
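    An illustrative consolidation heuristic in the spirit of the paper's bandwidth-aware, energy-efficient VM placement use case: pack VMs onto as few hosts as possible, respecting both CPU and bandwidth capacity, so that idle hosts can be powered down. The first-fit-decreasing rule and all capacities are assumptions, not the SDC algorithm from the paper.

```python
# Hypothetical first-fit-decreasing placement over (cpu, bandwidth) demands;
# a sketch of bandwidth-aware consolidation, not the paper's method.
def place_vms(vm_demands: list[tuple[int, int]],
              host_cpu: int, host_bw: int) -> list[list[tuple[int, int]]]:
    """Pack (cpu, bandwidth) VM demands onto as few hosts as possible."""
    hosts: list[dict] = []
    # Placing larger VMs first tends to reduce the number of hosts needed.
    for cpu, bw in sorted(vm_demands, reverse=True):
        for h in hosts:
            if h["cpu"] + cpu <= host_cpu and h["bw"] + bw <= host_bw:
                h["cpu"] += cpu
                h["bw"] += bw
                h["vms"].append((cpu, bw))
                break
        else:
            hosts.append({"cpu": cpu, "bw": bw, "vms": [(cpu, bw)]})
    return [h["vms"] for h in hosts]

# Four VMs fit on two hosts instead of four, so two hosts can be powered down.
print(place_vms([(4, 100), (2, 200), (6, 300), (2, 100)],
                host_cpu=8, host_bw=500))
```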

    Virtual distributed environments for systems with time requirements

    Virtualization is a widely propagating technology used to run multiple virtual machines on the same computational unit by means of a piece of firmware, hardware, or software called a hypervisor. Despite having been in use since the 1960s, the current indisputable need for fast, reliable communication may call this technology into question. This project analyzes how much impact virtualization has on transmission times. In the first part, the Xen hypervisor, configured with different virtual environments simulating complex scenarios, is evaluated to determine the size of the impact, with the Ice middleware used as a bridge between the multiple virtual machines. Further down the scale, for embedded systems, the XtratuM hypervisor was designed to support real-time systems. The second part is dedicated to evaluating whether communication preserves the real-time properties of these systems; bare-bones virtualization is implemented in this second part of the project.
    Ingeniería en Tecnologías de Telecomunicación (Telecommunication Technologies Engineering)
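    A minimal sketch of how one might measure the transmission-time overhead the project evaluates: time TCP round trips between two endpoints, run once natively and once between Xen guests, and compare the distributions. The address, port, and echo-server setup are assumptions, not the project's actual testbed.

```python
# Hypothetical round-trip-time probe for comparing native vs. virtualized
# communication paths; assumes a simple echo server at the far end.
import socket
import statistics
import time

def rtt_samples(host: str, port: int, n: int = 1000) -> list[float]:
    """Collect n TCP round-trip-time samples against an echo server."""
    with socket.create_connection((host, port)) as s:
        samples = []
        for _ in range(n):
            t0 = time.perf_counter()
            s.sendall(b"ping")
            s.recv(4)  # assumes the peer echoes the 4 bytes back
            samples.append(time.perf_counter() - t0)
    return samples

if __name__ == "__main__":
    rtts = rtt_samples("192.168.1.10", 9000)  # hypothetical guest address
    print(f"median RTT: {statistics.median(rtts) * 1e6:.1f} us")
```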