
    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, hence load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and databases) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, offering significant performance gains in response time and cost savings under dynamic workload scenarios. Comment: 20 pages, 4 figures, 3 tables, conference paper
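
    The core coordination idea described above, picking a hosting site per request based on current load and network conditions, can be illustrated with a short sketch. This is not the paper's InterCloud architecture or the CloudSim API; all class and field names below are hypothetical.

```python
# Hypothetical sketch of federated load coordination: a broker picks,
# per request, the data center that minimises an estimated response
# time. Names and the queueing model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    capacity: int          # VMs the site can host
    running: int           # VMs currently allocated
    latency_ms: float      # network latency to the requesting region

    def estimated_response_ms(self, service_ms: float) -> float:
        # Simple M/M/1-style inflation: response time grows as the
        # site approaches saturation.
        utilisation = self.running / self.capacity
        if utilisation >= 1.0:
            return float("inf")
        return self.latency_ms + service_ms / (1.0 - utilisation)

def place_request(sites: list[DataCenter], service_ms: float) -> DataCenter:
    """Pick the site with the lowest estimated response time."""
    best = min(sites, key=lambda s: s.estimated_response_ms(service_ms))
    best.running += 1  # provision a VM on the chosen site
    return best

sites = [
    DataCenter("eu-west", capacity=100, running=80, latency_ms=20),
    DataCenter("us-east", capacity=100, running=30, latency_ms=90),
]
# The lightly loaded remote site wins despite its higher latency.
print(place_request(sites, service_ms=50).name)  # us-east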

    An Innovative Workspace for The Cherenkov Telescope Array

    The Cherenkov Telescope Array (CTA) is an initiative to build the next generation of ground-based gamma-ray observatories. We present a prototype workspace developed at INAF that aims at providing innovative solutions for the CTA community. The workspace leverages open-source technologies to provide web access to a set of tools widely used by the CTA community. Two different user interaction models, connected to an authentication and authorization infrastructure, have been implemented in this workspace. The first is a workflow management system accessed via a science gateway (based on the Liferay platform); the second is an interactive virtual desktop environment. The integrated workflow system allows applications used in astronomy and physics research to be run on distributed computing infrastructures (ranging from clusters to grids and clouds). The interactive desktop environment allows many software packages to be used through their native graphical user interfaces without any installation on local desktops. The science gateway and the interactive desktop environment are connected to an authentication and authorization infrastructure composed of a Shibboleth identity provider and a Grouper authorization solution. The attributes released by Grouper are consumed by the science gateway to authorize access to specific web resources, and the role management mechanism in Liferay provides the attribute-role mapping.
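
    As a rough illustration of the attribute-to-role mapping step described above, the sketch below translates group memberships (of the kind a Grouper/Shibboleth setup can release, e.g. via an isMemberOf-style attribute) into gateway roles. The group names, role names, and functions are invented for illustration and are not taken from the INAF workspace.

```python
# Illustrative sketch (not the workspace's actual code) of mapping
# Grouper-released group attributes onto science-gateway roles that
# gate access to specific web resources.
GROUP_TO_ROLE = {
    "cta:workflow-users": "workflow_user",   # hypothetical group names
    "cta:desktop-users": "desktop_user",
    "cta:admins": "gateway_admin",
}

def roles_for(released_attributes: dict[str, list[str]]) -> set[str]:
    """Map released group memberships onto gateway roles."""
    groups = released_attributes.get("isMemberOf", [])
    return {GROUP_TO_ROLE[g] for g in groups if g in GROUP_TO_ROLE}

def can_access(resource_roles: set[str], user_roles: set[str]) -> bool:
    # A resource is accessible if the user holds any required role.
    return bool(resource_roles & user_roles)

user = roles_for({"isMemberOf": ["cta:workflow-users", "cta:guests"]})
print(can_access({"workflow_user"}, user))  # True
```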

    A DevOps approach to integration of software components in an EU research project

    We present a description of the development and deployment infrastructure being created to support the integration effort of HARNESS, an EU FP7 project. HARNESS is a multi-partner research project intended to bring the power of heterogeneous resources to the cloud. It consists of a number of different services and technologies that interact with the OpenStack cloud computing platform at various levels. Many of these components are being developed independently by different teams at different locations across Europe, and keeping the work fully integrated is a challenge. We use a combination of Vagrant-based virtual machines, Docker containers, and Ansible playbooks to provide a consistent and up-to-date environment to each developer. The same playbooks used to configure local virtual machines are also used to manage a static testbed with heterogeneous compute and storage devices, and to automate ephemeral larger-scale deployments to Grid5000. Access to internal projects is managed by GitLab, and automated testing of services within Docker-based environments and of integrated deployments within virtual machines is provided by Buildbot.
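
    The "same playbooks, different targets" pattern described above can be sketched as a thin wrapper that applies one Ansible playbook to whichever inventory is selected. The inventory paths and playbook name here are hypothetical; only the ansible-playbook -i invocation itself is standard Ansible usage.

```python
# Minimal sketch of one-playbook-many-targets provisioning.
# Inventory files and site.yml are assumptions, not HARNESS artifacts.
import subprocess
import sys

INVENTORIES = {
    "local": "inventories/vagrant.ini",      # developer VM
    "testbed": "inventories/testbed.ini",    # static heterogeneous testbed
    "grid5000": "inventories/grid5000.ini",  # ephemeral large-scale run
}

def provision(target: str, playbook: str = "site.yml") -> None:
    inventory = INVENTORIES[target]
    # The same playbook applies the same roles regardless of target,
    # keeping every environment consistent with the others.
    subprocess.run(["ansible-playbook", "-i", inventory, playbook], check=True)

if __name__ == "__main__":
    provision(sys.argv[1] if len(sys.argv) > 1 else "local")
```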

    Performance analysis of multi-institutional data sharing in the Clouds4Coordination system

    Cloud computing is used extensively in Architecture/Engineering/Construction projects for storing data and running simulations on building models (e.g. energy efficiency or environmental impact). With the emergence of multi-Clouds, it has become possible to link such systems and create a distributed cloud environment. A multi-Cloud environment enables each organisation involved in a collaborative project to maintain its own computational infrastructure/system (with the associated data) rather than migrate to a single cloud environment. Such an infrastructure is particularly effective when multiple individuals and organisations work collaboratively, enabling each individual or organisation to select a computational infrastructure that most closely matches its requirements. We describe the "Clouds-for-Coordination" system and provide a use case to demonstrate how such a system can be used in practice. A performance analysis is carried out to demonstrate how effective such a multi-Cloud system can be, reporting an "aggregated-time-to-complete" metric over a number of different scenarios.
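
    The abstract does not define the metric precisely; as a hedged sketch, an "aggregated-time-to-complete" style figure could be computed by summing per-operation elapsed times across the organisations participating in one scenario, as below. All field names and numbers are illustrative, not from the paper.

```python
# Hedged sketch of an aggregated-time-to-complete computation for one
# test scenario in a multi-Cloud data-sharing setup.
from statistics import mean

# (organisation, operation, seconds) samples from one scenario
samples = [
    ("org-a", "upload", 1.8),
    ("org-b", "upload", 2.4),
    ("org-a", "query",  0.6),
    ("org-b", "query",  0.9),
]

def aggregated_time_to_complete(samples) -> float:
    """Total time to complete all operations across the multi-Cloud."""
    return sum(t for _, _, t in samples)

def per_operation_mean(samples, op: str) -> float:
    """Mean elapsed time for one operation type across organisations."""
    return mean(t for _, o, t in samples if o == op)

print(aggregated_time_to_complete(samples))   # 5.7
print(per_operation_mean(samples, "upload"))  # 2.1
```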