
    Bidimensional Cross-Cloud Application Management with TOSCA and Brooklyn (summary)

    The diversity in the way cloud providers offer their services, define their SLAs, present their QoS, and support different technologies complicates the portability and interoperability of cloud applications and favors vendor lock-in. Standards such as TOSCA, and the tools supporting them, help with the provider-independent description of cloud applications. Given the variety of cross-cloud application management tools already proposed, we go one step further in the unification of cloud services with a deployment tool in which IaaS and PaaS services are integrated behind a unified interface. We support applications whose components are deployed on different providers, using IaaS and PaaS services interchangeably. The TOSCA standard is used to define a portable model describing the topology of cloud applications and the required resources in an agnostic, provider- and resource-independent way. We include some highlights of our implementation on Apache Brooklyn and present a non-trivial example that illustrates our approach. Summary of the article published as: Jose Carrasco, Javier Cubo, Francisco Durán, Ernesto Pimentel. Bidimensional Cross-Cloud Application Management with TOSCA and Brooklyn, 9th IEEE International Conference on Cloud Computing (CLOUD 2016), San Francisco, USA. IEEE Computer Society, 2016. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
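    The abstract above describes provider-agnostic topologies whose components may target either IaaS or PaaS services. As a rough illustration of that idea only (the authors' implementation uses TOSCA YAML templates and Apache Brooklyn, not this code), the Python sketch below models a two-component topology in which each node carries a service-model and provider hint and a stubbed deployer dispatches accordingly; all class names, provider values, and properties are hypothetical.

        # Illustrative sketch only: a provider-agnostic topology in the spirit of a
        # TOSCA service template, where each node targets either an IaaS or a PaaS
        # offering. NodeTemplate, ServiceModel, and deploy() are hypothetical names,
        # not the paper's or Apache Brooklyn's API.
        from dataclasses import dataclass, field
        from enum import Enum


        class ServiceModel(Enum):
            IAAS = "iaas"   # component runs on a VM we provision and manage
            PAAS = "paas"   # component runs on a managed platform service


        @dataclass
        class NodeTemplate:
            name: str
            service_model: ServiceModel
            provider: str                                     # e.g. "aws", "heroku" (assumed values)
            properties: dict = field(default_factory=dict)
            requirements: list = field(default_factory=list)  # names of nodes this one depends on


        def deploy(topology: list[NodeTemplate]) -> None:
            """Walk the topology and hand each node to the matching back-end (stubbed)."""
            for node in topology:
                if node.service_model is ServiceModel.IAAS:
                    print(f"provision VM on {node.provider} for {node.name} {node.properties}")
                else:
                    print(f"push {node.name} to {node.provider} platform service {node.properties}")


        if __name__ == "__main__":
            app = [
                NodeTemplate("database", ServiceModel.IAAS, "aws",
                             {"image": "mysql", "flavor": "t2.small"}),
                NodeTemplate("webapp", ServiceModel.PAAS, "heroku",
                             {"runtime": "java"}, requirements=["database"]),
            ]
            deploy(app)

    The point of the sketch is the bidimensional choice: the same topology mixes components bound to IaaS resources and components bound to PaaS services, behind one deployment interface.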

    Towards a Model-Based Serverless Platform for the Cloud-Edge-IoT Continuum

    One of the most prominent implementations of the serverless programming model is Function-as-a-Service (FaaS). Using FaaS, application developers provide the source code of serverless functions, typically describing only parts of a larger application, and define triggers for executing these functions on infrastructure components managed by the FaaS provider. Several challenges still hinder wider adoption of the FaaS model across the whole Cloud-Edge-IoT continuum, including the high heterogeneity of Edge and IoT infrastructure, vendor lock-in, and the need to deploy and adapt serverless functions, together with their supporting services and software stacks, in their cyber-physical execution environment. As a first step towards addressing these challenges, we introduce the Serverless4IoT platform for the design, deployment, and maintenance of applications over the Cloud-Edge-IoT continuum. In particular, our platform enables the specification and deployment of serverless functions on Cloud and Edge resources, as well as the deployment of their supporting services and software stacks over the whole Cloud-Edge-IoT continuum.
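    To make the FaaS workflow described above concrete, the sketch below declares a serverless function with its trigger and a cloud-or-edge placement hint, then routes it through a stubbed deployer. This is not the paper's platform API; FunctionSpec, its fields, and the example trigger are invented for illustration.

        # Illustrative sketch only: declaring a serverless function, its trigger, and a
        # placement hint for the Cloud-Edge-IoT continuum. All names here are assumed,
        # not the Serverless4IoT API.
        from dataclasses import dataclass


        @dataclass
        class FunctionSpec:
            name: str
            handler: str     # module:function entry point of the user code
            runtime: str     # e.g. "python3.11"
            trigger: str     # e.g. an MQTT topic or HTTP route that fires the function
            placement: str   # "cloud" or "edge" placement hint


        def deploy(spec: FunctionSpec) -> None:
            """Stubbed deployment: route the function to a cloud FaaS back-end or an edge node."""
            target = "managed FaaS provider" if spec.placement == "cloud" else "edge gateway"
            print(f"deploying {spec.name} ({spec.runtime}) to {target}, trigger={spec.trigger}")


        if __name__ == "__main__":
            deploy(FunctionSpec(
                name="filter-telemetry",
                handler="telemetry:handle",
                runtime="python3.11",
                trigger="mqtt://sensors/+/temperature",
                placement="edge",
            ))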

    The Virtual Machine (VM) Scaler: An Infrastructure Manager Supporting Environmental Modeling on IaaS Clouds

    Infrastructure-as-a-Service (IaaS) clouds provide a new medium for the deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic, scalable infrastructure to better support the computational demands of scientific modeling. Providing scientific modeling as-a-service requires dynamic scaling of server infrastructure to adapt to changing user workloads. This paper presents the Virtual Machine (VM) Scaler, an autonomic resource manager for IaaS clouds. We have developed VM-Scaler, a REST/JSON-based web services application that supports infrastructure provisioning and management for scientific modeling in the Cloud Services Innovation Platform (CSIP) [Lloyd et al. 2012]. VM-Scaler harnesses the Amazon Elastic Compute Cloud (EC2) application programming interface to support model-service scalability, cloud management, and infrastructure configuration for modeling workloads. VM-Scaler provides cloud control while abstracting the underlying IaaS cloud from the end user. VM-Scaler is extensible to any EC2-compatible cloud and currently supports the Amazon public cloud and Eucalyptus private clouds, versions 3.1 and 3.3. VM-Scaler provides a platform to improve scientific model deployment by supporting experimentation with hot spot detection schemes, VM management and placement approaches, and model job scheduling/proxy services.
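    VM-Scaler itself is a REST/JSON web service layered over the EC2 API; the sketch below only illustrates the kind of EC2-compatible provisioning call such an autonomic manager issues when it detects a hot spot. It uses boto3 directly, the hot-spot check is a stub, and the AMI id and instance type are placeholder assumptions, so treat it as a sketch of the mechanism rather than VM-Scaler's interface.

        # Illustrative sketch only: scale-out via the EC2 API, the mechanism an
        # autonomic manager such as VM-Scaler builds on. Not VM-Scaler's REST API;
        # AMI id, instance type, and the hot-spot check are placeholders.
        import boto3


        def hot_spot_detected() -> bool:
            # Placeholder: a real manager would inspect host/VM load metrics here.
            return True


        def scale_out(ami_id: str, instance_type: str, count: int = 1) -> list[str]:
            """Launch additional worker VMs through the EC2 API. EC2-compatible private
            clouds (e.g. Eucalyptus) can be targeted by pointing the client at their endpoint."""
            ec2 = boto3.client("ec2")  # pass endpoint_url=... for a private cloud
            resp = ec2.run_instances(
                ImageId=ami_id,
                InstanceType=instance_type,
                MinCount=count,
                MaxCount=count,
            )
            return [i["InstanceId"] for i in resp["Instances"]]


        if __name__ == "__main__":
            if hot_spot_detected():
                print(scale_out("ami-0123456789abcdef0", "m5.large"))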

    Cloudbus Toolkit for Market-Oriented Cloud Computing

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; and (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform-as-a-Service software system containing an SDK (Software Development Kit) for the construction of Cloud applications and their deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for the dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of third-party Cloud brokering services for building content delivery networks and e-Science applications, and their deployment on capabilities of IaaS providers such as Amazon, along with Grid mashups; (iv) CloudSim, supporting modelling and simulation of Clouds for performance studies; (v) energy-efficient resource allocation mechanisms and techniques for the creation and management of Green Clouds; and (vi) pathways for future research. Comment: 21 pages, 6 figures, 2 tables, conference paper.
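    The SLA-oriented, market-based allocation idea at the core of this vision can be shown in miniature: a broker picks the cheapest resource offer that still meets a job's deadline. The sketch below is not Aneka or CloudSim code; the offer data and cost model are invented purely to illustrate the allocation criterion.

        # Illustrative sketch only: a broker selecting the cheapest offer that meets
        # the job's deadline SLA. Offers and the cost model are invented; this is
        # neither Aneka nor CloudSim code.
        from dataclasses import dataclass


        @dataclass
        class Offer:
            provider: str
            price_per_hour: float
            est_runtime_hours: float   # broker's estimate of job runtime on this offer


        def select_offer(offers: list[Offer], deadline_hours: float) -> Offer | None:
            """Return the cheapest feasible offer, or None if no offer meets the deadline."""
            feasible = [o for o in offers if o.est_runtime_hours <= deadline_hours]
            if not feasible:
                return None  # SLA cannot be met; renegotiate or reject the job
            return min(feasible, key=lambda o: o.price_per_hour * o.est_runtime_hours)


        if __name__ == "__main__":
            offers = [
                Offer("provider-a", price_per_hour=0.40, est_runtime_hours=3.0),
                Offer("provider-b", price_per_hour=0.10, est_runtime_hours=7.0),
            ]
            print(select_offer(offers, deadline_hours=5.0))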

    Cloud Configuration Modelling: a Literature Review from an Application Integration Deployment Perspective

    Enterprise Application Integration has played an important role in providing methodologies, techniques, and tools to develop integration solutions, aiming to reuse current applications and to support the new demands that arise from the evolution of business processes in companies. Cloud computing is part of a new reality in which companies have at their disposal a high-capacity IT infrastructure at a low cost, on which integration solutions can be deployed and run. The charging model adopted by cloud-computing providers is based on the amount of computing resources consumed by clients. Such resource demand can be computed either from the implemented integration solution or from the conceptual model that describes it. It would be desirable for cloud-computing providers to supply detailed conceptual models describing the variability of their services and the restrictions between them; however, providers do not supply such conceptual models. The conceptual model of services is the basis for developing a process, and for providing supporting tools, for decision-making on the deployment of integration solutions to the cloud. In this paper, we review the literature on cloud configuration modelling and compare current proposals using a comparison framework that we have developed.
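    The decision-making problem sketched in this abstract ultimately reduces to costing a conceptual model of the integration solution against a provider's price list. The toy sketch below shows that arithmetic under heavy assumptions: resource names, quantities, and prices are invented, and real provider configuration models would also capture variability and restrictions between services, which this ignores.

        # Illustrative sketch only: estimating the monthly running cost of an
        # integration solution from a conceptual model of the resources it needs and
        # an assumed provider price list. All names and prices are invented.

        # Conceptual model: resource type -> quantity required by the solution.
        required = {"vm_small": 2, "message_queue": 1, "storage_gb": 50}

        # Provider price list: resource type -> assumed price per unit per month (USD).
        prices = {"vm_small": 15.0, "message_queue": 5.0, "storage_gb": 0.02}


        def monthly_cost(required: dict[str, float], prices: dict[str, float]) -> float:
            """Sum unit price times quantity for every resource in the conceptual model."""
            return sum(prices[r] * qty for r, qty in required.items())


        if __name__ == "__main__":
            print(f"estimated monthly cost: ${monthly_cost(required, prices):.2f}")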

    Towards Supporting the Extended DevOps Approach through Multi-cloud Architectural Patterns for Design and Pre-deployment - A Tool Supported Approach

    Recently, the world of Cloud Computing has been witnessing two major trends: multi-cloud applications, pushed by the increasing diversity of Cloud services leading to hybrid infrastructures, and the DevOps paradigm, promising increased trust, faster software releases, and the ability to solve critical issues quickly (Steinborn, 2018). This paper presents a solution for merging and adapting both trends so that the benefits for software developers and operators are multiplied. The authors describe a tool-supported approach to extend the DevOps philosophy with the objective of supporting the design and pre-deployment of multi-cloud software applications. The paper begins with the presentation of the theoretical concepts, then proceeds with the description of the developed tools and the discussion of the validation performed with a sandbox application. The project leading to this paper has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 731533.