7,896 research outputs found

    End-to-End Automation in Cloud Infrastructure Provisioning

    Get PDF
    Infrastructure provisioning in the cloud can be time-consuming and error-prone due to the manual process of building scripts. Configuration Management Tools (CMTs) such as Ansible, Puppet, or Chef use scripts to orchestrate infrastructure provisioning and configuration in the cloud. Although CMTs provide a high level of automation for infrastructure provisioning, automating the iterative development process in the cloud still remains a challenge. Infrastructure as Code is a process in which the infrastructure is automatically built, managed, and provisioned by scripts. However, several infrastructure provisioning tools and scripting languages need to be used coherently. In previous work, we introduced the ARGON modelling tool, which abstracts the complexity of working with different DevOps tools through a DSL. In this work, we present an end-to-end automation toolchain for infrastructure provisioning in the cloud based on DevOps community tools and ARGON.
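    The scripted, repeatable provisioning that the abstract describes can be made concrete with a short sketch. The snippet below is a minimal illustration in Python (not ARGON's DSL, whose syntax is not given here), assuming the boto3 AWS SDK; the region, AMI id, and instance type are placeholders. A CMT such as Ansible would then take over configuration of the machine this creates.

        import boto3  # AWS SDK for Python; requires configured AWS credentials

        # Placeholder values -- illustrative only, not taken from the paper.
        REGION = "eu-west-1"
        AMI_ID = "ami-0123456789abcdef0"   # replace with a real image id
        INSTANCE_TYPE = "t3.micro"

        def provision_instance():
            """Provision a single VM and wait until it is running."""
            ec2 = boto3.resource("ec2", region_name=REGION)
            (instance,) = ec2.create_instances(
                ImageId=AMI_ID,
                InstanceType=INSTANCE_TYPE,
                MinCount=1,
                MaxCount=1,
                TagSpecifications=[{
                    "ResourceType": "instance",
                    "Tags": [{"Key": "provisioned-by", "Value": "script"}],
                }],
            )
            instance.wait_until_running()   # block until the cloud reports the VM as running
            instance.reload()               # refresh attributes such as the public IP
            return instance.id, instance.public_ip_address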

    The Making of Cloud Applications An Empirical Study on Software Development for the Cloud

    Full text link
    Cloud computing is gaining more and more traction as a deployment and provisioning model for software. While a large body of research already covers how to optimally operate a cloud system, we still lack insights into how professional software engineers actually use clouds, and how the cloud impacts development practices. This paper reports on the first systematic study on how software developers build applications in the cloud. We conducted a mixed-method study, consisting of qualitative interviews with 25 professional developers and a quantitative survey with 294 responses. Our results show that adopting the cloud has a profound impact throughout the software development process, as well as on how developers utilize tools and data in their daily work. Among other things, we found that (1) developers need better means to anticipate runtime problems and rigorously define metrics for improved fault localization, and (2) the cloud offers an abundance of operational data; however, developers still often rely on their experience and intuition rather than utilizing metrics. From our findings, we extracted a set of guidelines for cloud development and identified challenges for researchers and tool vendors.

    Cloud WorkBench - Infrastructure-as-Code Based Cloud Benchmarking

    Full text link
    To optimally deploy their applications, users of Infrastructure-as-a-Service clouds are required to evaluate the costs and performance of different combinations of cloud configurations to find out which combination provides the best service level for their specific application. Unfortunately, benchmarking cloud services is cumbersome and error-prone. In this paper, we propose an architecture and concrete implementation of a cloud benchmarking Web service, which fosters the definition of reusable and representative benchmarks. In contrast to existing work, our system is based on the notion of Infrastructure-as-Code, which is a state-of-the-art concept to define IT infrastructure in a reproducible, well-defined, and testable way. We demonstrate our system based on an illustrative case study, in which we measure and compare the disk IO speeds of different instance and storage types in Amazon EC2.
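    As an illustration of the kind of measurement such a benchmark definition might run, the sketch below is a minimal sequential-write probe in Python. It is not Cloud WorkBench's actual benchmark implementation; the scratch path and data sizes are arbitrary choices, and a real benchmark would also cover random access and read workloads.

        import os
        import time

        def sequential_write_mb_per_s(path, total_mb=256, block_kb=1024):
            """Write total_mb of random data in block_kb chunks and return MB/s."""
            block = os.urandom(block_kb * 1024)
            start = time.perf_counter()
            with open(path, "wb") as f:
                for _ in range((total_mb * 1024) // block_kb):
                    f.write(block)
                f.flush()
                os.fsync(f.fileno())   # force data to the device, not just the page cache
            elapsed = time.perf_counter() - start
            os.remove(path)
            return total_mb / elapsed

        if __name__ == "__main__":
            # Arbitrary scratch location; point this at the volume under test.
            print(f"sequential write: {sequential_write_mb_per_s('/tmp/io_probe.bin'):.1f} MB/s")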

    Automation of the Continuous Integration (CI) - Continuous Delivery/Deployment (CD) Software Development

    Get PDF
    Continuous Integration (CI) is a practice in software development where developers periodically merge code changes into a central shared repository, after which automated builds and tests are executed. CI entails an automation component (the target of this project) and a cultural one, as developers have to learn to integrate code periodically. The main goal of CI is to reduce the time to feedback over the software integration process, allowing bugs to be located and fixed more easily and quickly, thus enhancing software quality while reducing the time to validate and publish new software. In traditional software development, teams of developers worked on the same project in isolation, which often led to problems when integrating the resulting code. Due to this isolation, the project was not deliverable until all of its parts were integrated, which was tedious and error-prone. Continuous Integration emerged as a practice to solve the problems of the traditional methodology, with the aim of improving the quality of the code. This thesis sets out what Continuous Integration is and how it is achieved, the principles that make it as effective as possible, and the processes that follow as a consequence, in order to introduce the context of its objective: the creation of a system that automates the start-up and set-up of an environment in which the continuous integration methodology can be applied.
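    To make the automation component concrete, the sketch below shows a minimal polling CI loop in Python. It is an illustration only, not the system built in the thesis; the repository path, branch name, and use of pytest as the test runner are assumptions, and production servers such as Jenkins or GitLab CI trigger builds on webhooks rather than by polling.

        import subprocess
        import time

        REPO = "/srv/ci/workspace"   # assumed local clone of the shared repository
        BRANCH = "origin/main"       # assumed integration branch

        def git(*args):
            """Run a git command inside the workspace and return its output."""
            return subprocess.run(["git", "-C", REPO, *args],
                                  capture_output=True, text=True, check=True).stdout.strip()

        def run_pipeline():
            """One CI cycle: integrate the latest changes and run the test suite."""
            git("merge", "--ff-only", BRANCH)                    # fast-forward to the fetched branch
            result = subprocess.run(["pytest", "-q"], cwd=REPO)  # automated test stage
            return result.returncode == 0                        # fast pass/fail feedback

        if __name__ == "__main__":
            last_seen = None
            while True:               # poll for new commits on the shared branch
                git("fetch", "origin")
                head = git("rev-parse", BRANCH)
                if head != last_seen:
                    print("build", head[:8], "PASSED" if run_pipeline() else "FAILED")
                    last_seen = head
                time.sleep(60)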

    Algorithms for advance bandwidth reservation in media production networks

    Get PDF
    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks that connect all involved actors over a large geographical area are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we propose an Integer Linear Programming formulation of the bandwidth scheduling problem that takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
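    The abstract does not spell out the formulation, so the following is only a generic sketch of an advance-reservation ILP written with hypothetical symbols: binary variables x_{r,p,t} admit request r on candidate path p starting in time slot t; b_r, \ell_r, and w_r are its bandwidth, duration, and weight; [s_r, d_r] is its allowed window; and C_e is the capacity of link e. The paper's actual model additionally encodes media-production-specific characteristics not reproduced here.

        \begin{aligned}
        \max \quad & \sum_{r \in R} w_r \sum_{p \in P_r} \sum_{t=s_r}^{d_r-\ell_r} x_{r,p,t} \\
        \text{s.t.} \quad & \sum_{p \in P_r} \sum_{t=s_r}^{d_r-\ell_r} x_{r,p,t} \le 1 && \forall r \in R \\
        & \sum_{r \in R} \sum_{\substack{p \in P_r \\ e \in p}} \; \sum_{t \,:\, t \le \tau < t+\ell_r} b_r \, x_{r,p,t} \le C_e && \forall e \in E,\ \forall \tau \in T \\
        & x_{r,p,t} \in \{0,1\}
        \end{aligned}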

    Towards an open cloud marketplace: vision and first steps

    Full text link
    As one of the most promising emerging concepts in Information Technology (IT), cloud computing is transforming how IT is consumed and managed, yielding improved cost efficiencies and delivering flexible, on-demand scalability by reducing computing infrastructures, platforms, and services to commodities acquired and paid for on demand through a set of cloud providers. Today, the transition of cloud computing from a subject of research and innovation to a critical infrastructure is proceeding at an incredibly fast pace. A potentially dangerous consequence of this speedy transition to practice is the premature adoption, and ossification, of the models, technologies, and standards underlying this critical infrastructure. This state of affairs is exacerbated by the fact that innovative research on production-scale platforms is becoming the purview of a small number of public cloud providers. Specifically, the academic research community is effectively excluded from the opportunity to contribute meaningfully to the evolution, not to mention the innovation and healthy mutation, of cloud computing technologies. As the dependence of our society and economy on cloud computing increases, so does the realization that the academic research community cannot be shut out from contributing to the design and evolution of this critical infrastructure. In this article we present an alternative vision: that of an Open Cloud eXchange (OCX), a public cloud marketplace where many stakeholders, rather than just a single cloud provider, participate in implementing and operating the cloud, thus creating an ecosystem that will bring the innovation of a broader community to bear on a much healthier and more efficient cloud marketplace.