
    Observing the clouds : a survey and taxonomy of cloud monitoring

    This research was supported by a Royal Society Industry Fellowship and an Amazon Web Services (AWS) grant. Date of acceptance: 10/12/2014. Monitoring is an important aspect of designing and maintaining large-scale systems. Cloud computing presents a unique set of challenges to monitoring, including on-demand infrastructure, unprecedented scalability, rapid elasticity and performance uncertainty. A wide range of monitoring tools originate from cluster and high-performance computing, grid computing and enterprise computing, alongside a series of newer bespoke tools designed exclusively for cloud monitoring. These tools share a number of common elements and designs, which address the demands of cloud monitoring to varying degrees. This paper performs an exhaustive survey of contemporary monitoring tools, from which we derive a taxonomy that examines how effectively existing tools and designs meet the challenges of cloud monitoring. We conclude by examining the socio-technical aspects of monitoring and investigating the engineering challenges and practices behind implementing monitoring strategies for cloud computing.

    Deployment and Operation of Complex Software in Heterogeneous Execution Environments

    This open access book provides an overview of the work developed within the SODALITE project, which aims at facilitating the deployment and operation of distributed software on top of heterogeneous infrastructures, including cloud, HPC and edge resources. The experts participating in the project describe how SODALITE works and how it can be exploited by end users. Although multiple languages and tools are available in the literature to support DevOps teams in automating deployment and operation steps, these activities still require specific know-how and skills that average teams often lack. The SODALITE framework tackles this problem by offering modelling and smart editing features that allow those we call Application Ops Experts to work without knowing low-level details of the adopted, potentially heterogeneous, infrastructures. The framework also offers mechanisms to verify the quality of the defined models, generate the corresponding executable infrastructural code, automatically wrap application components within proper execution containers, orchestrate all activities concerned with the deployment and operation of all system components, and support on-the-fly self-adaptation and refactoring.
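    To illustrate the model-driven style of deployment described above, the sketch below translates a toy declarative component model into container run commands. This is a minimal illustration only: the model schema, component names and the use of docker run are assumptions made here, not the SODALITE modelling language or its code generator.

    # Illustrative sketch only: a toy declarative model of application components
    # and a generator that emits container commands for each one. The model
    # schema and helper names are assumptions for illustration, not SODALITE's
    # actual modelling language or tooling.
    import shlex

    # Hypothetical deployment model: each component names an image, a target
    # host, and a CPU requirement.
    model = {
        "components": [
            {"name": "api", "image": "example/api:1.0", "host": "cloud-vm-1", "cpus": 2},
            {"name": "solver", "image": "example/solver:1.0", "host": "hpc-node-7", "cpus": 16},
        ]
    }

    def generate_run_commands(model):
        """Translate the declarative model into one shell command per component."""
        commands = []
        for comp in model["components"]:
            cmd = (
                f"docker run -d --name {shlex.quote(comp['name'])} "
                f"--cpus {comp['cpus']} {shlex.quote(comp['image'])}"
            )
            commands.append((comp["host"], cmd))
        return commands

    if __name__ == "__main__":
        for host, cmd in generate_run_commands(model):
            print(f"{host}: {cmd}")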

    A manifesto for future generation cloud computing: research directions for the next decade

    The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has attracted significant attention from academia, industry, and government bodies, and has emerged as the backbone of the modern economy by offering subscription-based services anytime, anywhere, following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) the creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high-performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. Recent technological developments and paradigms such as serverless computing, software-defined networking, the Internet of Things, and processing at the network edge are creating new opportunities for Cloud computing. However, they also pose several new challenges and create the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.

    Proceedings of the 5th bwHPC Symposium

    In modern science, the demand for more powerful and integrated research infrastructures is growing constantly to address computational challenges in data analysis, modeling and simulation. The bwHPC initiative, founded by the Ministry of Science, Research and the Arts and the universities in Baden-Württemberg, is a state-wide federated approach aimed at assisting scientists in mastering these challenges. At the 5th bwHPC Symposium in September 2018, scientific users, technical operators and government representatives came together for two days at the University of Freiburg. The symposium provided an opportunity to present scientific results obtained with the help of bwHPC resources. It also served as a platform for discussing and exchanging ideas on the use of these large scientific infrastructures as well as their further development.

    DevOps Continuous Integration: Moving Germany’s Federal Employment Agency Test System into Embedded In-Memory Technology

    This paper describes the development of a continuous integration database test architecture for a large and highly important software application in the German public sector. We apply action design research and draw from two emerging areas of research, DevOps continuous integration practices and in-memory database development, to define the problem, design, build and implement the solution, analyze the challenges encountered, and make adjustments. The result is the transformation of a large test environment originally based on Oracle databases into a flexible and fast embedded in-memory architecture. The main challenges involved overcoming the differences between the SQL dialects supported by the development and production systems and optimizing test runtime performance. The paper contributes to theory and practice by presenting one of the first studies of a real-world implementation of a successful database test architecture that enables continuous integration, and by identifying technical design principles for database test architectures in general.
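    The abstract does not name the embedded in-memory engine that replaced Oracle in the test environment. As a loose, hypothetical analogy for how such an architecture keeps database tests fast and self-contained in a continuous integration pipeline, the sketch below uses Python's built-in sqlite3 module in in-memory mode; the schema and test case are invented for illustration.

    # Minimal sketch of a CI-friendly database test against an embedded
    # in-memory database. sqlite3 stands in for the (unnamed) in-memory engine
    # used in the paper; the schema and assertions are invented for illustration.
    import sqlite3
    import unittest

    def create_schema(conn):
        conn.execute("CREATE TABLE claims (id INTEGER PRIMARY KEY, status TEXT NOT NULL)")

    class ClaimRepositoryTest(unittest.TestCase):
        def setUp(self):
            # A fresh in-memory database per test keeps runs fast and isolated,
            # which is what makes the approach suitable for continuous integration.
            self.conn = sqlite3.connect(":memory:")
            create_schema(self.conn)

        def tearDown(self):
            self.conn.close()

        def test_insert_and_query(self):
            self.conn.execute("INSERT INTO claims (status) VALUES (?)", ("OPEN",))
            row = self.conn.execute("SELECT status FROM claims WHERE id = 1").fetchone()
            self.assertEqual(row[0], "OPEN")

    if __name__ == "__main__":
        unittest.main()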

    Contribution to stimulating the use of cloud computing solutions: design of a cloud service broker to foster the use of trustworthy, interoperable and legally compliant distributed digital ecosystems. Application in multi-cloud environments.

    The aim of the research presented in this thesis is to make it easier for developers and operators of applications deployed across multiple clouds to discover and manage the different cloud services, supporting their reuse and combination in order to build a network of interoperable services that comply with the law and whose service-level agreements can be evaluated continuously. One of the contributions of this thesis is the design and development of a cloud service broker called ACSmI (Advanced Cloud Services meta-Intermediator). ACSmI makes it possible to evaluate compliance with service-level agreements, including legislation. ACSmI also provides an intermediate abstraction layer for cloud services through which developers can easily access a catalogue of accredited services that match the established non-functional requirements. In addition, this research proposes a characterisation of multi-cloud native applications and the concept of "extended DevOps", designed specifically for this type of application. The "extended DevOps" concept aims to solve some of the current problems in the design, development, deployment and adaptation of multi-cloud applications by providing a novel, extended DevOps approach that adapts current DevOps practices to the multi-cloud paradigm.
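    To make the brokering idea concrete, the following sketch filters a hypothetical catalogue of cloud services against non-functional requirements and flags services whose measured availability falls below their advertised service-level agreement. The data model, field names and thresholds are illustrative assumptions and do not reflect ACSmI's actual interfaces.

    # Illustrative sketch of service brokering with continuous SLA checking.
    # The catalogue entries, requirement fields, and metric names are invented
    # for illustration; they are not ACSmI's data model.
    from dataclasses import dataclass

    @dataclass
    class CloudService:
        name: str
        region: str                       # stand-in for legal/data-location constraints
        availability_pct: float           # availability advertised in the SLA
        measured_availability_pct: float  # latest monitored value

    catalogue = [
        CloudService("vm-provider-a", "EU", 99.95, 99.97),
        CloudService("vm-provider-b", "US", 99.99, 99.80),
    ]

    def discover(catalogue, required_region, min_availability):
        """Return services that satisfy the stated non-functional requirements."""
        return [s for s in catalogue
                if s.region == required_region and s.availability_pct >= min_availability]

    def sla_violations(services):
        """Flag services whose measured availability falls below what they advertise."""
        return [s.name for s in services
                if s.measured_availability_pct < s.availability_pct]

    if __name__ == "__main__":
        eligible = discover(catalogue, required_region="EU", min_availability=99.9)
        print("eligible:", [s.name for s in eligible])
        print("SLA violations:", sla_violations(catalogue))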

    Infrastructure as Code Strategies and Benefits in Cloud Computing

    Implementing hybrid and multicloud infrastructure without an automation and versioning strategy can negatively impact organizations’ productivity. Organization leaders must ensure that infrastructures are implemented using an infrastructure as code (IaC) strategy, because such solutions, including automated and DevOps procedures, provide repeatable assets for infrastructure implementation use cases. Grounded in disruptive innovation theory, the purpose of this qualitative pragmatic inquiry study was to explore the strategies solution architects use to implement IaC architecture using repeatable assets with DevOps procedures in cloud computing. The participants were seven solution architects in the information technology (IT) industry in the United States who had successfully implemented IaC in hybrid and multicloud environments with DevOps procedures within the past three years. Data were collected using semi-structured interviews, a focus group, and IT industry documents, and analyzed using thematic analysis. Eight themes emerged: IaC benefits, IaC cloud computing models, IaC cloud service providers, IaC configuration best practices, IaC DevOps practices, IaC implementation tools, IaC Kubernetes platforms, and IT infrastructure design practices. A specific recommendation is for organizational leaders to implement the IaC approach, as it offers both sustaining and disruptive innovation benefits; in addition, space agencies such as the National Aeronautics and Space Administration (NASA), the European Space Agency (ESA), and others could use this study in their mission infrastructures. The implications for positive social change include the potential to make user application offerings more affordable, as IaC supports IT innovation in hybrid and multicloud environments globally.
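    As a minimal sketch of the declarative, repeatable style of infrastructure definition the study examines, the example below compares a desired state with a recorded current state and computes the actions needed to converge them. The resource names and state format are invented for illustration and do not correspond to any specific IaC tool.

    # Toy illustration of the declarative/idempotent idea behind infrastructure
    # as code: compare desired state with current state and compute the changes
    # to apply. Resource names and the state format are invented for illustration.
    desired_state = {
        "vm-web-1": {"size": "small", "image": "ubuntu-22.04"},
        "vm-web-2": {"size": "small", "image": "ubuntu-22.04"},
    }

    current_state = {
        "vm-web-1": {"size": "small", "image": "ubuntu-20.04"},
        "vm-old-9": {"size": "large", "image": "ubuntu-18.04"},
    }

    def plan(desired, current):
        """Return the create/update/delete actions needed to reach the desired state."""
        actions = []
        for name, spec in desired.items():
            if name not in current:
                actions.append(("create", name, spec))
            elif current[name] != spec:
                actions.append(("update", name, spec))
        for name in current:
            if name not in desired:
                actions.append(("delete", name, None))
        return actions

    if __name__ == "__main__":
        # Running the plan against an already-converged state yields no actions,
        # which is what makes the approach repeatable in CI pipelines.
        for action in plan(desired_state, current_state):
            print(action)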