6 research outputs found

    Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud

    With the advent of cloud computing, organizations are now able to react rapidly to changing demands for computational resources. Not only can individual applications be hosted on virtual cloud infrastructures, but so can complete business processes. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.
    Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and P. Hoenisch (2015). Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud. Future Generation Computer Systems, Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00

    An Elasticity-aware Governance Platform for Cloud Service Delivery

    In cloud service provisioning scenarios with changing demand from consumers, it is appealing for cloud providers to leverage only the limited amount of virtualized resources required to provide the service. However, it is not easy to determine how many resources are required to satisfy consumers' expectations in terms of Quality of Service (QoS). Some existing frameworks provide mechanisms to adapt the cloud resources required during service delivery (a so-called elastic service), but only for consumers with the same QoS expectations. The problem arises when the service provider must deal with several consumers, each demanding a different QoS for the service. In such a scenario, cloud resource provisioning must handle trade-offs between the different QoS levels, while still fulfilling each of them, within the same service deployment. In this paper we propose an elasticity-aware governance platform for cloud service delivery that reacts to the dynamic service load introduced by consumer demand. This reaction consists of provisioning the amount of cloud resources required to satisfy the different QoS levels offered to consumers by means of several service level agreements. The proposed platform aims to keep the QoS experienced by multiple service consumers under control while maintaining a controlled cost.
    Funding: Junta de Andalucía P12-TIC-1867; Ministerio de Economía y Competitividad TIN2012-32273; Agencia Estatal de Investigación TIN2014-53986-RED
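The multi-consumer provisioning problem the abstract describes can be made concrete with a minimal sketch. This is an illustrative toy, not the paper's actual platform: the `SLA` fields and the sizing rule (provision for the strictest per-replica capacity across all consumers) are assumptions chosen to show the trade-off between heterogeneous QoS targets and cost.

```python
# Hypothetical sketch of SLA-aware elasticity sizing for one service
# serving consumers with different QoS expectations. Assumes each
# replica can carry a bounded request rate before a given consumer's
# response-time target is at risk.
from dataclasses import dataclass
import math

@dataclass
class SLA:
    consumer: str
    demand_rps: float           # current request rate from this consumer
    max_rps_per_replica: float  # load one replica may take within this QoS

def required_replicas(slas: list[SLA]) -> int:
    """Provision enough replicas so every SLA's QoS target holds.

    The strictest SLA (lowest per-replica capacity) dominates: the pool
    is sized as if all traffic had to meet that target, trading some
    cost for guaranteed QoS across all consumers in one deployment.
    """
    if not slas:
        return 0
    total_demand = sum(s.demand_rps for s in slas)
    strictest = min(s.max_rps_per_replica for s in slas)
    return math.ceil(total_demand / strictest)

slas = [
    SLA("gold",   demand_rps=300, max_rps_per_replica=100),  # tight QoS
    SLA("silver", demand_rps=500, max_rps_per_replica=250),
]
print(required_replicas(slas))  # 8: 800 rps sized at 100 rps per replica
```

A real governance platform would re-evaluate this on every load change and could relax the single-pool assumption by routing consumers to dedicated replica sets.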

    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rising importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify cross-community collaboration, and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as PostDocs or Junior Professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    Establishing trust for secure elasticity in edge-cloud microservices

    Platform services are increasingly becoming distributed to improve the availability and latency of Industrial Internet of Things (IIoT) applications. Modern infrastructure services such as Kubernetes have enabled a seamless deployment of these platform services across the distributed edge and cloud subsystems. These infrastructure services support dynamic addition and removal of resources, and thus enable the elasticity of the edge-cloud platform services. However, these infrastructure services currently do not have a high-level view of platform services and make elasticity decisions based on low-level configurations provided by the stakeholder. This thesis aims to support trust establishment in the elasticity operations of these edge-cloud platform services. We present the ZETA framework, which introduces the Zero Trust Architecture (ZTA) secure design paradigm into these elasticity operations. ZETA ensures trusted elasticity of platform services via contextual Gaussian Process Regression (GPR) based trust computation from the "observed" and "service" knowledge. Moreover, it supports elasticity delegation capabilities through a token-based, platform-agnostic interaction model. Finally, ZETA allows the stakeholder to provide custom trust policies and to fine-tune or even extend the trust algorithm. The evaluation of the ZETA framework on multiple real-world scenarios demonstrates its ability to support zero-trust elasticity in a variety of operations. Moreover, the encouraging results from the performance evaluation exhibit low resource utilization and delineate the precise resource requirements of ZETA provisioning.
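To give a feel for what "GPR-based trust computation" can look like, here is a minimal, self-contained sketch of Gaussian Process Regression used to score trust from prior observations. This is a generic 1-D GPR with an RBF kernel and is only an assumption about the technique's shape; it is not ZETA's actual algorithm, context encoding, or kernel.

```python
# Hypothetical sketch: predict a trust score for a new operating
# context (e.g. a normalized load level) from previously observed
# (context, trust) pairs, via GPR posterior-mean interpolation.
import math

def rbf(a, b, length=1.0):
    # Squared-exponential kernel: similar contexts -> similar trust.
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gpr_trust(contexts, trust_obs, query, noise=1e-6):
    """Posterior mean trust at `query`, given (context, trust) observations."""
    n = len(contexts)
    K = [[rbf(contexts[i], contexts[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, trust_obs)                  # alpha = K^{-1} y
    k_star = [rbf(query, c) for c in contexts]
    return sum(k_star[i] * alpha[i] for i in range(n))

# "Observed" knowledge: trust measured at three known load contexts.
contexts = [0.0, 0.5, 1.0]
trust    = [0.9, 0.7, 0.3]
print(round(gpr_trust(contexts, trust, 0.5), 2))  # interpolates to ~0.7
```

An elasticity controller could then gate a scale-out or delegation request on this predicted score crossing a stakeholder-defined trust threshold.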

    Modern Systems for Large-scale Genomics Data Analysis in the Cloud

    Genomics researchers increasingly turn to cloud computing as a means of accomplishing large-scale analyses efficiently and cost-effectively. Successful operation in the cloud requires careful instrumentation and management to avoid common pitfalls, such as resource bottlenecks and low utilisation, which can both drive up costs and extend the timeline of a scientific project. We developed the Butler framework for large-scale scientific workflow management in the cloud to meet these challenges. The cornerstones of Butler's design are: support for multiple clouds, declarative infrastructure configuration management, scalable and fault-tolerant operation, comprehensive resource monitoring, and automated error detection and recovery. Butler relies on industry-strength open-source components in order to deliver a framework that is robust and scalable to thousands of compute cores and millions of workflow executions. Butler's error detection and self-healing capabilities are unique among scientific workflow frameworks and ensure that analyses are carried out with minimal human intervention. Butler has been used to analyse over 725 TB of DNA sequencing data on the cloud, using 1500 CPU cores and 6 TB of RAM, delivering results with 43% increased efficiency compared to other tools. The flexible design of this framework allows easy adoption within other fields of the Life Sciences and ensures that it will scale together with the demand for scientific analysis in the cloud for years to come. Because many bioinformatics tools have been developed in the context of small sample sizes, they often struggle to keep up with the demands for large-scale data processing required for modern research and clinical sequencing projects due to the limitations in their design. The Rheos software system is designed specifically with these large data sets in mind.
Utilising the elastic compute capacity of modern academic and commercial clouds, Rheos takes a service-oriented, containerised approach to the implementation of modern bioinformatics algorithms, which allows the software to achieve the scalability and ease of use required to succeed under the increased operational load of massive data sets generated by projects like the International Cancer Genome Consortium (ICGC) ARGO and the All of Us initiative. Rheos algorithms are based on an innovative stream-based approach to processing genomic data, which enables Rheos to make faster decisions about the presence of genomic mutations that drive diseases such as cancer, thereby improving the tool's efficacy and relevance to clinical sequencing applications. Our testing of the novel germline Single Nucleotide Polymorphism (SNP) and deletion variant calling algorithms developed within Rheos indicates that Rheos achieves ~98% accuracy in SNP calling and ~85% accuracy in deletion calling, which is comparable to other leading tools such as the Genome Analysis Toolkit (GATK), freebayes, and Delly. The two frameworks that we developed provide important contributions towards solving the ever-growing need for large-scale genomic data analysis on the cloud: by enabling more effective use of existing tools, in the case of Butler, and by providing a new, more dynamic and real-time approach to genomic analysis, in the case of Rheos.
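The "stream-based approach" to variant detection can be illustrated with a toy example. This sketch is hypothetical and not Rheos' actual algorithm: it assumes reads covering a single genomic position arrive as a stream of bases, and it emits a SNP decision as soon as depth and allele-frequency thresholds are met, instead of waiting for a complete batch pileup.

```python
# Hypothetical sketch of early-decision, stream-based SNP calling at
# one genomic position. Thresholds (min_depth, min_af) are illustrative.
from collections import Counter

def stream_snp_caller(ref_base, base_stream, min_depth=8, min_af=0.3):
    """Return a (call, allele, depth) decision as early as evidence allows."""
    counts = Counter()
    depth = 0
    for depth, base in enumerate(base_stream, start=1):
        counts[base] += 1
        if depth < min_depth:
            continue  # not enough coverage yet to decide
        alt, alt_n = max(((b, n) for b, n in counts.items() if b != ref_base),
                         key=lambda x: x[1], default=(None, 0))
        if alt and alt_n / depth >= min_af:
            return ("SNP", alt, depth)   # early call: variant is present
    return ("REF", ref_base, depth)      # stream exhausted, no variant

# Bases at one position, arriving one aligned read at a time:
bases = ["A", "G", "A", "G", "G", "A", "G", "G", "G"]
print(stream_snp_caller("A", iter(bases)))  # calls the G variant at depth 8
```

The point of the streaming formulation is that the caller stops consuming input once the decision is statistically safe, which is what makes per-position decisions faster than batch pileup processing.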