8 research outputs found

    Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud

    With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only individual applications can be hosted on virtual cloud infrastructures, but also complete business processes. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.
    Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and P. Hoenisch (2015). Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud. Future Generation Computer Systems, Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00
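
    The architecture sketched in this abstract revolves around a few recurring concerns: scheduling process steps, allocating cloud resources, and monitoring execution. As a purely illustrative sketch (not the architecture from the paper), the following Python interfaces show how such components might be separated; all names (ProcessStep, Scheduler, ResourceAllocator, Monitor) are assumptions introduced here.

        # Illustrative sketch only: hypothetical component interfaces for an
        # elastic BPMS, loosely following the concerns named in the abstract
        # (scheduling, resource allocation, monitoring). Not the authors' design.
        from abc import ABC, abstractmethod
        from dataclasses import dataclass


        @dataclass
        class ProcessStep:
            process_id: str
            service: str
            deadline_s: float  # assumed SLA attribute


        class Scheduler(ABC):
            @abstractmethod
            def schedule(self, steps: list[ProcessStep]) -> list[ProcessStep]:
                """Order queued steps for execution."""


        class ResourceAllocator(ABC):
            @abstractmethod
            def scale_to(self, required_vms: int) -> None:
                """Acquire or release cloud resources to match the estimate."""


        class Monitor(ABC):
            @abstractmethod
            def utilization(self) -> float:
                """Return current utilization of the leased resources (0..1)."""


        class EarliestDeadlineScheduler(Scheduler):
            def schedule(self, steps):
                # One possible policy: run the most urgent step first.
                return sorted(steps, key=lambda s: s.deadline_s)


        steps = [ProcessStep("p1", "render", 60.0), ProcessStep("p2", "analyze", 30.0)]
        print([s.process_id for s in EarliestDeadlineScheduler().schedule(steps)])

    A concrete elastic BPMS would plug cloud-specific allocators and monitors into such interfaces; the earliest-deadline ordering shown is only one possible scheduling policy.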

    Adaptive Resource Allocation for Workflow Containerization on Kubernetes

    In the cloud-native era, Kubernetes-based workflow engines enable containerized workflow execution through the inherent abilities of Kubernetes. However, when encountering continuous workflow requests and unexpected spikes in resource requests, such an engine bases resource allocation only on the current workflow load information. This lacks agility and predictability and results in over- and under-provisioning of resources, which seriously hinders workflow execution efficiency and leads to high resource waste. To overcome these drawbacks, we propose an adaptive resource allocation scheme named ARAS for Kubernetes-based workflow engines. Considering potential future workflow task requests within the current task pod's lifecycle, ARAS uses a resource scaling strategy to allocate resources in response to high-concurrency workflow scenarios. ARAS offers resource discovery, resource evaluation, and allocation functionalities and serves as a key component of our tailored workflow engine (KubeAdaptor). By integrating ARAS into KubeAdaptor for containerized workflow execution, we demonstrate the practical abilities of KubeAdaptor and the advantages of our ARAS. Compared with the baseline algorithm, experimental evaluation under three distinct workflow arrival patterns shows that ARAS achieves time savings of 9.8% to 40.92% in the average total duration of all workflows, time savings of 26.4% to 79.86% in the average duration of individual workflows, and an increase of 1% to 16% in CPU and memory resource usage rates.
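
    The core idea described above, sizing resources not only for the current task but also for requests expected within the current task pod's lifecycle, can be illustrated with a small, hypothetical sketch. This is not the ARAS algorithm from the paper; the function, its parameters (headroom, node_capacity) and the forecast input are assumptions made here purely for illustration.

        # Illustrative sketch only (not the ARAS algorithm from the paper):
        # size a task pod's CPU request from the current demand plus an
        # estimate of further task arrivals expected within the pod's lifetime.
        from dataclasses import dataclass


        @dataclass
        class TaskRequest:
            cpu_millicores: int
            expected_arrival_s: float  # seconds from now (assumed forecast input)


        def adaptive_cpu_request(current_cpu: int,
                                 forecast: list[TaskRequest],
                                 pod_lifetime_s: float,
                                 node_capacity: int = 4000,
                                 headroom: float = 1.2) -> int:
            """Return a CPU request (millicores) covering the current load plus
            forecast tasks arriving before the pod is expected to terminate."""
            future = sum(t.cpu_millicores for t in forecast
                         if t.expected_arrival_s <= pod_lifetime_s)
            requested = int((current_cpu + future) * headroom)
            # Never request more than a single node can provide.
            return min(requested, node_capacity)


        # Example: 500m now, two 300m tasks forecast, only one within the lifetime.
        print(adaptive_cpu_request(500, [TaskRequest(300, 60.0),
                                         TaskRequest(300, 400.0)], 300.0))

    In a real deployment the returned value would feed into the pod's resource request, and the forecast would come from the engine's own workflow queue rather than a hard-coded list.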

    Process Management for the Mobile Internet of Things (Mobiilse värkvõrgu protsessihaldus)

    The Internet of Things (IoT) promotes solutions such as the smart city, where everyday objects are connected with information systems and with each other. One example is a road condition monitoring system, in which connected vehicles, such as buses, capture video that is then processed to detect potholes and snow build-up. Building such a solution typically involves establishing a complex centralised system, which relies on constant connectivity to all involved devices to make decisions, such as which vehicles to involve in the process. As the number of IoT devices keeps growing, this centralised approach may become a bottleneck. Designing, automating, managing, and monitoring such processes can be greatly supported by the standards and software systems provided by the field of Business Process Management (BPM). However, BPM techniques are not directly applicable to new computing paradigms, such as Fog Computing and Edge Computing, on which the future of IoT relies. Here, much of the decision-making and processing is moved from central data centers to devices at the network edge, near the end-users and IoT sensors. For example, video could be processed in mini-datacenters deployed throughout the city, e.g., at bus stops. This load distribution reduces the risk that the ever-growing number of IoT devices overloads the data center. This thesis studies how to reorganise process execution in this decentralised fashion, where processes must dynamically adapt to a volatile edge environment filled with moving devices. Namely, connectivity is intermittent, so decision-making and planning need to take factors such as the movement trajectories of mobile devices into account. We examined this issue in simulations and with a prototype for Android smartphones. We also showcase the STEP-ONE toolset, which allows researchers to conveniently simulate and analyse these issues in different realistic scenarios, such as a smart city. https://www.ester.ee/record=b552551
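
    One of the planning factors mentioned above, the movement trajectories of mobile devices, can be illustrated with a toy example. The sketch below is neither the thesis's planner nor part of STEP-ONE; the node names, coverage radii and the linear-motion forecast are assumptions chosen purely for illustration.

        # Toy sketch only (not the thesis's planner): pick the edge node that a
        # moving device is predicted to stay connected to the longest, given a
        # simple linear movement forecast. All names and numbers are assumed.
        import math
        from dataclasses import dataclass


        @dataclass
        class EdgeNode:
            name: str
            x: float
            y: float
            radius_m: float  # assumed wireless coverage radius


        def predicted_position(x, y, vx, vy, t):
            """Linear extrapolation of the device position after t seconds."""
            return x + vx * t, y + vy * t


        def best_node(nodes, x, y, vx, vy, horizon_s=120, step_s=5):
            """Score each node by how many forecast steps keep the device in range."""
            def coverage(node):
                in_range = 0
                for t in range(0, horizon_s, step_s):
                    px, py = predicted_position(x, y, vx, vy, t)
                    if math.hypot(px - node.x, py - node.y) <= node.radius_m:
                        in_range += 1
                return in_range
            return max(nodes, key=coverage)


        stops = [EdgeNode("bus-stop-A", 0, 0, 150), EdgeNode("bus-stop-B", 500, 0, 150)]
        # Device at (400, 0) moving towards A at 5 m/s along -x.
        print(best_node(stops, 400, 0, -5, 0).name)

    The score simply counts forecast steps during which the device stays within a node's coverage; a richer planner would also weigh task deadlines, node load and the cost of moving process state between edge nodes.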

    Self-managed Workflows for Cyber-physical Systems

    Workflows are a well-established concept for describing business logic and processes in web-based applications and enterprise application integration scenarios on an abstract, implementation-agnostic level. Applying Business Process Management (BPM) technologies to increase autonomy and automate sequences of activities in Cyber-physical Systems (CPS) promises various advantages, including higher flexibility and simplified programming, more efficient resource usage, and easier integration and orchestration of CPS devices. However, traditional BPM notations and engines have not been designed to be used in the context of CPS, which raises new research questions arising from the close coupling of the virtual and physical worlds. Among these challenges are the interaction with complex compounds of heterogeneous sensors, actuators, things and humans; the detection and handling of errors in the physical world; and the synchronization of the cyber-physical process execution models. Novel factors related to the interaction with the physical world, including real-world obstacles, inconsistencies and inaccuracies, may jeopardize the successful execution of workflows in CPS and may lead to unanticipated situations. This thesis investigates properties and requirements of CPS relevant for the introduction of BPM technologies into cyber-physical domains. We discuss existing BPM systems and related work regarding the integration of sensors and actuators into workflows, the development of a Workflow Management System (WfMS) for CPS, and the synchronization of the virtual and physical process execution as part of self-* capabilities for WfMSes. Based on the identified research gap, we present concepts and prototypes regarding the development of a CPS WfMS with respect to all phases of the BPM lifecycle. First, we introduce a CPS workflow notation that supports the modelling of the interaction of complex sensors, actuators, humans, dynamic services and WfMSes on the business process level. In addition, the effects of the workflow execution can be specified in the form of goals defining success and error criteria for the execution of individual process steps. Along with that, we introduce the notion of Cyber-physical Consistency. Next, we present a system architecture for a corresponding WfMS (PROtEUS) to execute the modelled processes, also in distributed execution settings and with a focus on interactive process management. Subsequently, the integration of a cyber-physical feedback loop to increase the resilience of the process execution at runtime is discussed. Within this MAPE-K loop, sensor and context data are related to the effects of the process execution, deviations from expected behaviour are detected, and compensations are planned and executed. The execution of this feedback loop can be scaled depending on the required level of precision and consistency. Our implementation of the MAPE-K loop proves to be a general framework for adding self-* capabilities to WfMSes. The evaluation of our concepts within a smart home case study shows expected behaviour, reasonable execution times, reduced error rates and high coverage of the identified requirements, which makes our CPS WfMS a suitable system for introducing workflows on top of systems, devices, things and applications of CPS.
    Table of contents:
    1. Introduction: Motivation; Research Issues; Scope & Contributions; Structure of the Thesis
    2. Workflows and Cyber-physical Systems: Introduction; Two Motivating Examples; Business Process Management and Workflow Technologies; Cyber-physical Systems; Workflows in CPS; Requirements
    3. Related Work: Introduction; Existing BPM Systems in Industry and Academia; Modelling of CPS Workflows; CPS Workflow Systems; Cyber-physical Synchronization; Self-* for BPM Systems; Retrofitting Frameworks for WfMSes; Conclusion & Deficits
    4. Modelling of Cyber-physical Workflows with Consistency Style Sheets: Introduction; Workflow Metamodel; Knowledge Base; Dynamic Services; CPS-related Workflow Effects; Cyber-physical Consistency; Consistency Style Sheets; Tools for Modelling of CPS Workflows; Compatibility with Existing Business Process Notations
    5. Architecture of a WfMS for Distributed CPS Workflows: Introduction; PROtEUS Process Execution System; Internet of Things Middleware; Dynamic Service Selection via Semantic Access Layer; Process Distribution; Ubiquitous Human Interaction; Towards a CPS WfMS Reference Architecture for Other Domains
    6. Scalable Execution of Self-managed CPS Workflows: Introduction; MAPE-K Control Loops for Autonomous Workflows; Feedback Loop for Cyber-physical Consistency; Feedback Loop for Distributed Workflows; Consistency Levels, Scalability and Scalable Consistency; Self-managed Workflows; Adaptations and Meta-adaptations; Multiple Feedback Loops and Process Instances; Transactions and ACID for CPS Workflows; Runtime View on Cyber-physical Synchronization for Workflows; Applicability of Workflow Feedback Loops to other CPS Domains; A Retrofitting Framework for Self-managed CPS WfMSes
    7. Evaluation: Introduction; Hardware and Software; PROtEUS Base System; PROtEUS with Feedback Service; Feedback Service with Legacy WfMSes; Qualitative Discussion of Requirements and Additional CPS Aspects; Comparison with Related Work; Conclusion
    8. Summary and Future Work: Summary and Conclusion; Advances of this Thesis; Contributions to the Research Area; Relevance; Open Questions; Future Work
    Bibliography; Acronyms; List of Figures; List of Tables; List of Listings; Appendices
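
    The MAPE-K feedback loop described in the abstract can be outlined in a few lines of code. The sketch below is only a generic illustration of that loop shape (monitor, analyze, plan, execute over a shared knowledge base), not PROtEUS internals; the goal model, the tolerance check and the retry compensation are assumptions introduced here.

        # Minimal sketch of a MAPE-K style feedback loop for workflow steps,
        # in the spirit of the abstract above; all names are illustrative and
        # the concrete checks are assumptions, not PROtEUS internals.
        from dataclasses import dataclass, field


        @dataclass
        class Knowledge:
            goals: dict             # step name -> expected sensor reading
            tolerance: float = 0.1
            history: list = field(default_factory=list)


        def monitor(sensors, step):
            """Collect the sensor value relevant to the current step."""
            return sensors.get(step)


        def analyze(value, step, k: Knowledge):
            """Detect a deviation between the observed effect and the goal."""
            expected = k.goals[step]
            return value is None or abs(value - expected) > k.tolerance


        def plan(step):
            """Choose a compensation; here we simply retry the step once."""
            return ("retry", step)


        def execute(action, executor):
            executor(action)


        def mape_k(step, sensors, k: Knowledge, executor):
            value = monitor(sensors, step)
            if analyze(value, step, k):
                execute(plan(step), executor)
            k.history.append((step, value))


        # Example: the 'open_window' step should yield an airflow reading near 1.0.
        k = Knowledge(goals={"open_window": 1.0})
        mape_k("open_window", {"open_window": 0.2}, k, executor=print)

    In the terms of the abstract, the goals correspond to the expected physical effects of a process step, and a detected deviation triggers a planned compensation to restore cyber-physical consistency.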

    ViePEP - Vienna Platform for Elastic Processes

    Within this thesis, we propose a novel Business Process Management System (BPMS) for the Cloud. It is able to process several hundred workflows simultaneously while monitoring their executions in order to counteract potential Service Level Agreement (SLA) violations by scheduling the queued workflows and acquiring additional computing resources when needed or releasing unneeded ones. Elastic Processes are a novel paradigm from the field of Cloud Computing that combines the various facets of elasticity capturing process dynamics in the Cloud. Elastic Processes are described along three dimensions: cost elasticity, resource elasticity and quality elasticity. Nowadays, these elasticities are also relevant for workflows in the Cloud. A workflow consists of several individual processes, each of which requires a different amount of resources, follows different Quality-of-Service attributes and produces different costs. Since resource-intensive tasks are becoming more and more common in today's workflows, and the number of workflows executed in parallel varies over time, the amount of needed resources also varies enormously. That is why it can be necessary to acquire additional computing resources during the system's runtime, or to release resources whenever they are no longer needed. Therefore, we developed ViePEP - the Vienna Platform for Elastic Processes. On the one hand, ViePEP is a Business Process Management System for the Cloud, capable of managing and processing the execution of several hundred workflows simultaneously; it further monitors their executions in order to identify potential SLA violations in time. On the other hand, ViePEP is able to counteract a lack of needed or an excess of used computing resources. Using a prediction model, ViePEP is not just able to counteract in time but can also predict the resource demand for the near future in order to acquire additional resources punctually or release unneeded ones. By evaluating ViePEP in differently configured experiments, we have shown that ViePEP is able to automatically process workflows while taking care of Service Level Agreements by scheduling their executions and acquiring or releasing computing resources.
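
    The scheduling and prediction behaviour described above can be illustrated with a simple, hypothetical sketch: estimate how many VMs the queued workflow steps will need within the next planning window and derive a scale-up or scale-down action. This is not ViePEP's reasoner; the window length, VM capacity and the load model are assumptions made here for illustration.

        # Illustrative sketch (not ViePEP's reasoner): estimate the number of
        # VMs needed for the queued workflow steps in the next planning window
        # and decide how many to acquire or release. All figures are assumed.
        import math
        from dataclasses import dataclass


        @dataclass
        class QueuedStep:
            cpu_load: float      # fraction of one VM this step occupies
            deadline_s: float    # SLA deadline in seconds from now


        def required_vms(queue, window_s=300, vm_capacity=1.0):
            """VMs needed so every step due inside the window can run in time."""
            due = [s.cpu_load for s in queue if s.deadline_s <= window_s]
            return math.ceil(sum(due) / vm_capacity) if due else 0


        def scaling_action(current_vms, queue):
            needed = required_vms(queue)
            if needed > current_vms:
                return ("acquire", needed - current_vms)
            if needed < current_vms:
                return ("release", current_vms - needed)
            return ("keep", 0)


        queue = [QueuedStep(0.6, 100), QueuedStep(0.8, 200), QueuedStep(0.5, 900)]
        print(scaling_action(current_vms=1, queue=queue))  # -> ('acquire', 1)

    A real reasoner would combine such an estimate with the prediction model and the SLA monitoring mentioned in the abstract rather than looking at the queue alone.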

    Realizing Elastic Processes with ViePEP

    Abstract. Online business processes are faced with varying workloads that require agile deployment of computing resources. Elastic processes leverage the on-demand provisioning ability of Cloud Computing to allocate and de-allocate resources as required to deal with shifting demand. To realize elastic processes, it is necessary to track the current and future system landscape, monitor the process execution, reason about how to utilize resources in an optimal way, and carry out the necessary actions (e.g., start/stop servers, move services). Traditional Business Process Management Systems (BPMS) do not consider such needs of elastic processes. Within this demo, we present ViePEP, a research BPMS able to execute and monitor resource-, cost- and QoS-elastic, service-based workflows and to optimize the overall system landscape based on reasoning over the non-functional requirements of current and forthcoming elastic processes.
    1 Significance to the Field
    Resource-intensive tasks are nowadays not only common within scientific workflows, but are also getting more and more common in business processes. For example, compute- and data-intensive analytical processes are found in the finance industry and in managing smart grids in the energy industry. In the latter case, data from a very large number of sensors needs to be gathered, processed and stored in real time in order to offer consumers consumption reports or even guarantee grid stability [5]. As the number of active sensors differs during a day, the amount of data also fluctuates to a very large extent. Furthermore, certain processes or process steps are permitted to be postponed to the future, while others need to be carried out immediately. In such a scenario, the permanent provisioning of IT capacity able to handle peak system loads is obviously not the best solution, as the capacities will not be utilized most of the time. With the advent of Cloud Computing, organizations nowadays have a much more cost-savvy alternative which allows them to make use of computing resources in an on-demand, utility-like fashion [1].