
    Adaptive Process Management in Cyber-Physical Domains

    The increasing application of process-oriented approaches in new, challenging cyber-physical domains beyond business computing (e.g., personalized healthcare, emergency management, factories of the future, home automation) has led to a reconsideration of the level of flexibility and support required to manage complex processes in such domains. A cyber-physical domain is characterized by the presence of a cyber-physical system coordinating heterogeneous ICT components (PCs, smartphones, sensors, actuators) and involving real-world entities (humans, machines, agents, robots, etc.) that perform complex tasks in the “physical” real world to achieve a common goal. The physical world, however, is not entirely predictable, and processes enacted in cyber-physical domains must be robust to unexpected conditions and adaptable to unanticipated exceptions. This demands a more flexible approach to process design and enactment, recognizing that in real-world environments it is not adequate to assume that all possible recovery activities can be predefined to deal with the exceptions that can ensue. In this chapter, we tackle this issue and propose a general approach, a concrete framework, and a process management system implementation, called SmartPM, for automatically adapting processes enacted in cyber-physical domains in the case of unanticipated exceptions and exogenous events. The adaptation mechanism provided by SmartPM is based on declarative task specifications, execution monitoring for detecting failures and context changes at run-time, and automated planning techniques to self-repair the running process, without requiring any specific adaptation policy or exception handler to be predefined at design time.
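
    The monitor-detect-repair loop sketched in the abstract can be illustrated as follows. This is a toy sketch of the general idea, not SmartPM's actual implementation; all names (`World`, `naive_repair`, the task dictionaries) are invented for the example.

```python
# Hypothetical sketch of SmartPM-style run-time adaptation: compare the
# expected state after each task with the observed physical state and,
# on mismatch, call a planner to synthesise a recovery sequence.

def execute_with_adaptation(tasks, world, plan_repair):
    """Run declarative tasks; self-repair on unexpected context changes."""
    trace = []
    for task in tasks:
        world.apply(task)                      # enact the task in the physical world
        trace.append(task["name"])
        if world.state != task["expected"]:    # monitoring detects a deviation
            recovery = plan_repair(world.state, task["expected"])
            for step in recovery:              # enact the synthesised repair
                world.state = step(world.state)
                trace.append("repair")
    return trace

class World:
    """Toy environment: tasks may fail to reach their expected state."""
    def __init__(self, state):
        self.state = state
    def apply(self, task):
        self.state = task["outcome"]           # actual (possibly faulty) effect

def naive_repair(actual, goal):
    # Trivial stand-in "planner": a single step that forces the goal state.
    return [lambda _s: goal]

tasks = [
    {"name": "move", "expected": "at_B", "outcome": "at_B"},
    {"name": "load", "expected": "loaded", "outcome": "dropped"},  # exogenous failure
]
trace = execute_with_adaptation(tasks, World("at_A"), naive_repair)
```

    In the real system the planner would synthesise a recovery process from the declarative task specifications rather than jumping directly to the goal state.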

    Clustering-Based Predictive Process Monitoring

    Business process enactment is generally supported by information systems that record data about process executions, which can be extracted as event logs. Predictive process monitoring is concerned with exploiting such event logs to predict how running (uncompleted) cases will unfold up to their completion. In this paper, we propose a predictive process monitoring framework for estimating the probability that a given predicate will be fulfilled upon completion of a running case. The predicate can be, for example, a temporal logic constraint or a time constraint, or any predicate that can be evaluated over a completed trace. The framework takes into account both the sequence of events observed in the current trace and the data attributes associated with these events. The prediction problem is approached in two phases. First, prefixes of previous traces are clustered according to control-flow information. Second, a classifier is built for each cluster using event data to discriminate between fulfillments and violations. At runtime, a prediction is made for a running case by mapping it to a cluster and applying the corresponding classifier. The framework has been implemented in the ProM toolset and validated on a log pertaining to the treatment of cancer patients in a large hospital.
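
    The two-phase scheme can be sketched in miniature: group historical prefixes by a control-flow signature, estimate the fulfillment rate per group, and classify a running case by mapping it to its group. The signature function and all names here are illustrative assumptions, not the paper's actual clustering or classification method.

```python
# Toy two-phase predictive monitoring: cluster by control flow, then
# predict per cluster. A frequency estimate stands in for the classifier.
from collections import defaultdict

def signature(prefix):
    # Control-flow abstraction (illustrative): the set of activities seen so far.
    return frozenset(e["activity"] for e in prefix)

def train(prefixes_with_labels):
    stats = defaultdict(lambda: [0, 0])          # cluster -> [fulfilled, total]
    for prefix, fulfilled in prefixes_with_labels:
        s = stats[signature(prefix)]
        s[0] += int(fulfilled)
        s[1] += 1
    return stats

def predict(stats, running_prefix):
    """Probability the predicate holds at completion of the running case."""
    f, n = stats.get(signature(running_prefix), (0, 0))
    return f / n if n else 0.5                   # fall back to an uninformed prior

history = [
    ([{"activity": "triage"}, {"activity": "surgery"}], True),
    ([{"activity": "triage"}, {"activity": "surgery"}], True),
    ([{"activity": "triage"}], False),
]
model = train(history)
p = predict(model, [{"activity": "triage"}, {"activity": "surgery"}])
```

    The framework in the paper additionally feeds the event payload data into a proper classifier per cluster, which the per-cluster frequency estimate above only gestures at.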

    Aggregation and Adaptation of Web Services

    Service-oriented computing strongly supports the development of future business applications through the use of (Web) services. Two main challenges for Web services are the aggregation of services into new (complex) business applications, and the adaptation of services presenting various types of interaction mismatches. The ultimate objective of this thesis is to define a methodology for the semi-automated aggregation and adaptation of Web services capable of suitably overcoming semantic and behaviour mismatches in view of business process integration within and across organisational boundaries. We tackle the aggregation and adaptation of services described by service contracts, which consist of a signature (WSDL), ontology information (OWL), and a behaviour specification (YAWL). We first describe an aggregation technique that automatically generates contracts of composite services satisfying (behavioural) client requests from a registry of service contracts. Further on, we present a behaviour-aware adaptation technique that supports the customisation of services to fulfil client requests. The adaptation technique can be used to adapt the behaviour of services to satisfy both functional and behavioural requests. In order to support the generation of service contracts from real-world service descriptions, we also introduce a pattern-based compositional translator for the automated generation of YAWL workflows from BPEL business processes. In this way, we pave the way for the formal analysis, aggregation, and adaptation of BPEL processes.

    Event stream-based process discovery using abstract representations

    The aim of process discovery, originating from the area of process mining, is to discover a process model based on business process execution data. A majority of process discovery techniques rely on an event log as input. An event log is a static source of historical data capturing the execution of a business process. In this paper, we focus on process discovery relying on online streams of business process execution events. Learning process models from event streams poses both challenges and opportunities, i.e. we need to handle unlimited amounts of data using finite memory and, preferably, constant time. We propose a generic architecture that allows several classes of existing process discovery techniques to be adopted in the context of event streams. Moreover, we provide several instantiations of the architecture, accompanied by implementations in the process mining toolkit ProM (http://promtools.org). Using these instantiations, we evaluate several dimensions of stream-based process discovery. The evaluation shows that the proposed architecture allows us to lift process discovery to the streaming domain.
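
    The finite-memory constraint can be illustrated with a minimal stream miner that keeps only each case's last activity and maintains directly-follows counts, from which a model could later be read off. This is an illustrative sketch, not one of the ProM instantiations; the class and variable names are invented.

```python
# Toy stream-based abstraction: per-case state is a single activity
# (constant space per case), and each incoming event updates a
# directly-follows counter in constant time.
from collections import defaultdict

class StreamingDirectlyFollows:
    def __init__(self):
        self.last = {}                       # case id -> last activity seen
        self.counts = defaultdict(int)       # (a, b) -> directly-follows count

    def observe(self, case, activity):
        prev = self.last.get(case)
        if prev is not None:
            self.counts[(prev, activity)] += 1
        self.last[case] = activity

miner = StreamingDirectlyFollows()
stream = [("c1", "a"), ("c1", "b"), ("c2", "a"), ("c2", "b"), ("c1", "c")]
for case, act in stream:
    miner.observe(case, act)
```

    A realistic instantiation would additionally bound the number of tracked cases and counters (e.g., via ageing or sketching), which this sketch leaves out.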

    Distributed task management by means of workflow atoms

    In this paper we describe Wf-ATOMS, a framework for the specification and management of workflows, whose engine is integrated in a multi-user, distributed task management system. The process models include features standard in other workflow management systems, concerning the form in which the activities forming the processes interact, as well as other features usually managed by user-task management systems, which model the different activities available in interactive applications. The conjunction of these two models provides several benefits. On the one hand, it simplifies the development of workflow-based applications. On the other hand, it allows the systematic development of training applications for work teams that collaborate in the accomplishment of distributed processes. In this paper we describe both the framework, from the point of view of the specification of distributed processes, and the underlying architecture of the process management system. Wf-ATOMS has been developed as an extension of ATOMS, a previous framework for the management of user tasks in interactive applications. This work has been partially supported by the Plan Nacional de Investigación, projects TIC96-0723-C02-01/02 and TEL97-030.

    Automated Certification for Compliant Cloud-based Business Processes

    A key problem in the deployment of large-scale, reliable cloud computing concerns the difficulty of certifying the compliance of business processes operating in the cloud. Standard audit procedures such as SAS-70 and SAS-117 are hard to conduct for cloud-based processes. The paper proposes a novel approach to certify the compliance of business processes with regulatory requirements. The approach translates process models into their corresponding Petri net representations and checks them against requirements also expressed in this formalism. Being based on Petri nets, the approach provides well-founded evidence on adherence and, in case of noncompliance, indicates the possible vulnerabilities.
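
    The Petri-net idea can be illustrated in miniature: model process steps as transitions that consume and produce tokens, and check whether a marking that satisfies the requirement is reachable. This is a toy example of Petri-net-based checking, not the paper's actual certification tool; the process and requirement shown are invented.

```python
# Minimal Petri-net firing rule: a transition (consume, produce) is
# enabled when every input place holds enough tokens.

def fire(marking, transition):
    """Fire a transition if enabled; return the new marking, else None."""
    consume, produce = transition
    if all(marking.get(p, 0) >= n for p, n in consume.items()):
        new = dict(marking)
        for p, n in consume.items():
            new[p] -= n
        for p, n in produce.items():
            new[p] = new.get(p, 0) + n
        return new
    return None

# Hypothetical process "approve then pay" as two transitions.
approve = ({"start": 1}, {"approved": 1})
pay     = ({"approved": 1}, {"paid": 1})

m = {"start": 1}
m = fire(m, approve)
m = fire(m, pay)
# Requirement (also expressible as a net): a payment must be reached.
compliant = m is not None and m.get("paid", 0) == 1
```

    Note that firing `pay` before `approve` is simply not enabled, which is how the formalism exposes a non-compliant ordering.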

    Modelling the behaviour of management operations in cloud-based applications

    How to flexibly manage complex applications over heterogeneous clouds is one of the emerging problems of the cloud era. The OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA) aims at solving this problem by providing a language to describe and manage complex cloud applications in a portable, vendor-agnostic way. TOSCA makes it possible to define an application as an orchestration of nodes, whose types can specify states, requirements, capabilities and management operations — but not how these interact with one another. In this paper we first propose an extension of TOSCA to specify the behaviour of management operations and their relations with states, requirements, and capabilities. We then illustrate how such behaviour can be naturally modelled, in a compositional way, by means of open Petri nets. The proposed modelling makes it possible to automate different analyses, such as determining whether a deployment plan is valid, what its effects are, or which plans reach certain system configurations.
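
    One of the analyses mentioned, plan validity, can be sketched by giving each management operation a source state, a set of required capabilities on other nodes, and a target state, and checking that every operation in the plan can fire in sequence. The node names and operations below are invented for illustration and are not from the paper or the TOSCA specification.

```python
# Toy plan-validity check in the spirit of behaviour-aware TOSCA nodes.

def valid_plan(nodes, operations, plan):
    """nodes: node -> initial state; operations: (node, op) -> (src, reqs, dst)."""
    state = dict(nodes)
    for node, op in plan:
        src, reqs, dst = operations[(node, op)]
        if state[node] != src:
            return False                      # operation not available in this state
        if any(state[other] != needed for other, needed in reqs.items()):
            return False                      # a required capability is missing
        state[node] = dst                     # effect of the operation
    return True

nodes = {"server": "not_installed", "app": "not_installed"}
ops = {
    ("server", "install"): ("not_installed", {}, "running"),
    ("app", "install"): ("not_installed", {"server": "running"}, "running"),
}
ok = valid_plan(nodes, ops, [("server", "install"), ("app", "install")])
bad = valid_plan(nodes, ops, [("app", "install"), ("server", "install")])
```

    The second plan fails because the app's requirement on a running server is not yet satisfied, which is exactly the kind of dependency the open-Petri-net modelling captures compositionally.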

    A Goal-Oriented Approach for Adaptive SLA Monitoring : a Cloud Provider Case Study

    We argue in this paper that autonomic systems need to make their integrated monitoring adaptive in order to improve their “comprehensive” Quality of Service (QoS). We propose to design this adaptation based on high-level objectives (called goals) related to the management of both the “functional system QoS” and the “monitoring system QoS”. Starting from previous work suggesting a model-driven adaptable monitoring framework composed of three layers (configurability, adaptability, governability), we introduce a methodology to identify the functional and monitoring high-level goals (according to the agreed Service Level Agreement, SLA) in order to drive the instantiation of the models. This proposal is first applied to a cloud provider case study, for which two high-level goals are developed (respect metric freshness and minimize monitoring cost), and then simulated to show how the quality of management decisions, as well as intelligent monitoring of dynamic SLAs, can be improved.
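
    The tension between the two goals in the case study can be illustrated with a trivial controller that shortens a metric's polling period when its staleness violates the agreed bound and lengthens it otherwise to cut monitoring cost. The function, thresholds, and step sizes below are illustrative assumptions, not the paper's actual adaptation rules.

```python
# Toy goal-driven adaptation of a single metric's monitoring period.

def adapt_period(period, staleness, max_staleness, min_period=1, max_period=60):
    """Return the next polling period (seconds) for one monitored metric."""
    if staleness > max_staleness:
        return max(min_period, period // 2)   # freshness goal: poll faster
    return min(max_period, period + 5)        # cost goal: poll less often

p = 20
p = adapt_period(p, staleness=45, max_staleness=30)   # SLA bound violated
p = adapt_period(p, staleness=10, max_staleness=30)   # bound satisfied again
```

    In the framework itself this decision would sit in the adaptability layer and be driven by the goal models rather than hard-coded thresholds.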