DecSerFlow: Towards a Truly Declarative Service Flow Language
The need for process support in the context of web services has triggered the development of many languages, systems, and standards. Industry has been developing software solutions and proposing standards such as BPEL, while researchers have been advocating the use of formal methods such as Petri nets and pi-calculus. The languages developed for service flows, i.e., process specification languages for web services, have adopted many concepts from classical workflow management systems. As a result, these languages are rather procedural, which does not fit well with the autonomous nature of services. Therefore, we propose DecSerFlow as a Declarative Service Flow Language. DecSerFlow can be used to specify, enact, and monitor service flows. The language is extensible (i.e., constructs can be added without changing the engine or its semantic basis) and can be used to enforce or to check the conformance of service flows. Although the language has an appealing graphical representation, it is grounded in temporal logic.
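For a flavour of the temporal-logic grounding mentioned above: in the published DecSerFlow/Declare literature, each graphical constraint corresponds to an LTL formula, roughly as in the sketch below (the selection and exact formulas are illustrative, not the paper's full definitions).

```latex
% Commonly published LTL readings of three DecSerFlow/Declare constraints
\begin{align*}
\mathit{existence}(A)    &\equiv \Diamond A\\
\mathit{response}(A,B)   &\equiv \Box\,(A \rightarrow \Diamond B)\\
\mathit{precedence}(A,B) &\equiv (\neg B)\ \mathcal{W}\ A
\end{align*}
```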
Finding suitable activity clusters for decomposed process discovery
Event data can be found in any information system and provide the starting point for a range of process mining techniques. The widespread availability of large amounts of event data also creates new challenges. Existing process mining techniques are often unable to handle "big event data" adequately. Decomposed process mining aims to solve this problem by decomposing the process mining problem into many smaller problems, which can be solved in less time, using fewer resources, or even in parallel. Many decomposed process mining techniques have been proposed in the literature. Analysis shows that even though the decomposition step takes a relatively small amount of time, it is of key importance for finding a high-quality process model and for the computation time required to discover the individual parts. Currently there is no way to assess the quality of a decomposition beforehand. We define three quality notions that can be used to assess a decomposition before using it to discover a model or check conformance. We then propose a decomposition approach that uses these notions and is able to find a high-quality decomposition in little time. Keywords: decomposed process mining, decomposed process discovery, distributed computing, event logs
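As a rough illustration of the decomposition idea (not the paper's clustering algorithm or quality notions), the following self-contained Python sketch projects an event log onto given activity clusters so that each sublog can be mined separately; the cluster choice and the three quality notions are exactly what the paper is about and are assumed to be provided elsewhere.

```python
# Minimal sketch of the projection step in decomposed process discovery.
# Activity clusters (and how to assess their quality) are assumed to be given;
# this only shows how an event log is split into one sublog per cluster so
# that each part can be discovered independently, possibly in parallel.

from typing import Dict, List, Set

Trace = List[str]          # a trace is a sequence of activity names
EventLog = List[Trace]     # an event log is a collection of traces

def project_log(log: EventLog, clusters: Dict[str, Set[str]]) -> Dict[str, EventLog]:
    """Project every trace onto each activity cluster (events outside the
    cluster are dropped), yielding one sublog per cluster."""
    sublogs: Dict[str, EventLog] = {name: [] for name in clusters}
    for trace in log:
        for name, activities in clusters.items():
            projected = [a for a in trace if a in activities]
            if projected:                      # skip empty projections
                sublogs[name].append(projected)
    return sublogs

if __name__ == "__main__":
    log = [["register", "check", "pay", "ship"],
           ["register", "pay", "check", "ship"]]
    clusters = {"c1": {"register", "check"}, "c2": {"pay", "ship"}}
    for name, sublog in project_log(log, clusters).items():
        print(name, sublog)   # each sublog can now be mined separately
```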
Enhancing workflow-nets with data for trace completion
The growing adoption of IT systems for modeling and executing (business) processes or services has pushed scientific investigation towards techniques and tools that support more complex forms of process analysis. Many of these, such as conformance checking, process alignment, mining, and enhancement, rely on complete observations of past (tracked and logged) executions. In many real cases, however, the lack of human or IT support for all the steps of process execution, as well as information hiding and abstraction of model and data, results in incomplete log information about both data and activities. This paper tackles the issue of automatically repairing traces with missing information, notably by considering not only activities but also the data manipulated by them. Our technique recasts the problem as a reachability problem and provides an encoding in an action language that allows virtually any state-of-the-art planner to be used to return solutions.
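The reachability view can be illustrated with a small, control-flow-only sketch (the paper additionally handles data and encodes the problem in an action language so that off-the-shelf planners can solve it; the net, the search bound, and all names below are hypothetical):

```python
# Minimal sketch of trace completion viewed as a reachability problem on a
# (1-safe) workflow net: find a firing sequence from the initial to the final
# marking that contains the observed partial trace as a subsequence.
from collections import deque

# Hypothetical net: transition -> (preset, postset) over place names.
NET = {
    "a": ({"start"}, {"p1"}),
    "b": ({"p1"}, {"p2"}),
    "c": ({"p1"}, {"p2"}),
    "d": ({"p2"}, {"end"}),
}
INITIAL, FINAL = frozenset({"start"}), frozenset({"end"})

def complete(observed, net=NET, bound=10):
    """Breadth-first search over (marking, matched-prefix) states."""
    queue = deque([(INITIAL, 0, [])])          # marking, #observed matched, run
    seen = {(INITIAL, 0)}
    while queue:
        marking, matched, run = queue.popleft()
        if marking == FINAL and matched == len(observed):
            return run                          # a repaired, complete trace
        if len(run) >= bound:
            continue
        for t, (pre, post) in net.items():
            if pre <= marking:                  # transition is enabled
                new_marking = frozenset((marking - pre) | post)
                new_matched = matched + (matched < len(observed)
                                         and observed[matched] == t)
                state = (new_marking, new_matched)
                if state not in seen:
                    seen.add(state)
                    queue.append((new_marking, new_matched, run + [t]))
    return None

print(complete(["a", "d"]))   # e.g. ['a', 'b', 'd']
```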
On the Common Support of Workflow Type and Instance Changes under Correctness Constraints
The capability to rapidly adapt in-progress workflows (WF) is an essential requirement for any workflow system. Adaptations may concern single WF instances or a WF type as a whole. Especially for long-running business processes it is indispensable to propagate WF type changes to in-progress WF instances as well. Very challenging in this context is to correctly adapt a (potentially large) collection of WF instances, which may be in different states and to which various ad-hoc changes may have been previously applied. This paper presents a generic framework for the common support of both WF type and WF instance changes. We establish fundamental correctness principles, present formal theorems, and show how WF instances can be automatically and efficiently migrated to a modified WF schema. The adequate treatment of conflicting WF type and WF instance changes adds to the overall completeness of our approach. By offering more flexibility and adaptability, workflow technology will finally be able to deliver on its promise.
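A minimal sketch of the underlying migration idea, under the simplifying assumption that the modified schema is abstracted by a replay predicate: an in-progress instance may be migrated only if its execution history is still replayable on the new schema. This illustrates the compliance-style criterion only, not the paper's formal framework; all names are hypothetical.

```python
# Sketch: partition running instances into those that can be migrated to the
# modified schema and those that must stay on the old one. A schema is
# abstracted here by a predicate that says whether a history is replayable.

from typing import Callable, List, Sequence, Tuple

History = List[str]   # completed activities of one WF instance, in order

def can_migrate(history: History,
                replayable_on_new_schema: Callable[[Sequence[str]], bool]) -> bool:
    """True if the instance's history is replayable on the modified schema."""
    return replayable_on_new_schema(history)

def migrate_all(instances: List[History], replayable) -> Tuple[list, list]:
    """Split instances into migratable and non-migratable ones."""
    migratable = [h for h in instances if can_migrate(h, replayable)]
    stay = [h for h in instances if not can_migrate(h, replayable)]
    return migratable, stay

if __name__ == "__main__":
    # Hypothetical schema change: 'check' must now precede 'pay'.
    def replayable(history):
        return not ("pay" in history and "check" in history
                    and history.index("pay") < history.index("check"))
    running = [["register", "check"], ["register", "pay"],
               ["register", "pay", "check"]]
    print(migrate_all(running, replayable))
```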
RALph: A Graphical Notation for Resource Assignments in Business Processes
The business process (BP) resource perspective deals with the management of human as well as non-human resources throughout the process lifecycle. Although it has received increasing attention recently, until now there has been no graphical notation for it that is both expressive enough to cover well-known resource selection conditions and independent of the BP modelling language. In this paper, we introduce RALph, a graphical notation for the assignment of human resources to BP activities. We define its semantics by mapping the notation to a language that has been formally defined in description logics, which enables its automated analysis. Although we show how RALph can be seamlessly integrated with BPMN, it is noteworthy that the notation is independent of the BP modelling language. Altogether, RALph will foster the visual modelling of the resource perspective in BP
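RALph itself is graphical and its semantics are defined via a description-logic-based language; purely for illustration, the sketch below resolves the kind of selection conditions such assignments express (role membership, reporting relationships) against a toy organisational model. All names and the organisational structure are hypothetical, and this is not the paper's DL encoding.

```python
# Toy organisational model: person -> roles and a reporting relationship.
ORG = {
    "alice": {"roles": {"clerk"}, "reports_to": "carol"},
    "bob":   {"roles": {"clerk", "auditor"}, "reports_to": "carol"},
    "carol": {"roles": {"manager"}, "reports_to": None},
}

def has_role(role):
    """Condition: the person holds the given role."""
    return lambda person, ctx: role in ORG[person]["roles"]

def reports_to_performer_of(activity):
    """Condition: the person reports to whoever performed the given activity."""
    return lambda person, ctx: ORG[person]["reports_to"] == ctx.get(activity)

def candidates(condition, ctx=None):
    """All members of the organisational model satisfying the condition."""
    ctx = ctx or {}
    return {p for p in ORG if condition(p, ctx)}

# "Activity 'approve' must be performed by an auditor who reports to the
#  performer of activity 'register'."
ctx = {"register": "carol"}
cond = lambda p, c: has_role("auditor")(p, c) and reports_to_performer_of("register")(p, c)
print(candidates(cond, ctx))   # {'bob'}
```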
Service Discovery Using Communication Fingerprints
A request to a service registry must be answered with a service that fits in several regards, including semantic compatibility, non-functional compatibility, and interface compatibility. In the case of stateful services, there is the additional need to check behavioral (i.e., protocol) compatibility. This paper is concerned with the latter aspect. An obvious approach to establishing behavioral compatibility would be to apply the well-known technology of model checking to a composition of the provided service and the requesting service. However, this procedure would potentially have to be repeated for all provided services in the registry, which may prohibitively slow down the response time of the broker. Hence, we propose to insert a preprocessing step: for each published service, we compute an abstraction of its behavior that we call a communication fingerprint. Upon request, we use the fingerprints to rule out as many incompatible services as possible, thus reducing the number of candidates that need to be model checked for behavioral compatibility. The technique is based on linear programming and is thus extremely efficient. We validate our approach on a large set of services that we cut out of real-world business processes.
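A minimal sketch of the pre-filtering step, assuming that per-message count intervals (which the paper derives with linear programming from the service models) are already available; only the filtering logic is shown, and all service names are made up.

```python
# Fingerprint-based pre-filtering: a necessary (not sufficient) compatibility
# test that discards candidates before the expensive behavioral model check.

from typing import Dict, List, Tuple

Fingerprint = Dict[str, Tuple[int, float]]   # message type -> (min, max) occurrences

def compatible_fingerprints(requester: Fingerprint, provider: Fingerprint) -> bool:
    """For every message type, the count intervals of requester and provider
    must overlap; otherwise the pair cannot be behaviorally compatible."""
    for msg in set(requester) | set(provider):
        lo_r, hi_r = requester.get(msg, (0, 0))
        lo_p, hi_p = provider.get(msg, (0, 0))
        if hi_r < lo_p or hi_p < lo_r:        # disjoint intervals
            return False
    return True

def prefilter(requester: Fingerprint, registry: Dict[str, Fingerprint]) -> List[str]:
    """Keep only services whose fingerprint matches; the survivors would then
    be model checked for true behavioral compatibility."""
    return [name for name, fp in registry.items()
            if compatible_fingerprints(requester, fp)]

if __name__ == "__main__":
    requester = {"order": (1, 1), "invoice": (1, 1)}
    registry = {
        "svcA": {"order": (1, 1), "invoice": (0, 1)},
        "svcB": {"order": (2, 3), "invoice": (1, 1)},   # requires too many orders
    }
    print(prefilter(requester, registry))   # ['svcA']
```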
Recommended from our members
Climate forecasts in disaster management: Red Cross flood operations in West Africa, 2008
In 2008, the International Federation of Red Cross and Red Crescent Societies (IFRC) used a seasonal forecast for West Africa for the first time to implement an Early Warning, Early Action strategy for enhanced flood preparedness and response. Interviews with disaster managers suggest that this approach improved their capacity and response. Relief supplies reached flood victims within days, as opposed to weeks in previous years, thereby preventing further loss of life, illness, and setbacks to livelihoods, as well as augmenting the efficiency of resource use. This case demonstrates the potential benefits to be realised from the use of medium-to-long-range forecasts in disaster management, especially in the context of potential increases in extreme weather and climate-related events due to climate variability and change. However, harnessing the full potential of these forecasts will require continued effort and collaboration among disaster managers, climate service providers, and major humanitarian donors.