
    A Grammatical Model for the Specification of Administrative Workflow Using Scenario as Modelling Unit

    Process modelling is a crucial phase of Business Process Management (BPM). Despite the many efforts put into producing process modelling tools, existing languages are not widely accepted. They are mainly criticised for their inability to specify the tasks making up a process and their scheduling (the lifecycle model), the data they manipulate (the information model), and the organizational model. Process modelling in these languages often results in a single task graph, which can quickly become difficult to read and maintain. Moreover, these languages are often too general (very high expressiveness), which complicates their application to specific types of processes, especially administrative processes. In this paper, we present a new language for administrative process modelling that allows designers to specify the lifecycle, information, and organizational models of such processes using a mathematical tool based on a variant of attributed grammars. The approach imposed by the new language requires the designer to subdivide the process into scenarios, then to model each scenario individually using a simple task graph (an annotated tree) from which a grammatical model is derived. At any given moment, the designer manipulates only one scenario of the studied process: this approach is more intuitive and modular, and it produces task graphs that are more refined and therefore more readable and easier to maintain.
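    The paper's grammatical formalism is not reproduced in the abstract, so the following is only a minimal sketch of the core idea: one scenario modelled as an annotated task tree, with grammar-like productions derived from it. The class names, annotations, and the example leave-request scenario are illustrative assumptions, not the authors' notation.

```python
# Minimal sketch (not the paper's formalism): a scenario as an annotated task tree,
# from which grammar-like productions are derived. All names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str                                              # task label (grammar symbol)
    actor: str                                             # organizational annotation
    data: List[str] = field(default_factory=list)          # information-model annotation
    subtasks: List["Task"] = field(default_factory=list)   # ordered children

def productions(task: Task) -> List[str]:
    """Derive one production per internal node: parent -> ordered children."""
    rules = []
    if task.subtasks:
        rhs = " ".join(t.name for t in task.subtasks)
        rules.append(f"{task.name} -> {rhs}")
        for t in task.subtasks:
            rules.extend(productions(t))
    return rules

# One scenario of a hypothetical leave-request process, modelled as an annotated tree.
scenario = Task("HandleRequest", "system", subtasks=[
    Task("Submit", "employee", data=["request_form"]),
    Task("Approve", "manager", data=["request_form", "decision"]),
    Task("Archive", "clerk", data=["decision"]),
])
print("\n".join(productions(scenario)))
```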

    Resilience Analysis of Service Oriented Collaboration Process Management systems

    Collaborative business process management allows for the automated coordination of processes involving human and computer actors. In modern economies this coordination increasingly needs to cross organizational boundaries rather than remain within a single organization. Dependence on the performance of other organizations should, however, be limited, and control over one's own processes is required from a competitiveness perspective. The main objective of this work is to propose an evaluation model for measuring the resilience of a Service Oriented Architecture (SOA) collaborative process management system. In this paper, we propose resilience analysis perspectives for SOA collaborative process systems: the overall system perspective, the individual process model perspective, the individual process instance perspective, the service perspective, and the resource perspective. A collaborative incident and maintenance notification process system is reviewed to illustrate our resilience analysis. This research contributes to extending SOA collaborative business process management systems with resilience support, not only by quantifying and identifying resilience factors, but also by considering ways of improving the resilience of SOA collaborative process systems through measures at design time and run time.
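    The abstract names five analysis perspectives but gives no scoring formula, so the sketch below only illustrates how per-perspective resilience scores might be aggregated into a single figure; the weighting scheme and the example values are assumptions.

```python
# Illustrative only: the abstract lists five analysis perspectives but no scoring
# formula; the weighted aggregation below is an assumption made for demonstration.
from typing import Dict

PERSPECTIVES = ["system", "process_model", "process_instance", "service", "resource"]

def resilience_score(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Combine per-perspective resilience scores (0..1) into one weighted figure."""
    total_weight = sum(weights[p] for p in PERSPECTIVES)
    return sum(scores[p] * weights[p] for p in PERSPECTIVES) / total_weight

# Example: a collaborative notification process system scored per perspective.
scores = {"system": 0.8, "process_model": 0.9, "process_instance": 0.7,
          "service": 0.6, "resource": 0.85}
weights = {p: 1.0 for p in PERSPECTIVES}   # equal weights as a neutral default
print(f"overall resilience: {resilience_score(scores, weights):.2f}")
```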

    A Framework for Online Conformance Checking

    Conformance checking – a branch of process mining – focuses on establishing to what extent actual executions of a process are in line with the expected behavior of a reference model. Current conformance checking techniques only allow for a posteriori analysis: the amount of (non-)conformant behavior is quantified after the completion of the process instance. In this paper we propose a framework for online conformance checking: not only do we quantify (non-)conformant behavior while the execution is running, we also restrict the computation to constant time complexity per analyzed event, thus enabling the online analysis of a stream of events. The framework is instantiated with ideas coming from the theory of regions and state similarity. An implementation is available in ProM, and promising results have been obtained.
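    The framework itself builds on the theory of regions and state similarity; the sketch below is a much simpler stand-in that only illustrates the constant-time-per-event idea, replaying a stream against a precomputed state-transition map. The model, event names, and the stay-in-place rule on deviation are assumptions.

```python
# Simplified illustration (not the ProM implementation): online conformance checking
# against a precomputed state-transition map, with O(1) work per event.
# Reference behavior: state -> {activity: next_state}.
MODEL = {
    "s0": {"register": "s1"},
    "s1": {"check": "s2", "skip_check": "s2"},
    "s2": {"decide": "s3"},
    "s3": {},
}

def make_checker(model, initial_state="s0"):
    state = {"current": initial_state, "deviations": 0, "events": 0}
    def on_event(activity: str) -> float:
        state["events"] += 1
        nxt = model.get(state["current"], {}).get(activity)
        if nxt is None:
            state["deviations"] += 1      # non-conforming event; stay in current state
        else:
            state["current"] = nxt        # conforming move
        return 1 - state["deviations"] / state["events"]   # running conformance value
    return on_event

check = make_checker(MODEL)
for ev in ["register", "decide", "check", "decide"]:   # one streamed trace
    print(ev, f"conformance so far: {check(ev):.2f}")
```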

    Data-Oriented Declarative Language for Optimizing Business Processes

    There is a significant number of declarative languages for describing business processes. They tend to be used when business processes need to be flexible and adaptable and an imperative description is not possible. Declarative languages in business processes have traditionally been used to describe the order of activities, specifically the order allowed or prohibited. Unfortunately, none of them is concerned with a declarative description of the data exchanged between activities and how that data can influence the model. In this paper, we analyse the data description capacity of a variety of declarative languages for business processes. Using this analysis, we have detected the necessity to include data exchange aspects in declarative descriptions. In order to close this gap, we propose a Data-Oriented Optimization Declarative LanguagE, called DOODLE, which includes the process requirements referring to data description and the possibility to include an optimization function over the process output data.
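    DOODLE's concrete syntax is not given in the abstract, so the sketch below only conveys the idea of pairing declarative constraints on exchanged data with an optimization function over output data, encoded in plain Python; the constraints, objective, and toy search are assumptions.

```python
# Hypothetical encoding of the idea, not DOODLE syntax: declarative constraints on
# exchanged data plus an objective over the process output data.
from itertools import product

# Declarative requirements on the data exchanged between two activities.
constraints = [
    lambda d: d["quoted_price"] >= d["cost"],   # no selling below cost
    lambda d: d["discount"] <= 0.2,             # bounded discount
]
# Optimization function over the process output data (here: maximize margin).
objective = lambda d: d["quoted_price"] * (1 - d["discount"]) - d["cost"]

# A toy search over candidate outputs that satisfy every constraint.
candidates = [{"cost": 100, "quoted_price": p, "discount": disc}
              for p, disc in product(range(100, 141, 10), (0.0, 0.1, 0.2, 0.3))]
feasible = [d for d in candidates if all(c(d) for c in constraints)]
best = max(feasible, key=objective)
print("best output data:", best)
```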

    A Formal Approach to Support Interoperability in Scientific Meta-workflows

    Scientific workflows orchestrate the execution of complex experiments, frequently using distributed computing platforms. Meta-workflows represent an emerging type of such workflows that aims to reuse existing workflows, potentially from different workflow systems, to achieve more complex experimentation while minimizing workflow design and testing efforts. Workflow interoperability plays a profound role in achieving this objective. This paper focuses on fostering interoperability across meta-workflows that combine workflows of different workflow systems from diverse scientific domains. This is achieved by formalizing definitions of meta-workflows and their different types, standardizing the data structures used to describe workflows to be published and shared via public repositories. The paper also includes a thorough formalization of two workflow interoperability approaches based on this formal description: the coarse-grained and the fine-grained workflow interoperability approach. The paper presents a case study from astrophysics that successfully demonstrates the use of the concepts of meta-workflows and workflow interoperability within a scientific simulation platform.
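    The paper's formal definitions are not included in the abstract, so the following is only an assumed encoding of the described concepts: a meta-workflow embedding sub-workflows from different systems, reused either coarse-grained (as black boxes) or fine-grained (with their tasks exposed). All class and system names are illustrative.

```python
# Illustrative data structure only, not the paper's formalization.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class SubWorkflow:
    name: str
    system: str                 # originating workflow system (illustrative names below)
    granularity: str            # "coarse" = invoked as a black box, "fine" = tasks embedded
    tasks: List[str] = field(default_factory=list)   # only meaningful for fine-grained reuse

@dataclass
class MetaWorkflow:
    name: str
    steps: List[SubWorkflow] = field(default_factory=list)

    def systems_involved(self) -> Set[str]:
        return {s.system for s in self.steps}

meta = MetaWorkflow("galaxy_simulation", steps=[
    SubWorkflow("prepare_catalogue", "SystemA", "coarse"),
    SubWorkflow("run_simulation", "SystemB", "fine", tasks=["split", "simulate", "merge"]),
])
print(meta.systems_involved())
```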

    The Repercussions of Business Process Modeling Notations on Mental Load and Mental Effort

    Over the last decade, plenty of business process modeling notations have emerged for the documentation of business processes in enterprises. While learning a modeling notation, an individual is confronted with a cognitive load that affects the comprehension of the notation with its underlying formalisms and concepts. To address this cognitive load, this paper presents the results of an exploratory study in which a sample of 94 participants, divided into novices, intermediates, and experts, was asked to assess process models expressed in eight different process modeling notations, i.e., BPMN 2.0, Declarative Process Modeling, eGantt Charts, EPCs, Flow Charts, IDEF3, Petri Nets, and UML Activity Diagrams. The study focused on the subjective comprehensibility and accessibility of process models, reflecting participants' cognitive load (i.e., mental load and mental effort). Based on the cognitive load, a factor reflecting the mental difficulty of comprehending process models in the different modeling notations was derived. The results indicate that established modeling notations from industry (e.g., BPMN) should be the first choice for enterprises striving for process management. Moreover, the study's insights may be used to determine which modeling notations should be taught in an introduction to process modeling, or which notation is useful for teaching and training process modelers or analysts. Keywords: Business Process Modeling Notations, Cognitive Load, Mental Load, Mental Effort, Human-centered Design.

    A Pragmatic, Scalable Approach to Correct-by-Construction Process Composition Using Classical Linear Logic Inference

    The need for rigorous process composition is encountered in many situations pertaining to the development and analysis of complex systems. We discuss the use of Classical Linear Logic (CLL) for correct-by-construction, resource-based process composition with guaranteed deadlock freedom, systematic resource accounting, and concurrent execution. We introduce algorithms to automate the necessary inference steps for binary compositions of processes in parallel, conditionally, and in sequence. We combine decision procedures and heuristics to achieve intuitive and practically useful compositions in an applied setting.
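    The sketch below is not the paper's CLL inference; it is a simplified resource-accounting illustration of what correct-by-construction sequential and parallel composition can mean: composition succeeds only when the consumed and produced resources line up. The Process class, composition rules, and example are assumptions.

```python
# Simplified resource-accounting illustration, not CLL proof search: each process
# consumes and produces multisets of resources, and composition is only allowed
# when the accounting works out.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    consumes: Counter
    produces: Counter

def seq(p: Process, q: Process) -> Process:
    """Sequential composition: q may only use what is available after running p."""
    missing = q.consumes - p.produces
    if missing:
        raise ValueError(f"cannot sequence {p.name};{q.name}: missing {dict(missing)}")
    leftover = p.produces - q.consumes
    return Process(f"{p.name};{q.name}", p.consumes, leftover + q.produces)

def par(p: Process, q: Process) -> Process:
    """Parallel composition: independent resources, so requirements and outputs add up."""
    return Process(f"{p.name}|{q.name}", p.consumes + q.consumes, p.produces + q.produces)

brew = Process("brew", Counter({"water": 1, "beans": 1}), Counter({"coffee": 1}))
serve = Process("serve", Counter({"coffee": 1}), Counter({"served_cup": 1}))
print(seq(brew, serve))
```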

    A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    Background: Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts.

    Results: To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats).

    Conclusions: PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy can also be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.
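    The sketch below illustrates the dataflow idea described above (re-usable stages composed into a map over batched inputs, evaluated on a worker pool) using only the Python standard library; it does not use PaPy's actual API, and the stage functions, batch size, and data are illustrative.

```python
# Conceptual sketch of a batched, pooled dataflow pipeline; not PaPy's API.
from multiprocessing import Pool

def parse(record: str) -> str:          # stage 1: extract the sequence field
    return record.split(",")[1]

def gc_content(seq: str) -> float:      # stage 2: compute a simple per-item statistic
    return (seq.count("G") + seq.count("C")) / len(seq)

def run_pipeline(records, batch_size=2, workers=2):
    """Apply the composed stages as a map over batches of input items."""
    with Pool(workers) as pool:
        for start in range(0, len(records), batch_size):
            batch = records[start:start + batch_size]
            yield from pool.map(gc_content, map(parse, batch))

if __name__ == "__main__":
    data = ["id1,ACGT", "id2,GGCC", "id3,ATAT", "id4,GCGC"]
    print(list(run_pipeline(data)))
```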