
    A Connection of Task-centric with Artefact-centric Models through Semantic Task Specification and its Use for Formal Verification

    Task- and artefact-centric business process models (BPMs) are mostly used in isolation. This isolation causes problems, e.g., for the formal and automated verification of BPMs through model checking. We address this gap through semantic task specification, a notion we transfer from the more widely known semantic service specification. In summary, we present a new and systematic approach for connecting a task-centric BPM (in BPMN) with a model of an artefact-centric object life cycle through semantic task specification. As a consequence, we achieve a seamless approach for the formal and automated verification of BPMs using model checking.
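
    As a rough illustration of the idea (with invented task and state names; the paper's formalism is considerably richer), a semantic task specification can be read as a precondition/effect pair over the states of an artefact's object life cycle, which makes a task sequence mechanically checkable against the life cycle:

```python
# Hypothetical names throughout; a minimal sketch of checking a task
# sequence against an artefact's object life cycle.

# Object life cycle: the allowed state transitions of one artefact type.
LIFE_CYCLE = {
    ("created", "approved"),
    ("approved", "shipped"),
    ("shipped", "closed"),
}

# Semantic task specification: task -> (precondition state, effect state).
TASK_SPEC = {
    "Approve order": ("created", "approved"),
    "Ship order":    ("approved", "shipped"),
    "Close order":   ("shipped", "closed"),
}

def verify(task_sequence, initial_state="created"):
    """Check that a task-centric execution respects the object life cycle."""
    state = initial_state
    for task in task_sequence:
        pre, post = TASK_SPEC[task]
        if state != pre or (pre, post) not in LIFE_CYCLE:
            return False  # task would fire in a state its spec forbids
        state = post
    return True

print(verify(["Approve order", "Ship order", "Close order"]))  # True
print(verify(["Ship order", "Approve order"]))                 # False
```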

    Reusing artifact-centric business process models: a behavioral consistent specialization approach

    Process reuse is one of the important research areas that address efficiency issues in business process modeling. As with software reuse, business processes should be componentizable and specializable in order to enable flexible process expansion and customization. Current activity/control-flow-centric workflow modeling approaches, limited by their procedural nature, have difficulty supporting highly flexible process reuse. In comparison, the emerging artifact-centric workflow modeling approach fits these reuse requirements well. Beyond the classic class-level reuse in existing object-oriented approaches, process reuse faces the challenge of handling synchronization dependencies among artifact lifecycles as parts of a business process. In this article, we propose a theoretical framework for business process specialization that comprises an artifact-centric business process model, a set of methods to design and construct a specialized business process model from a base model, and a set of behavioral consistency criteria to help check the consistency between the two process models.
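
    For flavor, here is a minimal sketch of one possible projection-based consistency check (the criterion and all names are illustrative choices, not the article's actual criteria): after hiding the tasks added by a specialization, every lifecycle trace of the specialized model should also be a trace of the base model.

```python
# Illustrative lifecycles as (source, label, target) transitions.
BASE = {("init", "approve", "done")}
SPECIALIZED = {
    ("init", "check_credit", "checked"),
    ("checked", "approve", "done"),
}
ADDED_TASKS = {"check_credit"}  # tasks introduced by the specialization

def traces(transitions, start, end, limit=6):
    """Enumerate label sequences of bounded length through a lifecycle."""
    result, frontier = set(), [(start, ())]
    while frontier:
        state, trace = frontier.pop()
        if state == end:
            result.add(trace)
        if len(trace) < limit:
            for src, label, dst in transitions:
                if src == state:
                    frontier.append((dst, trace + (label,)))
    return result

def consistent(base, spec, added, start="init", end="done"):
    """Hide the added tasks, then require trace inclusion in the base model."""
    hide = lambda t: tuple(a for a in t if a not in added)
    return {hide(t) for t in traces(spec, start, end)} <= traces(base, start, end)

print(consistent(BASE, SPECIALIZED, ADDED_TASKS))  # True
```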

    Conformance checking in UML artifact-centric business process models

    Business artifacts have emerged as a new paradigm to capture the information required for the complete execution and reasoning of a business process. Likewise, conformance checking is gaining popularity as a crucial technique for evaluating whether recorded executions of a process match its corresponding model. In this paper, conformance checking techniques are incorporated into a general framework to specify business artifacts. By relying on the expressive power of an artifact-centric specification, BAUML, which combines UML state and activity diagrams (among others), the problem of conformance checking can be mapped into the Petri net formalism and its results explained in terms of the original artifact-centric specification. In contrast to most existing approaches, ours incorporates data constraints into the Petri nets, thus achieving more precise conformance results. We have also implemented a plug-in, within the ProM framework, which is able to translate a BAUML specification into a Petri net to perform conformance checking. This shows the feasibility of our approach.
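
    The core intuition behind Petri-net-based conformance checking can be sketched with a toy token replay (illustrative net and event names; the actual approach translates BAUML diagrams and handles data constraints, which this sketch omits):

```python
# A toy Petri net and a token-replay fitness check.
PLACES = {"p_start": 1, "p_mid": 0, "p_end": 0}   # initial marking
# transition label -> (input places, output places)
NET = {
    "register": ({"p_start"}, {"p_mid"}),
    "archive":  ({"p_mid"},   {"p_end"}),
}

def replay(trace, net, marking):
    """Replay a recorded trace; count missing tokens as deviations."""
    marking = dict(marking)
    missing = 0
    for event in trace:
        inputs, outputs = net[event]
        for p in inputs:
            if marking.get(p, 0) == 0:
                missing += 1          # token had to be created artificially
            else:
                marking[p] -= 1
        for p in outputs:
            marking[p] = marking.get(p, 0) + 1
    return missing

print(replay(["register", "archive"], NET, PLACES))  # 0: trace conforms
print(replay(["archive"], NET, PLACES))              # 1: deviation detected
```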

    Service substitution: a behavioral approach based on Petri Nets

    Service-Oriented Computing is an emerging computing paradigm that supports the modular design of (software) systems. Complex systems are designed by composing less complex systems, called services. Such a (complex) system is a distributed application often involving several cooperating enterprises. As a system usually changes over time, individual services will be substituted by other services. Substituting one service by another should not affect the correctness of the overall system. Assuring correctness becomes particularly challenging, as the services rely on each other, and each of the involved enterprises only oversees a part of the overall system. In addition, services communicate asynchronously, which makes the analysis even more difficult. For this reason, formal methods to support service substitution are indispensable. In this thesis, we study service substitution at the level of service models, restricting ourselves to service behavior. As a formalism to model service behavior, we use Petri nets. The first contribution of this thesis is the definition of several substitutability criteria that are suitable in the context of Service-Oriented Computing. Substituting a service S by a service S' should preserve some behavioral properties of the overall system. For each set of behavioral properties and a given service S, there exists a set of behaviorally compatible services for S. A substitutability criterion defines which of these behaviorally compatible services of S have to be preserved by S'. We relate our substitutability criteria to preorders and equivalences known from process theory. The second contribution of this thesis is a procedure, for each substitutability criterion, to decide whether a service S' can substitute a service S. The decision requires the comparison of the (in general infinite) sets of behaviorally compatible services for the services S and S'. Hence, we extend existing work on an abstract representation of all behaviorally compatible services for a given service. For each notion of behavioral compatibility, we present an algorithmic solution to represent all behaviorally compatible services. Based on these representations, we can decide substitutability of a service S by a service S'. The third contribution of this thesis is a method to support the design of a service S' that can substitute a service S according to a substitutability criterion. Our approach is to derive a service S' from the service S by stepwise transformation. To this end, we present several transformation rules. Finally, we formalize and extend the equivalence notion for services specified in the language WS-BPEL. In this way, we demonstrate the applicability of our work.
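
    To give a feel for the process-theoretic preorders mentioned above, the following toy sketch checks a plain simulation relation between two services modeled as deterministic labeled transition systems (the thesis itself works with Petri nets, asynchronous communication, and finer criteria):

```python
# Services as deterministic labeled transition systems:
# state -> {message label: successor state}.
S_OLD = {"q0": {"order": "q1"}, "q1": {"confirm": "q2"}}
S_NEW = {"r0": {"order": "r1"}, "r1": {"confirm": "r2", "cancel": "r3"}}

def simulates(lts_new, start_new, lts_old, start_old):
    """Can the new service mimic every step of the old one (simulation)?"""
    seen, work = set(), [(start_old, start_new)]
    while work:
        s_old, s_new = work.pop()
        if (s_old, s_new) in seen:
            continue
        seen.add((s_old, s_new))
        for label, t_old in lts_old.get(s_old, {}).items():
            succ = lts_new.get(s_new, {})
            if label not in succ:
                return False  # old behavior not preserved by the new service
            work.append((t_old, succ[label]))
    return True

# The new service offers everything the old one did (plus cancellation).
print(simulates(S_NEW, "r0", S_OLD, "q0"))  # True
```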

    A Practical Data-Flow Verification Scheme for Business Processes

    Data in business processes is becoming more and more important. Current standards for process-modeling languages such as BPMN 2.0, which include the data flow, reflect this. Ensuring the correctness of the data flow in processes is challenging. Model checking, i.e., verifying properties of process models, is a well-known technique to this end. An important part of model checking is the construction of the state space of the model. However, state-space explosion typically stands in the way of effective verification. We study how to overcome this problem in our context by means of reduction. More specifically, we propose a reduction on the level of the process model. To our knowledge, this is new for the data-flow analysis of processes. To accomplish this, we specify regions relevant for the verification of properties describing the data flow. Our evaluation shows that our approach works well on real process models.
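
    A minimal sketch of the reduction idea, with invented activities and a simplistic backward closure over read/write sets (not the paper's algorithm): only activities that can affect the data objects mentioned in a property are kept for verification.

```python
# activity -> (reads, writes); all names are made up for illustration.
PROCESS = [
    ("receive_order", (set(),      {"order"})),
    ("log_access",    (set(),      {"audit"})),
    ("check_order",   ({"order"},  {"status"})),
    ("notify_user",   ({"audit"},  set())),
    ("ship",          ({"status"}, set())),
]

def reduce_model(process, property_vars):
    """Keep only activities whose reads/writes can affect the property."""
    relevant = set(property_vars)
    changed = True
    while changed:                       # backward closure over data flow
        changed = False
        for _, (reads, writes) in process:
            if writes & relevant and not reads <= relevant:
                relevant |= reads
                changed = True
    return [a for a, (r, w) in process if (r | w) & relevant]

# A property about the data object 'status' only needs three activities.
print(reduce_model(PROCESS, {"status"}))
# ['receive_order', 'check_order', 'ship'] -- audit logging is pruned
```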

    A Machine-Checked, Type-Safe Model of Java Concurrency : Language, Virtual Machine, Memory Model, and Verified Compiler

    The Java programming language provides safety and security guarantees, such as type safety and its security architecture, that distinguish it from other mainstream programming languages like C and C++. In this work, we develop a machine-checked model of concurrent Java and the Java memory model and investigate the impact of concurrency on these guarantees. From the formal model, we automatically obtain an executable verified compiler to bytecode and a validated virtual machine.

    Nonlinear brain dynamics as macroscopic manifestation of underlying many-body field dynamics

    Neural activity patterns related to behavior occur at many scales in time and space, from the atomic and molecular to the whole brain. Here we explore the feasibility of interpreting neurophysiological data in the context of many-body physics by using tools that physicists have devised to analyze comparable hierarchies in other fields of science. We focus on a mesoscopic level that offers a multi-step pathway between the microscopic functions of neurons and the macroscopic functions of brain systems revealed by hemodynamic imaging. We use electroencephalographic (EEG) records collected from high-density electrode arrays fixed on the epidural surfaces of primary sensory and limbic areas in rabbits and cats trained to discriminate conditioned stimuli (CS) in the various modalities. Analysis of EEG signals at high temporal resolution with the Hilbert transform gives evidence for diverse intermittent spatial patterns of amplitude modulation (AM) and phase modulation (PM) of carrier waves that repeatedly re-synchronize in the beta and gamma ranges at near-zero time lags over long distances. The dominant mechanism for neural interactions, axodendritic synaptic transmission, should impose distance-dependent delays on the EEG oscillations owing to finite propagation velocities. It does not. EEGs instead show evidence for anomalous dispersion: the existence in neural populations of a low-velocity range of information and energy transfers, and a high-velocity range of the spread of phase transitions. This distinction labels the phenomenon but does not explain it. In this report we explore the analysis of these phenomena using concepts of energy dissipation, the maintenance by cortex of multiple ground states corresponding to AM patterns, and the exclusive selection by spontaneous breakdown of symmetry (SBS) of single states in sequences.
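
    The Hilbert-transform step described here is standard signal processing; a minimal sketch with synthetic data and illustrative parameter choices (not the paper's recordings or settings) might look as follows:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                    # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 40 * t) + 0.3 * np.random.randn(t.size)  # toy "EEG"

# Band-pass to the gamma range (here 30-80 Hz) before the Hilbert transform.
b, a = butter(4, [30 / (fs / 2), 80 / (fs / 2)], btype="band")
gamma = filtfilt(b, a, eeg)

analytic = hilbert(gamma)                     # analytic signal
amplitude = np.abs(analytic)                  # instantaneous amplitude (AM)
phase = np.unwrap(np.angle(analytic))         # instantaneous phase (PM)
print(amplitude.mean(), phase[-1])
```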

    Design of experiment in production process innovation

    In his famous book Design and Analysis of Experiments, Montgomery describes Design of Experiment (DOE) as a broad approach to an experiment, starting from the recognition and statement of the problem, going through the experimental design to a possible solution, and ending with conclusions and recommendations. Specifically, DOE is known to be a powerful instrument, based on statistics, for designing and analyzing experiments. The potential of DOE is well known and appreciated among scholars. In some fields its potential is recognized and appreciated by practitioners as well, which is why DOE is used extensively to improve industrial process quality. According to the definition given by Bisgaard, innovation is the complete process of development and eventual commercialization of new products and services, new methods of production or provision, new methods of transportation or service delivery, new business models, new markets, or new forms of organization. While the use of DOE is widespread in industrial experimentation to improve the quality and robustness of processes, the advantage of using DOE for innovation is debated among scholars and practitioners. The idea of investigating the use of DOE for production process innovation arose from this debate. Different perspectives have been investigated. The effectiveness of DOE in supporting and enhancing the innovation of a production process is highlighted by means of a case study in which a strategy was developed to innovate a thermoforming process for the production of a functional packaging. DOE enhanced innovation capability by allowing the reduction of systematic errors and distortions, full exploration of the factorial space, and a reduction in the number of tests. DOE made it possible to identify and overcome the mismatch between control factors in the laboratory and on the production line. Another perspective was the management of the innovation process. The positive impact of adopting DOE on innovation process management is shown by means of a case study. DOE proved helpful by providing proper instruments and by impacting five dimensions typical of the managerial field, namely decision making, integration, communication, time and cost, and knowledge management. Concerning data analysis, some nonparametric methods of analysis have been investigated. A simulation study was used to compare some advanced univariate nonparametric tests in a crossed factorial design. The study revealed that certain methods of analysis perform better than others depending on the data set and on the objective of the analysis. As a consequence, no unique approach emerges for the design phase of the experiment; rather, various aspects have to be taken into account simultaneously. A thoughtful choice of the most suitable test enhances the positive impact that DOE has on the innovation of a production process. Furthermore, a novel multivariate nonparametric approach based on NonParametric Combination (NPC) applied to Synchronized Permutation (SP) tests for two-way crossed factorial designs was developed. It proved to be a good instrument for inferential statistics when the assumptions of MANOVA are violated. A great advantage of adopting these tests is that they perform well with small sample sizes. This reflects the frequent needs of practitioners in the industrial environment, where there are constraints or limited resources for the experimental design.
Furthermore, NPC of SP tests has an important property that can be exploited to increase their power: finite-sample consistency. Indeed, an increase in the rejection rate can be observed under the alternative hypothesis when the number of response variables increases while the number of observed units stays fixed. The properties of this multivariate test make it a useful instrument when DOE is used to innovate a production process and certain specific conditions are met.
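
As a rough illustration of restricted permutation testing in a two-way crossed design (a simplified stand-in for the synchronized permutation tests discussed above, with made-up data): permuting observations only within levels of factor B leaves B's effect intact while testing the main effect of A.

```python
import numpy as np

rng = np.random.default_rng(0)
# data[a, b, r]: replicate r of cell (A level a, B level b); made-up values
# with a visible main effect of A.
data = np.array([[[5.1, 4.9], [6.0, 6.2]],
                 [[7.2, 7.0], [8.1, 7.9]]])

def stat_A(d):
    """Main-effect statistic for A: spread of the A-level means."""
    return np.ptp(d.mean(axis=(1, 2)))

observed = stat_A(data)
exceed, n_perm = 0, 5000
for _ in range(n_perm):
    d = data.copy()
    for b in range(d.shape[1]):        # permute only within each B level
        cell = d[:, b, :].ravel()
        rng.shuffle(cell)
        d[:, b, :] = cell.reshape(d[:, b, :].shape)
    exceed += stat_A(d) >= observed
print("p-value ~", (exceed + 1) / (n_perm + 1))   # small: evidence against H0
```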

    Theory and Practice of Transactional Method Caching

    Nowadays, tiered architectures are widely accepted for constructing large-scale information systems. In this context, application servers often form the bottleneck for a system's efficiency. An application server exposes an object-oriented interface consisting of a set of methods which are accessed by potentially remote clients. The idea of method caching is to store results of read-only method invocations with respect to the application server's interface on the client side. If the client invokes the same method with the same arguments again, the corresponding result can be taken from the cache without contacting the server. It has been shown that this approach can considerably improve a real-world system's efficiency. This paper extends the concept of method caching by addressing the case where clients wrap related method invocations in ACID transactions. Demarcating sequences of method calls in this way is supported by many important application server standards. In this context, the paper presents an architecture, a theory, and an efficient protocol for maintaining full transactional consistency, and in particular serializability, when using a method cache on the client side. In order to create a protocol for scheduling cached method results, the paper extends a classical transaction formalism. Based on this extension, a recovery protocol and an optimistic serializability protocol are derived. The latter differs from traditional transactional cache protocols in many essential ways. An efficiency experiment validates the approach: using the cache, a system's performance and scalability are considerably improved.
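
    The client-side caching idea can be sketched as follows (illustrative API and invalidation rule; the paper's protocol additionally guarantees serializability across transactions, which this toy omits):

```python
# Hypothetical API; results of read-only calls are cached per (method, args).

class MethodCache:
    def __init__(self, server):
        self.server = server
        self.cache = {}                       # (method, args) -> result

    def call(self, method, *args):
        key = (method, args)
        if key not in self.cache:             # miss: contact the server
            self.cache[key] = getattr(self.server, method)(*args)
        return self.cache[key]                # hit: no server round trip

    def on_commit(self, written_methods):
        """Drop cached results a committed writer may have invalidated."""
        self.cache = {k: v for k, v in self.cache.items()
                      if k[0] not in written_methods}

class ToyServer:
    def get_balance(self):
        return 100                            # stands in for a remote call

cache = MethodCache(ToyServer())
print(cache.call("get_balance"))   # fetched from the server
print(cache.call("get_balance"))   # served from the client-side cache
cache.on_commit({"get_balance"})   # a writing transaction committed
```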