
    The geometry of the Barbour-Bertotti theories I. The reduction process

    The dynamics of $N \geq 3$ interacting particles is investigated in the non-relativistic context of the Barbour-Bertotti theories. The reduction process on this constrained system yields a Lagrangian in the form of a Riemannian line element. The metric involved, degenerate in the flat configuration space, is the first fundamental form of the space of orbits of translations and rotations (the Leibniz group). The Riemann tensor and the scalar curvature are computed by a generalized Gauss formula in terms of the vorticity tensors of the generators of the rotations. The curvature scalar is further given in terms of the principal moments of inertia of the system. Line configurations are singular for $N \neq 3$. A comparison with similar methods used in molecular dynamics is drawn. Comment: 15 pages, to appear in Classical and Quantum Gravity.
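
    As a schematic illustration only (the notation and normalisation below are not taken from the paper), a Lagrangian of Riemannian line-element type on the orbit space has the form:

        % Hedged sketch: reduced Lagrangian of Riemannian line-element type on the
        % space of orbits of the Leibniz group, with g_{AB} the first fundamental
        % form mentioned in the abstract and q^A coordinates on the orbit space.
        \[
          L\,\mathrm{d}\lambda \;=\; \sqrt{\,g_{AB}(q)\,\mathrm{d}q^{A}\,\mathrm{d}q^{B}\,}.
        \]
        % The generalized Gauss formula then expresses the curvature of this
        % (possibly degenerate) quotient metric through terms built from the
        % vorticity tensors of the rotation generators; the exact coefficients
        % are derived in the paper.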

    An evolutionary technique to approximate multiple optimal alignments

    The alignment of observed and modeled behavior is an essential aid for organizations, since it opens the door for root-cause analysis and enhancement of processes. The state-of-the-art technique for computing alignments has exponential time and space complexity, hindering its applicability for medium and large instances. Moreover, the fact that there may be multiple optimal alignments is perceived as a negative situation, while in reality it may provide a more comprehensive picture of the model’s explanation of observed behavior, from which other techniques may benefit. This paper presents a novel evolutionary technique for approximating multiple optimal alignments. Remarkably, the memory footprint of the proposed technique is bounded, representing an unprecedented guarantee with respect to the state-of-the-art methods for the same task. The technique is implemented in a tool, and experiments on several benchmarks are provided.
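
    A minimal sketch of the evolutionary idea, under strong simplifying assumptions: the process model is reduced to a small set of allowed runs, fitness is plain edit distance to the observed trace, and the bounded memory footprint corresponds to a fixed population size. All names, parameters, and the example data are illustrative, not the paper's.

        import random

        def edit_distance(a, b):
            # classic one-row Levenshtein dynamic programme
            dp = list(range(len(b) + 1))
            for i, x in enumerate(a, 1):
                prev, dp[0] = dp[0], i
                for j, y in enumerate(b, 1):
                    prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
            return dp[-1]

        def evolve_alignments(trace, model_runs, pop_size=20, generations=50, seed=0):
            # Bounded-memory evolutionary search: the population never grows
            # beyond pop_size candidate model runs.
            rng = random.Random(seed)
            alphabet = sorted({a for run in model_runs for a in run})
            population = [list(rng.choice(model_runs)) for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=lambda run: edit_distance(trace, run))
                survivors = population[: pop_size // 2]
                children = []
                while len(survivors) + len(children) < pop_size:
                    p1, p2 = rng.sample(survivors, 2)
                    cut = rng.randrange(1, max(len(p1), 2))      # one-point crossover
                    child = p1[:cut] + p2[cut:]
                    if child and rng.random() < 0.3:             # point mutation
                        child[rng.randrange(len(child))] = rng.choice(alphabet)
                    children.append(child)
                population = survivors + children
            population.sort(key=lambda run: edit_distance(trace, run))
            best = edit_distance(trace, population[0])
            # return every distinct best-scoring run: an approximation of the
            # set of multiple (near-)optimal alignments
            return best, {tuple(run) for run in population if edit_distance(trace, run) == best}

        model_runs = [list("abcd"), list("abed"), list("acbd")]
        print(evolve_alignments(list("abxd"), model_runs))

    The fixed population size is what gives the bounded memory use in this toy setting; the paper's technique provides that guarantee against an actual process model rather than an enumerated set of runs.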

    Approximate computation of alignments of business processes through relaxation labelling

    A fundamental problem in conformance checking is aligning event data with process models. Unfortunately, existing techniques for this task are either complex or only applicable to restricted classes of models. In practice this means that, for large inputs, current techniques often fail to produce a result. In this paper we propose a method to approximate alignments for unconstrained process models, which relies on the use of relaxation labelling techniques on top of a partial-order representation of the process model. The implementation of the proposed technique achieves a speed-up of several orders of magnitude with respect to the approaches in the literature (either optimal or approximate), often with a reasonable trade-off on the cost of the obtained alignment.
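
    A minimal sketch of a generic relaxation-labelling update (Rosenfeld-style), shown on a toy assignment of observed events to candidate model tasks. The compatibility scores here are random placeholders, whereas the paper derives them from the partial-order representation of the model; sizes and names are invented for the example.

        import numpy as np

        def relaxation_labelling(p, compat, iters=50):
            # p:      (n_events, n_labels) initial label probabilities per event
            # compat: (n_events, n_labels, n_events, n_labels) pairwise compatibilities in [-1, 1]
            n = p.shape[0]
            for _ in range(iters):
                # support for label l at event i: compatibility-weighted belief of the other events
                q = np.einsum('iljk,jk->il', compat, p) / max(n - 1, 1)
                p = np.maximum(p * (1.0 + q), 1e-12)   # reinforce compatible labels
                p = p / p.sum(axis=1, keepdims=True)   # renormalise per event
            return p

        rng = np.random.default_rng(0)
        n_events, n_tasks = 4, 3
        p0 = np.full((n_events, n_tasks), 1.0 / n_tasks)
        compat = rng.uniform(-1.0, 1.0, size=(n_events, n_tasks, n_events, n_tasks))
        print(relaxation_labelling(p0, compat).round(2))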

    Encoding conformance checking artefacts in SAT

    Conformance checking strongly relies on the computation of artefacts, which enable reasoning on the relation between observed and modeled behavior. This paper shows how important conformance artefacts like alignments, anti-alignments or even multi-alignments, defined over the edit distance, can be computed by encoding the problem as a SAT instance. From a general perspective, the work advocates for a unified family of techniques that can compute conformance artefacts in the same way. The prototype implementation of the techniques presented in this paper shows capabilities for dealing with some of the current benchmarks, and potential for the near future once optimizations similar to the ones in the literature are incorporated.
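
    A tiny illustration of the encoding idea, not the paper's actual encoding: boolean variables x(i, m) state that position i of the alignment performs move m, with exactly-one constraints per position; the edit-distance clauses of the real encoding are omitted. It assumes the python-sat package (pip install python-sat), and the move names and sizes are placeholders.

        from itertools import combinations
        from pysat.solvers import Glucose3

        moves = ['sync', 'log-only', 'model-only']
        positions = 4

        def var(i, m):
            # 1-based DIMACS-style variable id for "position i performs move m"
            return i * len(moves) + m + 1

        solver = Glucose3()
        for i in range(positions):
            # at least one move per position
            solver.add_clause([var(i, m) for m in range(len(moves))])
            # at most one move per position (pairwise exclusion)
            for m1, m2 in combinations(range(len(moves)), 2):
                solver.add_clause([-var(i, m1), -var(i, m2)])

        if solver.solve():
            model = solver.get_model()
            chosen = [[m for m in range(len(moves)) if var(i, m) in model] for i in range(positions)]
            print([moves[c[0]] for c in chosen])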

    Discovering Automatable Routines From User Interaction Logs

    The complexity and rigidity of legacy applications in modern organizations engender situations where workers need to perform repetitive routines to transfer data from one application to another via their user interfaces, e.g. moving data from a spreadsheet to a Web application or vice versa. Discovering and automating such routines can help to eliminate tedious work, reduce cycle times, and improve data quality. Advances in Robotic Process Automation (RPA) technology make it possible to conveniently automate such routines, but not to discover them in the first place. This paper presents a method to analyse user interactions in order to discover routines that are fully deterministic and thus amenable to automation. The proposed method identifies sequences of actions that are always triggered when a given activation condition holds and such that the parameters of each action can be deterministically derived from data produced by previous actions. To this end, the method combines a technique for compressing a set of sequences into an acyclic automaton with techniques for rule mining and for discovering data transformations. An initial evaluation shows that the method can discover automatable routines from user interaction logs with acceptable execution times, particularly when there are one-to-one correspondences between the parameters of an action and those of previous actions, as is the case for copy-pasting routines.
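
    A simplified illustration of the determinism idea only: from a set of recorded user-interaction sequences, find actions whose successor never varies. The paper's method is far richer (acyclic-automaton compression, rule mining, data-transformation discovery); the action names below are invented for the example.

        from collections import defaultdict

        def deterministic_successors(sessions):
            successors = defaultdict(set)
            for session in sessions:
                for current, nxt in zip(session, session[1:]):
                    successors[current].add(nxt)
            # an action is automatable (in this toy sense) if its successor never varies
            return {a: next(iter(s)) for a, s in successors.items() if len(s) == 1}

        sessions = [
            ["open_sheet", "copy_cell", "switch_app", "paste_field", "save_form"],
            ["open_sheet", "copy_cell", "switch_app", "paste_field", "close_app"],
        ]
        print(deterministic_successors(sessions))
        # {'open_sheet': 'copy_cell', 'copy_cell': 'switch_app', 'switch_app': 'paste_field'}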

    Efficiently computing alignments: algorithm and datastructures

    Conformance checking is considered to be any analysis in which observed behaviour needs to be related to already modelled behaviour. Fundamental to conformance checking are alignments, which provide a precise relation between a sequence of activities observed in an event log and an execution sequence of a model. However, computing alignments is a complex task, both in time and memory, especially when models contain large amounts of parallelism. In this tool paper we present the actual algorithm and memory structures used for the experiments of [15]. We discuss the time complexity of the algorithm, as well as the space and time complexity of the main data structures. We further present the integration in ProM and a basic code snippet in Java for computing alignments from within any tool.
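
    The paper's own snippet is in Java and targets ProM; the following is instead a hedged, stand-alone sketch of the underlying idea, treating alignment computation as a cheapest-path search over positions in the trace and in one model run (synchronous moves cost 0, log-only and model-only moves cost 1). The real algorithm searches the synchronous product of log and model and uses far more careful data structures; names, costs, and the example are illustrative.

        import heapq

        def align(trace, run):
            start, goal = (0, 0), (len(trace), len(run))
            frontier = [(0, start, [])]          # priority queue of (cost, state, moves)
            seen = set()
            while frontier:
                cost, (i, j), moves = heapq.heappop(frontier)
                if (i, j) == goal:
                    return cost, moves
                if (i, j) in seen:
                    continue
                seen.add((i, j))
                if i < len(trace) and j < len(run) and trace[i] == run[j]:
                    heapq.heappush(frontier, (cost, (i + 1, j + 1), moves + [('sync', trace[i])]))
                if i < len(trace):
                    heapq.heappush(frontier, (cost + 1, (i + 1, j), moves + [('log', trace[i])]))
                if j < len(run):
                    heapq.heappush(frontier, (cost + 1, (i, j + 1), moves + [('model', run[j])]))
            return None

        print(align(list("acd"), list("abcd")))   # cost 1, one model-only move on 'b'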

    Efficiently computing alignments: using the extended marking equation

    Conformance checking is considered to be any analysis in which observed behaviour needs to be related to already modelled behaviour. Fundamental to conformance checking are alignments, which provide a precise relation between a sequence of activities observed in an event log and an execution sequence of a model. However, computing alignments is a complex task, both in time and memory, especially when models contain large amounts of parallelism. When computing alignments for Petri nets, (Integer) Linear Programming problems based on the marking equation are typically used to guide the search. Solving such problems is the main driver of the time complexity of alignment computation. In this paper, we adapt existing work in such a way that (a) the extended marking equation is used rather than the marking equation and (b) the number of linear problems solved is kept to a minimum. To do so, we exploit fundamental properties of Petri nets, and we show that we are able to compute optimal alignments for models for which this was previously infeasible. Furthermore, using a large collection of benchmark models, we empirically show that we improve on the state of the art in terms of time and memory complexity.
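
    For reference, the standard marking equation that such heuristics build on can be written as below; the extended variant used in the paper splits the run into segments to obtain a tighter bound, and its exact form is given there. The notation is the usual Petri-net one, not necessarily the paper's.

        % Any firing sequence leading from marking m_i to marking m_f must satisfy
        % the marking equation, with C the incidence matrix of the net and
        % \vec{x} the Parikh vector (firing counts) of the sequence:
        \[
          m_f = m_i + C\,\vec{x}, \qquad \vec{x} \ge 0 .
        \]
        % A lower bound on the remaining alignment cost, usable to guide the search,
        % is then obtained by solving the (integer) linear programme
        \[
          h(m_i) \;=\; \min_{\vec{x} \ge 0} \; c^{\top}\vec{x}
          \quad\text{s.t.}\quad m_f = m_i + C\,\vec{x},
        \]
        % where c assigns a cost to each transition (move) of the synchronous product.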