
    A Two Phase Verification Algorithm for Cyclic Workflow Graphs

    The widespread automation of e-business processes has made workflow analysis and design an integral part of information management. Graph-based workflow models enable complex processes to be depicted in a fairly compact form. This freedom, on the other hand, leaves correctness to the judgment of the modeler and can produce models that cannot be executed or that behave in ways the modeler did not intend. Furthermore, the cycles that workflow models need for rework and information feedback increase the complexity of workflow analysis. This paper presents a novel method for partitioning a cyclic workflow process, represented as a directed graph, into a set of acyclic subgraphs. A cyclic workflow model can then be analyzed as several smaller subflows, all of which are acyclic. As a convincing example, we present a two-phase verification of structural conflicts in workflow models: conflicts incurred by an inappropriate composition of the partitioned flows, and conflicts within each acyclic subgraph, which is generally much easier to comprehend and verify individually than the whole workflow model.
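
    The underlying idea of breaking a cyclic workflow graph into acyclic pieces can be illustrated with a generic sketch (assumptions: a dictionary-based adjacency list and a DFS back-edge classification; this is not the paper's specific two-phase partitioning algorithm). Removing the edges that close cycles leaves an acyclic core, and the removed edges mark where the partitioned subflows have to be recomposed.

        # Generic sketch: split a cyclic directed graph into an acyclic core plus
        # the set of back edges that close cycles (not the paper's algorithm).

        def find_back_edges(graph):
            """Return the edges (u, v) that close a cycle in a DFS of `graph`.

            `graph` maps each node to a list of its successors.
            """
            WHITE, GRAY, BLACK = 0, 1, 2
            color = {node: WHITE for node in graph}
            back_edges = set()

            def dfs(u):
                color[u] = GRAY
                for v in graph.get(u, []):
                    if color[v] == GRAY:        # edge back to a DFS ancestor: cycle
                        back_edges.add((u, v))
                    elif color[v] == WHITE:
                        dfs(v)
                color[u] = BLACK

            for node in graph:
                if color[node] == WHITE:
                    dfs(node)
            return back_edges

        def acyclic_core(graph):
            """Copy of `graph` with all back edges removed, hence acyclic."""
            back = find_back_edges(graph)
            return {u: [v for v in succs if (u, v) not in back]
                    for u, succs in graph.items()}

        if __name__ == "__main__":
            # Tiny workflow with a rework loop C -> A.
            workflow = {"A": ["B"], "B": ["C"], "C": ["A", "D"], "D": []}
            print(find_back_edges(workflow))    # {('C', 'A')}
            print(acyclic_core(workflow))       # loop edge removed, rest unchanged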

    Doctor of Philosophy

    Serving as a record of what happened during a scientific process, often a computational one, provenance has become an important part of computing. The importance of archiving not only data and results but also the lineage of these entities has led to a variety of systems that capture provenance, as well as models and schemas for this information. Despite significant work focused on obtaining and modeling provenance, there has been little work on managing and using this information. Using the provenance from past work, it is possible to mine common computational structure or determine differences between executions. Such information can be used to suggest possible completions for partial workflows, summarize a set of approaches, or extend past work in new directions. These applications require infrastructure to support efficient queries and accessible reuse. In order to support knowledge discovery and reuse from provenance information, managing these data well is important. One component of provenance is the specification of the computations; workflows provide structured abstractions of code and are commonly used for complex tasks. Using change-based provenance, it is possible to store large numbers of similar workflows compactly. This storage also allows efficient computation of differences between specifications. However, querying for specific structure across a large collection of workflows is difficult because comparing graphs depends on computing subgraph isomorphism, which is NP-complete. Graph indexing methods identify features that help distinguish the graphs of a collection in order to filter results for a subgraph containment query and reduce the number of subgraph isomorphism computations. For provenance, this work extends these methods to more exploratory queries and to collections with significant overlap. However, comparing workflow or provenance graphs may not require exact equality; a match between two graphs may allow paired nodes to be similar yet not equivalent. This work presents techniques to better correlate graphs in order to help summarize collections. Using this infrastructure, provenance can be reused so that users can learn from their own and others' history. Just as textual search has been augmented with suggested completions based on past or common queries, provenance can be used to suggest how computations can be completed or which steps might connect to a given subworkflow. In addition, provenance can help further science by accelerating publication and reuse. By incorporating provenance into publications, authors can more easily integrate their results, and readers can more easily verify and repeat results. However, reusing past computations requires maintaining stronger associations with any input data and underlying code, as well as providing paths for migrating old work to new hardware or algorithms. This work presents a framework for maintaining data and code as well as supporting upgrades for workflow computations.
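
    As a rough illustration of the change-based storage idea described above, the sketch below records a workflow version as a list of edit actions applied to its parent, so that similar versions share most of their representation and differences fall out directly. The action vocabulary and data layout are assumptions for illustration, not the dissertation's actual schema.

        # Sketch of change-based storage: a workflow version is stored as the list
        # of edit actions that transform its parent (illustrative schema only).

        from dataclasses import dataclass, field

        @dataclass
        class Workflow:
            nodes: set = field(default_factory=set)
            edges: set = field(default_factory=set)    # set of (src, dst) pairs

        def apply_actions(base, actions):
            """Replay a list of (op, payload) actions on a copy of `base`."""
            wf = Workflow(set(base.nodes), set(base.edges))
            for op, payload in actions:
                if op == "add_node":
                    wf.nodes.add(payload)
                elif op == "del_node":
                    wf.nodes.discard(payload)
                    wf.edges = {e for e in wf.edges if payload not in e}
                elif op == "add_edge":
                    wf.edges.add(payload)
                elif op == "del_edge":
                    wf.edges.discard(payload)
            return wf

        def diff(a, b):
            """Structural difference between two workflow versions."""
            return {"nodes_added": b.nodes - a.nodes,
                    "nodes_removed": a.nodes - b.nodes,
                    "edges_added": b.edges - a.edges,
                    "edges_removed": a.edges - b.edges}

        if __name__ == "__main__":
            v1 = Workflow({"load", "plot"}, {("load", "plot")})
            # Version 2 is stored only as the actions that change version 1.
            v2 = apply_actions(v1, [("add_node", "filter"),
                                    ("del_edge", ("load", "plot")),
                                    ("add_edge", ("load", "filter")),
                                    ("add_edge", ("filter", "plot"))])
            print(diff(v1, v2))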

    Modelling and Analysis Using GROOVE

    In this paper we present case studies that describe how the graph transformation tool GROOVE has been used to model problems from a wide variety of domains. These case studies highlight the wide applicability of GROOVE in particular, and of graph transformation in general. They also give concrete templates for using GROOVE in practice. Furthermore, we use the case studies to analyse the main strengths and weaknesses of GROOVE.

    Diagnosing correctness of semantic workflow models

    To model operational business processes accurately, workflow models need to capture both the control-flow and the dataflow perspectives. Checking the correctness of such workflow models and giving precise feedback in case of errors is challenging due to the interplay between these different perspectives. In this paper, we propose a fully automated approach for diagnosing the correctness of semantic workflow models in which the semantics of activities are specified with pre- and postconditions. The control-flow and dataflow perspectives of a semantic workflow are modeled in an integrated way using Artificial Intelligence techniques (Integer Programming and Constraint Programming). The approach has been implemented in the DiagFlow tool, which reads and diagnoses annotated XPDL models, using a state-of-the-art constraint solver as its back end. Using this novel approach, complex semantic workflow models can be verified and diagnosed in an efficient way.
    Ministerio de Educación y Ciencia TIN2009-1371
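
    As a drastically simplified illustration of the dataflow side of the problem (not DiagFlow's integer/constraint-programming encoding), the sketch below checks a sequential execution in which each activity declares the data items it reads (precondition) and writes (postcondition), and reports the first activity whose precondition cannot be satisfied; the activity names and tuple layout are assumptions.

        # Simplified sketch of a dataflow check over a sequential workflow where
        # each activity declares a precondition (data read) and a postcondition
        # (data written).  Not DiagFlow's constraint-programming encoding.

        def diagnose(activities, initially_available=frozenset()):
            """Return (ok, message) for a sequential execution of `activities`.

            `activities` is a list of (name, reads, writes) triples whose last two
            components are sets of data-item names.
            """
            available = set(initially_available)
            for name, reads, writes in activities:
                missing = reads - available
                if missing:
                    return False, f"activity '{name}' is missing data {sorted(missing)}"
                available |= writes
            return True, "no dataflow errors detected"

        if __name__ == "__main__":
            process = [
                ("receive_order", set(),       {"order"}),
                ("check_stock",   {"order"},   {"stock_report"}),
                ("ship_goods",    {"invoice"}, {"shipment"}),   # 'invoice' is never produced
            ]
            print(diagnose(process))
            # (False, "activity 'ship_goods' is missing data ['invoice']")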

    GRAPES-DD: exploiting decision diagrams for index-driven search in biological graph databases

    BACKGROUND: Graphs are mathematical structures widely used to express relationships among elements when representing biomedical and biological information, and several analyses are performed on top of these representations. A common task is the search for a given substructure within one graph, called the target. This problem is referred to as one-to-one subgraph search, and it is known to be NP-complete. Heuristics and indexing techniques can be applied to facilitate the search. Indexing techniques are also exploited when searching in a collection of target graphs, referred to as the one-to-many subgraph problem. Filter-and-verification methods that use indexing approaches quickly prune target graphs, or parts of them, that do not contain the query; the expensive verification phase is then performed only on the subset of promising targets. Indexing strategies extract graph features at a granularity sufficient for a powerful filtering step, and these features are stored in data structures that allow efficient access. Index size, querying time and filtering power are key points for the development of efficient subgraph searching solutions.
    RESULTS: An existing approach, GRAPES, has been shown to perform well in terms of speed-up for both the one-to-one and one-to-many cases. However, it suffers from the size of the index it builds. For this reason, we propose GRAPES-DD, a modified version of GRAPES in which the indexing structure has been replaced with a Decision Diagram. Decision Diagrams are a broad class of data structures widely used to encode and manipulate functions efficiently. Experiments on biomedical structures and synthetic graphs confirm our expectation, showing that GRAPES-DD substantially reduces memory utilization compared to GRAPES without worsening the searching time.
    CONCLUSION: The use of Decision Diagrams for searching in biochemical and biological graphs is completely new and potentially promising, thanks to their ability to encode sets compactly by exploiting their structure and regularity, and to manipulate entire sets of elements at once instead of exploring each single element explicitly. Search strategies based on Decision Diagrams make indexing for biochemical graphs, and beyond, more affordable, potentially allowing us to deal with huge and ever-growing collections of biochemical and biological structures.
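
    The filtering step of a filter-and-verification approach can be sketched as follows (assumptions: a dictionary-based graph representation and label-path features of bounded length; GRAPES and GRAPES-DD use their own feature extraction, and GRAPES-DD stores the index in a decision diagram rather than the plain counters used here). A target is pruned whenever the query contains a feature the target lacks, so the expensive subgraph-isomorphism verification runs only on the survivors.

        # Sketch of the filtering step in filter-and-verification subgraph search:
        # label-path features are counted in every target and in the query, and a
        # target is pruned when it cannot supply all query features.

        from collections import Counter

        def label_paths(graph, labels, max_len=3):
            """Count label sequences of length <= max_len along directed paths.

            `graph` maps node -> list of successors, `labels` maps node -> label.
            """
            feats = Counter()

            def walk(node, path):
                path = path + (labels[node],)
                feats[path] += 1
                if len(path) < max_len:
                    for nxt in graph.get(node, []):
                        walk(nxt, path)

            for node in graph:
                walk(node, ())
            return feats

        def survives_filter(query_feats, target_feats):
            """A target may contain the query only if it has at least as many
            occurrences of every query feature."""
            return all(target_feats[f] >= c for f, c in query_feats.items())

        if __name__ == "__main__":
            target = {1: [2, 3], 2: [3], 3: []}
            t_labels = {1: "C", 2: "N", 3: "O"}
            query = {1: [2], 2: []}
            q_labels = {1: "C", 2: "S"}     # label 'S' never occurs in the target

            pruned = not survives_filter(label_paths(query, q_labels),
                                         label_paths(target, t_labels))
            print(pruned)   # True: no expensive subgraph-isomorphism test needed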

    Vérification efficace de systèmes à compteurs à l'aide de relaxations

    Counter systems are popular models used to reason about systems in various fields, such as the analysis of concurrent or distributed programs and the discovery and verification of business processes. We study well-established problems on various classes of counter systems. This thesis focuses on three particular systems: Petri nets, a type of model for discrete systems with concurrent and sequential events; workflow nets, a subclass of Petri nets suited to modelling and reasoning about business processes; and continuous one-counter automata, a novel model that combines continuous semantics with one-counter automata. For Petri nets, we focus on reachability and coverability properties. We use directed search algorithms, with relaxations of Petri nets as heuristics, to obtain novel semi-decision algorithms for reachability and coverability, and we positively evaluate a prototype implementation. For workflow nets, we focus on the problem of soundness, a well-established correctness notion for such nets. We precisely characterize the previously largely open complexity of three variants of soundness. Based on these insights, we develop techniques to verify soundness in practice, based on reachability relaxations of Petri nets. Lastly, we introduce the novel model of continuous one-counter automata. This model is a natural variant of one-counter automata that allows reasoning in a hybrid manner combining continuous and discrete elements. We characterize the exact complexity of the reachability problem in several variants of the model.
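
    As a minimal illustration of forward exploration for Petri-net coverability (a plain breadth-first search with an explicit state budget, not the relaxation-guided directed search developed in the thesis), the sketch below fires transitions from the initial marking and succeeds as soon as a reached marking covers the target componentwise; the marking and transition encodings are assumptions.

        # Minimal sketch of forward exploration for Petri-net coverability: plain
        # breadth-first search with a state budget (the thesis instead guides the
        # search with relaxation-based heuristics).

        from collections import deque

        def fire(marking, pre, post):
            """Fire a transition (pre, post) if enabled, otherwise return None."""
            if all(marking.get(p, 0) >= n for p, n in pre.items()):
                new = dict(marking)
                for p, n in pre.items():
                    new[p] = new.get(p, 0) - n
                for p, n in post.items():
                    new[p] = new.get(p, 0) + n
                return new
            return None

        def coverable(initial, transitions, target, max_states=10_000):
            """Search for a reachable marking that covers `target` componentwise."""
            seen = set()
            queue = deque([initial])
            while queue and len(seen) < max_states:
                m = queue.popleft()
                key = tuple(sorted(m.items()))
                if key in seen:
                    continue
                seen.add(key)
                if all(m.get(p, 0) >= n for p, n in target.items()):
                    return True
                for pre, post in transitions:
                    nxt = fire(m, pre, post)
                    if nxt is not None:
                        queue.append(nxt)
            return False    # not found within the explored state budget

        if __name__ == "__main__":
            # t1 moves a token from p1 to p2, t2 moves it from p2 to p3.
            net = [({"p1": 1}, {"p2": 1}), ({"p2": 1}, {"p3": 1})]
            print(coverable({"p1": 1}, net, {"p3": 1}))    # True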