
    More on Graph Rewriting With Contextual Refinement

    In GRGEN, a graph rewrite generator tool, rules have the outstanding feature that variables in their pattern and replacement graphs may be refined with meta-rules based on contextual hyperedge replacement grammars. A refined rule may delete, copy, and transform subgraphs of unbounded size and of variable shape. In this paper, we show that rules with contextual refinement can be transformed to standard graph rewrite rules that perform the refinement incrementally, and are applied according to a strategy called residual rewriting. With this transformation, it is possible to state precisely whether refinements can be determined in finitely many steps or not, and whether refinements are unique for every form of refined pattern or not.
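    For contrast with the refinement machinery described above, the following toy sketch (in Python) shows what a standard graph rewrite step looks like when the pattern has fixed shape: a graph is a set of labelled edges, a rule binds its pattern variables to concrete nodes, and the matched edges are swapped for the instantiated replacement. All names here (match, apply_rule, the labels f, g, h) are invented for illustration and are unrelated to GrGen's actual interface.

        from itertools import permutations

        # A graph is a set of labelled edges (src, label, dst); nodes are strings.
        # Rule patterns and replacements use variables such as "?x" that are bound
        # to concrete nodes during matching.

        def match(pattern, graph):
            """Return the first binding under which every pattern edge occurs in
            the graph, or None if no such binding exists (brute-force search)."""
            variables = sorted({v for e in pattern for v in (e[0], e[2]) if v.startswith("?")})
            nodes = sorted({n for e in graph for n in (e[0], e[2])})
            for assignment in permutations(nodes, len(variables)):
                binding = dict(zip(variables, assignment))
                inst = {(binding.get(s, s), l, binding.get(t, t)) for (s, l, t) in pattern}
                if inst <= graph:
                    return binding
            return None

        def apply_rule(pattern, replacement, graph):
            """Delete the matched pattern edges and add the instantiated replacement."""
            binding = match(pattern, graph)
            if binding is None:
                return graph  # rule not applicable
            subst = lambda n: binding.get(n, n)
            matched = {(subst(s), l, subst(t)) for (s, l, t) in pattern}
            added = {(subst(s), l, subst(t)) for (s, l, t) in replacement}
            return (graph - matched) | added

        # Example: fold a two-step path ?x -f-> ?y -g-> ?z into a single edge ?x -h-> ?z.
        rule_lhs = [("?x", "f", "?y"), ("?y", "g", "?z")]
        rule_rhs = [("?x", "h", "?z")]
        g = {("n1", "f", "n2"), ("n2", "g", "n3"), ("n3", "f", "n4")}
        print(apply_rule(rule_lhs, rule_rhs, g))
        # e.g. {('n1', 'h', 'n3'), ('n3', 'f', 'n4')}  (set order may vary)

    A rule with contextual refinement goes beyond this fixed-shape setting; the abstract's point is that it can nevertheless be compiled into standard rules of the above kind that carry out the refinement step by step.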

    Graph Rewriting with Contextual Refinement

    In the standard theory of graph transformation, a rule modifies only subgraphs of constant size and fixed shape. The rules supported by the graph-rewriting tool GrGen are far more expressive: they may modify subgraphs of unbounded size and variable shape. Therefore properties like termination and confluence cannot be analyzed as for the standard case. In order to lift such results, we formalize the outstanding feature of GrGen rules by using plain rules on two levels: schemata are rules with variables; they are refined with meta-rules, which are based on contextual hyperedge replacement, before they are used for rewriting. We show that every rule based on single pushouts, on neighborhood-controlled embedding, or on variable substitution can be modeled by a schema with appropriate meta-rules. It turns out that the question whether schemata may have overlapping refinements is not decidable.
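    To make the two-level idea concrete, here is a heavily simplified sketch of a schema being refined before use. The placeholder CHAIN, the label f, and refine_chain are all invented for illustration; the actual refinement in GrGen is driven by contextual hyperedge replacement grammars, not by this toy expansion.

        # A schema pattern may contain the placeholder ("CHAIN", "?x", "?z"), standing
        # for an f-labelled path of unknown length from ?x to ?z.  A meta-rule instance
        # refines it into a concrete chain of k edges, yielding an ordinary fixed-shape
        # pattern that can then be matched and rewritten as usual (cf. the sketch above).

        def refine_chain(schema_pattern, k):
            refined = []
            for item in schema_pattern:
                if item[0] == "CHAIN":
                    _, src, dst = item
                    hops = [src] + [f"?c{i}" for i in range(1, k)] + [dst]
                    refined += [(hops[i], "f", hops[i + 1]) for i in range(k)]
                else:
                    refined.append(item)
            return refined

        # Schema: collapse an f-path of *some* length from ?x to ?z into one h-edge.
        schema_lhs = [("CHAIN", "?x", "?z")]
        schema_rhs = [("?x", "h", "?z")]

        print(refine_chain(schema_lhs, 3))
        # [('?x', 'f', '?c1'), ('?c1', 'f', '?c2'), ('?c2', 'f', '?z')]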

    A knowledge representation meta-model for rule-based modelling of signalling networks

    The study of cellular signalling pathways and their deregulation in disease states, such as cancer, is a large and extremely complex task. Indeed, these systems involve many parts and processes but are studied piecewise and their literatures and data are consequently fragmented, distributed and sometimes--at least apparently--inconsistent. This makes it extremely difficult to build significant explanatory models with the result that effects in these systems that are brought about by many interacting factors are poorly understood. The rule-based approach to modelling has shown some promise for the representation of the highly combinatorial systems typically found in signalling where many of the proteins are composed of multiple binding domains, capable of simultaneous interactions, and/or peptide motifs controlled by post-translational modifications. However, the rule-based approach requires highly detailed information about the precise conditions for each and every interaction which is rarely available from any one single source. Rather, these conditions must be painstakingly inferred and curated, by hand, from information contained in many papers--each of which contains only part of the story. In this paper, we introduce a graph-based meta-model, attuned to the representation of cellular signalling networks, which aims to ease this massive cognitive burden on the rule-based curation process. This meta-model is a generalization of that used by Kappa and BNGL which allows for the flexible representation of knowledge at various levels of granularity. In particular, it allows us to deal with information which has either too little, or too much, detail with respect to the strict rule-based meta-model. Our approach provides a basis for the gradual aggregation of fragmented biological knowledge extracted from the literature into an instance of the meta-model from which we can define an automated translation into executable Kappa programs.
    Comment: In Proceedings DCM 2015, arXiv:1603.0053
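    To give a flavour of the rule-based setting described above, the sketch below represents agents with named sites, internal states, and bonds, and a rule whose application depends on a precise side condition. It is a toy in the spirit of Kappa/BNGL; the agent, site, and state names are made up, and nothing here reflects the paper's meta-model or actual Kappa syntax.

        from dataclasses import dataclass, field
        from typing import Optional

        @dataclass
        class Site:
            state: str = "u"                 # e.g. "u" (unmodified) or "p" (phosphorylated)
            bond: Optional[tuple] = None     # (partner agent id, partner site name) or None

        @dataclass
        class Agent:
            name: str
            sites: dict = field(default_factory=dict)

        def bind_if_phosphorylated(agents, a_id, b_id):
            """Rule sketch, roughly A(x~p, y), B(z) -> A(x~p, y!1), B(z!1):
            bind A.y to B.z only when A.x carries state "p" and both sites are
            free -- the kind of detailed condition the abstract says must be
            curated for every interaction."""
            a, b = agents[a_id], agents[b_id]
            if a.sites["x"].state == "p" and a.sites["y"].bond is None and b.sites["z"].bond is None:
                a.sites["y"].bond = (b_id, "z")
                b.sites["z"].bond = (a_id, "y")
                return True
            return False

        agents = {
            1: Agent("A", {"x": Site(state="p"), "y": Site()}),
            2: Agent("B", {"z": Site()}),
        }
        print(bind_if_phosphorylated(agents, 1, 2))   # True: condition holds, bond formed
        print(agents[2].sites["z"].bond)              # (1, 'y')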

    (Leftmost-Outermost) Beta Reduction is Invariant, Indeed

    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating lambda-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modeled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We solve such an impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta-redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties.
    Comment: arXiv admin note: substantial text overlap with arXiv:1405.331
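    The size explosion problem is easy to reproduce with a toy evaluator. The sketch below implements leftmost-outermost reduction for plain lambda-terms and runs it on the family t_1 = λx. x x, t_{n+1} = λx. t_n (x x): the term t_n applied to a free variable y has size linear in n and normalizes in exactly n steps, yet its normal form contains 2^n occurrences of y. This is an illustrative toy only; it says nothing about the linear substitution calculus or useful reduction themselves.

        from dataclasses import dataclass

        # Minimal untyped lambda-terms.
        @dataclass(frozen=True)
        class Var:
            name: str

        @dataclass(frozen=True)
        class Lam:
            var: str
            body: object

        @dataclass(frozen=True)
        class App:
            fun: object
            arg: object

        def subst(term, x, val):
            """Naive substitution term[x := val]; safe here because the substituted
            arguments never contain variables that could be captured."""
            if isinstance(term, Var):
                return val if term.name == x else term
            if isinstance(term, Lam):
                return term if term.var == x else Lam(term.var, subst(term.body, x, val))
            return App(subst(term.fun, x, val), subst(term.arg, x, val))

        def lo_step(term):
            """Perform one leftmost-outermost beta step, or return None if normal."""
            if isinstance(term, App):
                if isinstance(term.fun, Lam):                  # head redex first
                    return subst(term.fun.body, term.fun.var, term.arg)
                left = lo_step(term.fun)
                if left is not None:
                    return App(left, term.arg)
                right = lo_step(term.arg)
                if right is not None:
                    return App(term.fun, right)
            elif isinstance(term, Lam):
                body = lo_step(term.body)
                if body is not None:
                    return Lam(term.var, body)
            return None

        def size(term):
            if isinstance(term, Var):
                return 1
            if isinstance(term, Lam):
                return 1 + size(term.body)
            return 1 + size(term.fun) + size(term.arg)

        def t(n):
            """The family t_1 = λx. x x, t_{n+1} = λx. t_n (x x)."""
            term = Lam("x", App(Var("x"), Var("x")))
            for _ in range(n - 1):
                term = Lam("x", App(term, App(Var("x"), Var("x"))))
            return term

        term, steps = App(t(10), Var("y")), 0
        while (nxt := lo_step(term)) is not None:
            term, steps = nxt, steps + 1
        print(steps, size(term))   # 10 2047 -- ten steps, normal form of size 2**11 - 1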

    Distributed execution of bigraphical reactive systems

    The bigraph embedding problem is crucial for many results and tools about bigraphs and bigraphical reactive systems (BRS). Current algorithms for computing bigraphical embeddings are centralized, i.e. designed to run locally with a complete view of the guest and host bigraphs. In order to deal with large bigraphs, and to parallelize reactions, we present a decentralized algorithm, which distributes both state and computation over several concurrent processes. This allows for distributed, parallel simulations where non-interfering reactions can be carried out concurrently; nevertheless, even in the worst case the complexity of this distributed algorithm is no worse than that of a centralized algorithm.
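    As a very loose, hypothetical illustration of the "non-interfering reactions in parallel" idea -- not bigraph embedding and not the paper's decentralized algorithm -- the sketch below partitions a host structure into regions, lets workers search the regions concurrently, and then keeps only node-disjoint matches.

        from concurrent.futures import ThreadPoolExecutor

        def find_matches(region_edges, label):
            """Each worker scans its own region for edges carrying the guest label."""
            return [e for e in region_edges if e[1] == label]

        def disjoint(selected, match):
            """A match is non-interfering if it shares no node with matches already kept."""
            used = {n for (s, _, t) in selected for n in (s, t)}
            return match[0] not in used and match[2] not in used

        regions = [
            [("a", "react", "b"), ("b", "idle", "c")],
            [("c", "react", "d"), ("d", "react", "a")],
        ]

        with ThreadPoolExecutor() as pool:                       # concurrent local searches
            per_region = list(pool.map(lambda r: find_matches(r, "react"), regions))

        applied = []
        for m in (m for ms in per_region for m in ms):
            if disjoint(applied, m):                             # keep only non-interfering matches
                applied.append(m)

        print(applied)   # [('a', 'react', 'b'), ('c', 'react', 'd')]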