2,071 research outputs found

    Avoiding Unnecessary Information Loss: Correct and Efficient Model Synchronization Based on Triple Graph Grammars

    Model synchronization, i.e., the task of restoring consistency between two interrelated models after a change to one of them, is challenging. Triple Graph Grammars (TGGs) specify model consistency by means of rules that describe how to create consistent pairs of models. These rules can be used to automatically derive further rules, which describe how to propagate changes from one model to the other or how to change one model in such a way that propagation is guaranteed to be possible. Restricting model synchronization to these derived rules, however, may lead to unnecessary deletion and recreation of model elements during change propagation. This is inefficient and may cause unnecessary information loss: when deleted elements contain information that is not represented in the second model, this information cannot easily be recovered. Short-cut rules have recently been developed to avoid unnecessary information loss by reusing existing model elements. In this paper, we show how to automatically derive (short-cut) repair rules from short-cut rules to propagate changes such that information loss is avoided and model synchronization is accelerated. The key ingredients of our rule-based model synchronization process are these repair rules and an incremental pattern matcher that informs about suitable applications of them. We prove the termination and correctness of this synchronization process and discuss its completeness. As a proof of concept, we have implemented this synchronization process in eMoflon, a state-of-the-art model transformation tool with inherent support for bidirectionality. Our evaluation shows that repair processes based on (short-cut) repair rules considerably decrease information loss and improve performance compared to former TGG-based model synchronization processes. Comment: 33 pages, 20 figures, 3 tables
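
    To make the reuse idea concrete, the following toy sketch (purely illustrative and not eMoflon's or the paper's actual machinery; all class and attribute names are assumptions) contrasts revoking and re-propagating a change, which loses target-only information, with repairing the existing target element in place.

```python
# A minimal, hypothetical sketch of the difference between revoking + re-creating
# a target element and repairing it in place. Not the paper's derived rules.
from dataclasses import dataclass

@dataclass
class SourceClass:          # element of the source model
    name: str
    package: str

@dataclass
class TargetTable:          # corresponding element of the target model
    name: str
    schema: str
    comment: str = ""       # information that exists only in the target model

def propagate_naively(src: SourceClass, old: TargetTable) -> TargetTable:
    """Revoke the old translation and re-create the target element.
    The fresh table has an empty comment: target-only information is lost."""
    return TargetTable(name=src.name, schema=src.package)

def propagate_with_repair(src: SourceClass, old: TargetTable) -> TargetTable:
    """Repair the existing target element in place, reusing it.
    Only attributes affected by the source change are updated, so the
    target-only comment survives the synchronization."""
    old.name = src.name
    old.schema = src.package
    return old

if __name__ == "__main__":
    src = SourceClass(name="Order", package="shop")
    tgt = TargetTable(name="Order", schema="shop", comment="partitioned by month")
    src.package = "sales"                           # source model change
    print(propagate_naively(src, tgt).comment)      # "" -> information lost
    print(propagate_with_repair(src, tgt).comment)  # "partitioned by month"
```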

    Proving Continuity of Coinductive Global Bisimulation Distances: A Never Ending Story

    We have developed a notion of global bisimulation distance between processes which goes beyond the notions of bisimulation distance already existing in the literature, which are mainly based on bisimulation games. Our proposal is based on the cost of transformations: how much we need to modify one of the compared processes to obtain the other. Our original definition only covered finite processes, but a coinductive approach allows us to extend it to cover infinite but finitary trees. After proving many interesting properties of our distance, we intended to prove continuity with respect to projections, but unfortunately the question remains open. Nonetheless, we have obtained several partial results that are presented in this paper. Comment: In Proceedings PROLE 2015, arXiv:1512.0617
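
    As a purely illustrative sketch of a transformation-cost distance (not the paper's coinductive definition), the snippet below computes a toy distance on finite process trees in which relabelling an action costs 1 and adding or removing a whole subtree costs its size.

```python
# Toy "cost of transformations" distance on finite process trees; only meant to
# convey the flavour of the approach, not the paper's global bisimulation distance.
from itertools import permutations
from dataclasses import dataclass, field

@dataclass
class Proc:
    action: str
    children: list = field(default_factory=list)

def size(p: Proc) -> int:
    return 1 + sum(size(c) for c in p.children)

def dist(p: Proc, q: Proc) -> int:
    root_cost = 0 if p.action == q.action else 1   # relabelling the root
    small, big = sorted([p.children, q.children], key=len)
    if not big:
        return root_cost
    best = None
    # brute-force matching of children (fine for the tiny examples below)
    for perm in permutations(big, len(big)):
        matched = sum(dist(a, b) for a, b in zip(small, perm))
        unmatched = sum(size(t) for t in perm[len(small):])  # added/removed subtrees
        cost = matched + unmatched
        best = cost if best is None else min(best, cost)
    return root_cost + best

if __name__ == "__main__":
    p = Proc("a", [Proc("b"), Proc("c")])
    q = Proc("a", [Proc("b"), Proc("d", [Proc("e")])])
    print(dist(p, q))   # 2: relabel c -> d (1) plus the extra subtree e (1)
```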

    Limits of Preprocessing

    We present a first theoretical analysis of the power of polynomial-time preprocessing for important combinatorial problems from various areas in AI. We consider problems from Constraint Satisfaction, Global Constraints, Satisfiability, Nonmonotonic and Bayesian Reasoning. We show that, subject to a complexity-theoretic assumption, none of the considered problems can be reduced by polynomial-time preprocessing to a problem kernel whose size is polynomial in a structural problem parameter of the input, such as induced width or backdoor size. Our results provide a firm theoretical boundary for the performance of polynomial-time preprocessing algorithms for the considered problems. Comment: This is a slightly longer version of a paper that appeared in the proceedings of AAAI 201
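
    For readers unfamiliar with kernelization, the classical Buss kernel for Vertex Cover (a standard textbook example, not taken from this paper) illustrates what a polynomial-time reduction to a problem kernel of polynomial size looks like; the paper's point is that the AI problems it considers do not admit such reductions under standard assumptions.

```python
# Buss's kernel for Vertex Cover: shrink an instance (edges, k) to an equivalent
# instance with at most k^2 edges, or decide it outright. Textbook illustration only.
def buss_kernel(edges: set, k: int):
    edges = set(edges)
    forced = set()                       # vertices that must be in any cover
    while True:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = [v for v, d in degree.items() if d > k]
        if not high:
            break
        v = high[0]                      # v has > k incident edges, so every
        forced.add(v)                    # cover of size <= k must contain it
        edges = {e for e in edges if v not in e}
        k -= 1
        if k < 0:
            return "NO"
    if len(edges) > k * k:               # max degree <= k, so a size-k cover
        return "NO"                      # can cover at most k^2 edges
    return edges, k, forced              # the kernel: at most k^2 edges remain

if __name__ == "__main__":
    star = {frozenset({"c", i}) for i in range(5)}   # a star with 5 leaves
    print(buss_kernel(star, k=2))        # center is forced, kernel is empty
```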

    Formal Analysis of Functional Behaviour for Model Transformations Based on Triple Graph Grammars - Extended Version

    Triple Graph Grammars (TGGs) are a well-established concept for the specification of model transformations. In previous work we have already formalized and analyzed crucial properties of model transformations such as termination, correctness and completeness, but an analysis of functional behaviour - especially local confluence - has been missing so far. In order to close this gap we generate forward translation rules, which extend standard forward rules by translation attributes that keep track of the elements that have already been translated. As a first main result, we show the equivalence of model transformations based on forward rules and on forward translation rules, respectively. This way, no additional control structure for the forward transformation is needed, and critical pair analysis with corresponding tool support by the tool AGG becomes applicable. However, we do not need general local confluence, because confluence for source graphs not belonging to the source language is not relevant for the functional behaviour of a model transformation. For this reason we only have to analyze a weaker property, called translation confluence. This leads to our second main result on the functional behaviour of model transformations, which we apply to our running example, the model transformation from class diagrams to database models.
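
    The following minimal sketch (hypothetical; the rule and data structures are invented for illustration and do not reproduce the paper's formal construction) conveys the idea of translation attributes: each source element carries a flag recording whether it has been translated, and a forward translation rule only matches untranslated elements.

```python
# Toy class-to-table forward translation using a boolean "translation attribute".
def translate_classes_to_tables(classes: list) -> list:
    for c in classes:
        c.setdefault("tr", False)        # translation attribute, initially False

    tables = []
    progress = True
    while progress:                      # apply rules as long as one is applicable
        progress = False
        for c in classes:
            if not c["tr"]:              # rule matches only untranslated elements
                tables.append({"table": c["name"]})
                c["tr"] = True           # mark the element as translated
                progress = True

    assert all(c["tr"] for c in classes) # complete: every source element translated
    return tables

if __name__ == "__main__":
    print(translate_classes_to_tables([{"name": "Person"}, {"name": "Order"}]))
```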

    Completeness and Correctness of Model Transformations based on Triple Graph Grammars with Negative Application Conditions (Long Version)

    Model transformations are a key concept for modular and distributed model-driven development. In this context, triple graph grammars have been investigated and applied in several case studies, and they offer a convenient combination of formal and intuitive specification capabilities. In particular, the automatic derivation of forward and backward transformations from a single specified set of rules for the integrated model simplifies the specification and enhances usability as well as consistency. Since negative application conditions (NACs) are a key ingredient of many model transformations based on graph transformation, we embed them into the concept of triple graph grammars. As a first main result, we extend the composition/decomposition result for triple graph grammars to the case with NACs. This allows us to show completeness and correctness of model transformations based on rules with NACs; furthermore, we extend the characterization of information-preserving model transformations to the case with NACs. The presented results are applicable to several model transformations, in particular to the well-known model transformation from class diagrams to relational databases, which we present as a running example with NACs.
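
    As a rough illustration of how a NAC restricts rule application (a hypothetical toy, not the paper's formalism), the sketch below only creates a table for a class if the forbidden pattern, a table with the same name, is absent.

```python
# A rule "create a table for a class" guarded by a negative application condition.
def nac_satisfied(class_name: str, tables: list) -> bool:
    """The NAC is satisfied (rule applicable) iff the forbidden pattern is absent."""
    return class_name not in tables

def apply_class_to_table_rule(class_name: str, tables: list) -> list:
    if nac_satisfied(class_name, tables):
        return tables + [class_name]     # rule applicable: create the table
    return tables                        # forbidden pattern found: rule not applied

if __name__ == "__main__":
    tables = ["Person"]
    tables = apply_class_to_table_rule("Order", tables)   # created
    tables = apply_class_to_table_rule("Person", tables)  # blocked by the NAC
    print(tables)                        # ['Person', 'Order']
```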

    Computational Analysis of Structure-Activity Relationships : From Prediction to Visualization Methods

    Understanding how structural modifications affect the biological activity of small molecules is one of the central themes in medicinal chemistry. By no means is structure-activity relationship (SAR) analysis a priori dependent on computational methods. However, as molecular data sets grow in size, we quickly reach the limits of our ability to access and compare structures and their associated biological properties, so computational data processing and analysis often become essential. Here, different types of approaches of varying complexity for the analysis of SAR information are presented, which can be applied in the context of screening and chemical optimization projects. The first part of this thesis is dedicated to machine-learning strategies that aim at de novo ligand prediction and the preferential detection of potent hits in virtual screening. High emphasis is put on benchmarking different strategies and thoroughly evaluating their utility in practical applications. However, an often-claimed disadvantage of these prediction methods is their "black box" character, because they do not necessarily reveal which structural features are associated with biological activity. Therefore, these methods are complemented by more descriptive SAR analysis approaches with a higher degree of interpretability. Concepts from information theory are adapted to identify activity-relevant structure-derived descriptors. Furthermore, compound data mining methods that explore prespecified properties of available bioactive compounds on a large scale are designed to systematically relate molecular transformations to activity changes. Finally, these approaches are complemented by graphical methods that primarily help to access and visualize SAR data in congeneric series of compounds and allow the formulation of intuitive SAR rules applicable to the design of new compounds. The compendium of SAR analysis tools introduced in this thesis investigates SARs from different perspectives.
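
    As one hedged example of an information-theoretic criterion for descriptor relevance (a generic sketch, not necessarily the implementation used in the thesis), the snippet below computes the information gain of a binary structure-derived descriptor with respect to an active/inactive label.

```python
# Information gain of a binary descriptor (e.g. "contains feature X") with
# respect to an active/inactive class label. Generic illustration only.
from math import log2

def entropy(labels: list) -> float:
    n = len(labels)
    out = 0.0
    for cls in set(labels):
        p = labels.count(cls) / n
        out -= p * log2(p)
    return out

def information_gain(descriptor: list, activity: list) -> float:
    """How much knowing the descriptor value reduces uncertainty about activity."""
    total = entropy(activity)
    n = len(activity)
    for value in set(descriptor):
        subset = [a for d, a in zip(descriptor, activity) if d == value]
        total -= len(subset) / n * entropy(subset)
    return total

if __name__ == "__main__":
    has_feature = [1, 1, 1, 0, 0, 0]     # illustrative toy data
    is_active   = [1, 1, 0, 0, 0, 0]
    print(round(information_gain(has_feature, is_active), 3))  # ~0.459
```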

    Guarantees and Limits of Preprocessing in Constraint Satisfaction and Reasoning

    We present a first theoretical analysis of the power of polynomial-time preprocessing for important combinatorial problems from various areas in AI. We consider problems from Constraint Satisfaction, Global Constraints, Satisfiability, Nonmonotonic and Bayesian Reasoning under structural restrictions. All these problems involve two tasks: (i) identifying the structure in the input as required by the restriction, and (ii) using the identified structure to solve the reasoning task efficiently. We show that for most of the considered problems, task (i) admits a polynomial-time preprocessing to a problem kernel whose size is polynomial in a structural problem parameter of the input, in contrast to task (ii), which does not admit such a reduction to a problem kernel of polynomial size, subject to a complexity-theoretic assumption. As a notable exception we show that the consistency problem for the AtMost-NValue constraint admits a polynomial kernel consisting of a quadratic number of variables and domain values. Our results provide firm worst-case guarantees and theoretical boundaries for the performance of polynomial-time preprocessing algorithms for the considered problems. Comment: arXiv admin note: substantial text overlap with arXiv:1104.2541, arXiv:1104.556

    Preprocessing Subgraph and Minor Problems: When Does a Small Vertex Cover Help?

    We prove a number of results around kernelization of problems parameterized by the size of a given vertex cover of the input graph. We provide three sets of simple general conditions characterizing problems admitting kernels of polynomial size. Our characterizations not only give generic explanations for the existence of many known polynomial kernels for problems like q-Coloring, Odd Cycle Transversal, Chordal Deletion, Eta Transversal, or Long Path, parameterized by the size of a vertex cover, but also imply new polynomial kernels for problems like F-Minor-Free Deletion, which asks to delete at most k vertices to obtain a graph with no minor from a fixed finite set F. While our characterization captures many interesting problems, the kernelization complexity landscape of parameterizations by vertex cover is much more involved. We demonstrate this by several results about induced subgraph and minor containment testing, which we find surprising. While it was known that testing for an induced complete subgraph has no polynomial kernel unless NP is in coNP/poly, we show that the problem of testing whether a graph contains a complete graph on t vertices as a minor admits a polynomial kernel. On the other hand, it was known that testing for a path on t vertices as a minor admits a polynomial kernel, but we show that testing for containment of an induced path on t vertices is unlikely to admit a polynomial kernel. Comment: To appear in the Journal of Computer and System Sciences

    Methods for the Analysis of Matched Molecular Pairs and Chemical Space Representations

    Compound optimization is a complex process in which different properties are optimized to increase the biological activity and therapeutic effects of a molecule. Frequently, the structure of molecules is modified in order to improve their property values. Therefore, computational analysis of the effects of structure modifications on property values is of great importance for the drug discovery process. It is also essential to analyze chemical space, i.e., the set of all chemically feasible molecules, in order to find subsets of molecules that display favorable property values. This thesis aims to expand the computational repertoire for analyzing the effects of structure alterations and for visualizing chemical space. Matched molecular pairs are defined as pairs of compounds that share a large common substructure and only differ by a small chemical transformation. They have frequently been used to study property changes caused by structure modifications. These analyses are expanded in this thesis by studying the effect of chemical transformations on the ionization state and on ligand efficiency, both measures of great importance in drug design. Additionally, novel matched molecular pairs based on retrosynthetic rules are developed to increase their utility for the prospective use of chemical transformations in compound optimization. Further, new methods based on matched molecular pairs are described to obtain preliminary SAR information for screening hit compounds and to predict the potency change caused by a chemical transformation. Visualizations of chemical space are introduced to aid compound optimization efforts. First, principal component plots are used to rationalize a matched-molecular-pair-based multi-objective compound optimization procedure. Then, star coordinate and parallel coordinate plots are introduced to analyze drug-like subspaces, where compounds with favorable property values can be found. Finally, a novel network-based visualization of high-dimensional property space is developed. In conclusion, the applications developed in this thesis expand the methodological spectrum of computer-aided compound optimization.
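
    A simplified sketch of the matched-molecular-pair idea is given below (hypothetical data and pre-supplied core/substituent fragmentations; real analyses derive the fragmentations from the structures, e.g. along retrosynthetic bonds): pairs of compounds sharing a core define a chemical transformation, and potency changes are aggregated per transformation.

```python
# Toy matched molecular pair (MMP) analysis on pre-fragmented compounds.
from collections import defaultdict
from itertools import combinations
from statistics import mean

# (compound id, core fragment, substituent, pKi) -- illustrative values only
compounds = [
    ("c1", "coreA", "H",  6.1),
    ("c2", "coreA", "Cl", 6.9),
    ("c3", "coreB", "H",  5.0),
    ("c4", "coreB", "Cl", 5.7),
]

by_core = defaultdict(list)
for cid, core, sub, pki in compounds:
    by_core[core].append((cid, sub, pki))

transformations = defaultdict(list)          # "H >> Cl" -> list of delta pKi
for core, members in by_core.items():
    for (i1, s1, p1), (i2, s2, p2) in combinations(members, 2):
        if s1 != s2:                         # a matched molecular pair
            transformations[f"{s1} >> {s2}"].append(p2 - p1)

for t, deltas in transformations.items():
    print(t, "mean delta pKi:", round(mean(deltas), 2))   # H >> Cl : 0.75
```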