400 research outputs found

    Characterizing Van Kampen Squares via Descent Data

    Full text link
    Categories in which cocones satisfy certain exactness conditions with respect to pullbacks are the subject of current research activity in theoretical computer science. Usually, exactness is expressed in terms of properties of the pullback functor associated with the cocone. Even in the non-exact case, researchers in model semantics and rewriting theory seek an elementary characterization of the image of this functor. In this paper we investigate this question in the special case where the cocone is a cospan, i.e. part of a Van Kampen square. The use of descent data as the dominant categorical tool yields two main results: a simple condition which characterizes the reachable part of the above-mentioned functor in terms of liftings of the involved equivalence relations, and (as a consequence) a necessary and sufficient condition, formulated in a purely algebraic manner, for a pushout to be a Van Kampen square. Comment: In Proceedings ACCAT 2012, arXiv:1208.430

    Hierarchical Graph Transformation

    Get PDF
    If systems are specified by graph transformation, large graphs should be structured in order to be comprehensible. In this paper, we present an approach for the rule-based transformation of hierarchically structured (hyper)graphs. In these graphs, distinguished hyperedges contain graphs that can themselves be hierarchical. Our framework extends the well-known double-pushout approach from flat to hierarchical graphs. In particular, we show how pushouts and pushout complements of hierarchical graphs and graph morphisms can be constructed recursively. Moreover, we make rules more expressive by introducing variables, which allow copying and removing hierarchical subgraphs in a single rule application
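
The double-pushout idea underlying the abstract above can be hinted at with a toy, non-hierarchical sketch (illustrative only; the paper's framework handles hierarchical hypergraphs, pushout complements and variables, which this does not). A rule is applied by deleting the matched left-hand-side edges and adding the right-hand-side edges:

```python
# Toy rewrite step on a graph represented as a set of (src, dst) edges.
# Hypothetical names; not the paper's hierarchical framework.

def apply_rule(graph, lhs_edges, rhs_edges):
    """Apply a simple rewrite: remove the matched LHS edges, add the RHS edges."""
    if not lhs_edges <= graph:          # rule is applicable only if the LHS matches
        raise ValueError("left-hand side does not match")
    return (graph - lhs_edges) | rhs_edges

g = {("a", "b"), ("b", "c")}
g2 = apply_rule(g, lhs_edges={("a", "b")}, rhs_edges={("a", "c")})
print(sorted(g2))  # [('a', 'c'), ('b', 'c')]
```

A real DPO implementation would additionally check the gluing condition (no dangling edges) before deleting nodes; the edge-set representation above sidesteps node deletion entirely.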

    Model driven formal development of digital libraries

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-540-68262-2_13. Revised Selected Papers of the Third International Conference, WEBIST 2007, Barcelona, Spain, March 3-6, 2007. This paper shows our model-driven approach for the formal construction and validation of Digital Libraries (DLs). We have defined a Domain Specific Visual Language (DSVL) called VisMODLE, which allows the description of a DL using five different viewpoints: services, behaviour, collections, structure and society. From a meta-model based description of the different viewpoints, we have generated a modelling environment for VisMODLE. We have provided the environment with a code generator that produces XUL code for the DL's user interface and composes the application using predefined components that implement the different services. Moreover, we have also added validation and simulation capabilities to the environment. Using the behavioural models (state-machine based), we can visually animate the system. In addition, the combined behaviour of actors and services can be transformed into a Petri net for further analysis. Work sponsored by projects MODUWEB (TIN2006-09678) and MOSAIC (TIC2005-08225-C07-06) of the Spanish Ministry of Science and Education

    Enhanced Graph Rewriting Systems for Complex Software Domain

    Get PDF
    Methodologies for correct-by-construction reconfiguration can efficiently solve consistency issues in dynamic software architectures. Graph-based models are appropriate for designing such architectures and methods. At the same time, they may be unfit to characterize a system from a non-functional perspective. This stems from efficiency and applicability limitations in handling time-varying characteristics and their related dependencies. To lift these restrictions, an extension to graph rewriting systems is proposed herein. The suitability of this approach, as well as the limitations of currently available ones, is illustrated, analysed and experimentally evaluated with reference to a concrete example. This investigation demonstrates that the conceived solution can: (i) express any kind of algebraic dependency between evolving requirements and properties; (ii) significantly improve the efficiency and scalability of system modifications with respect to classic methodologies; (iii) provide efficient access to attribute values; (iv) be fruitfully exploited in software management systems; and (v) guarantee theoretical properties of a grammar, such as its termination

    Optimised laser microdissection of the human ocular surface epithelial regions for microarray studies

    Get PDF
    Background: The most important challenge in performing in situ transcriptional profiling of the human ocular surface epithelial regions is obtaining samples in sufficient amounts, without contamination from adjacent tissue, as the region of interest is microscopic and closely apposed to other tissue regions. We have effectively collected ocular surface (OS) epithelial tissue samples from the Limbal Epithelial Crypt (LEC), limbus, cornea and conjunctiva of post-mortem cadaver eyes with the laser microdissection (LMD) technique for gene expression studies with spotted oligonucleotide microarrays and Gene 1.0 ST arrays.
    Methods: Human donor eyes (4 pairs for spotted oligonucleotide microarrays, 3 pairs for Gene 1.0 ST arrays) consented for research were included in this study, with due ethical approval of the Nottingham Research Ethics Committee. Eye retrieval was performed within 36 hours post-mortem. The dissected corneoscleral buttons were immersed in OCT media, frozen in liquid nitrogen and stored at −80°C until further use. Microscopic tissue sections of interest were taken on PALM slides and stained with Toluidine Blue for laser microdissection with PALM MicroBeam systems. Optimisation of the laser microdissection technique was crucial for efficient and cost-effective sample collection.
    Results: The starting concentration of RNA stipulated by the protocol of each microarray platform was taken as the cut-off concentration for RNA samples in our studies. The area of LMD tissue processed for the spotted oligonucleotide microarray study ranged from 86,253 μm2 in the LEC to 392,887 μm2 in the LEC stroma. The RNA concentration of the LMD samples ranged from 22 to 92 pg/μl. The recommended starting concentration of the RNA samples used for Gene 1.0 ST arrays was 6 ng/5 μl. To achieve the desired RNA concentration, the area of ocular surface epithelial tissue sample processed for the Gene 1.0 ST array experiments was approximately 1,000,000 μm2 to 1,300,000 μm2. The RNA concentration of these samples ranged from 10.88 ng/12 μl to 25.8 ng/12 μl, with RNA integrity numbers (RIN) from 3.3 to 7.9. RNA samples with RIN values below 2, which had failed to amplify satisfactorily, were discarded.
    Conclusions: The optimised protocol for sample collection and laser microdissection improved the RNA yield of the in situ ocular surface epithelial regions for effective microarray studies on spotted oligonucleotide and Affymetrix platforms

    Triple Graph Grammars in the Large for Translating Satellite Procedures

    Get PDF
    Software translation is a challenging task. Several requirements are important – including automation of the execution, maintainability of the translation patterns, and, most importantly, reliability concerning the correctness of the translation. Triple graph grammars (TGGs) have been shown to be an intuitive, well-defined technique for model translation. In this paper, we leverage TGGs for industry-scale software translations. The approach is implemented using the Eclipse-based graph transformation tool Henshin and has been successfully applied in a large industrial project with the satellite operator SES on the translation of satellite control procedures. We evaluate the approach with regard to requirements from the project and its performance on a complete set of procedures of one satellite
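
As a rough intuition for how a TGG-based translator keeps source and target models in sync, here is a hypothetical sketch (names and rule shapes are purely illustrative, not the Henshin/SES implementation): each rule consumes a source element, produces a corresponding target element, and records a correspondence link for traceability.

```python
# Toy translator in the spirit of triple graph grammars: the correspondence
# list plays the role of the TGG's correspondence graph. Illustrative only.

def translate(source_nodes, rules):
    """Translate each source node via the first applicable rule, recording
    a (source, target) correspondence link for each created element."""
    target, correspondence = [], []
    for node in source_nodes:
        for pattern, build in rules:
            if node["type"] == pattern:
                t = build(node)
                target.append(t)
                correspondence.append((node["name"], t["name"]))
                break
    return target, correspondence

# Hypothetical rule: a procedure 'Step' becomes a 'Command' in the target language.
rules = [("Step", lambda n: {"type": "Command", "name": n["name"] + "_cmd"})]
tgt, corr = translate([{"type": "Step", "name": "powerOn"}], rules)
print(corr)  # [('powerOn', 'powerOn_cmd')]
```

In a real TGG the correspondence graph is what makes the translation bidirectional and incrementally maintainable; this sketch only records it.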

    Solving Constraints in Model Transformations

    Full text link
    Constraint programming holds many promises for model-driven software development (MDSD). Up to now, constraints have only started to appear in MDSD modelling languages, but have not been properly reflected in model transformation. This paper introduces constraint programming in model transformation, shows how constraint programming integrates with QVT Relations - as a pathway to widespread use of our approach - and describes the corresponding model transformation engine. In particular, the paper illustrates the use of constraint programming for the specification of attribute values in target models, and provides a qualitative evaluation of the benefit drawn from constraints integrated with QVT Relations
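
The core idea of delegating target-model attribute values to a constraint solver can be pictured with a toy brute-force solver (illustrative only; the paper integrates a real solver with QVT Relations, and the attribute names here are invented):

```python
# Minimal generate-and-test "solver" for attribute constraints.
from itertools import product

def solve(domains, constraint):
    """Return the first assignment satisfying the constraint, or None.

    domains: dict mapping attribute name -> iterable of candidate values.
    constraint: predicate over a dict assignment.
    """
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if constraint(assignment):
            return assignment
    return None

# Hypothetical target-model attributes: width * height == 12, width <= height.
sol = solve({"width": range(1, 13), "height": range(1, 13)},
            lambda a: a["width"] * a["height"] == 12 and a["width"] <= a["height"])
print(sol)  # {'width': 1, 'height': 12}
```

A production engine would use constraint propagation rather than enumeration, but the interface - declare domains and relations, let the solver pick the values - is the same.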

    Analysis and evaluation of conformance preserving graph transformation rules

    Get PDF
    Author's accepted manuscript (post-print). Available: 2020-02-01. Model transformation is a formal approach for modelling the behaviour of software systems. Over the past few years, graph-based modelling of software systems has gained significant attention, as numerous techniques are available to formally specify constraints and the dynamics of systems. Graph transformation rules are used to model the behaviour of software systems, which is a core element of model-driven software engineering. However, in general, the application of graph transformation rules cannot guarantee the correctness of model transformations. In this paper, we propose a graph transformation technique that guarantees the correctness of transformations by checking required and forbidden graph patterns. The proposed technique is based on the application of conformance-preserving transformation rules, which guarantee that produced output models conform to their underlying metamodel. To determine whether a rule is conformance preserving, we present a new algorithm for checking conformance-preserving rules with respect to a set of graph constraints. We also present a formal proof of the soundness of the algorithm. We apply our technique to homogeneous model transformations, where input and output models must conform to the same metamodel. The algorithm relies on the locality of a constrained graph to reduce the computational cost
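
The notion of checking required and forbidden graph patterns can be shown with a minimal sketch (not the paper's algorithm, which works at the rule level and exploits locality; the model and patterns below are invented):

```python
# Toy conformance check on a model given as a set of (src, dst) edges:
# every required pattern must occur, and no forbidden pattern may occur.

def conforms(edges, required, forbidden):
    """True iff all required edges are present and no forbidden edge is."""
    return required <= edges and not (forbidden & edges)

model = {("Class1", "Class2"), ("Class2", "Class2")}
print(conforms(model, required={("Class1", "Class2")}, forbidden=set()))          # True
print(conforms(model, required=set(), forbidden={("Class2", "Class2")}))          # False: self-loop forbidden
```

The paper's contribution is checking this kind of property once per rule, so that every application of a conformance-preserving rule is guaranteed to keep the model valid without re-checking the whole model.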

    On Pushouts of Partial Maps

    Get PDF