14 research outputs found

    Incremental pattern matching for regular expressions

    Graph pattern matching lies at the heart of any graph transformation-based system. Incremental pattern matching is one approach proposed to reduce the overall cost of pattern matching over successive transformations by preserving the matches that stay relevant after a rule application. An important issue in any matching scheme is the ability to deal properly and consistently with the various facilities that add to the expressiveness of a GT tool’s rule language. One such feature is support for regular path expressions, which lets two nodes be considered a “match” if a certain path of edges exists between them. In this paper, the incorporation of regular expression support into incremental pattern matching is discussed within the context of the GROOVE tool set. This includes laying down a formal foundation for incremental pattern matching for regular expressions, which is then used to justify the extension proposed to add regular expression support to a well-known pattern matching algorithm.
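
    A minimal sketch of the underlying idea only, not GROOVE's actual incremental matching algorithm: a regular path expression between two nodes can be checked by exploring the graph and a finite automaton for the expression in lockstep. The graph, the automaton encoding and the function name below are illustrative assumptions.

```python
# Sketch: does some edge path from src to dst spell a word matching a.(b|c)* ?
# The expression is given as a DFA; the check is a BFS over (node, state) pairs.
from collections import deque

# Labelled directed graph: node -> list of (edge_label, target_node).
graph = {
    "n0": [("a", "n1")],
    "n1": [("b", "n2"), ("c", "n1")],
    "n2": [("b", "n2")],
}

# DFA for the path expression a.(b|c)* : start state, accepting states, transitions.
dfa_start, dfa_accept = 0, {1}
dfa_delta = {(0, "a"): 1, (1, "b"): 1, (1, "c"): 1}

def regex_path_exists(src, dst):
    """Return True if some path src -> dst spells a word accepted by the DFA."""
    seen = {(src, dfa_start)}
    queue = deque(seen)
    while queue:
        node, state = queue.popleft()
        if node == dst and state in dfa_accept:
            return True
        for label, succ in graph.get(node, []):
            next_state = dfa_delta.get((state, label))
            if next_state is not None and (succ, next_state) not in seen:
                seen.add((succ, next_state))
                queue.append((succ, next_state))
    return False

print(regex_path_exists("n0", "n2"))  # True: the path a.b matches a.(b|c)*
print(regex_path_exists("n0", "n0"))  # False: the empty word is not accepted
```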

    Reusable Event Types for Models at Runtime to Support the Examination of Runtime Phenomena

    Today's software is getting more and more complex and harder to understand. Models help to organize knowledge and emphasize the structure of software at a higher abstraction level. While the use of model-driven techniques is widely adopted during software construction, it is still an open research question whether models can also be used to make runtime phenomena more comprehensible. It is not obvious which models are suitable for manual analysis and which model elements can be related to what type of runtime events. This paper proposes a collection of runtime event types that can be reused for various systems and meta-models. Based on these event types, information can be derived that helps human observers assess the current system state. Our approach is applied in a case study and evaluated with regard to generalisability and completeness by relating it to two different meta-models.
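
    As an illustration only (the event catalogue below is hypothetical, not the one proposed in the paper), reusable runtime event types can be modelled as a small hierarchy carrying just enough information to relate each event to an element of an arbitrary meta-model:

```python
# Hypothetical runtime event types; observers replay the stream to assess state.
from dataclasses import dataclass
from typing import Any

@dataclass
class RuntimeEvent:
    timestamp: float          # when the event was observed
    element_id: str           # identifier of the affected model element

@dataclass
class ElementCreated(RuntimeEvent):
    element_type: str         # meta-model type of the new element

@dataclass
class AttributeChanged(RuntimeEvent):
    attribute: str            # name of the changed attribute
    old_value: Any
    new_value: Any

@dataclass
class ElementDeleted(RuntimeEvent):
    element_type: str

events = [
    ElementCreated(0.0, "task#1", "Task"),
    AttributeChanged(1.5, "task#1", "status", "waiting", "running"),
    ElementDeleted(9.2, "task#1", "Task"),
]
for e in events:
    print(type(e).__name__, e.element_id)
```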

    A generic approach to model generation operations

    Model generation operations are important artifacts in MDE applications. They can be used for model verification, model finding, and other tasks. In many scenarios, model transformations can also be represented as model generation operations, which often brings the advantage of being bidirectional and of supporting incrementality. However, most model generation approaches do not target several kinds of operations but rather narrower scenarios, by mapping the generation problem onto solver-specific problems. They are efficient, but often lack a supporting framework. In this paper, we present an approach and framework that allows model operations representable in terms of model generation to be specified and executed. We first introduce a model search layer that can be used with different solvers, and illustrate it with a driving example implemented using the Alloy/SAT solver. On top of this, we introduce a transformation layer whose specifications are translated into the model search layer, independently of any solver. The solution is natively bidirectional and incremental, and it is not restricted to one-to-one scenarios. The approach is illustrated by two use cases with three different scenarios, backed by a full, extensible and free implementation.
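
    The following sketch illustrates the general idea of bounded model finding that such a model search layer builds on; it uses plain enumeration instead of a real Alloy/SAT backend, and all names and constraints are illustrative assumptions rather than the framework's API.

```python
# Bounded model finding by brute force: enumerate candidate models within a
# fixed scope and keep those that satisfy the structural constraints.
from itertools import product

NODES = ["a", "b", "c"]          # bounded scope, in the spirit of Alloy's "for 3"

def well_formed(next_of):
    """Constraints on candidate models: no self-loops, and the 'next'
    relation forms a single cycle visiting every node."""
    if any(n == next_of[n] for n in NODES):
        return False
    seen, cur = set(), NODES[0]
    while cur not in seen:
        seen.add(cur)
        cur = next_of[cur]
    return seen == set(NODES)

def find_models():
    """Enumerate every assignment of the 'next' relation within the scope."""
    for targets in product(NODES, repeat=len(NODES)):
        candidate = dict(zip(NODES, targets))
        if well_formed(candidate):
            yield candidate

for model in find_models():
    print(model)   # e.g. {'a': 'b', 'b': 'c', 'c': 'a'}
```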

    State of the art of model transformation verification

    Model-Driven Development (MDD) is a software engineering approach based on modelling a system as the main development activity, with its construction guided by transformations of those models. Its success depends heavily on the availability of appropriate languages and tools to carry out the transformations and to validate their correctness. Regarding this last point, this document presents a survey of the state of the art of the different model transformation verification approaches and techniques used in MDD. The main characteristics of the existing approaches are analysed, namely: test-case-based approaches, model checking, and deductive methods. Likewise, the different techniques available for each approach are studied, and the tools used in the literature are presented with examples of their use.

    Formal Verification Techniques for Model Transformations: A Tridimensional Classification


    Processing Structured Data Streams

    A large amount of data is generated daily from sources such as social networks, recommendation systems or geolocation systems, and this information tends to grow exponentially every year. Companies have discovered that processing these data can yield useful conclusions for decision-making or for detecting and resolving problems more efficiently, for instance through the study of trends, habits or customs of the population. The information provided by these sources typically consists of a non-structured, continuous data flow in which the relations among data elements form graph structures. Inevitably, processing performance progressively decreases as the size of the data increases. For this reason, non-structured information is usually handled by taking into account only the most recent data and discarding the rest, since the older data are considered not relevant when drawing conclusions. However, this approach is not sufficient for sources that provide graph-structured data, where spatial features must be considered as well as temporal ones. Spatial features refer to the relationships among data elements; examples where they matter include marketing techniques, which require information on the location of users and their possible needs, and the detection of diseases, which uses data about genetic relationships among subjects or their geographic scope.

    It is worth highlighting three main contributions of this dissertation. First, we provide a comparative study of seven of the most common processing platforms for working with huge graphs and of the languages used to query them; this study measures query performance in terms of execution time and the syntactic complexity of the languages according to three parameters: number of characters, number of operators and number of internal variables. We elaborate this study in order to choose the most suitable technology for developing our proposal. Second, we propose three methods to reduce the set of data to be processed by a query when working with large graphs, namely spatial, temporal and random approximations. These methods are based on Approximate Query Processing techniques and consist of discarding the information that is considered not relevant for the query. The reduction of the data is performed online, together with the processing, and considers both spatial and temporal aspects of the data. Since discarding information in the source data may decrease the validity of the results, we also define the transformation error obtained with these methods in terms of accuracy, precision and recall. Finally, we present a preprocessing algorithm, called the SDR algorithm, that is also used to reduce the set of data to be processed, but without compromising the accuracy of the results. It calculates a subgraph of the source graph that contains only the information relevant to a given query. Since this technique is a preprocessing algorithm, it is run offline before the actual processing begins. In addition, an incremental version of the algorithm is developed in order to update the subgraph as new information arrives in the system.
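
    The following sketch illustrates the general principle of restricting a query to relevant data; it is not the SDR algorithm itself, and the graph, seed nodes and hop bound are illustrative assumptions. It keeps only the elements within k hops of the nodes a query refers to, a simple form of spatial approximation.

```python
# Spatial approximation sketch: keep only the part of the graph reachable
# within k hops of the query's seed nodes.
from collections import deque

def relevant_subgraph(graph, seeds, k):
    """graph: node -> iterable of neighbour nodes; seeds: nodes the query uses.
    Returns the set of nodes within k hops of any seed."""
    kept = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for succ in graph.get(node, ()):
            if succ not in kept:
                kept.add(succ)
                frontier.append((succ, depth + 1))
    return kept

graph = {"u1": ["u2", "u3"], "u2": ["u4"], "u3": [], "u4": ["u5"], "u5": []}
print(relevant_subgraph(graph, seeds={"u1"}, k=2))  # {'u1', 'u2', 'u3', 'u4'}
```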

    Implicit Incremental Model Analyses and Transformations

    When models of a system change, analyses based on them have to be re-evaluated in order for the results to stay meaningful. In many cases, the time to obtain updated analysis results is critical. This thesis proposes multiple, combinable approaches and a new formalism based on category theory for implicitly incremental model analyses and transformations. The advantages of the implementation are validated using seven case studies, partly drawn from the Transformation Tool Contest (TTC).
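
    As a rough illustration of implicit incrementality (the thesis' formalism and implementation are far richer), an analysis can record which model properties it reads, so that a later change re-evaluates only the analyses that actually depend on the changed property. All class and property names below are assumptions.

```python
# Dependency-tracking sketch: reads are recorded while an analysis runs, and a
# property change only re-runs the analyses that read that property.
class Model:
    def __init__(self, **props):
        self._props = dict(props)
        self._dependents = {}            # property name -> analyses that read it

    def get(self, name, analysis):
        self._dependents.setdefault(name, set()).add(analysis)
        return self._props[name]

    def set(self, name, value):
        self._props[name] = value
        for analysis in self._dependents.get(name, ()):
            analysis.reevaluate()        # only affected analyses are re-run

class Analysis:
    def __init__(self, model, compute):
        self.model, self.compute = model, compute
        self.reevaluate()

    def reevaluate(self):
        self.result = self.compute(self.model, self)
        print("recomputed ->", self.result)

m = Model(cpu_load=0.4, mem_load=0.9)
a = Analysis(m, lambda model, self: model.get("cpu_load", self) > 0.8)
m.set("cpu_load", 0.95)   # triggers re-evaluation of `a`
m.set("mem_load", 0.1)    # `a` never read mem_load, so nothing is recomputed
```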