
    Extending DQL with Recursive Facilities

    Object-relational mappings reduce the gap between relational databases and programming languages. However, ORM frameworks cover only the simplest operations; most facilities provided by DBMSs are not usable via ORM. Among such features are recursive queries, introduced in the SQL:1999 standard. This paper presents an integration of recursive Common Table Expressions with the Doctrine Query Language, part of the Doctrine ORM framework for PHP.
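    The recursive facility in question is the SQL:1999 recursive common table expression. As an illustration only (the paper targets Doctrine's DQL in PHP; this sketch uses SQLite's SQL dialect and a hypothetical employee table), a recursive CTE that walks a management hierarchy looks like this:

```python
import sqlite3

# Minimal sketch of the SQL:1999 recursive-query feature the paper surfaces
# through DQL, shown here as plain SQL over SQLite. The employee table and
# its data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER, name TEXT, manager_id INTEGER)")
conn.executemany(
    "INSERT INTO employee VALUES (?, ?, ?)",
    [(1, "ada", None), (2, "bob", 1), (3, "cyd", 2)],
)

# The recursive CTE walks the management chain upward from employee 3.
rows = conn.execute("""
    WITH RECURSIVE chain(id, name, manager_id) AS (
        SELECT id, name, manager_id FROM employee WHERE id = 3
        UNION ALL
        SELECT e.id, e.name, e.manager_id
        FROM employee e JOIN chain c ON e.id = c.manager_id
    )
    SELECT name FROM chain
""").fetchall()
names = [r[0] for r in rows]
print(names)  # ['cyd', 'bob', 'ada']
```

    The paper's contribution is to expose this construct through DQL itself, so that ORM users need not drop down to raw SQL strings.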

    Model query transformation framework - MQT: from EMF-based model query languages to persistence-specific query languages

    Memory problems of XML Metadata Interchange (XMI), the default persistence mechanism in the Eclipse Modelling Framework (EMF), when operating on large models have motivated the appearance of alternative persistence mechanisms for EMF models. Most recent approaches propose using database back-ends. These approaches support querying models with EMF-based model query languages (plain EMF, Object Constraint Language (OCL), EMF Query, Epsilon Object Language (EOL), etc.). However, these languages commonly require loading into memory all the model elements involved in the query; queries that traverse models (the most commonly used type of query) require loading the entire model. This loading strategy causes memory problems when the queried models are large. Most database back-ends provide database-specific query languages that leverage the capabilities of the database engine (better performance) without requiring in-memory loading of models for query execution (lower memory footprint). For example, Structured Query Language (SQL) is the query language for relational databases and Cypher is for Neo4j databases. In this dissertation we present MQT-Engine, a framework that supports execution of model query languages with the efficiency (in terms of memory and performance) of a database-specific query language. To achieve this, MQT-Engine provides a two-step query transformation mechanism: first, queries expressed in a model query language are transformed into a Query Language Independent Model (QLI Model); then, the QLI Model is transformed into a database-specific query that is executed directly over the database. This mechanism provides extensibility and reusability, since it facilitates the inclusion of new query languages at both sides of the transformation. A prototype of the framework is provided.
It supports transformation of EOL queries into SQL queries that are executed directly over a relational Connected Data Objects (CDO) repository. The prototype has been evaluated in two experiments. The first, based on the reverse engineering domain, compares the time and memory usage required by MQT-Engine and other query languages (EMF API, OCL and SQL) to execute a set of queries over models persisted with CDO. The second, based on the railway domain, compares the performance of MQT-Engine and other query languages (EMF API, OCL, IncQuery, SQL, etc.) for executing a set of queries. The results show that MQT-Engine executes all the evaluated experiments successfully, and that it is among the solutions with the best performance for the first execution of model queries. Among query languages executed over CDO repositories, it is the fastest solution and the one requiring the least memory. For example, for the largest model in the reverse engineering case it is up to 162 times faster than a model query language executed at client side, and it requires 23 times less memory. Additionally, the query transformation overhead is constant and small (less than 2 seconds). These results validate the main goal of this dissertation: to provide a framework that gives model engineers the ability to specify queries in a model query language and then execute them with a performance and memory footprint similar to that of a persistence-specific query language. However, the framework has a set of limitations: the approach should be optimized for repeated query execution; it only supports non-modifying model traversal queries; and the prototype is specific to EOL queries over CDO repositories with DBStore.
Therefore, it is planned to extend the framework and address these limitations in a future version.
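    The two-step transformation can be sketched as follows. This is a hypothetical illustration of the idea, not MQT-Engine's actual API: a toy EOL-style query is first lifted into a small language-independent query model, which a second step lowers to SQL.

```python
# Hypothetical sketch of MQT-Engine's two-step transformation. All names are
# illustrative; they are not MQT-Engine's actual classes or grammar.
from dataclasses import dataclass

@dataclass
class QLIQuery:            # step-1 output: language-independent query model
    source_type: str       # model element type being traversed
    attribute: str         # attribute to retrieve
    predicate: tuple       # (attribute, operator, literal)

def parse_eol_like(text: str) -> QLIQuery:
    # Step 1: lift a toy EOL-style query such as
    #   Class.all.select(c | c.isAbstract = true).name
    # into the QLI Model.
    head, _, attr = text.rpartition(").")
    src = head.split(".all.select", 1)[0]
    cond = head.split("(", 1)[1].split("|", 1)[1].strip()
    lhs, op, rhs = cond.split()
    return QLIQuery(src, attr, (lhs.split(".", 1)[1], op, rhs))

def to_sql(q: QLIQuery) -> str:
    # Step 2: lower the QLI Model to a relational query, assuming one table
    # per model element type (roughly how a relational model store maps data).
    a, op, v = q.predicate
    return f"SELECT {q.attribute} FROM {q.source_type} WHERE {a} {op} {v}"

sql = to_sql(parse_eol_like("Class.all.select(c | c.isAbstract = true).name"))
print(sql)  # SELECT name FROM Class WHERE isAbstract = true
```

    Splitting the pipeline at the QLI Model is what gives the framework its extensibility: a new source language only needs a front-end producing QLI Models, and a new back-end only needs a lowering from them.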

    Scalable Automated Incrementalization for Real-Time Static Analyses

    This thesis proposes a framework for the easy development of static analyses whose results are incrementalized to provide instantaneous feedback in an integrated development environment (IDE). Today, IDEs feature many tools that have static analyses as their foundation to assess software quality and catch correctness problems. Yet, these tools often fail to provide instantaneous feedback and are thus restricted to nightly build processes. This precludes developers from fixing issues at their inception time, i.e., when the problem and the developed solution are both still fresh in mind. To provide instantaneous feedback, incrementalization is a well-known technique that exploits the fact that developers make only small changes to the code, so analysis results can be re-computed quickly from those changes. Yet, incrementalization requires carefully crafted static analyses, which makes a manual approach unattractive. Automated incrementalization alleviates these problems and allows analysis writers to formulate their analyses as queries with the full data set in mind, without worrying about the semantics of incremental changes. Existing approaches to automated incrementalization use standard technologies, such as deductive databases, that provide declarative query languages yet require materializing the full dataset in main memory, i.e., the memory is permanently occupied by the data required for the analyses. Other standard technologies, such as relational databases, offer better scalability due to persistence, yet incur large transaction times for the data. Neither technology is a perfect match for integrating static analyses into an IDE, since the underlying data, i.e., the code base, is already persisted and managed by the IDE. Hence, transitioning the data into a database is redundant work.
In this thesis a novel approach is proposed that provides a declarative query language and automated incrementalization, yet retains in memory only the necessary minimum of data, i.e., only the data required for the incrementalization. The approach allows static analyses to be declared as incrementally maintained views, where the underlying formalism for incrementalization is the relational algebra with extensions for object-orientation and recursion. The algebra allows deducing which data is the necessary minimum for incremental maintenance, and indeed shows that many views are self-maintainable, i.e., do not require materializing any data at all. In addition, an optimization for the algebra is proposed that widens the range of self-maintainable views, based on domain knowledge of the underlying data. The optimization works similarly to declaring primary keys for databases, i.e., it is declared on the schema of the data and defines which data is incrementally maintained in the same scope. The scope makes all analyses (views) that correlate only data within the boundaries of the scope self-maintainable. The approach is implemented as an embedded domain-specific language in a general-purpose programming language. The implementation can be understood as a database-like engine with an SQL-style query language and the execution semantics of the relational algebra. As such, the system is a general-purpose database-like query engine and can be used to incrementalize domains other than static analyses. To evaluate the approach, a large variety of static analyses were sampled from real-world tools and formulated as incrementally maintained views in the implemented engine.
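    Self-maintainability can be illustrated with the simplest case, a selection view. This is a minimal sketch under assumed names, not the thesis's engine: the view reacts to each insertion or deletion delta alone and never materializes the base relation.

```python
# Minimal sketch of an incrementally maintained selection view. Selections
# are self-maintainable in the sense described above: each delta is handled
# on its own, so the base relation never needs to be held in memory.
class SelectionView:
    def __init__(self, predicate):
        self.predicate = predicate
        self.observers = []   # downstream views or result collectors

    def on_insert(self, row):
        if self.predicate(row):
            for o in self.observers:
                o.append(("+", row))   # propagate an insertion delta

    def on_delete(self, row):
        if self.predicate(row):
            for o in self.observers:
                o.append(("-", row))   # propagate a deletion delta

# A toy analysis expressed as a view: methods longer than 50 instructions.
results = []
long_methods = SelectionView(lambda m: m["instructions"] > 50)
long_methods.observers.append(results)

# Simulated code-base changes, as an IDE would feed them in.
long_methods.on_insert({"name": "parse", "instructions": 120})
long_methods.on_insert({"name": "tiny", "instructions": 3})
long_methods.on_delete({"name": "parse", "instructions": 120})
print(results)  # two events for 'parse': one insert delta, one delete delta
```

    Joins and aggregations are where the hard cases lie, since they may need auxiliary state; the thesis's algebra and scope optimization determine how much of that state is truly necessary.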

    Late-bound code generation

    Each time a function or method is invoked during the execution of a program, a stream of instructions is issued to some underlying hardware platform. But exactly what underlying hardware, and which instructions, is usually left implicit. However, in certain situations it becomes important to control these decisions. For example, particular problems can only be solved in real-time when scheduled on specialised accelerators, such as graphics coprocessors or computing clusters. We introduce a novel operator for hygienically reifying the behaviour of a runtime function instance as a syntactic fragment, in a language which may in general differ from the source function definition. Translation and optimisation are performed by recursively invoked, dynamically dispatched code generators. Side-effecting operations are permitted, and their ordering is preserved. We compare our operator with other techniques for pragmatic control, observing that the use of our operator supports lifting arbitrary mutable objects, and neither requires rewriting sections of the source program in a multi-level language, nor interferes with the interface to individual software components. Due to its lack of interference at the abstraction level at which software is composed, we believe that our approach poses a significantly lower barrier to practical adoption than current methods. The practical efficacy of our operator is demonstrated by using it to offload the user interface rendering of a smartphone application to an FPGA coprocessor, including both statically and procedurally defined user interface components. The generated pipeline is an application-specific, statically scheduled processor-per-primitive rendering pipeline, suitable for place-and-route style optimisation. To demonstrate the compatibility of our operator with existing languages, we show how it may be defined within the Python programming language.
We introduce a transformation for weakening mutable to immutable named bindings, termed let-weakening, to solve the problem of propagating information pertaining to named variables between modular code-generating units.
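    The core mechanism, reifying a function as a syntactic fragment and translating it with recursively invoked, dynamically dispatched generators, can be sketched in Python. This is an illustration only, not the thesis's operator: the real operator hygienically reifies a runtime function instance, whereas this self-contained sketch parses a source string and emits C-like text.

```python
import ast

# Illustrative sketch: translate a tiny Python function into C-like text by
# recursively invoking generators dispatched on the syntax node type. The
# C-like output language and the float typing are arbitrary assumptions.
def gen(node):
    if isinstance(node, ast.FunctionDef):
        args = ", ".join("float " + a.arg for a in node.args.args)
        return f"float {node.name}({args}) {{ {gen(node.body[0])} }}"
    if isinstance(node, ast.Return):
        return f"return {gen(node.value)};"
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Mult):
        return f"({gen(node.left)} * {gen(node.right)})"
    if isinstance(node, ast.Name):
        return node.id
    raise NotImplementedError(type(node).__name__)

src = "def square(x):\n    return x * x\n"
fn_ast = ast.parse(src).body[0]
print(gen(fn_ast))  # float square(float x) { return (x * x); }
```

    Dispatching on node type is what makes the generators extensible: supporting a new construct or a new target language means adding generators, not rewriting the source program in a multi-level language.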

    Native Language OLAP Query Execution

    Online Analytical Processing (OLAP) applications are widely used in the components of contemporary decision support systems. However, existing OLAP query languages are neither efficient nor intuitive for developers. In particular, Microsoft's Multidimensional Expressions language (MDX), the de facto standard for OLAP, is essentially a string-based extension to SQL that hinders code refactoring, limits compile-time checking, and provides no object-oriented functionality whatsoever. In this thesis, we present Native language OLAP query eXecution, or NOX, a framework that provides responsive and intuitive query facilities. To this end, we exploit the underlying OLAP conceptual data model and provide a clean integration between the server and the client language. NOX queries are object-oriented and support inheritance, refactoring and compile-time checking. Underlying this functionality are a domain-specific algebra and language grammar that are used to transparently convert client-side queries written in the native development language into algebraic operations understood by the server. In our prototype of NOX, Java is used as the native language. We provide client-side libraries that define an API for programmers to use for writing OLAP queries. We investigate the design of NOX through a series of real-world query examples. Specifically, we explore the following: fundamental SELECTION and PROJECTION, set operations, hierarchies, parametrization and query inheritance. We compare NOX queries to MDX and show the intuitiveness and robustness of NOX. We also investigate NOX expressiveness with respect to MDX from an algebraic point of view by demonstrating the correspondence of the two approaches in terms of SELECTION and PROJECTION operations. We believe the practical benefit of NOX-style query processing is significant. In short, it largely reduces OLAP database access to the manipulation of client-side, in-memory data objects.
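    The native-language idea can be sketched as follows. The NOX prototype uses Java and its own API; this Python sketch with hypothetical names only illustrates the principle: queries are ordinary objects in the host language, query inheritance is class inheritance, and a small compiler lowers the object to an MDX-style string for the server.

```python
# Hypothetical sketch of native-language OLAP querying; class and method
# names are illustrative, not the NOX API.
class Query:
    def __init__(self, cube):
        self.cube = cube
        self.rows = []
        self.cols = []

    def select(self, axis, *members):
        # SELECTION of members onto an axis; returns self for chaining.
        getattr(self, axis).extend(members)
        return self

    def compile(self):
        # Lower the object-level query to an MDX-like server query string.
        return (f"SELECT {{{', '.join(self.cols)}}} ON COLUMNS, "
                f"{{{', '.join(self.rows)}}} ON ROWS FROM [{self.cube}]")

class SalesQuery(Query):
    # Query inheritance: a reusable base query refined by subclasses.
    def __init__(self):
        super().__init__("Sales")
        self.select("cols", "[Measures].[Revenue]")

q = SalesQuery().select("rows", "[Time].[2009]")
print(q.compile())
# SELECT {[Measures].[Revenue]} ON COLUMNS, {[Time].[2009]} ON ROWS FROM [Sales]
```

    Because the query is a host-language object rather than a string, the compiler can type-check it and IDE tooling can refactor it, which is the gap in string-based MDX that NOX targets.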

    Tenth Workshop and Tutorial on Practical Use of Coloured Petri Nets and the CPN Tools Aarhus, Denmark, October 19-21, 2009

    This booklet contains the proceedings of the Tenth Workshop on Practical Use of Coloured Petri Nets and the CPN Tools, October 19-21, 2009. The workshop is organised by the CPN group at the Department of Computer Science, University of Aarhus, Denmark. The papers are also available in electronic form via the web pages: http://www.cs.au.dk/CPnets/events/workshop0

    Assessment practices and their impact on home economics education in Ireland

    This study was prompted by an interest in the extent to which the aims of home economics education in Ireland are being served by the assessment carried out at a national level. This interest led to an empirical investigation of key stakeholders' perceptions of the validity of home economics assessment and a critical evaluation of its impact on teaching and learning. The data collection primarily comprised interviews with a selection of teachers and other key people such as students, teacher educators and professional home economists, and a complementary analysis of the curriculum and design of Junior and Leaving Certificate home economics assessments during the period 2005-2014. The analysis of interview data, combined with the curriculum and assessment analyses, revealed the compounding impact and washback effect of home economics assessments on student learning experience and outcomes. This impact was reflected in several areas of the findings, including an evident satisfaction among the respondents with junior cycle assessment, due to the perceived appropriateness of the assessment design and operational arrangements, and dissatisfaction with curriculum and assessment arrangements at senior cycle, which were considered inappropriate and to negatively impact the quality of learning achieved. The respondents candidly pointed to what they considered to be an acceptance by some teachers of unethical behaviour around the completion of journal tasks. The respondents indicated that summative assessment practices are commonly used in home economics classrooms, and the findings strongly suggest that external examinations are influencing teaching methods by demanding a test-oriented pedagogy to enable students to achieve certificate points.
The technical analysis of the Junior and Leaving Certificate examination questions confirmed that these external assessments predominantly promote lower-order learning, and there are clear indications of a washback effect on the quality of learning achieved. There is a view that the subject's position in the curriculum is weakened due to a lack of coherence around practice, as well as a lack of advocacy and leadership in the field. There was little evidence of the impact of home economics education, and many of the interviewees merely 'hoped' that home economics made a difference in the lives of students. The study also showed that there are profiling, identity and teacher agency issues impacting upon the home economics profession. While not immediately generalisable to all home economics teachers or settings in schools, this study nonetheless implies that if the views and practices of the respondents were to be replicated across the whole of the home economics education community, it would not be safe to view national assessment results as a valid indicator of learning and achievement standards in the subject. There are grounds in this work to argue that the subject's values and purposes are not supported by existing curriculum, pedagogy and assessment arrangements.

    Abstraction over non-local object information in aspect-oriented programming using path expression pointcuts

    Aspect-oriented software development (AOSD) consists of a number of technologies that promise a better level of modularization of concerns that cannot be separated into individual modules using conventional techniques. Aspect-oriented programming (AOP) is one of these technologies. It allows modularization at the level of software application code. It provides programmers with means to quantify over specific points in the base application code, called join points, at which the crosscutting concern code must be triggered. The quantification is achieved by special selection constructs called pointcuts, while the triggered code responsible for adapting the selected join point is provided by a special construct called advice. The selection and adaptation mechanisms in aspect-oriented programming depend heavily on the distinguishing properties of the join points. These properties are either derived from the local execution context at the join point or are non-local to the join point. Aspect-oriented systems provide plenty of pointcut constructs that support accessing local join point properties, but they rarely support non-local ones. A large research effort has been made to extend current aspect-oriented systems to solve the problem of non-locality. However, none of these proposals supports non-local object relationships. There are many situations where a good abstraction over non-local object information is needed; otherwise, developers are obliged to provide complex and error-prone workarounds inside advice bodies that conceptually do not reflect the semantics of join point selection and mix it with the semantics of join point adaptation. Such recurrent situations occur when trying to modularize the object persistence concern. Object persistence, the process of storing and retrieving objects to and from the datastore, is a classical example of a crosscutting concern.
Orthogonal object persistence meets the obliviousness property of AOP: the base code should not be prepared upfront for persistence. This thesis addresses the shortcomings in current aspect-oriented persistence systems. It shows that these shortcomings are due to the lack of support for non-local object information in the aspect-oriented languages used. To overcome this problem, this thesis proposes a new extension to current pointcut languages, called path expression pointcuts, which operates on object graphs and makes relevant object information available to the aspects. As an explicit and complete construct, a formal semantics and type system are provided. Moreover, an implementation of path expression pointcuts is discussed in the thesis, along with its usage to show how the aforementioned problems are resolved.
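    The idea of selecting join points by a path through the object graph can be sketched as follows. This is a hypothetical illustration, not the thesis's language: a path expression is matched against objects reachable from the join point, and the matched non-local object is exposed to the advice.

```python
# Hypothetical sketch of a path-expression pointcut over an object graph.
# Class names, the path syntax, and the advice are all illustrative.
class Customer:
    def __init__(self, name):
        self.name = name
        self.orders = []

class Order:
    def __init__(self, customer, total):
        self.customer = customer    # non-local object information
        self.total = total
        customer.orders.append(self)

def match_path(obj, path):
    """Follow dot-separated attribute names; return the terminal object,
    or None if the path does not match the object graph."""
    for attr in path.split("."):
        obj = getattr(obj, attr, None)
        if obj is None:
            return None
    return obj

def persist_advice(join_point_target):
    # The advice fires only when the path 'customer.name' matches, and the
    # matched non-local object is handed to the advice body directly,
    # instead of being re-navigated by workaround code inside the advice.
    owner = match_path(join_point_target, "customer.name")
    if owner is not None:
        return f"persisting order of {owner}"
    return None

alice = Customer("alice")
order = Order(alice, 42)
print(persist_advice(order))  # persisting order of alice
```

    Keeping the graph navigation in the pointcut, rather than in the advice body, is what preserves the separation between join point selection and join point adaptation.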