7 research outputs found

    Towards Automatic Learning of Heuristics for Mechanical Transformations of Procedural Code

    Get PDF
    The current trend in next-generation exascale systems is towards integrating a wide range of specialized (co-)processors into traditional supercomputers. However, the integration of different specialized devices increases the degree of heterogeneity and the complexity of programming such systems. Given the efficiency of heterogeneous systems in terms of Watts and FLOPS per surface unit, opening access to heterogeneous platforms for a wider range of users is an important problem to be tackled. To bridge the gap between heterogeneous systems and programmers, in this paper we propose a machine learning-based approach to learning heuristics that define the transformation strategies of a program transformation system. Our approach combines reinforcement learning and classification methods in a novel way to efficiently tackle the problems inherent to this type of system. Preliminary results demonstrate the suitability of the approach for easing the programmability of heterogeneous systems. Comment: Part of the Program Transformation for Programmability in Heterogeneous Architectures (PROHA) workshop, Barcelona, Spain, 12th March 2016, 9 pages, LaTeX
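
    To make the combination of reinforcement learning and transformation heuristics concrete, the following is a minimal, hypothetical Python sketch of learning a heuristic with tabular Q-learning; the program model, feature abstraction, transformation set, and reward are all illustrative assumptions, not the paper's actual system.

        import random
        from collections import defaultdict

        # Hypothetical transformation actions a source-to-source system might offer.
        ACTIONS = ["inline_call", "fuse_loops", "offload_kernel", "stop"]

        def features(p):
            # Coarse state abstraction of a program: two boolean features here.
            return (p["loops"] > 1, p["calls"] > 0)

        def apply(p, a):
            # Toy model of each transformation's effect on estimated runtime.
            p = dict(p)
            if a == "inline_call" and p["calls"] > 0:
                p["calls"] -= 1; p["runtime"] *= 0.95
            elif a == "fuse_loops" and p["loops"] > 1:
                p["loops"] -= 1; p["runtime"] *= 0.90
            elif a == "offload_kernel" and p["loops"] >= 1:
                p["runtime"] *= 0.70
            return p

        Q = defaultdict(float)                      # Q[(state, action)] -> value
        alpha, gamma, eps = 0.5, 0.9, 0.2

        for episode in range(500):
            p = {"loops": 3, "calls": 2, "runtime": 100.0}
            for _ in range(10):
                s = features(p)
                a = (random.choice(ACTIONS) if random.random() < eps
                     else max(ACTIONS, key=lambda t: Q[(s, t)]))
                if a == "stop":
                    break
                p2 = apply(p, a)
                r = p["runtime"] - p2["runtime"]    # reward: runtime saved
                best = max(Q[(features(p2), t)] for t in ACTIONS)
                Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
                p = p2

        # The Q-table now encodes a heuristic: in a given state, apply the
        # highest-valued transformation. Training a classifier on the
        # resulting (state, best action) pairs would let the learned policy
        # generalize to programs whose states were never visited.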

    Transforming data by calculation

    Get PDF
    This paper addresses the foundations of data-model transformation. A catalog of data mappings is presented which includes abstraction and representation relations and associated constraints. These are justified in an algebraic style via the pointfree transform, a technique whereby predicates are lifted to binary relation terms (of the algebra of programming) in a two-level style encompassing both data and operations. This approach to data calculation, which also includes transformation of recursive data models into “flat” database schemes, is offered as an alternative to standard database design from abstract models. The calculus is also used to establish a link between the proposed transformational style and bidirectional lenses developed in the context of the classical view-update problem. Fundação para a CiĂȘncia e a Tecnologia (FCT)
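
    As a brief illustration of the lifting mentioned above (a standard presentation of the pointfree transform; the paper's own notation may differ), a predicate p on a datatype A is represented by the coreflexive relation \Phi_p, a fragment of the identity:

        \[ b \;\Phi_p\; a \;\;\equiv\;\; (b = a) \,\wedge\, p(a), \qquad \Phi_p \,\subseteq\, \mathrm{id}_A \]

    A pointwise implication between predicates then becomes a relational inclusion, so quantified first-order constraints can be manipulated equationally in the algebra of programming:

        \[ \langle \forall\, a \,::\, p(a) \Rightarrow q(a) \rangle \;\;\equiv\;\; \Phi_p \,\subseteq\, \Phi_q \]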

    Program Analysis and Compilation Techniques for Speeding up Transactional Database Workloads

    Get PDF
    There is a trend towards increased specialization of data management software for performance reasons. The improved performance not only leads to more efficient usage of the underlying hardware and cuts the operating costs of the system, but is also a game-changing competitive advantage for many emerging application domains such as high-frequency algorithmic trading, clickstream analysis, infrastructure monitoring, fraud detection, and online advertising, to name a few.

    In this thesis, we study the automatic specialization and optimization of database application programs -- sequences of queries and updates, augmented with control-flow constructs as they appear in database scripts, user-defined functions (UDFs), transactional workloads, and triggers in languages such as PL/SQL. We propose to build online transaction processing (OLTP) systems around a modern compiler infrastructure. We show how to build an optimizing compiler for transaction programs using generative programming and state-of-the-art compiler technology, and present techniques for aggressive code inlining, fusion, deforestation, and data structure specialization in the domain of relational transaction programs. We also identify and explore the key optimizations that can be applied in this domain.

    In addition, we study the advantage of using program dependency analysis and restructuring to enable concurrency control algorithms to achieve higher performance. Traditionally, optimistic concurrency control algorithms, such as optimistic Multi-Version Concurrency Control (MVCC), avoid blocking concurrent transactions at the cost of having a validation phase. Upon failure in the validation phase, the transaction is usually aborted and restarted from scratch. This "abort and restart" approach becomes a performance bottleneck for use cases with highly contended objects or long-running transactions. Moreover, restarting from scratch creates a negative feedback loop in the system, because the system incurs additional overhead that may create even more conflicts. Using the dependency information inside the transaction programs, we instead propose a novel transaction repair approach for in-memory databases. This low-overhead approach summarizes a transaction program in the form of a dependency graph, which also contains the constructs used in the validation phase of the MVCC algorithm. When conflicts among transactions are encountered, our mechanism quickly detects the conflict locations in the program and partially re-executes the conflicting transactions. This approach maximizes the reuse of the computations done in the first execution round and increases transaction processing throughput.

    We evaluate the proposed ideas and techniques on popular benchmarks such as TPC-C and modified versions of TPC-H and TPC-E, as well as other micro-benchmarks. We show that applying these techniques leads to 2x-100x performance improvements in many use cases. Furthermore, by selectively disabling some of the optimizations in the compiler, we derive a clinical and precise way of obtaining insight into their individual performance contributions.
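
    To make the transaction repair idea concrete, here is a minimal, hypothetical Python sketch of dependency-based partial re-execution; the statement representation and store model are illustrative assumptions, not the system built in the thesis.

        # A transaction program as an ordered list of statements; each declares
        # the cells it reads and writes in a shared store, plus its computation.
        PROGRAM = [
            ("s1", ["x"], ["t"], lambda st: st["x"] + 1),
            ("s2", ["y"], ["u"], lambda st: st["y"] * 2),
            ("s3", ["t", "u"], ["z"], lambda st: st["t"] + st["u"]),
        ]

        def run(statements, store):
            for _sid, _reads, writes, fn in statements:
                store[writes[0]] = fn(store)

        def repair(program, store, conflicted):
            # Walk the program in order, collecting statements tainted by the
            # conflicted inputs (directly or transitively), and re-run only
            # those; every other first-round result is reused as-is.
            tainted, to_rerun = set(conflicted), []
            for stmt in program:
                sid, reads, writes, fn = stmt
                if tainted & set(reads):
                    to_rerun.append(stmt)
                    tainted |= set(writes)
            run(to_rerun, store)
            return [sid for sid, *_ in to_rerun]

        store = {"x": 10, "y": 5}
        run(PROGRAM, store)                    # first round: t=11, u=10, z=21
        store["x"] = 40                        # validation saw a newer version of x
        print(repair(PROGRAM, store, {"x"}))   # only ['s1', 's3'] are re-executed
        print(store["z"])                      # 51; s2's result (u=10) is reused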

    The Design and Implementation of an Interactive Proof Editor

    Get PDF
    This thesis describes the design and implementation of the IPE, an interactive proof editor for first-order intuitionistic predicate calculus, developed at the University of Edinburgh during 1983-1986, by the author together with John Cartmell and Tatsuya Hagino. The IPE uses an attribute grammar to maintain the state of its proof tree as a context-sensitive structure. The interface allows free movement through the proof structure, and encourages a "proof-by-experimentation" approach, since no proof step is irrevocable. We describe how the IPE's proof rules can be derived from natural deduction rules for first-order intuitionistic logic, how these proof rules are encoded as an attribute grammar, and how the interface is constructed on top of the grammar. Further facilities for the manipulation of the IPE's proof structures are presented, including a notion of IPE-tactic for their automatic construction. We also describe an extension of the IPE to enable the construction and use of simply-structured collections of axioms and results, the main provision here being an interactive "theory browser" which looks for facts which match a selected problem.
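
    As a small illustration of the kind of goal refinement the abstract describes (natural deduction rules for intuitionistic logic), here is a minimal, hypothetical Python sketch; the formula encoding and rule names are illustrative assumptions, not the IPE's attribute-grammar implementation.

        # Formulas as nested tuples: ("imp", A, B), ("and", A, B), or an atom string.
        def imp_intro(hypotheses, goal):
            # ->-introduction: to prove A -> B, assume A and prove B.
            assert goal[0] == "imp"
            return [(hypotheses + [goal[1]], goal[2])]

        def and_intro(hypotheses, goal):
            # &-introduction: to prove A & B, prove A and prove B separately.
            assert goal[0] == "and"
            return [(hypotheses, goal[1]), (hypotheses, goal[2])]

        def assumption(hypotheses, goal):
            # Close a subgoal when it already appears among the hypotheses.
            return [] if goal in hypotheses else None

        # Refine  |- p -> (q -> (p & q))  step by step; each rule application
        # replaces one open subgoal with its children. In an editor that keeps
        # the whole tree, no step is irrevocable: any node can be re-expanded.
        goal = ("imp", "p", ("imp", "q", ("and", "p", "q")))
        (sub1,) = imp_intro([], goal)           # assume p; prove q -> (p & q)
        (sub2,) = imp_intro(*sub1)              # assume q; prove p & q
        left, right = and_intro(*sub2)          # prove p; prove q
        print(assumption(*left), assumption(*right))  # [] [] -> both closed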

    Program Transformation and Compilation

    No full text