
    Designing a high performance parallel logic programming system

    Compilation techniques such as those portrayed by the Warren Abstract Machine (WAM) have greatly improved the speed of execution of logic programs. The research presented herein is geared towards providing additional performance to logic programs through the use of parallelism, while preserving the conventional semantics of logic languages. Two areas to which special attention is given are the preservation of sequential performance and storage efficiency, and the use of low-overhead mechanisms for controlling parallel execution. Accordingly, the techniques used for supporting parallelism are efficient extensions of those which have brought high inferencing speeds to sequential implementations. At a lower level, special attention is also given to design and simulation detail and to the architectural implications of the execution model's behavior. This paper offers an overview of the basic concepts and techniques used in the parallel design, the simulation tools used, and some of the results obtained to date.
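    To make the flavour of goal-level parallelism concrete, the following is a minimal sketch, not taken from the paper, of running the independent goals of a conjunction in parallel while dependent goals keep their usual left-to-right order; the names `solve` and `solve_conjunction` and the callable goals are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's design): run independent goals
# of a conjunction in parallel, fall back to sequential execution otherwise.
from concurrent.futures import ThreadPoolExecutor

def solve(goal, bindings):
    """Placeholder for WAM-style resolution of a single goal."""
    return goal(bindings)

def solve_conjunction(goals, bindings, independent):
    """Parallelise only when an analysis says the goals share no variables."""
    if independent:
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(lambda g: solve(g, dict(bindings)), goals))
        merged = dict(bindings)
        for r in results:
            merged.update(r)          # combine bindings from independent goals
        return merged
    for g in goals:                   # dependent goals: ordinary sequential order
        bindings = solve(g, bindings)
    return bindings

# Toy usage: two goals that bind different variables.
g1 = lambda b: {**b, "X": 1}
g2 = lambda b: {**b, "Y": 2}
print(solve_conjunction([g1, g2], {}, independent=True))   # {'X': 1, 'Y': 2}
```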

    A Graph-Based Semantics Workbench for Concurrent Asynchronous Programs

    A number of novel programming languages and libraries have been proposed that offer simpler-to-use models of concurrency than threads. It is challenging, however, to devise execution models that successfully realise their abstractions without forfeiting performance or introducing unintended behaviours. This is exemplified by SCOOP, a concurrent object-oriented message-passing language, which has seen multiple semantics proposed and implemented over its evolution. We propose a "semantics workbench" with fully and semi-automatic tools for SCOOP that can be used to analyse and compare programs with respect to different execution models. We demonstrate its use in checking the consistency of semantics by applying it to a set of representative programs, and by highlighting a deadlock-related discrepancy between the principal execution models of the language. Our workbench is based on a modular and parameterisable graph transformation semantics implemented in the GROOVE tool. We discuss how graph transformations are leveraged to atomically model intricate language abstractions, and how the visual yet algebraic nature of the model can be used to ascertain soundness. Comment: Accepted for publication in the proceedings of FASE 2016 (to appear).
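    As a rough illustration of the "match a subgraph, then rewrite it" style of semantics mentioned above, the toy rule below dequeues a pending call from an idle handler; the graph encoding and the rule are assumptions for illustration and are not the GROOVE/SCOOP formalisation.

```python
# Minimal sketch (assumption): a graph transformation step as match + rewrite.
graph = {
    "handler1": {"kind": "handler", "queue": ["call_a", "call_b"], "busy": False},
}

def match_dequeue(g):
    """Find a handler node that is idle and has a pending call."""
    for node, attrs in g.items():
        if attrs["kind"] == "handler" and not attrs["busy"] and attrs["queue"]:
            return node
    return None

def apply_dequeue(g, node):
    """Rewrite step: remove the call from the queue and mark the handler busy."""
    call = g[node]["queue"].pop(0)
    g[node]["busy"] = True
    return call

node = match_dequeue(graph)
if node is not None:
    print(apply_dequeue(graph, node))   # -> call_a
```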

    Performance evaluation of a storage model for OR-parallel execution of logic programs

    As the next step towards a computer architecture for parallel execution of logic programs, we have implemented four refinements of the basic storage model for OR-parallelism and gathered data about their performance on two types of shared-memory architectures, with and without local memories. The results show how the different properties of the implementations influence performance, and indicate that the implementations using hashing techniques (hash windows) will perform best, especially on systems with a global storage and caches. We raise the question of the usefulness of the simulation technique as a tool in developing new computer architectures. Our answer is that simulations cannot give the ultimate answers to the design questions, but if only judiciously chosen parts of the machine are simulated at a detailed level, then the obtained results can give very good guidance in making design choices.
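    The hash-window idea can be pictured as branch-local binding tables chained to their parents, so that conditional bindings never interfere across OR-branches; the sketch below is an illustrative assumption of that lookup discipline, not the paper's implementation.

```python
# Minimal sketch (assumption): hash windows for OR-parallel variable bindings.
class HashWindow:
    def __init__(self, parent=None):
        self.parent = parent
        self.bindings = {}            # variable -> term, local to this branch

    def bind(self, var, term):
        self.bindings[var] = term     # conditional binding stays branch-local

    def lookup(self, var):
        w = self
        while w is not None:          # walk towards the root of the OR-tree
            if var in w.bindings:
                return w.bindings[var]
            w = w.parent
        return None                   # unbound

root = HashWindow()
root.bind("X", "f(1)")
branch_a = HashWindow(parent=root)    # two OR-branches sharing the root
branch_b = HashWindow(parent=root)
branch_a.bind("Y", "a")
branch_b.bind("Y", "b")               # no interference between branches
print(branch_a.lookup("X"), branch_a.lookup("Y"), branch_b.lookup("Y"))
```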

    The Relative Effects of Logistics, Coordination and Human Resource on Humanitarian Aid and Disaster Relief Mission Performance

    Most studies on humanitarian aid and disaster relief (HADR) missions suggest that the quality of logistics, coordination and human resource management will affect their performance. However, studies in developing countries are mainly conceptual and lack the necessary empirical evidence to support these contentions. The current paper therefore aimed to fill this knowledge gap by statistically examining the effects of the abovementioned factors on such missions. Focusing on the Malaysian army due to its extensive experience in HADR operations, the paper opted for a quantitative approach to allow for a more objective analysis of the issues. The results show that there are other potential determinants of mission success which deserve due attention in future studies. They also suggest that human resource is not easily measured as a construct, and that this limitation in methodology must be overcome to derive more accurate conclusions regarding its effect on HADR mission performance.

    Terms of Reference: Genebank Platform Evaluation

    Through the Genebank Platform, CGIAR genebanks managed collections of more than 20 staple crops in 12 locations on five continents. The collections remain freely available upon request to thousands of users worldwide under the International Treaty on Plant Genetic Resources for Food and Agriculture (ITPGRFA), accounting for a large amount of the germplasm exchanged every year under the multilateral system of access and benefit sharing. CGIAR genebanks safeguard some of the largest and most widely used collections of crop diversity in the world, critical to attaining global development goals to end hunger and improve food and nutrition security. The genebanks, as a key driver of the international exchange of Plant Genetic Resources for Food and Agriculture (PGRFA), are fundamental to delivering the CGIAR 2030 Research and Innovation Strategy.

    A Rewrite Framework for Language Definitions and for Generation of Efficient Interpreters

    A rewrite logic semantic definitional framework for programming languages, called K, is introduced, together with partially automated translations of K language definitions into rewriting logic and into C. The framework is exemplified by defining SILF, a simple imperative language with functions. The translation of K definitions into rewriting logic enables the use of the various analysis tools developed for rewriting logic specifications, while the translation into C allows for very efficient interpreters. A suite of tests shows the performance of interpreters compiled from K definitions.
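    In the spirit of defining a language as rewrite rules over a configuration with a computation cell and a store cell, the sketch below covers only integer assignment and sequencing; it is an illustrative assumption, not the K tool or the SILF definition.

```python
# Minimal sketch (assumption): rewriting-style semantics over a configuration
# with a computation cell "k" and a store cell "state".
def step(config):
    k, state = config["k"], config["state"]
    if not k:
        return None                           # nothing left to rewrite
    head, rest = k[0], k[1:]
    op = head[0]
    if op == "seq":                           # seq(S1, S2) => S1 ~> S2
        return {"k": list(head[1:]) + rest, "state": state}
    if op == "assign":                        # assign(X, N) => . , state[X := N]
        _, var, value = head
        new_state = dict(state); new_state[var] = value
        return {"k": rest, "state": new_state}
    raise ValueError(f"no rule for {op}")

config = {"k": [("seq", ("assign", "x", 1), ("assign", "y", 2))], "state": {}}
while config["k"]:
    config = step(config)
print(config["state"])                        # {'x': 1, 'y': 2}
```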

    DualTable: A Hybrid Storage Model for Update Optimization in Hive

    Hive is the most mature and prevalent data warehouse tool providing a SQL-like interface in the Hadoop ecosystem. It is successfully used in many Internet companies and shows its value for big data processing in traditional industries. However, enterprise big data processing systems, as in Smart Grid applications, usually require complicated business logic and involve many data manipulation operations such as updates and deletes. Hive cannot offer sufficient support for these while preserving high query performance. Hive using the Hadoop Distributed File System (HDFS) for storage cannot implement data manipulation efficiently, and Hive on HBase suffers from poor query performance even though it can support faster data manipulation. There is a project based on Hive issue Hive-5317 to support update operations, but it has not been finished in Hive's latest version. Since this ACID-compliant extension adopts the same data storage format on HDFS, the update performance problem is not solved. In this paper, we propose a hybrid storage model called DualTable, which combines the efficient streaming reads of HDFS and the random write capability of HBase. Hive on DualTable provides better data manipulation support and preserves query performance at the same time. Experiments on a TPC-H data set and on a real smart grid data set show that Hive on DualTable is up to 10 times faster than Hive when executing update and delete operations. Comment: accepted by the industry session of ICDE201
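    The hybrid storage idea can be sketched as an append-only, scan-friendly base store patched at query time by a random-write delta store; the class below is a simplified assumption, not the DualTable implementation.

```python
# Minimal sketch (assumption): base data in a scan-friendly store (HDFS-like),
# updates and deletes routed to a random-write store (HBase-like), merged on read.
class DualTable:
    def __init__(self):
        self.base = {}        # stands in for immutable files on HDFS
        self.delta = {}       # stands in for an HBase table: key -> row or None

    def bulk_load(self, rows):
        self.base.update(rows)            # cheap sequential load

    def update(self, key, row):
        self.delta[key] = row             # random write, no file rewrite

    def delete(self, key):
        self.delta[key] = None            # tombstone

    def scan(self):
        """Streaming read of the base, patched with the delta store."""
        for key, row in self.base.items():
            if key in self.delta:
                if self.delta[key] is not None:
                    yield key, self.delta[key]
            else:
                yield key, row

t = DualTable()
t.bulk_load({1: "a", 2: "b", 3: "c"})
t.update(2, "b2")
t.delete(3)
print(list(t.scan()))     # [(1, 'a'), (2, 'b2')]
```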

    Region-based memory management for Mercury programs

    Region-based memory management (RBMM) is a form of compile-time memory management, well-known from the functional programming world. In this paper we describe our work on implementing RBMM for the logic programming language Mercury. One interesting point about Mercury is that it is designed with strong type, mode, and determinism systems. These systems not only provide Mercury programmers with several direct software engineering benefits, such as self-documenting code and clear program logic, but also give language implementors a large amount of information that is useful for program analyses. In this work, we make use of this information to develop program analyses that determine the distribution of data into regions and transform Mercury programs by inserting into them the necessary region operations. We prove the correctness of our program analyses and transformation. To execute the annotated programs, we have implemented runtime support that tackles the two main challenges posed by backtracking. First, backtracking can require regions removed during forward execution to be "resurrected"; and second, any memory allocated during a computation that has been backtracked over must be recovered promptly and without waiting for the regions involved to come to the end of their life. We describe in detail our solution of both these problems. We study in detail how our RBMM system performs on a selection of benchmark programs, including some well-known difficult cases for RBMM. Even with these difficult cases, our RBMM-enabled Mercury system obtains clearly faster runtimes for 15 out of 18 benchmarks compared to the base Mercury system with its Boehm runtime garbage collector, with an average runtime speedup of 24%, and an average reduction in memory requirements of 95%. In fact, our system achieves optimal memory consumption in some programs. Comment: 74 pages, 23 figures, 11 tables. A shorter version of this paper, without proofs, is to appear in the journal Theory and Practice of Logic Programming (TPLP).
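    The region operations that such a transformation inserts can be pictured as create, allocate and remove calls on a small runtime; the sketch below is an illustrative assumption that omits the backtracking support (resurrecting removed regions, reclaiming allocations made since a choice point) described in the abstract.

```python
# Minimal sketch (assumption, not the Mercury runtime): the basic region
# operations a compiler transformation typically inserts.
class Region:
    def __init__(self, name):
        self.name = name
        self.cells = []               # all terms allocated in this region

class RegionRuntime:
    def __init__(self):
        self.live = {}

    def create_region(self, name):
        self.live[name] = Region(name)

    def alloc(self, name, term):
        self.live[name].cells.append(term)
        return term

    def remove_region(self, name):
        del self.live[name]           # frees every term in one operation

rt = RegionRuntime()
rt.create_region("r1")
rt.alloc("r1", ("node", 1))
rt.alloc("r1", ("node", 2))
rt.remove_region("r1")               # whole data structure reclaimed together
```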

    Specification-enhanced execution

    Thesis (S.M.) by Jean Yang, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 54-57). Our goal is to provide a framework that allows the programmer to easily shift responsibility for certain aspects of the program execution to the runtime system. We present specification-enhanced execution, a programming and execution model that allows the programmer to describe certain aspects of program execution using high-level specifications that the runtime is responsible for executing. With our approach, the programmer provides an implementation that covers certain aspects of program behavior and a set of specifications that cover other aspects of program behavior. We propose a runtime system that uses concolic (combined concrete and symbolic) execution to simultaneously execute all aspects of the program. We describe LogLog, a language we have designed for using this programming and runtime model. We present a case study applying this programming model to real-world data processing programs and demonstrate the feasibility of both the programming and runtime models.
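    One way to picture combining a partial implementation with declarative specifications is a runtime that searches for concrete values satisfying the specification; the sketch below uses a brute-force search as a stand-in for the concolic execution and constraint solving described above, and all names and the example constraint are assumptions, not LogLog or the thesis runtime.

```python
# Minimal sketch (assumption): some fields are computed by code, others are
# pinned down by a specification that the runtime satisfies at execution time.
def run_with_spec(concrete_part, spec, domain):
    """Combine concretely computed fields with spec-constrained ones."""
    record = dict(concrete_part)                  # fields the code computed
    for candidate in domain:                      # brute-force "solver"
        trial = dict(record, **candidate)
        if spec(trial):
            return trial
    raise ValueError("no value satisfies the specification")

# The implementation computes `total`; the spec constrains `discount`.
concrete = {"total": 120}
spec = lambda r: 0 <= r["discount"] <= 20 and r["total"] - r["discount"] >= 100
domain = [{"discount": d} for d in range(0, 50)]
print(run_with_spec(concrete, spec, domain))      # {'total': 120, 'discount': 0}
```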