
    Exploiting parallelism in coalgebraic logic programming

    We present a parallel implementation of Coalgebraic Logic Programming (CoALP) in the programming language Go. CoALP was initially introduced to reflect the coalgebraic semantics of logic programming, with a coalgebraic derivation algorithm featuring both corecursion and parallelism. Here, we discuss how the coalgebraic semantics influenced our parallel implementation of logic programming.
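    Purely as an illustration of the style of parallelism involved (this is not the CoALP codebase; Goal and reduce are hypothetical stand-ins), here is a minimal Go sketch that reduces the independent subgoals of a clause body concurrently with goroutines:

        // Hedged sketch, not CoALP itself: each subgoal of a clause body is
        // reduced in its own goroutine, with a WaitGroup joining the results.
        package main

        import (
            "fmt"
            "sync"
        )

        // Goal is a hypothetical stand-in for an atomic logic-programming goal.
        type Goal string

        // reduce stands in for one derivation step applied to a single goal.
        func reduce(g Goal) string {
            return fmt.Sprintf("reduced(%s)", g)
        }

        func main() {
            body := []Goal{"p(X)", "q(X)", "r(X)"} // subgoals of one clause body
            results := make([]string, len(body))

            var wg sync.WaitGroup
            for i, g := range body {
                wg.Add(1)
                go func(i int, g Goal) {
                    defer wg.Done()
                    results[i] = reduce(g) // subgoals reduced concurrently
                }(i, g)
            }
            wg.Wait()
            fmt.Println(results)
        }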

    Multiparty Sessions based on Proof Nets

    We interpret Linear Logic Proof Nets in a term language based on the Solos calculus. The system includes a synchronisation mechanism, obtained by a conservative extension of the logic, that makes it possible to define non-deterministic behaviours and multiparty sessions.
    Comment: In Proceedings PLACES 2014, arXiv:1406.331
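    As a loose Go analogy of such a synchronisation point (the participant names and channel plumbing are invented here; the paper's actual mechanism lives in the Solos-based term language), three session participants can be made to wait for one another before any of them proceeds:

        // Hedged analogy, not the paper's calculus: a three-party rendezvous in
        // which no participant proceeds until all have reached the sync point.
        package main

        import (
            "fmt"
            "sync"
        )

        func main() {
            var arrived, finished sync.WaitGroup
            arrived.Add(3)
            finished.Add(3)
            proceed := make(chan struct{})

            for _, name := range []string{"A", "B", "C"} {
                go func(name string) {
                    defer finished.Done()
                    fmt.Println(name, "ready")
                    arrived.Done() // signal arrival at the synchronisation point
                    <-proceed      // block until every party has arrived
                    fmt.Println(name, "continues")
                }(name)
            }

            arrived.Wait() // all three parties are at the sync point
            close(proceed) // release them together
            finished.Wait()
        }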

    Adapting the interior point method for the solution of linear programs on high performance computers

    In this paper we describe a unified algorithmic framework for the interior point method (IPM) for solving linear programs (LPs) which allows us to adapt it across a range of high performance computer architectures. We set out the reasons why the IPM makes better use of high performance computer architecture than the sparse simplex method. In the inner iteration of the IPM a search direction is computed using Newton or higher order methods. Computationally this involves solving a sparse symmetric positive definite (SSPD) system of equations. The choice of direct and indirect methods for the solution of this system, and the design of data structures to take advantage of coarse grain parallel and massively parallel computer architectures, are considered in detail. Finally, we present experimental results of solving NETLIB test problems on examples of these architectures and put forward arguments as to why integration of the system within the sparse simplex method is beneficial.
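    To make the inner-iteration linear algebra concrete, here is a toy Go sketch of the direct-method route (the 2x2 matrix and right-hand side are invented; a real IPM works with large sparse systems and specialised factorisations): a dense Cholesky factorisation of an SPD matrix followed by triangular solves.

        // Hedged sketch: solve M x = b for a small dense symmetric positive
        // definite M via Cholesky, the kind of system the IPM search-direction
        // computation reduces to (here dense and tiny for illustration).
        package main

        import (
            "fmt"
            "math"
        )

        // cholesky returns lower-triangular L with M = L * L^T (M assumed SPD).
        func cholesky(m [][]float64) [][]float64 {
            n := len(m)
            l := make([][]float64, n)
            for i := range l {
                l[i] = make([]float64, n)
            }
            for i := 0; i < n; i++ {
                for j := 0; j <= i; j++ {
                    sum := m[i][j]
                    for k := 0; k < j; k++ {
                        sum -= l[i][k] * l[j][k]
                    }
                    if i == j {
                        l[i][i] = math.Sqrt(sum)
                    } else {
                        l[i][j] = sum / l[j][j]
                    }
                }
            }
            return l
        }

        // solve performs forward substitution with L, then backward with L^T.
        func solve(l [][]float64, b []float64) []float64 {
            n := len(b)
            y := make([]float64, n)
            for i := 0; i < n; i++ { // L y = b
                y[i] = b[i]
                for k := 0; k < i; k++ {
                    y[i] -= l[i][k] * y[k]
                }
                y[i] /= l[i][i]
            }
            x := make([]float64, n)
            for i := n - 1; i >= 0; i-- { // L^T x = y
                x[i] = y[i]
                for k := i + 1; k < n; k++ {
                    x[i] -= l[k][i] * x[k]
                }
                x[i] /= l[i][i]
            }
            return x
        }

        func main() {
            M := [][]float64{{4, 2}, {2, 3}}
            b := []float64{2, 5}
            fmt.Println(solve(cholesky(M), b)) // [-0.5 2]
        }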

    Geometry of abstraction in quantum computation

    Quantum algorithms are sequences of abstract operations, performed on non-existent computers. They are in obvious need of categorical semantics. We present some steps in this direction, following earlier contributions of Abramsky, Coecke and Selinger. In particular, we analyze function abstraction in quantum computation, which turns out to characterize its classical interfaces. Some quantum algorithms provide feasible solutions of important hard problems, such as factoring and discrete log (which are the building blocks of modern cryptography). It is of great practical interest to precisely characterize the computational resources needed to execute such quantum algorithms. There are many ideas about how to build a quantum computer. Can we prove some necessary conditions? Categorical semantics help with such questions. We show how to implement an important family of quantum algorithms using just abelian groups and relations.
    Comment: 29 pages, 42 figures; Clifford Lectures 2008 (main speaker Samson Abramsky); this version fixes a pstricks problem in a diagram
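    For a concrete sense of the abelian-group structure behind such algorithms (this example is ours, not the paper's): the quantum speedup in Shor's factoring comes from order finding in the abelian group (Z/nZ)^*, which the following Go snippet computes by brute force for a toy modulus.

        // Hedged illustration, classical and brute-force: find the order of a
        // in (Z/nZ)^*, the problem whose quantum speedup underlies factoring.
        package main

        import "fmt"

        // order returns the least r > 0 with a^r = 1 (mod n), assuming gcd(a, n) = 1.
        func order(a, n int) int {
            x := a % n
            for r := 1; ; r++ {
                if x == 1 {
                    return r
                }
                x = (x * a) % n
            }
        }

        func main() {
            // ord(2) in (Z/15Z)^* is 4; from this even order r, Shor's reduction
            // recovers the factors as gcd(2^(r/2)-1, 15) = 3 and gcd(2^(r/2)+1, 15) = 5.
            fmt.Println(order(2, 15)) // 4
        }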

    Improving the efficiency of nondeterministic independent and-parallel systems

    We present the design and implementation of the and-parallel component of ACE. ACE is a computational model for the full Prolog language that simultaneously exploits both or-parallelism and independent and-parallelism. A high performance implementation of the ACE model has been realized, and its performance is reported in this paper. We discuss how some of the standard problems which appear when implementing and-parallel systems are solved in ACE. We then propose a number of optimizations aimed at reducing the overheads and the increased memory consumption which occur in such systems when using previously proposed solutions. Finally, we present results from an implementation of ACE which includes the proposed optimizations. The results show that ACE exploits and-parallelism with high efficiency and high speedups. Furthermore, they also show that the proposed optimizations, which are applicable to many other and-parallel systems, significantly decrease memory consumption and increase speedups and absolute performance, both in forward execution and during backtracking.
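    A rough Go sketch of the failure-propagation aspect of independent and-parallelism (the goal type and outcomes are invented; ACE's actual machinery is a Prolog abstract machine, not Go): the conjuncts of a clause body run concurrently, and the first failure cancels the remaining siblings, since failure of any conjunct dooms the conjunction.

        // Hedged sketch, not ACE itself: independent conjuncts run in parallel;
        // one failing goal cancels its siblings via a shared context.
        package main

        import (
            "context"
            "fmt"
            "sync"
        )

        // goal pretends to solve one conjunct; ok = false models failure.
        type goal struct {
            name string
            ok   bool
        }

        func run(ctx context.Context, g goal, cancel context.CancelFunc) bool {
            select {
            case <-ctx.Done(): // a sibling already failed; stop early
                return false
            default:
            }
            if !g.ok {
                cancel() // propagate failure to the other conjuncts
            }
            return g.ok
        }

        func main() {
            ctx, cancel := context.WithCancel(context.Background())
            defer cancel()

            goals := []goal{{"p", true}, {"q", false}, {"r", true}}
            results := make([]bool, len(goals))

            var wg sync.WaitGroup
            for i, g := range goals {
                wg.Add(1)
                go func(i int, g goal) {
                    defer wg.Done()
                    results[i] = run(ctx, g, cancel)
                }(i, g)
            }
            wg.Wait()

            all := true
            for _, r := range results {
                all = all && r
            }
            fmt.Println("conjunction succeeds:", all) // false: q failed
        }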

    Building scalable software systems in the multicore era

    Software systems face two challenges today: growing complexity and increasing parallelism in the underlying computational models. The problem of increased complexity is often solved by dividing systems into modules in a way that permits analysis of these modules in isolation. The problem of insufficient concurrency is often tackled by dividing system execution into tasks in a way that permits execution of these tasks in isolation. The key challenge in software design is to manage the explicit and implicit dependence between modules that decreases modularity. The key challenge for concurrency is to manage the explicit and implicit dependence between tasks that decreases parallelism. Even though these challenges appear to be strikingly similar, current software design practices and languages do not take advantage of this similarity. The net effect is that the modularity and concurrency goals are often tackled as mutually exclusive concerns: making progress towards one goal does not naturally contribute towards the other. My position is that, for programmers who are not formally and rigorously trained in the concurrency discipline, the safest and most productive way to get scalability in their software is to improve its modularity using programming language features and design practices that reconcile the modularity and concurrency goals. I briefly discuss the preliminary efforts of my group, but we have only touched the tip of the iceberg.
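    As a small illustration of the position (the module names and dependence lists here are made up): when dependences between modules are explicit, the same information that supports modular reasoning also tells a runtime which modules can execute as concurrent tasks.

        // Hedged sketch: modules declare their dependences; modules with no
        // dependence on each other (typechecker, linter) run concurrently.
        package main

        import (
            "fmt"
            "sync"
        )

        type module struct {
            name string
            deps []string // explicit dependence doubles as a task schedule
        }

        func main() {
            mods := []module{
                {"parser", nil},
                {"typechecker", []string{"parser"}},
                {"linter", []string{"parser"}}, // independent of typechecker
            }

            done := map[string]chan struct{}{}
            for _, m := range mods {
                done[m.name] = make(chan struct{})
            }

            var wg sync.WaitGroup
            for _, m := range mods {
                wg.Add(1)
                go func(m module) {
                    defer wg.Done()
                    for _, d := range m.deps {
                        <-done[d] // wait only on declared dependences
                    }
                    fmt.Println("running", m.name) // typechecker and linter overlap
                    close(done[m.name])
                }(m)
            }
            wg.Wait()
        }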