
    Memory performance of and-parallel Prolog on shared-memory architectures

    The goal of the RAP-WAM AND-parallel Prolog abstract architecture is to provide inference speeds significantly beyond those of sequential systems, while supporting Prolog semantics and preserving sequential performance and storage efficiency. This paper presents simulation results supporting these claims, with special emphasis on memory performance on a two-level shared-memory multiprocessor organization. Several solutions to the cache coherency problem are analyzed. It is shown that RAP-WAM offers good locality and storage efficiency and that it can effectively take advantage of broadcast caches. It is argued that speeds in excess of 2 MLIPS on real applications exhibiting medium parallelism can be attained with current technology.

    Applying Prolog to Develop Distributed Systems

    Development of distributed systems is a difficult task. Declarative programming techniques hold a promising potential for effectively supporting programmers in this challenge. While Datalog-based languages have been actively explored for programming distributed systems, Prolog has received relatively little attention in this application area so far. In this paper we present a Prolog-based programming system, called DAHL, for the declarative development of distributed systems. DAHL extends Prolog with an event-driven control mechanism and built-in networking procedures. Our experimental evaluation using a distributed hash-table data structure, a protocol for achieving Byzantine fault tolerance, and a distributed software model checker - all implemented in DAHL - indicates the viability of the approach.
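
    The abstract describes DAHL only at a high level. As a rough, hedged illustration of the combination it names (event-driven control plus built-in networking), the Python sketch below shows a node that registers a handler per message type and dispatches incoming network messages as events. The class, handler names, and wire format are hypothetical and are not taken from DAHL.

```python
# Minimal sketch (not DAHL itself): an event-driven node that registers
# handlers for message types and dispatches incoming network messages,
# mirroring the combination of event-driven control and built-in
# networking procedures that DAHL adds to Prolog. All names are illustrative.
import asyncio
import json

class EventNode:
    def __init__(self):
        self.handlers = {}          # message type -> handler coroutine

    def on(self, msg_type, handler):
        """Register a handler, loosely analogous to declaring an event clause."""
        self.handlers[msg_type] = handler

    async def dispatch(self, reader, writer):
        data = await reader.readline()
        if data:
            msg = json.loads(data)
            handler = self.handlers.get(msg.get("type"))
            if handler:
                await handler(msg, writer)
        writer.close()

    async def serve(self, host, port):
        server = await asyncio.start_server(self.dispatch, host, port)
        async with server:
            await server.serve_forever()

async def ping(msg, writer):
    # Reply to a ping; a distributed hash-table node would build on such primitives.
    writer.write(b'{"type": "pong"}\n')
    await writer.drain()

if __name__ == "__main__":
    node = EventNode()
    node.on("ping", ping)
    asyncio.run(node.serve("127.0.0.1", 8765))
```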

    Programming Languages for Distributed Computing Systems

    When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less satisfactory. Researchers all over the world began designing new programming languages specifically for implementing distributed applications. These languages and their history, their underlying principles, their design, and their use are the subject of this paper. We begin by giving our view of what a distributed system is, illustrating with examples to avoid confusion on this important and controversial point. We then describe the three main characteristics that distinguish distributed programming languages from traditional sequential languages, namely, how they deal with parallelism, communication, and partial failures. Finally, we discuss 15 representative distributed languages to give the flavor of each. These examples include languages based on message passing, rendezvous, remote procedure call, objects, and atomic transactions, as well as functional languages, logic languages, and distributed data structure languages. The paper concludes with a comprehensive bibliography listing over 200 papers on nearly 100 distributed programming languages.
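
    As a small, hedged illustration of the three characteristics the survey singles out, the Python sketch below spawns two workers (parallelism), exchanges messages with them over queues (communication), and uses a timeout so the caller survives a worker that silently fails (partial failure). The scenario is invented for illustration and is not taken from the paper.

```python
# Tiny illustration of the survey's three concerns: parallelism (two
# concurrent workers), communication (message passing over queues), and
# partial failure (a worker that never answers is detected by a timeout
# instead of hanging the caller). The scenario is made up.
import multiprocessing as mp
import queue

def worker(inbox, outbox, crash=False):
    msg = inbox.get()                 # communication: receive a request
    if crash:
        return                        # simulate a node that fails silently
    outbox.put(("reply", msg * 2))    # communication: send a reply

if __name__ == "__main__":
    inbox, outbox = mp.Queue(), mp.Queue()
    ok = mp.Process(target=worker, args=(inbox, outbox))
    bad = mp.Process(target=worker, args=(inbox, outbox, True))
    ok.start(); bad.start()           # parallelism: two concurrent workers

    inbox.put(21)
    inbox.put(21)
    try:
        print(outbox.get(timeout=1))  # reply from the healthy worker
        print(outbox.get(timeout=1))  # partial failure: this one times out
    except queue.Empty:
        print("a worker failed to respond; the caller degrades gracefully")
    ok.join(); bad.join(timeout=1)
```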

    Automatic goal distribution strategies for the execution of committed choice logic languages on distributed memory parallel computers

    There has been much research interest in efficient implementations of the Committed Choice Non-Deterministic (CCND) logic languages on parallel computers. To take full advantage of the speed gains of parallel computers, methods need to be found to automatically distribute goals over the machine processors, ideally with as little involvement from the user as possible. In this thesis we explore some automatic goal distribution strategies for the execution of the CCND languages on commercially available distributed-memory parallel computers. There are two facets to the goal distribution strategies we have chosen to explore. Demand driven: an idle processor requests work from other processors. We describe two strategies in this class: one in which an idle processor asks only neighbouring processors for spare work, the nearest-neighbour strategy; and one where an idle processor may ask any other processor in the machine for spare work, the all-processors strategy. Weights: using a program analysis technique devised by Tick, weights are attached to goals; the weights can be used to order the goals so that they are executed and distributed in weighted order, possibly increasing performance. We describe a framework in which to implement and analyse goal distribution strategies, and then go on to describe experiments with demand-driven strategies, both with and without weights. The experiments were made using two of our own implementations of Flat Guarded Horn Clauses, an interpreter and a WAM-like system, executing on a MEIKO T800 Transputer Array configured in a 2-D mesh topology. Analysis of the results shows that the all-processors strategies (AP-NW) are promising, that adding weights had little positive effect on performance, and that nearest-neighbours strategies can reduce performance due to bad load balancing. We also describe some preliminary experiments with a variant of the AP-NW strategy, the processes-to-data strategy, in which goals that suspend on one variable are sent to the processor that controls that variable. Finally, we briefly look at some preliminary results of executing programs on large numbers of processors (> 30).
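
    The demand-driven strategies described above lend themselves to a compact sketch. The Python below simulates the all-processors variant with Tick-style weights: goals carry precomputed weights, each processor executes its heaviest pending goal first, and an idle processor requests spare work from any other processor in the machine. The weights, goal names, and scheduling loop are illustrative stand-ins, not the thesis implementation.

```python
# Sketch (not the thesis code) of the demand-driven, all-processors
# strategy: an idle processor may ask any other processor for spare work,
# and pending goals are kept ordered by a precomputed weight so the
# heaviest goal is executed or handed over first.
import heapq
import random

class Processor:
    def __init__(self, pid):
        self.pid = pid
        self.goals = []                      # max-heap of (-weight, goal)

    def add_goal(self, weight, goal):
        heapq.heappush(self.goals, (-weight, goal))

    def run_one(self):
        neg_weight, goal = heapq.heappop(self.goals)
        print(f"P{self.pid} executes {goal} (weight {-neg_weight})")

    def give_work(self):
        """Hand over the heaviest pending goal to a requesting processor."""
        return heapq.heappop(self.goals) if self.goals else None

def schedule(processors, steps):
    for _ in range(steps):
        for p in processors:
            if p.goals:
                p.run_one()
            else:
                # Demand driven: an idle processor asks any other processor.
                victim = random.choice([q for q in processors if q is not p])
                work = victim.give_work()
                if work:
                    heapq.heappush(p.goals, work)

if __name__ == "__main__":
    ps = [Processor(i) for i in range(4)]
    for i in range(12):                       # all goals start on P0
        ps[0].add_goal(weight=random.randint(1, 10), goal=f"goal_{i}")
    schedule(ps, steps=5)
```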

    Logic-Based Specification Languages for Intelligent Software Agents

    The research field of Agent-Oriented Software Engineering (AOSE) aims to find abstractions, languages, methodologies and toolkits for modeling, verifying, validating and prototyping complex applications conceptualized as Multiagent Systems (MASs). A very lively research sub-field studies how formal methods can be used for AOSE. This paper presents a detailed survey of six logic-based executable agent specification languages that have been chosen for their potential to be integrated in our ARPEGGIO project, an open framework for specifying and prototyping a MAS. The six languages are ConGoLog, Agent-0, the IMPACT agent programming language, DyLog, Concurrent METATEM and Ehhf. For each executable language, the logic foundations are described and an example of use is shown. A comparison of the six languages and a survey of similar approaches complete the paper, together with considerations of the advantages of using logic-based languages in MAS modeling and prototyping. Comment: 67 pages, 1 table, 1 figure. Accepted for publication by the journal "Theory and Practice of Logic Programming", volume 4, Maurice Bruynooghe Editor-in-Chief.

    Detection of app collusion potential using logic programming

    Mobile devices pose a particular security risk because they hold personal details (accounts, locations, contacts, photos) and have capabilities potentially exploitable for eavesdropping (cameras/microphone, wireless connections). The Android operating system is designed with a number of built-in security features such as application sandboxing and permission-based access control. Unfortunately, these restrictions can be bypassed, without the user noticing, by colluding apps whose combined permissions allow them to carry out attacks that neither app is able to execute by itself. While the possibility of app collusion was first warned about in 2011, it has been unclear whether collusion is used by malware in the wild, due to a lack of suitable detection methods and tools. This paper describes how we found the first collusion in the wild. We also present a strategy for detecting collusions and its implementation in Prolog that allowed us to make this discovery. Our detection strategy is grounded in concise definitions of collusion and the concept of ASR (Access-Send-Receive) signatures. The methodology is supported by statistical evidence. Our approach scales and is suitable for inclusion in professional malware detection systems: we applied it to a set of more than 50,000 apps collected in the wild. Code samples of our tool as well as of the detected malware are available.
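
    The paper's detection strategy is implemented in Prolog; the Python sketch below is only a conceptual rendering of the ASR (Access-Send-Receive) idea as described in the abstract: flag a pair of apps where one accesses sensitive data and sends on a channel from which the other receives, and the receiver holds a permission (here, internet access) that the sender lacks. The signature fields, permission names, and channel identifiers are illustrative only.

```python
# Conceptual sketch of Access-Send-Receive (ASR) signatures for collusion
# potential: app A accesses a sensitive resource and sends on a channel;
# app B receives from that channel and can exfiltrate (INTERNET) while A
# cannot. The concrete field values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ASRSignature:
    name: str
    accesses: set = field(default_factory=set)   # sensitive resources read
    sends: set = field(default_factory=set)      # channels written to
    receives: set = field(default_factory=set)   # channels read from
    permissions: set = field(default_factory=set)

def collusion_potential(a: ASRSignature, b: ASRSignature) -> bool:
    shared = a.sends & b.receives                # a can pass data to b
    exfiltrates = "INTERNET" in b.permissions and "INTERNET" not in a.permissions
    return bool(a.accesses) and bool(shared) and exfiltrates

if __name__ == "__main__":
    contacts_app = ASRSignature(
        name="contacts_backup",
        accesses={"READ_CONTACTS"},
        sends={"intent://share"},
        permissions={"READ_CONTACTS"},
    )
    uploader = ASRSignature(
        name="cloud_sync",
        receives={"intent://share"},
        permissions={"INTERNET"},
    )
    print(collusion_potential(contacts_app, uploader))   # True
```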

    Parallel processing and expert systems

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.
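
    The survey's closing point, that data parallelism must be exploited for larger speedups, can be made concrete with a small sketch: rather than parallelizing across rule firings, a single rule condition is matched against partitions of working memory in parallel. The rule, the working-memory elements, and the partitioning below are invented for illustration and do not come from the surveyed systems.

```python
# Minimal sketch of data parallelism in rule matching: one rule's condition
# is checked against chunks of working memory in parallel, instead of trying
# to parallelize across independent rule firings.
from concurrent.futures import ProcessPoolExecutor

def condition(element):
    """Rule LHS: temperature readings above a threshold trigger the rule."""
    kind, value = element
    return kind == "temperature" and value > 100

def match_partition(partition):
    return [e for e in partition if condition(e)]

if __name__ == "__main__":
    working_memory = [("temperature", v) for v in range(80, 130)] + \
                     [("pressure", v) for v in range(1, 50)]
    # Split working memory into chunks and match each chunk in parallel.
    chunks = [working_memory[i::4] for i in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        matched = [e for part in pool.map(match_partition, chunks) for e in part]
    print(f"{len(matched)} elements matched the rule condition")
```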

    ACE: And/or-parallel copying-based execution of logic programs

    In this paper we present a novel execution model for parallel implementation of logic programs which is capable of exploiting both independent and-parallelism and or-parallelism in an efficient way. This model extends the stack copying approach, which has been successfully applied in the Muse system to implement or-parallelism, by integrating it with proven techniques used to support independent and-parallelism. We show how all solutions to non-deterministic and-parallel goals are found without repetitions. This is done through recomputation as in Prolog (and in various and-parallel systems, like &-Prolog and DDAS), i.e., solutions of and-parallel goals are not shared. We propose a scheme for the efficient management of the address space in a way that is compatible with the apparently incompatible requirements of both and- and or-parallelism. We also show how the full Prolog language, with all its extra-logical features, can be supported in our and-or parallel system so that its sequential semantics is preserved. The resulting system retains the advantages of both purely or-parallel systems as well as purely and-parallel systems. The stack copying scheme together with our proposed memory management scheme can also be used to implement models that combine dependent and-parallelism and or-parallelism, such as Andorra and Prometheus.
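
    To make the two ingredients of the model easier to picture, the sketch below renders them conceptually in Python: or-parallel branches each receive a private copy of the binding environment (the stack copying idea), and a conjunction of independent and-parallel goals is enumerated by recomputing the second goal for every solution of the first rather than sharing solutions. Goals are reduced here to lists of alternative bindings; none of this code is from the ACE system.

```python
# Conceptual sketch (not ACE): or-parallel branches work on private copies
# of the binding environment (stack copying), and an independent and-parallel
# conjunction enumerates solutions by recomputation, i.e. the second goal is
# re-solved for every solution of the first instead of sharing its solutions.
import copy
from concurrent.futures import ThreadPoolExecutor

def solve(goal, bindings):
    """Enumerate all solutions of a 'goal': here a goal is a variable name
    plus a list of candidate values, standing in for clause alternatives."""
    var, alternatives = goal

    def branch(value):
        env = copy.deepcopy(bindings)         # stack copying: private environment
        env[var] = value                      # bind the variable in the copy
        return env

    with ThreadPoolExecutor() as pool:        # or-parallel exploration of alternatives
        return list(pool.map(branch, alternatives))

def and_parallel(goal_a, goal_b, bindings):
    """Independent and-parallelism with recomputation: goal_b is re-solved
    for each solution of goal_a, so no solution sharing is needed."""
    solutions = []
    for env_a in solve(goal_a, bindings):
        solutions.extend(solve(goal_b, env_a))    # recomputation of goal_b
    return solutions

if __name__ == "__main__":
    for env in and_parallel(("X", [1, 2]), ("Y", ["a", "b"]), {}):
        print(env)
```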