    Extending the Finite Domain Solver of GNU Prolog

    This paper describes three significant extensions to the Finite Domain solver of GNU Prolog. First, the solver now supports negative integers. Second, the solver detects and prevents integer overflows. Third, the internal representation of sparse domains has been redesigned to overcome its previous limitations. A preliminary performance evaluation shows a limited slowdown factor with respect to the initial solver, which is widely counterbalanced by the new possibilities and the robustness of the solver. These results are preliminary, and we propose some directions to limit the overhead.
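    The paper itself targets GNU Prolog's C-level solver; purely as an illustration of the ideas it lists (sparse domains as disjoint intervals, negative bounds, overflow guards), here is a small Python model with hypothetical names, not the solver's actual internals:

        # Illustrative model of a sparse finite domain as sorted, disjoint
        # integer intervals. Representation and names are hypothetical.

        INT_MIN, INT_MAX = -(2**31), 2**31 - 1   # assumed machine bounds

        class SparseDomain:
            def __init__(self, intervals):
                # intervals: sorted list of disjoint (lo, hi) pairs, lo <= hi
                self.intervals = intervals

            def contains(self, v):
                return any(lo <= v <= hi for lo, hi in self.intervals)

            def add_offset(self, k):
                """Shift the domain by k, clamping to guard against overflow."""
                shifted = []
                for lo, hi in self.intervals:
                    lo2 = max(INT_MIN, min(INT_MAX, lo + k))
                    hi2 = max(INT_MIN, min(INT_MAX, hi + k))
                    shifted.append((lo2, hi2))
                return SparseDomain(shifted)

        # Negative integers are first-class: {-5..-1, 3..7} shifted by -10.
        d = SparseDomain([(-5, -1), (3, 7)])
        print(d.add_offset(-10).intervals)    # [(-15, -11), (-7, -3)]
        print(d.contains(-3), d.contains(0))  # True False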

    ARM: abstract rewriting machine

    Term rewriting is frequently used as an implementation technique for algebraic specifications. In this paper we present the abstract term rewriting machine (ARM), which has an extremely compact instruction set and imposes no restrictions on the implemented TRSs. Apart from standard conditional term rewriting, associative lists are supported. ARM code is translated to (ANSI) C; the resulting execution speeds are good (on a sun4, an average of 80000 rewriting steps per second and a maximum of 416000 r/s were measured). Several benchmarks are shown, and related work is discussed in depth.
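    As a rough sketch of what a term rewriting engine does (hypothetical code, not ARM's actual instruction set, which the paper compiles to C), the following Python models innermost rewriting of first-order terms:

        # Terms are (symbol, args) tuples; pattern variables are strings.

        def match(pat, term, env):
            """Bind pattern variables so that pat equals term, or return None."""
            if isinstance(pat, str):                      # pattern variable
                if pat in env:
                    return env if env[pat] == term else None
                return {**env, pat: term}
            if isinstance(term, str) or pat[0] != term[0] \
                    or len(pat[1]) != len(term[1]):
                return None
            for p, t in zip(pat[1], term[1]):
                env = match(p, t, env)
                if env is None:
                    return None
            return env

        def subst(pat, env):
            """Replace pattern variables by their bindings."""
            if isinstance(pat, str):
                return env[pat]
            return (pat[0], tuple(subst(a, env) for a in pat[1]))

        def rewrite(term, rules):
            """Innermost strategy: normalize arguments, then try each rule."""
            term = (term[0], tuple(rewrite(a, rules) for a in term[1]))
            for lhs, rhs in rules:
                env = match(lhs, term, {})
                if env is not None:
                    return rewrite(subst(rhs, env), rules)
            return term

        # Peano addition: add(0, y) -> y ; add(s(x), y) -> s(add(x, y))
        Z = ("0", ())
        def S(t): return ("s", (t,))
        rules = [(("add", (Z, "y")), "y"),
                 (("add", (S("x"), "y")), S(("add", ("x", "y"))))]
        print(rewrite(("add", (S(S(Z)), S(Z))), rules))  # s(s(s(0))) as nested tuples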

    The performance evaluation of interpreter based computer systems

    This thesis explores the problem of making accurate assessments of the performance of high-level language interpreter programs which are embedded in some more complex system. The overall system performance is determined by all the software and hardware components present, but in order either to analyse and improve particular components, or to select between alternative versions of components, the concept of the performance of individual components is important. A model is developed for the abstract behaviour of software components playing the role of an interpreter, by considering their interaction with the program code being interpreted and with the underlying virtual machine which is, in turn, interpreting them. This model enables a flexible definition of performance by relating the interactions in which an interpreter takes part. A methodology is recommended for assessing experimentally the performances defined within such a framework. The performances of an interesting selection of pseudo-machine and high-level interpreter implementations of Lispkit and Prolog are then assessed and conclusions drawn.
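    A toy illustration of this framing, with entirely hypothetical names: an interpreter mediates between the program it interprets and the machine beneath it, so one natural performance measure is the number of underlying operations consumed per program-level operation:

        # Hypothetical sketch, not the thesis's model: count interactions on
        # both sides of an interpreter and relate them.

        class CountingVM:
            """The 'machine below': every primitive it executes is counted."""
            def __init__(self):
                self.ops = 0
            def push(self, stack, v): self.ops += 1; stack.append(v)
            def add(self, stack):     self.ops += 1; stack.append(stack.pop() + stack.pop())

        def interpret(program, vm):
            """A tiny stack interpreter; counts program-level operations."""
            stack, prog_ops = [], 0
            for instr, arg in program:
                prog_ops += 1
                if instr == "PUSH":
                    vm.push(stack, arg)
                elif instr == "ADD":
                    vm.add(stack)
            return stack[-1], prog_ops

        vm = CountingVM()
        result, prog_ops = interpret([("PUSH", 2), ("PUSH", 3), ("ADD", None)], vm)
        # One performance measure of the interpreter component:
        # VM operations consumed per interpreted program operation.
        print(result, vm.ops / prog_ops)   # 5 1.0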

    Stream Processing using Grammars and Regular Expressions

    In this dissertation we study regular expression based parsing and the use of grammatical specifications for the synthesis of fast, streaming string-processing programs. In the first part we develop two linear-time algorithms for regular expression based parsing with Perl-style greedy disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass, optimally streaming algorithm: it outputs as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering only as many symbols as are required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present Kleenex, a language for expressing high-performance streaming string-processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on a decomposition of transducers into oracle and action machines, and on a finite-state specialization of the streaming parsing algorithm presented in the first part. In the second part we also develop a new linear-time streaming parsing algorithm for parsing expression grammars (PEGs), which generalize the regular grammars of Kleenex. The algorithm is based on a bottom-up tabulation algorithm reformulated using least fixed points and evaluated using an instance of the chaotic iteration scheme of Cousot and Cousot.
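    To make greedy disambiguation concrete, here is a small Python sketch (hypothetical code, not the dissertation's linear-time algorithms, which avoid this kind of backtracking): alternatives emit a 0/1 bit, stars greedily prefer another iteration, and the result is a bit-coded parse:

        # Regex AST: ("chr", c) | ("cat", e1, e2) | ("alt", e1, e2) | ("star", e)

        def parse(e, s, i, k):
            """Match e against s from position i; k(j, bits) receives the end
            position and bit-code. Greedy choices are tried first, and a
            None result triggers backtracking."""
            if e[0] == "chr":
                return k(i + 1, "") if i < len(s) and s[i] == e[1] else None
            if e[0] == "cat":
                return parse(e[1], s, i, lambda j, b1:
                       parse(e[2], s, j, lambda m, b2: k(m, b1 + b2)))
            if e[0] == "alt":
                r = parse(e[1], s, i, lambda j, b: k(j, "0" + b))
                return r if r is not None else \
                       parse(e[2], s, i, lambda j, b: k(j, "1" + b))
            if e[0] == "star":
                r = parse(e[1], s, i, lambda j, b:    # prefer one more turn,
                          None if j == i else         # but refuse empty matches
                          parse(e, s, j, lambda m, b2: k(m, "0" + b + b2)))
                return r if r is not None else k(i, "1")

        def greedy_parse(e, s):
            return parse(e, s, 0, lambda j, b: b if j == len(s) else None)

        # (a|ab)(c|bc) on "abc": greedy takes a first, then needs bc -> "01"
        e = ("cat",
             ("alt", ("chr", "a"), ("cat", ("chr", "a"), ("chr", "b"))),
             ("alt", ("chr", "c"), ("cat", ("chr", "b"), ("chr", "c"))))
        print(greedy_parse(e, "abc"))  # 01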

    Random Testing For Language Design

    Property-based random testing can facilitate formal verification, exposing errors early in the proving process and guiding users towards correct specifications and implementations. However, effective random testing often requires users to write custom generators for well-distributed random data satisfying complex logical predicates, a task which can be tedious and error-prone. In this work, I aim to reduce the cost of property-based testing by making such generators easier to write, read, and maintain. I present a domain-specific language, called Luck, in which generators are conveniently expressed by decorating predicates with lightweight annotations that control both the distribution of generated values and the amount of constraint solving that happens before each variable is instantiated. I also aim to increase the applicability of testing to formal verification by bringing advanced random testing techniques to the Coq proof assistant. I describe QuickChick, a QuickCheck clone for Coq, and improve it by incorporating ideas explored in the context of Luck to automatically derive provably correct generators for data constrained by inductive relations. Finally, I evaluate both QuickChick and Luck in a variety of complex case studies from the programming languages literature, such as information-flow abstract machines and type systems for lambda calculi.
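    The burden Luck addresses can be sketched in a few lines of Python (a hypothetical example, not Luck or QuickChick code): rejection sampling discards almost every candidate for a constrained type such as sorted lists, while a hand-written constructive generator satisfies the constraint by construction:

        import random

        def insert(x, xs):
            """Function under test: insert x into the sorted list xs."""
            for i, y in enumerate(xs):
                if x <= y:
                    return xs[:i] + [x] + xs[i:]
            return xs + [x]

        def gen_sorted_naive(n):
            """Generate-and-filter: almost all candidates are rejected."""
            while True:
                xs = [random.randint(0, 100) for _ in range(n)]
                if xs == sorted(xs):
                    return xs

        def gen_sorted(n):
            """Constructive generator: sortedness holds by construction."""
            xs, lo = [], 0
            for _ in range(n):
                lo = random.randint(lo, 100)
                xs.append(lo)
            return xs

        # The property: inserting into a sorted list keeps it sorted.
        for _ in range(1000):
            xs = gen_sorted(random.randint(0, 10))
            x = random.randint(0, 100)
            assert insert(x, xs) == sorted(xs + [x])
        print("1000 tests passed")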

    Massively Parallel Algorithms for Point Cloud Based Object Recognition on Heterogeneous Architecture

    With the advent of commodity depth sensors, point cloud processing plays an increasingly important role in object recognition and perception. However, the computational cost of point cloud processing is extremely high due to the large data size, high dimensionality, and algorithmic complexity. To address the computational challenges of real-time processing, this work investigates the use of modern heterogeneous computing platforms and their supporting ecosystem, including massively parallel architectures (MPA), computing clusters, the compute unified device architecture (CUDA), and multithreaded programming, to accelerate point cloud based object recognition. These computing platforms do not yield high performance unless their specific features are properly exploited; failing that, performance actually degrades. To achieve high-speed performance in image descriptor computation, indexing, and matching, this work explores both coarse- and fine-grained parallelism, identifies acceptable levels of algorithmic approximation, and analyzes various performance factors. A set of heterogeneous parallel algorithms is designed and implemented in this work: exact and approximate scalable massively parallel image descriptors for descriptor computation; parallel construction of k-dimensional trees (KD-trees) and forests of KD-trees for descriptor indexing; and parallel approximate nearest neighbor search (ANNS) and buffered ANNS (BANNS) over KD-trees and KD-tree forests for descriptor matching. The results show that the proposed massively parallel algorithms on heterogeneous computing platforms significantly improve the execution time of descriptor computation, indexing, and matching. This work also demonstrates that heterogeneous computing architectures, with architecture-specific algorithm design and optimization, offer distinct advantages for improving the performance of multimedia applications.
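    A sequential Python sketch of the two data structures the dissertation parallelizes (hypothetical code; the actual implementations are massively parallel): a KD-tree with splitting axes cycled by depth, and an approximate nearest-neighbor search that caps the number of nodes explored instead of backtracking exhaustively:

        import heapq, random

        def build(points, depth=0):
            """Build a KD-tree, splitting on the median along cycling axes."""
            if not points:
                return None
            axis = depth % len(points[0])
            points = sorted(points, key=lambda p: p[axis])
            mid = len(points) // 2
            return {"pt": points[mid], "axis": axis,
                    "lt": build(points[:mid], depth + 1),
                    "gt": build(points[mid + 1:], depth + 1)}

        def anns(root, q, max_visits=32):
            """Approximate NN: visit at most max_visits nodes, prioritized by
            squared distance to each node's splitting plane."""
            best, frontier, visits, tiebreak = (float("inf"), None), [(0.0, 0, root)], 0, 1
            while frontier and visits < max_visits:
                _, _, node = heapq.heappop(frontier)
                if node is None:
                    continue
                visits += 1
                d = sum((a - b) ** 2 for a, b in zip(q, node["pt"]))
                best = min(best, (d, node["pt"]))
                delta = q[node["axis"]] - node["pt"][node["axis"]]
                near, far = ("lt", "gt") if delta < 0 else ("gt", "lt")
                heapq.heappush(frontier, (0.0, tiebreak, node[near])); tiebreak += 1
                heapq.heappush(frontier, (delta * delta, tiebreak, node[far])); tiebreak += 1
            return best

        pts = [tuple(random.uniform(0, 1) for _ in range(3)) for _ in range(2000)]
        tree = build(pts)
        print(anns(tree, (0.5, 0.5, 0.5)))  # (squared distance, approximate nearest point)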

    Parallel execution of Horn clause programs


    Compiler of a Language with User-Defined Syntax for New Constructs

    This project aims to design and implement an experimental programming language whose main feature is the ability of the user to define new syntactic constructs. The language is statically typed and compiled to a native binary form. It consists of two parts: a minimalistic core based on the principles of stack-oriented languages, and a mechanism that lets users define new syntactic constructs. Finally, we summarize the findings gained from designing and experimenting with the prototype compiler of the language.
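    The two ingredients can be sketched in Python (a hypothetical, Forth-flavored analogy rather than the thesis's compiled language): a minimal stack-oriented core, plus a definition form that extends the set of recognized constructs at read time:

        import operator

        PRIMS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
        WORDS = {}                    # user-defined words: name -> token list

        def run(tokens, stack):
            it = iter(tokens)
            for tok in it:
                if tok == ":":                  # user-level definition: : name body ;
                    name, body = next(it), []
                    for t in it:
                        if t == ";":
                            break
                        body.append(t)
                    WORDS[name] = body          # the language just grew a construct
                elif tok in WORDS:
                    run(WORDS[tok], stack)      # expand the user's construct
                elif tok in PRIMS:
                    b, a = stack.pop(), stack.pop()
                    stack.append(PRIMS[tok](a, b))
                elif tok == "dup":
                    stack.append(stack[-1])
                else:
                    stack.append(int(tok))      # integer literal
            return stack

        # Define 'square' once, then call it like a built-in word.
        print(run(": square dup * ; 7 square".split(), []))  # [49]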