    Objective Caml on .NET: The OCamIL Compiler and Toplevel

    We present the OCamIL compiler for Objective Caml, which targets .NET. Our experiment consists of adding a new back-end to the INRIA Objective Caml compiler that generates MSIL bytecode. Among the advantages of code reuse, ensuring compatibility while keeping the full expressiveness of the original language is particularly interesting. This allowed us to bootstrap the OCamIL compiler as a .NET component and to build an interactive loop (toplevel) that may be embedded within .NET applications. This work deals with typing issues, because OCamIL needs to translate an untyped intermediate language into a typed bytecode. We discuss various intermediate-language retyping techniques and their consequences for performance. We also present applications of the interoperability of Objective Caml and C# components.
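
    The retyping issue described above can be pictured with a small OCaml sketch. The IR shapes, constructor names, and the single universal boxing type below are illustrative assumptions, not OCamIL's actual intermediate representation.

```ocaml
(* Untyped source IR, loosely in the style of a compiler's internal lambda code. *)
type untyped =
  | UInt of int
  | UVar of string
  | UAbs of string * untyped          (* fun x -> body *)
  | UApp of untyped * untyped

(* Typed target: in this naive scheme every constant is coerced to a single
   universal object type, mirroring a translation where ML values end up
   boxed as System.Object on the CLR. *)
type typed =
  | TInt of int
  | TVar of string
  | TAbs of string * typed            (* parameter implicitly at the universal type *)
  | TApp of typed * typed
  | TBox of typed                     (* coercion of a value up to the universal type *)

(* Naive retyping: box every constant, leave the structure unchanged.  Smarter
   schemes keep more precise types and insert fewer coercions, which is the
   performance trade-off the abstract alludes to. *)
let rec retype (t : untyped) : typed =
  match t with
  | UInt n -> TBox (TInt n)
  | UVar x -> TVar x
  | UAbs (x, body) -> TAbs (x, retype body)
  | UApp (f, a) -> TApp (retype f, retype a)

(* Count the coercions a retyping scheme introduces. *)
let rec boxes = function
  | TBox t -> 1 + boxes t
  | TInt _ | TVar _ -> 0
  | TAbs (_, t) -> boxes t
  | TApp (f, a) -> boxes f + boxes a

let () =
  let t = retype (UApp (UAbs ("x", UVar "x"), UInt 41)) in
  Printf.printf "coercions inserted: %d\n" (boxes t)    (* prints 1 *)
```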

    Common Language Infrastructure for Research (CLIR): Editing and Optimizing .NET Assemblies

    Programming language researchers, including code optimizers, have few tools available to manipulate .NET assembly files. This thesis presents the Common Language Infrastructure for Research, comprising three components: the Common Language Engineering Library (CLEL), the Common Language Optimizing Framework (CLOT), and a suite of utility applications. CLEL provides the means to read, edit, and write .NET assemblies. CLOT, using the CLEL, provides a framework for code optimization, including algorithms and data structures for three traditional optimizations. Applying these optimizations yielded decreases in program execution time.
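
    To give a flavour of the "traditional optimizations" such a framework hosts, the following OCaml sketch folds constants over a toy stack-machine instruction list. The instruction set and function names are hypothetical and are not part of the CLEL/CLOT API.

```ocaml
(* A toy stack-machine instruction set, loosely resembling simplified MSIL. *)
type instr =
  | Ldc of int            (* push an integer constant *)
  | Add                   (* pop two operands, push their sum *)
  | Mul                   (* pop two operands, push their product *)
  | Other of string       (* any instruction the folder does not understand *)

(* Fold [Ldc a; Ldc b; Add] into [Ldc (a + b)], and likewise for Mul,
   repeating until no further reduction applies. *)
let rec fold_constants (code : instr list) : instr list =
  match code with
  | Ldc a :: Ldc b :: Add :: rest -> fold_constants (Ldc (a + b) :: rest)
  | Ldc a :: Ldc b :: Mul :: rest -> fold_constants (Ldc (a * b) :: rest)
  | i :: rest -> i :: fold_constants rest
  | [] -> []

let () =
  (* (2 + 3) * 4 collapses to a single constant load *)
  match fold_constants [Ldc 2; Ldc 3; Add; Ldc 4; Mul; Other "ret"] with
  | [Ldc 20; Other _] -> print_endline "folded to Ldc 20"
  | _ -> print_endline "unexpected result"
```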

    A compiler for natural semantics

    Abstraction-level functional programming

    Language Interoperability and Logic Programming Languages

    We discuss P#, our implementation of a tool which allows interoperation between a concurrent superset of the Prolog programming language and C#. This enables Prolog to be used as a native implementation language for Microsoft's .NET platform. P# compiles a linear logic extension of Prolog to C# source code. We can thus create C# objects from Prolog and use C#'s graphical, networking and other libraries. P# was developed from a modified port of the Prolog to Java translator, Prolog Cafe. We add language constructs on the Prolog side which allow concurrent Prolog code to be written. We add a primitive predicate which evaluates a Prolog structure on a newly forked thread. Communication between threads is based on the unification of variables contained in such a structure. It is also possible for threads to communicate through a globally accessible table. All of the new features are available to the programmer through new built-in Prolog predicates. We present three case studies. The first is an application which allows several users to modify a database. The users are able to disconnect from the database and to modify their own copies of the data before reconnecting. On reconnecting, conflicts must be resolved. The second is an object-oriented assistant, which allows the user to query the contents of a C# namespace or Java package. The third is a tool which allows a user to interact with a graphical display of the inheritance tree. Finally, we optimize P#'s runtime speed by translating some Prolog predicates into more idiomatic C# code than is produced by a naive port of Prolog Cafe. This is achieved by observing that semi-deterministic predicates (those which always either fail or succeed with exactly one solution) that only call other semi-deterministic predicates enjoy relatively simple control flow. We make use of the fact that Prolog programs often contain predicates which operate as functions, and that such predicates are usually semi-deterministic.
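
    The final observation, that semi-deterministic predicates behave like partial functions, can be illustrated with a small sketch (in OCaml rather than the C# that P# actually emits): a predicate that either fails or succeeds exactly once compiles naturally to a function returning an optional result instead of going through general backtracking machinery.

```ocaml
(* Prolog:  lookup(Key, [Key-Value|_], Value).
            lookup(Key, [_|Rest], Value) :- lookup(Key, Rest, Value).
   Each clause either fails or commits to a single answer, so the whole
   predicate is semi-deterministic and can become an ordinary function. *)
let rec lookup key pairs =
  match pairs with
  | [] -> None                               (* failure *)
  | (k, v) :: _ when k = key -> Some v       (* single success *)
  | _ :: rest -> lookup key rest             (* try the next clause *)

let () =
  match lookup "port" [("host", "localhost"); ("port", "8080")] with
  | Some v -> print_endline ("port = " ^ v)
  | None -> print_endline "no solution"
```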

    PrologPF: Parallel Logic and Functions on the Delphi Machine

    PrologPF is a parallelising compiler targeting a distributed system of general-purpose workstations connected by a relatively low-performance network. The source language extends standard Prolog with the integration of higher-order functions. The execution of a compiled PrologPF program proceeds in a similar manner to standard Prolog, but uses oracles in one of two modes. An oracle represents the sequence of clauses used to reach a given point in the problem search tree, and the same PrologPF executable can be used to build oracles, or follow oracles previously generated. The parallelisation strategy used by PrologPF proceeds in two phases, which this research shows can be interleaved. An initial phase searches the problem tree to a limited depth, recording the discovered incomplete paths. In the second phase these paths are allocated to the available processors in the network. Each processor follows its assigned paths and fully searches the referenced subtree, sending solutions back to a control processor. This research investigates the use of the technique with a one-time partitioning of the problem and no further scheduling communication, and with the recursive application of the partitioning technique to effect dynamic work reassignment. For a problem requiring all solutions to be found, execution completes when all the distributed processors have completed the search of their assigned subtrees. If one solution is required, the execution of all the path processors is terminated when the control processor receives the first solution. The presence of the extra-logical Prolog predicate cut in the user program conflicts with the use of oracles to represent valid open subtrees. PrologPF promotes the use of higher-order functional programming as an alternative to the use of cut. The combined language shows that functional support can be added as a consistent extension to standard Prolog.
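
    A minimal sketch of the oracle idea, assuming only what the abstract states (an oracle is the sequence of clause choices leading to a node of the search tree): phase one records the oracles of still-open subtrees up to a depth limit, and a worker later replays one oracle and searches its subtree exhaustively. The tree representation and function names are illustrative, not PrologPF's implementation, and the sketch is in OCaml rather than Prolog.

```ocaml
(* The search space as an abstract tree: each node offers numbered choices. *)
type tree = Node of (int * tree) list | Solution of string

type oracle = int list   (* clause/choice indices from the root *)

(* Phase 1: search to a limited depth and record the oracles of the
   still-open subtrees, to be handed out to worker processors. *)
let rec build_oracles depth path t =
  match t with
  | Solution _ -> []                          (* already solved, nothing to delegate *)
  | Node _ when depth = 0 -> [List.rev path]  (* open subtree: record its oracle *)
  | Node branches ->
      List.concat_map (fun (i, sub) -> build_oracles (depth - 1) (i :: path) sub) branches

(* Phase 2: a worker follows its oracle down to the open subtree, then
   searches that subtree exhaustively, returning all solutions found there. *)
let rec follow_oracle oracle t =
  match oracle, t with
  | [], _ -> search t
  | i :: rest, Node branches -> follow_oracle rest (List.assoc i branches)
  | _ :: _, Solution _ -> []                  (* invalid path: no open subtree here *)
and search t =
  match t with
  | Solution s -> [s]
  | Node branches -> List.concat_map (fun (_, sub) -> search sub) branches

let () =
  let t =
    Node [ 1, Node [ 1, Solution "a"; 2, Solution "b" ];
           2, Node [ 1, Solution "c" ] ]
  in
  let oracles = build_oracles 1 [] t in       (* [[1]; [2]] *)
  List.iter (fun o -> List.iter print_endline (follow_oracle o t)) oracles
```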

    Domain Specific Memory Management for Large Scale Data Analytics

    Hardware trends over the last several decades have led to shifting priorities with respect to performance bottlenecks in the implementations of dataflows typically present in large-scale data analytics applications. In particular, efficient use of main memory has emerged as a critical aspect of dataflow implementation, due to the proliferation of multi-core architectures, as well as the rapid development of faster-than-disk storage media. At the same time, the wealth of static domain-specific information about applications remains an untapped resource when it comes to optimizing the use of memory in a dataflow application. We propose a compilation-based approach to the synthesis of memory-efficient dataflow implementations, using static analysis to extract and leverage domain-specific information about the application. Our program transformations use the combined results of type, effect, and provenance analyses to infer time- and space-effective placement of primitive memory operations, precluding the need for dynamic memory management and its attendant costs. The experimental evaluation of implementations synthesized with our framework shows both the importance of optimizing for memory performance and the significant benefits of our approach along multiple dimensions. Finally, we also demonstrate a framework for formally verifying the soundness of these transformations, laying the foundation for their use as a component of a more general implementation synthesis ecosystem.
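
    A deliberately small sketch of what static placement of primitive memory operations can look like: a last-use pass over a straight-line dataflow inserts explicit frees, so no dynamic memory manager is needed at run time. The toy IR and operation names below are hypothetical and do not reflect the framework described in the abstract.

```ocaml
(* A straight-line dataflow over named buffers. *)
type op =
  | Alloc of string                   (* create a buffer *)
  | Use of string list * string       (* read some buffers, write one *)
  | Free of string                    (* release a buffer *)

(* Buffers mentioned by an operation. *)
let buffers = function
  | Alloc b -> [b]
  | Use (ins, out) -> out :: ins
  | Free b -> [b]

(* Insert [Free b] right after the last operation that mentions [b],
   except for buffers listed in [live_out], which outlive the dataflow. *)
let place_frees ~live_out (prog : op list) : op list =
  let rec go = function
    | [] -> []
    | op :: rest ->
        let later = live_out @ List.concat_map buffers rest in
        let dead =
          List.filter (fun b -> not (List.mem b later)) (buffers op)
        in
        op :: (List.map (fun b -> Free b) dead @ go rest)
  in
  go prog

let () =
  let prog =
    [ Alloc "a"; Alloc "b"; Alloc "c"; Use (["a"], "b"); Use (["b"], "c") ]
  in
  List.iter
    (function
      | Alloc b -> Printf.printf "alloc %s\n" b
      | Use (ins, out) -> Printf.printf "use [%s] -> %s\n" (String.concat "; " ins) out
      | Free b -> Printf.printf "free %s\n" b)
    (place_frees ~live_out:["c"] prog)
```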

    Integrating Logic Rules with Everything Else, Seamlessly

    This paper presents a language, Alda, that supports all of logic rules, sets, functions, updates, and objects as seamlessly integrated built-ins. The key idea is to support predicates in rules as set-valued variables that can be used and updated in any scope, and to support queries using rules as either explicit or implicit automatic calls to an inference function. We have defined a formal semantics of the language, implemented a prototype compiler that builds on an object-oriented language that supports concurrent and distributed programming and on an efficient logic rule system, and successfully used the language and implementation on benchmarks and problems from a wide variety of application domains. We describe the compilation method and results of experimental evaluation. (To be published in Theory and Practice of Logic Programming, special issue for selected papers from the 39th International Conference on Logic Programming.)
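
    The key idea, predicates as set-valued variables queried through an explicit inference function, can be sketched as follows (in OCaml rather than Alda; the encoding is an assumption for illustration, not the paper's compilation scheme). The rules encoded are path(X,Y) :- edge(X,Y) and path(X,Z) :- edge(X,Y), path(Y,Z).

```ocaml
(* A predicate over pairs is represented as an ordinary set-valued variable. *)
module PairSet = Set.Make (struct
  type t = string * string
  let compare = compare
end)

(* One inference step: derive new path facts from edge and the current path set. *)
let step edge path =
  PairSet.fold
    (fun (x, y) acc ->
       PairSet.fold
         (fun (y', z) acc -> if y = y' then PairSet.add (x, z) acc else acc)
         path acc)
    edge (PairSet.union edge path)

(* The "inference function": iterate to a fixpoint.  A query over the rules is
   simply a call to this function. *)
let rec infer edge path =
  let path' = step edge path in
  if PairSet.equal path' path then path else infer edge path'

let () =
  (* edge is a set-valued variable that could be updated in any scope *)
  let edge = PairSet.of_list [("a", "b"); ("b", "c"); ("c", "d")] in
  let path = infer edge PairSet.empty in
  PairSet.iter (fun (x, y) -> Printf.printf "path(%s,%s)\n" x y) path
```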