
    Functional computation as concurrent computation

    We investigate functional computation as a special form of concurrent computation. As a formal basis, we use a uniformly confluent core of the pi-calculus, which is also contained in models of higher-order concurrent constraint programming. We embed the call-by-need and the call-by-value lambda-calculus into the pi-calculus. We prove that call-by-need complexity is dominated by call-by-value complexity. In contrast to the recently proposed call-by-need lambda-calculus, our concurrent call-by-need model incorporates mutual recursion and can be extended to cyclic data structures by means of constraints.
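
    A minimal illustration of the sharing intuition behind that complexity result (not the paper's pi-calculus encoding), written in Haskell, whose evaluation is call-by-need: a shared argument is evaluated at most once, so its cost never exceeds the call-by-value cost of evaluating it exactly once.

        import Debug.Trace (trace)

        main :: IO ()
        main = do
          -- Under call-by-need, the let-bound thunk is shared: the trace
          -- message appears only once even though `arg` is used twice.
          -- Under call-by-value it would also be evaluated exactly once,
          -- but before the call, even if `arg` were never used at all.
          let arg = trace "evaluating argument" (sum [1 .. 1000000 :: Int])
          print (arg + arg)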

    Making concurrency functional

    The article bridges two major paradigms in computation: the functional, at its basis computation from input to output, and the interactive, where computation reacts to its environment while underway. Central to any compositional theory of interaction is the dichotomy between a system and its environment. Concurrent games and strategies address this dichotomy in fine detail, very locally, in a distributed fashion, through the distinction between Player moves (events of the system) and Opponent moves (those of the environment). A functional approach has to handle the dichotomy much more ingeniously, through its blunter distinction between input and output. This has led to a variety of functional approaches, specialised to particular interactive demands. Through concurrent games we can more clearly see what separates and connects the differing paradigms, and show how:
    * to lift functions to strategies; the "Scott order" intrinsic to concurrent games plays a key role in turning functional dependency into causal dependency;
    * several paradigms of functional programming and logic arise naturally as subcategories of concurrent games, including stable domain theory, nondeterministic dataflow, geometry of interaction, the dialectica interpretation, lenses and optics, and their extensions to containers in dependent lenses and optics;
    * to transfer enrichments of strategies (such as to probabilistic, quantum or real-number computation) to the functional cases.
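
    As a small reminder of one of the functional structures mentioned above, here is a minimal lens sketch in Haskell (the names view, update, compose and fstL are illustrative, not the paper's); the paper itself recasts such optics as subcategories of concurrent games.

        -- A lens focuses a larger state s on a component a.
        data Lens s a = Lens { view :: s -> a, update :: s -> a -> s }

        -- Lenses compose: focus on a part of a part.
        compose :: Lens s a -> Lens a b -> Lens s b
        compose outer inner = Lens
          { view   = view inner . view outer
          , update = \s b -> update outer s (update inner (view outer s) b)
          }

        -- Example: focus on the first component of a pair.
        fstL :: Lens (a, b) a
        fstL = Lens { view = fst, update = \(_, y) x -> (x, y) }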

    A logical foundation for session-based concurrent computation

    Linear logic has long been heralded for its potential to provide a logical basis for concurrency. While many research attempts were made in this regard over the years, a Curry-Howard correspondence between linear logic and concurrent computation was only found recently, bridging the proof theory of linear logic and session-typed process calculi. Building upon this work, we develop a theory of intuitionistic linear logic as a logical foundation for session-based concurrent computation, exploring several concurrency-related phenomena, such as value-dependent session types and polymorphic sessions, within our logical framework in an arguably clean and elegant way. The logical basis lets us establish, with relative ease, strong typing guarantees that ensure the fundamental properties of type preservation and global progress, the latter entailing the absence of deadlocks in communication. We develop a general-purpose concurrent programming language based on the logical interpretation, combining functional programming with a concurrent, session-based process layer through the form of a contextual monad, preserving our strong guarantees of type preservation and deadlock freedom in the presence of general recursion and higher-order process communication. We also introduce a notion of linear logical relations for session-typed concurrent processes, an arguably uniform technique for reasoning about sophisticated properties of session-based concurrent computation, such as termination or equivalence, further supporting our goal of establishing intuitionistic linear logic as a logical foundation for session-based concurrency.

    Heterogeneous concurrent computing with exportable services

    Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data-driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations, built as an extension of and enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experience has demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.

    Continuation-Passing C: compiling threads to events through continuations

    In this paper, we introduce Continuation Passing C (CPC), a programming language for concurrent systems in which native and cooperative threads are unified and presented to the programmer as a single abstraction. The CPC compiler uses a compilation technique, based on the CPS transform, that yields efficient code and an extremely lightweight representation for contexts. We provide a proof of the correctness of our compilation scheme. We show in particular that lambda-lifting, a common compilation technique for functional languages, is also correct in an imperative language like C, under some conditions enforced by the CPC compiler. The current CPC compiler is mature enough to write substantial programs such as Hekate, a highly concurrent BitTorrent seeder. Our benchmark results show that CPC is as efficient as the most efficient thread libraries available, while using significantly less space.
    Comment: Higher-Order and Symbolic Computation (2012). arXiv admin note: substantial text overlap with arXiv:1202.324
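
    A generic illustration of the continuation-passing idea in Haskell (a sketch only; CPC's actual transform works on C and is more involved): each step receives "what happens next" as an explicit function, which is what allows a compiler to represent a suspended thread as a small closure rather than a full stack.

        type Cont r a = (a -> r) -> r

        -- Direct style would be: getLine >>= \s -> putStrLn ("got " ++ s).
        -- In CPS, each primitive takes its continuation explicitly.
        readLineCPS :: Cont (IO ()) String
        readLineCPS k = getLine >>= k

        putLineCPS :: String -> Cont (IO ()) ()
        putLineCPS s k = putStrLn s >> k ()

        echo :: IO ()
        echo = readLineCPS (\s -> putLineCPS ("got " ++ s) (\_ -> return ()))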

    Concurrency Controls in Event-Driven Programs

    Functional reactive programming (FRP) is a programming paradigm that uses the concepts of functional programming and time-varying data types to create event-driven applications. In this paradigm, data types whose values can change over time are primitives and can be applied to functions. These values are composable and can be combined with functions to create values that react to changes in values from multiple sources. Events can be modeled as values that change in discrete time steps, and computation can be encoded as values that produce events; with combination operators, this lets us write concurrent event-driven programs by combining concurrent computations as events. Combined with the denotational approach of functional programming, we can write programs in a concise manner. The event-driven style has been widely adopted for developing graphical user interface applications, since they need to process events concurrently to stay responsive. This makes FRP a fitting approach for managing complex state and handling events concurrently. In recent years, real-time systems such as IoT (internet of things) applications have become an important field of computation, and applying FRP to real-time systems is still an active area of research.
    IoT applications are commonly tasked with capturing data in real time and transmitting it to other devices. They need to exchange data with other applications over the internet and respond in a timely manner. The data needs to be processed, whether for simple analysis or for more computation-intensive work such as machine learning. Designing applications that perform these tasks while remaining efficient and responsive can be challenging. In this thesis, we demonstrate that FRP is a suitable approach for real-time applications with soft real-time requirements, where the system can tolerate tasks that miss their deadline and whose results may still be useful.
    First, we design the concurrency abstractions needed for supporting asynchronous computation and use them as the basis for building the FRP abstraction. Our implementation is in Haskell, a functional programming language with a rich type system that allows us to model abstractions with ease. The concurrency abstraction is based on ideas from the Haskell solution for asynchronous computation, which elegantly supports cancellation in a composable way. Building on the Haskell implementation, we extend our design with operators that are better suited to building web applications. We translate our implementation to JavaScript, as it is more commonly used for web application development, and implement the RxJS interface. RxJS is a popular JavaScript library for reactive programming in web applications. By implementing the RxJS interface, we argue that our programming model implemented in Haskell is also applicable in mainstream languages such as JavaScript.
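
    A small sketch of the soft real-time idea described above, using the standard Haskell async library rather than the thesis's own abstractions (readSensor and the 100 ms budget are made up for illustration): a task that misses its deadline is abandoned, and race cancels the losing branch automatically.

        import Control.Concurrent (threadDelay)
        import Control.Concurrent.Async (race)

        readSensor :: IO Double
        readSensor = do
          threadDelay 200000        -- pretend the device takes 200 ms
          pure 23.5

        -- Run an action with a deadline in microseconds; Nothing on timeout.
        withDeadline :: Int -> IO a -> IO (Maybe a)
        withDeadline micros act =
          either (const Nothing) Just <$> race (threadDelay micros) act

        main :: IO ()
        main = do
          r <- withDeadline 100000 readSensor   -- 100 ms budget: too tight
          print r                               -- prints Nothing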

    Context-Aware Separation Logic

    Separation logic is often praised for its ability to closely mimic the locality of state updates when reasoning about them at the level of assertions. The prover only needs to concern themselves with the footprint of the computation at hand, i.e., the part of the state that is actually being accessed and manipulated. Modern concurrent separation logics lift this local reasoning principle from the physical state to abstract ghost state. For instance, these logics allow one to abstract the state of a fine-grained concurrent data structure by a predicate that provides a client the illusion of atomic access to the underlying state. However, these abstractions inadvertently increase the footprint of a computation: when reasoning about a local low-level state update, one needs to account for its effect on the abstraction, which encompasses a possibly unbounded portion of the low-level state. Often this gives the reasoning a global character.
    We present context-aware separation logic (CASL) to provide new opportunities for local reasoning in the presence of rich ghost state abstractions. CASL introduces the notion of a context of a computation, the part of the concrete state that is only affected on the abstract level. Contexts give rise to a new proof rule that allows one to reduce the footprint by the context, provided the computation preserves the context as an invariant. The context rule complements the frame rule of separation logic by enabling more local reasoning in cases where the predicate to be framed is known in advance. We instantiate our developed theory for the flow framework, which enables local reasoning about global properties of heap graphs. We then use the instantiation to obtain a fully local proof of functional correctness for a sequential binary search tree implementation that is inspired by fine-grained concurrent search structures.
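
    For reference, the standard frame rule of separation logic, which the context rule described above is said to complement (the context rule itself is the paper's contribution and is not reproduced here):

        \[
          \frac{\{P\}\; C\; \{Q\}}
               {\{P \ast R\}\; C\; \{Q \ast R\}}
          \quad\text{provided $C$ does not modify any variable free in $R$}
        \]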

    The definition of kernel Oz

    Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The Oz calculus models computation in Oz as rewriting of a class of expressions modulo a structural congruence. The actor model is the informal computation model underlying Oz. It introduces notions like computation spaces, actors, blackboards, and threads.

    Tracing monadic computations and representing effects

    In functional programming, monads are supposed to encapsulate computations, effectfully producing the final result but keeping to themselves the means of acquiring it. For various reasons, we sometimes want to reveal the internals of a computation. To make that possible, in this paper we introduce monad transformers that add the ability to automatically accumulate observations about the course of execution as an effect. We discover that if we treat the resulting trace as the actual result of the computation, we can find new functionality in existing monads, notably when working with non-terminating computations.
    Comment: In Proceedings MSFP 2012, arXiv:1202.240
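
    A minimal sketch of the tracing idea using the standard WriterT transformer (the paper's own transformers and their treatment of non-terminating computations go further than this): the wrapped computation accumulates a list of observations alongside its result.

        import Control.Monad.Trans.Class (lift)
        import Control.Monad.Writer (WriterT, runWriterT, tell)

        type Traced m a = WriterT [String] m a

        -- Record that a step was reached, then run its underlying effect.
        step :: Monad m => String -> m a -> Traced m a
        step label act = tell [label] >> lift act

        example :: Traced IO Int
        example = do
          x <- step "read"    (pure 20)
          step "compute" (pure (x + 22))

        main :: IO ()
        main = do
          (result, trace) <- runWriterT example
          print result    -- 42
          print trace     -- ["read","compute"]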