
    The role of concurrency in an evolutionary view of programming abstractions

    In this paper we examine how concurrency has been embodied in mainstream programming languages. In particular, we rely on an evolutionary metaphor borrowed from biology to discuss major historical landmarks and crucial concepts that shaped the development of programming languages. We examine the general development process, occasionally delving into specific languages, trying to uncover evolutionary lineages related to particular programming traits. We mainly focus on concurrency, discussing the different abstraction levels involved in present-day concurrent programming and emphasizing that they correspond to different levels of explanation. We then comment on the role of theoretical research in the quest for suitable programming abstractions, recalling the importance of changing the working framework and the way of looking at things every so often. This paper is not meant to be a survey of modern mainstream programming languages: it would be very incomplete in that sense. It aims instead at pointing out a number of remarks and connecting them under an evolutionary perspective, in order to grasp a unifying, but not simplistic, view of the programming language development process.

    Garbage Collection for General Graphs

    Garbage collection is moving from being a utility to a requirement of every modern programming language. With multi-core and distributed systems, most programs written recently are heavily multi-threaded and distributed. Distributed and multi-threaded programs are called concurrent programs. Manual memory management is cumbersome and difficult in concurrent programs. Concurrent programming is characterized by multiple independent processes/threads, communication between processes/threads, and uncertainty in the order of concurrent operations. This uncertainty in the order of operations makes manual memory management of concurrent programs difficult. A popular alternative to garbage collection in concurrent programs is to use smart pointers. Smart pointers can collect all garbage only if the developer identifies cycles as they are created in the reference graph; smart pointer usage does not guarantee protection from memory leaks unless cycles can be detected as processes/threads create them. General garbage collectors, on the other hand, can avoid memory leaks, dangling pointers, and double-deletion problems in any programming environment without help from the programmer. Concurrent programming is used in shared memory and distributed memory systems. State-of-the-art shared memory systems use a single concurrent garbage collector thread that processes the reference graph. Distributed memory systems have very few complete garbage collection algorithms, and those that exist use global barriers, are centralized, and do not scale well. This thesis focuses on designing garbage collection algorithms for shared memory and distributed memory systems that satisfy the following properties: concurrent, parallel, scalable, localized (decentralized), low pause time, high promptness, no global synchronization, safe, complete, and operating in linear time.
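
    The cycle limitation described above is easy to demonstrate. The following Go sketch (illustrative only, not taken from the thesis) hand-rolls the reference counting that smart pointers perform: once two nodes point at each other, dropping the external references leaves both counts stuck at one, so the pair is never reclaimed, even though a tracing collector would free it.

```go
package main

import "fmt"

// Node mimics a reference-counted smart pointer by keeping an explicit
// count. Illustrative sketch only; not code from the thesis.
type Node struct {
	refs int
	next *Node
}

func retain(n *Node) { n.refs++ }

func release(n *Node) {
	n.refs--
	if n.refs == 0 {
		fmt.Println("collected")
	}
}

func main() {
	a := &Node{refs: 1} // one external reference each
	b := &Node{refs: 1}

	// Build a cycle: a -> b -> a.
	a.next = b
	retain(b)
	b.next = a
	retain(a)

	// Drop the external references. Both counts settle at 1 because of
	// the cycle, so neither node is ever freed: the leak a tracing
	// collector would reclaim but pure reference counting cannot.
	release(a)
	release(b)
	fmt.Println(a.refs, b.refs) // prints: 1 1
}
```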

    Efficient and Reasonable Object-Oriented Concurrency

    Making threaded programs safe and easy to reason about is one of the chief difficulties in modern programming. This work provides an efficient execution model for SCOOP, a concurrency approach that provides not only data race freedom but also pre/postcondition reasoning guarantees between threads. The extensions we propose influence the underlying semantics to increase the amount of concurrent execution that is possible, exclude certain classes of deadlocks, and enable greater performance. These extensions are used as the basis of an efficient runtime and an optimization pass that together improve performance 15x over a baseline implementation. This new implementation of SCOOP is also 2x faster than other well-known safe concurrent languages. The measurements are based on both coordination-intensive and data-manipulation-intensive benchmarks designed to offer a mixture of workloads.
    Comment: Proceedings of the 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE '15). ACM, 2015.

    Performance regression testing of concurrent classes

    Developers of thread-safe classes struggle with two opposing goals. The class must be correct, which requires synchronizing concurrent accesses, and the class should provide reasonable performance, which is difficult to realize in the presence of unnecessary synchronization. Validating the performance of a thread-safe class is challenging because it requires diverse workloads that use the class, because existing performance analysis techniques focus on individual bottleneck methods, and because reliably measuring the performance of concurrent executions is difficult. This paper presents SpeedGun, an automatic performance regression testing technique for thread-safe classes. The key idea is to generate multi-threaded performance tests and to compare two versions of a class with each other. The analysis notifies developers when changing a thread-safe class significantly influences the performance of clients of this class. An evaluation with 113 pairs of classes from popular Java projects shows that the analysis effectively identifies 13 performance differences, including performance regressions that the respective developers were not aware of.
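
    As a rough illustration of the idea, independent of SpeedGun's actual Java tooling and test generation, the Go sketch below runs the same concurrent workload against two versions of a thread-safe counter and compares their timings; the "old" version does unnecessary work while holding its lock, which is the kind of client-visible performance difference the analysis is meant to surface.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Two versions of the same thread-safe counter. v1 holds its lock
// around extra work; v2 keeps the critical section minimal. They stand
// in for the "old" and "new" versions of a class that a performance
// regression test would compare (SpeedGun itself targets Java classes).
type counterV1 struct {
	mu  sync.Mutex
	n   int
	pad int
}

func (c *counterV1) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	for i := 0; i < 1000; i++ { // unnecessary work done while holding the lock
		c.pad += i
	}
	c.n++
}

type counterV2 struct {
	mu sync.Mutex
	n  int
}

func (c *counterV2) Inc() {
	c.mu.Lock()
	c.n++
	c.mu.Unlock()
}

// run drives the same multi-threaded workload against one version and
// reports how long it took.
func run(inc func()) time.Duration {
	start := time.Now()
	var wg sync.WaitGroup
	for t := 0; t < 8; t++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 2000; i++ {
				inc()
			}
		}()
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	v1, v2 := &counterV1{}, &counterV2{}
	d1, d2 := run(v1.Inc), run(v2.Inc)
	fmt.Printf("v1: %v  v2: %v  ratio: %.1fx\n", d1, d2, float64(d1)/float64(d2))
}
```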

    Stage: Python with Actors


    Meta-evaluation of Actors with Side-effects

    This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N000-14-74-C-0643. Meta-evaluation is a process which symbolically evaluates an actor and checks whether the actor fulfills its contract (specification). A formalism for writing contracts for actors with side-effects which allow sharing of data is presented. Typical examples of actors with side-effects are the cell, the actor counterparts of the LISP functions rplaca and rplacd, and procedures whose computation depends upon their input history. Meta-evaluation of actors with side-effects is carried out by using situational tags, which denote a situation (the local state of an actor system at the moment of message transmission). It is illustrated how situational tags are used for proving the termination of the activation of actors.
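
    The cell mentioned above is the canonical actor with side-effects. As a loose modern analogy (the report works in the actor formalism, not in any present-day language), the Go sketch below implements a cell that serializes set and get messages; the property that a get answers the most recently stored value is the kind of contract a meta-evaluator would check symbolically.

```go
package main

import "fmt"

// A minimal "cell" actor: its state is mutated only by messages
// processed one at a time. Illustrative analogy only.
type setMsg struct{ value int }
type getMsg struct{ reply chan int }

func cell(initial int, inbox chan interface{}) {
	state := initial
	for m := range inbox {
		switch msg := m.(type) {
		case setMsg:
			state = msg.value // the side-effect
		case getMsg:
			msg.reply <- state
		}
	}
}

func main() {
	inbox := make(chan interface{})
	go cell(0, inbox)

	// The contract a meta-evaluator would check symbolically: after
	// set(v) with no intervening set, a get must answer v.
	inbox <- setMsg{value: 42}
	reply := make(chan int)
	inbox <- getMsg{reply: reply}
	fmt.Println(<-reply) // 42
}
```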

    GoA: Actors with Locally Managed Memory for Go

    Reasoning about concurrent programs and the way they manage memory can be difficult. Single-threaded programs can allocate memory without concern about data races or memory corruption, but multi-threaded programs must have a system in place to ensure safe memory allocation. Typically, threads and processes use a system of locks or mutexes that are explicitly managed by the user. These locks allow safe access to shared data. Languages that use actors as a concurrency construct attempt to solve the problem without explicit locks. Actors use a system of message passing to ensure data is shared correctly among processes. However, this message passing system requires all data to be shared by copying it, not by reference. If references to data are to be shared safely, another safety mechanism is needed. This thesis discusses a way to bring the actor paradigm to an existing highly concurrent language, Go. The project, named GoA, is an actor-based library for Go. GoA provides actor-local memory management and a custom system to ensure data is shared safely among actors. GoA comes with a custom memory-management library that replaces Go's existing allocation, garbage collection, and message passing techniques. These new methods derive from the open-source language Pony, whose memory management system is called ORCA; GoA integrates similar techniques inspired by ORCA. The custom memory manager aims to alleviate the overhead of Go's global garbage collection and to let actors manage themselves so they do not slow down or interrupt other working actors. A memory safety system is also introduced that keeps all memory usage across all actors safe from races and corruption. Using ideas from Pony, a capability system is constructed that annotates allocated objects with specific rules governing how data may be shared. This system allows for local objects and shareable objects. Local objects are not allowed to leave the scope of the owning actor. Shareable objects must be annotated with one of three capabilities: mutable, immutable, or opaque. Mutable data is free to be manipulated, immutable data can only be read, and opaque data can neither be read nor overwritten. Each capability serves a specific purpose, and when an object's declared capability is used incorrectly, a runtime error is reported to the user. A runtime checker verifies that every read and write on a variable is safe and handles any resulting errors. Experiments and results indicate that the local memory manager successfully speeds up the overall performance of the language. The basic speed benchmarks indicate the new library is slower than Go at allocating small objects, but significantly faster at allocating large objects. The N-Body garbage creation simulation showcases how GoA and its locally managed memory allow important actors to work without interference from other actors with large allocation needs, improving the effectiveness of the system.
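
    To make the capability rules concrete, here is a hypothetical Go sketch; the type and function names are invented for illustration and are not GoA's actual API. It encodes the local/mutable/immutable/opaque distinction as data and shows the kind of check that would reject sharing a local object or writing through an immutable one.

```go
package main

import (
	"errors"
	"fmt"
)

// Capability is a hypothetical encoding of the rules described in the
// thesis; none of these names come from GoA itself.
type Capability int

const (
	Local     Capability = iota // never leaves the owning actor
	Mutable                     // may be shared and modified
	Immutable                   // may be shared, read-only
	Opaque                      // may be shared, neither read nor written
)

type Object struct {
	cap  Capability
	data []byte
}

// send models handing an object to another actor.
func send(o *Object) error {
	if o.cap == Local {
		return errors.New("local objects may not leave their owning actor")
	}
	return nil
}

// write models a mutation attempted by a receiving actor.
func write(o *Object, b byte) error {
	if o.cap != Mutable {
		return errors.New("object is not mutable for this actor")
	}
	o.data = append(o.data, b)
	return nil
}

func main() {
	shared := &Object{cap: Immutable, data: []byte("hello")}
	fmt.Println(send(shared))              // <nil>: immutable data may be shared
	fmt.Println(write(shared, 'x'))        // error: immutable data is read-only
	fmt.Println(send(&Object{cap: Local})) // error: must stay actor-local
}
```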

    Orca: GC and type system co-design for actor languages

    ORCA is a concurrent and parallel garbage collector for actor programs which does not require any stop-the-world steps or synchronisation mechanisms, and which has been designed to support zero-copy message passing and sharing of mutable data. ORCA is part of the runtime of the actor-based language Pony. Pony's runtime was co-designed with the Pony language. This co-design allowed us to exploit certain language properties in order to optimise the performance of garbage collection. Namely, ORCA relies on the absence of race conditions in order to avoid read/write barriers, and it leverages actor message passing for synchronisation among actors. This paper describes Pony, its type system, and the ORCA garbage collection algorithm. An evaluation of the performance of ORCA suggests that it is fast and scalable for idiomatic workloads.
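
    As a loose analogy for the zero-copy messaging ORCA is built to support, the Go sketch below passes a reference between two goroutine "actors" and relies on the sender no longer touching the data afterwards; Pony enforces that discipline statically through its capabilities, which is what lets ORCA skip read/write barriers, whereas Go gives no such guarantee.

```go
package main

import "fmt"

// message carries a reference to the payload rather than a copy,
// mimicking zero-copy message passing between actors.
type message struct {
	payload *[]int
}

func worker(in <-chan message, done chan<- int) {
	for m := range in {
		sum := 0
		for _, v := range *m.payload {
			sum += v
		}
		done <- sum
	}
}

func main() {
	in := make(chan message)
	done := make(chan int)
	go worker(in, done)

	data := []int{1, 2, 3, 4}
	in <- message{payload: &data}
	// By convention the sender stops using `data` here; Pony's type
	// system makes that hand-over a compile-time guarantee.
	fmt.Println(<-done) // 10
	close(in)
}
```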

    Deploying active objects onto multicore

    The performance of a program on a multicore platform crucially depends on the scheduling of its tasks; existing high-level programming languages, however, offer limited control over scheduling. In this thesis, we develop Cacoj as an extensible tool set to transform Creol's active concurrent objects into Java to be deployed on multicore platforms through a standard Java Runtime Environment. The concurrent object paradigm is a promising trend for multicore programming because each object may conceptually encapsulate a processor. Cacoj introduces a higher-level concurrency API and a Creol compiler in which the translated Java object takes control over the scheduling of its incoming messages through a per-object approach, in contrast with the current mainstream trend. Cacoj provides the grounds needed to extend Creol's syntax to additionally specify different levels of priority and to integrate them into the notion of active concurrent objects.
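
    The active-object idea can be sketched in a few lines of Go (an analogy only; Cacoj generates Java from Creol, and its actual scheduling API is not shown here): each object owns a scheduler goroutine that picks the next incoming message, so the object's state never needs locks, and a simple two-level priority queue gives a flavour of the priority levels Cacoj adds.

```go
package main

import (
	"fmt"
	"sync"
)

// ActiveObject processes incoming messages one at a time on its own
// scheduler goroutine, conceptually encapsulating a processor. The
// two-level priority scheme is a simplified stand-in for Cacoj's
// priority levels.
type ActiveObject struct {
	high, low chan func()
	wg        sync.WaitGroup
}

func NewActiveObject() *ActiveObject {
	a := &ActiveObject{
		high: make(chan func(), 16),
		low:  make(chan func(), 16),
	}
	go a.schedule()
	return a
}

// schedule prefers queued high-priority messages and runs exactly one
// message at a time, so the object's state needs no locks.
func (a *ActiveObject) schedule() {
	for {
		select {
		case m := <-a.high:
			m()
			a.wg.Done()
		default:
			select {
			case m := <-a.high:
				m()
				a.wg.Done()
			case m := <-a.low:
				m()
				a.wg.Done()
			}
		}
	}
}

// Call enqueues a message for the object's scheduler instead of
// executing it on the caller's thread.
func (a *ActiveObject) Call(priority int, m func()) {
	a.wg.Add(1)
	if priority > 0 {
		a.high <- m
	} else {
		a.low <- m
	}
}

func main() {
	obj := NewActiveObject()
	obj.Call(0, func() { fmt.Println("routine message") })
	obj.Call(1, func() { fmt.Println("urgent message") })
	obj.wg.Wait()
}
```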