7 research outputs found

    Orca: GC and type system co-design for actor languages

    No full text
    ORCA is a concurrent and parallel garbage collector for actor programs that requires no stop-the-world steps or synchronisation mechanisms and that has been designed to support zero-copy message passing and sharing of mutable data. ORCA is part of the runtime of the actor-based language Pony. Pony's runtime was co-designed with the Pony language, and this co-design allowed us to exploit certain language properties to optimise the performance of garbage collection. Namely, ORCA relies on the absence of race conditions to avoid read/write barriers, and it leverages actor message passing for synchronisation among actors. This paper describes Pony, its type system, and the ORCA garbage collection algorithm. An evaluation of ORCA's performance suggests that it is fast and scalable for idiomatic workloads.
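    The core mechanism can be pictured as deferred, message-based reference counting: an actor keeps counts for the objects it owns, and other actors adjust those counts by sending it messages instead of touching shared state. The Go sketch below is only a loose illustration of that idea, with invented types and message shapes, not ORCA's actual protocol.

        package main

        import (
            "fmt"
            "sync"
        )

        // Hypothetical sketch of message-based reference counting: the owning
        // actor updates counts in response to messages, so no barriers or
        // global pauses are needed. Illustrative only; not ORCA's protocol.

        type rcMsg struct {
            obj   int // object id
            delta int // +1 when a reference is shared, -1 when it is dropped
        }

        type actor struct {
            inbox chan rcMsg
            rc    map[int]int // reference counts for locally owned objects
            done  sync.WaitGroup
        }

        func newActor() *actor {
            a := &actor{inbox: make(chan rcMsg, 64), rc: make(map[int]int)}
            a.done.Add(1)
            go a.run()
            return a
        }

        func (a *actor) run() {
            defer a.done.Done()
            for m := range a.inbox {
                a.rc[m.obj] += m.delta
                if a.rc[m.obj] == 0 {
                    delete(a.rc, m.obj) // no remote references remain: reclaim locally
                    fmt.Println("collected object", m.obj)
                }
            }
        }

        func main() {
            owner := newActor()
            owner.inbox <- rcMsg{obj: 1, delta: 1}  // another actor received a reference
            owner.inbox <- rcMsg{obj: 1, delta: -1} // that actor dropped it
            close(owner.inbox)
            owner.done.Wait()
        }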

    Challenges in using the actor model in software development, systematic literature review

    Get PDF
    The actor model is a model of distributed and concurrent computation in which small parts of the software communicate with each other asynchronously, and the functionality visible to the user emerges from the cooperation of many such parts. Today's software must withstand enormous numbers of users, and to do so it must be able to increase its capacity quickly in order to scale. Smaller software components are easier to add on demand, so the actor model appears to answer this need. However, using the actor model can involve challenges, which this study aims to identify and present. The study is carried out as a systematic literature review of research related to the actor model. Data were collected from the selected studies and used to answer the research questions. The results list and categorise the software development problems to which the actor model was applied, as well as the various challenges encountered when using the actor model and their solutions. The study identified challenges that arise when using the actor model and created a new categorisation for them. An analysis of the root causes showed that a large share of the actor model's challenges stem from the use of asynchronous messaging and that the programmer must constantly be careful about their assumptions regarding message ordering. The proposed solutions were categorised according to where the additional code they require is placed.
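    The finding about message-ordering assumptions is easy to reproduce: messages from two independent senders can be observed in either order, and only messages from the same sender keep their relative order. The small Go sketch below, with goroutines and channels standing in for actors and mailboxes, shows the pitfall; it is illustrative and not taken from any of the reviewed studies.

        package main

        import (
            "fmt"
            "sync"
        )

        // Two independent senders deliver to one receiver. Nothing guarantees
        // that "init" is observed before "work"; assuming otherwise is exactly
        // the kind of ordering bug the review highlights.

        func main() {
            inbox := make(chan string, 2)
            var wg sync.WaitGroup
            wg.Add(2)
            go func() { defer wg.Done(); inbox <- "init" }()
            go func() { defer wg.Done(); inbox <- "work" }()
            wg.Wait()
            close(inbox)
            for msg := range inbox {
                fmt.Println("received:", msg) // order varies between runs
            }
        }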

    GoA: Actors with Locally Managed Memory for Go

    Get PDF
    Reasoning about concurrent programs and the way they manage memory can be difficult. Single-process programs can allocate memory without concern about data races or memory corruption, but multi-threaded programs must have a system in place to ensure safe memory allocation. Typically, threads and processes use a system of locks or mutexes that are explicitly managed by the user; these locks allow safe access to shared data. Languages that use actors as a concurrency construct attempt to solve the problem without explicit locks. Actors use a system of message passing to ensure data is shared correctly among processes. However, this message passing system requires all data to be shared by copying, not by reference. If references to data are to be shared safely, another safety mechanism is needed. This thesis discusses a way to bring the actor paradigm to an existing highly concurrent language, Go. The project, named GoA, is an actor-based library for Go. GoA provides actor-local memory management and a custom system to ensure data is shared safely among actors. GoA comes with a custom memory-management library that replaces all of Go's existing allocation, garbage collection, and message passing techniques. These new methods derive from the open source language Pony, a language with a memory management system called ORCA; inspiration is drawn from ORCA to integrate similar techniques into GoA. The custom memory manager aims to alleviate the overhead of Go's global garbage collection and allow actors to manage themselves so they do not slow down or interrupt other working actors. A memory safety system is also introduced that keeps all memory usage across all actors free from races and corruption. Using ideas from Pony, a capability system is constructed that annotates allocated objects with specific rules that apply when sharing data. This system allows for local objects and shareable objects. Local objects are not allowed to leave the scope of the owning actor. Shareable objects must be annotated with one of three capabilities: mutable, immutable, or opaque. Mutable data is free to be manipulated, immutable data can only be read, and opaque data can neither be read nor overwritten. Each of these capabilities serves a specific purpose, and when a capability declared on an object is violated, a runtime error is reported to the user. A runtime checker verifies that every read and write of a variable is safe and handles any resulting errors. Experiments and results indicate that the local memory manager successfully speeds up the overall performance of the language. The basic speed benchmarks indicate the new library is slower than Go at allocating small objects, but significantly faster at allocating large objects. The N-Body garbage creation simulation showcases how GoA and its locally managed memory allow important actors to work without interference from other actors with large allocation needs, improving the overall effectiveness of the system.
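    The abstract does not show GoA's API, so the following Go sketch only illustrates the described rules (local objects may not leave their owner; shareable objects are mutable, immutable, or opaque) with invented names such as Capability, Share, Read, and Write, and a simple runtime check standing in for GoA's checker.

        package main

        import (
            "errors"
            "fmt"
        )

        // Hypothetical illustration of the capability rules described above;
        // GoA's real API may differ. An object is either local to its owning
        // actor or shareable as mutable, immutable, or opaque, and violations
        // surface as runtime errors.

        type Capability int

        const (
            Local Capability = iota
            Mutable
            Immutable
            Opaque
        )

        type Object struct {
            cap  Capability
            data int
        }

        var errCapability = errors.New("capability violation")

        // Share simulates sending the object to another actor.
        func (o *Object) Share() error {
            if o.cap == Local {
                return fmt.Errorf("%w: local objects may not leave their owner", errCapability)
            }
            return nil
        }

        // Read and Write simulate accesses by a receiving actor.
        func (o *Object) Read() (int, error) {
            if o.cap == Opaque {
                return 0, fmt.Errorf("%w: opaque data may not be read", errCapability)
            }
            return o.data, nil
        }

        func (o *Object) Write(v int) error {
            if o.cap != Mutable {
                return fmt.Errorf("%w: only mutable data may be overwritten", errCapability)
            }
            o.data = v
            return nil
        }

        func main() {
            obj := &Object{cap: Immutable, data: 42}
            fmt.Println(obj.Share())  // <nil>: immutable data may be shared
            fmt.Println(obj.Read())   // 42 <nil>
            fmt.Println(obj.Write(7)) // error: only mutable data may be overwritten
        }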

    Reference Capabilities for Flexible Memory Management: Extended Version

    Full text link
    Verona is a concurrent object-oriented programming language that organises all the objects in a program into a forest of isolated regions. Memory is managed locally for each region, so programmers can control a program's memory use by adjusting how objects are partitioned into regions and by setting each region's memory management strategy. A thread can only mutate (allocate, deallocate) objects within one active region -- its "window of mutability". Memory management costs are localised to the active region, ensuring overheads can be predicted and controlled. Moving the mutability window between regions is explicit, so code can be executed wherever it is required, yet programs remain in control of memory use. An ownership type system based on reference capabilities enforces region isolation, controlling aliasing within and between regions, yet supporting objects moving between regions and threads. Data accesses never need expensive atomic operations and are always thread-safe.
    Comment: 87 pages, 10 figures, 5 listings, 4 tables. Extended version of a paper to be published at OOPSLA 202
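    A rough analogue of the window of mutability can be sketched in Go: a region is a bundle of objects with a single holder, only the current holder mutates it, and handing the region over a channel moves the window. Verona enforces this statically through reference capabilities; in the sketch below it is merely a convention, and the names are invented.

        package main

        import "fmt"

        // Sketch of a region whose ownership moves between goroutines. Only
        // the current holder may mutate the region's objects; sending it over
        // a channel transfers the "window of mutability". Verona enforces this
        // statically; here it is only a convention.

        type region struct {
            objects []int // objects allocated in this region
        }

        func main() {
            handoff := make(chan *region)

            go func() {
                r := <-handoff    // this goroutine now holds the region
                r.objects[0] = 99 // mutation is allowed only while holding it
                handoff <- r      // hand the window of mutability back
            }()

            r := &region{objects: []int{1, 2, 3}}
            handoff <- r // transfer ownership; do not touch r until it comes back
            r = <-handoff
            fmt.Println(r.objects) // [99 2 3]
        }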

    Towards Implicit Parallel Programming for Systems

    Get PDF
    Multi-core processors require a program to be decomposable into independent parts that can execute in parallel in order to scale performance with the number of cores. But parallel programming is hard, especially when the program requires state, which many system programs use for optimization, such as a cache to reduce disk I/O. Most prevalent parallel programming models do not support a notion of state and require the programmer to synchronize state access manually, i.e., outside the realm of an associated optimizing compiler. This prevents the compiler from introducing parallelism automatically and requires the programmer to optimize the program manually. In this dissertation, we propose a programming language/compiler co-design that provides a new programming model for implicit parallel programming with state and a compiler that can optimize the program for parallel execution. We define the notion of a stateful function along with composition and control structures for such functions. An example implementation of a highly scalable server shows that stateful functions integrate smoothly into existing programming language concepts, such as object-oriented programming and programming with structs. Our programming model is also highly practical and allows existing code bases to be adapted gradually. As a case study, we implemented a new data processing core for the Hadoop Map/Reduce system to overcome existing performance bottlenecks. Our lambda-calculus-based compiler automatically extracts parallelism without changing the program's semantics. We added further domain-specific, semantics-preserving transformations that reduce I/O calls for microservice programs. The runtime format of a program is a dataflow graph that can be executed in parallel, performs concurrent I/O, and allows for non-blocking live updates.
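    The notion of a stateful function can be approximated as a dataflow stage that owns its state, consumes an input stream, and produces an output stream, so stages run in parallel without explicit locks. The Go sketch below uses invented names and is only an approximation of the idea, not the dissertation's compiler or runtime.

        package main

        import "fmt"

        // Sketch of a "stateful function" as a dataflow stage: the stage owns
        // its state (a running count), reads from an input channel and writes
        // to an output channel, so stages compose into a pipeline that runs in
        // parallel without explicit locking. Names are illustrative only.

        func countingStage(in <-chan string) <-chan string {
            out := make(chan string)
            go func() {
                defer close(out)
                seen := make(map[string]int) // stage-local state, never shared
                for item := range in {
                    seen[item]++
                    out <- fmt.Sprintf("%s:%d", item, seen[item])
                }
            }()
            return out
        }

        func main() {
            in := make(chan string)
            out := countingStage(in)
            go func() {
                defer close(in)
                for _, w := range []string{"a", "b", "a"} {
                    in <- w
                }
            }()
            for r := range out {
                fmt.Println(r) // a:1 b:1 a:2
            }
        }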
