
    A Study of Concurrency Bugs and Advanced Development Support for Actor-based Programs

    The actor model is an attractive foundation for developing concurrent applications because actors are isolated concurrent entities that communicate through asynchronous messages and do not share state. They thereby avoid concurrency bugs such as data races, but are not immune to concurrency bugs in general. This study taxonomizes concurrency bugs in actor-based programs reported in the literature. Furthermore, it analyzes the bugs to identify the patterns that cause them as well as their observable behavior. Based on this taxonomy, we further analyze the literature and find that current approaches to static analysis and testing focus on communication deadlocks and message protocol violations, but offer no solutions for identifying livelocks and behavioral deadlocks. The insights obtained in this study can be used to improve debugging support for actor-based programs with new debugging techniques that identify the root cause of complex concurrency bugs.
    Comment: submitted for review; removed Section 6, "Research Roadmap for Debuggers", and summarized its content in the Future Work section; added references for Sections 1, 3, 4.3 and 5.1; updated citation.
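
    The abstract's central point, that asynchronous message passing rules out data races but not deadlocks or livelocks, is easy to reproduce in miniature. The sketch below is a hypothetical illustration (not taken from the study), modelling two actors as threads with blocking mailboxes; each waits for a message from the other before it will send one, so the program deadlocks without any shared mutable state or locks.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of a communication deadlock between two actor-like
// entities: each actor only reacts to an incoming message, but both wait for
// the other to speak first, so there is no data race yet no progress either.
public class ActorDeadlockSketch {
    public static void main(String[] args) {
        BlockingQueue<String> inboxA = new ArrayBlockingQueue<>(1);
        BlockingQueue<String> inboxB = new ArrayBlockingQueue<>(1);

        Runnable actorA = () -> {
            try {
                String msg = inboxA.take();      // waits for B's message...
                inboxB.put("reply to " + msg);   // ...before ever replying
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        Runnable actorB = () -> {
            try {
                String msg = inboxB.take();      // waits for A's message...
                inboxA.put("reply to " + msg);   // ...before ever replying
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        new Thread(actorA).start();
        new Thread(actorB).start();
        // Both actors block in take() forever: a deadlock with no shared state
        // and no locks, i.e. the kind of bug the taxonomy classifies.
    }
}
```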

    Life of occam-Pi

    This paper considers some questions prompted by a brief review of the history of computing. Why is programming so hard? Why is concurrency considered an “advanced” subject? What’s the matter with Objects? Where did all the Maths go? In searching for answers, the paper looks at some concerns over fundamental ideas within object orientation (as represented by modern programming languages), before focussing on the concurrency model of communicating processes and its particular expression in the occam family of languages. In that focus, it looks at the history of occam, its underlying philosophy (Ockham’s Razor), its semantic foundation on Hoare’s CSP, its principles of process-oriented design and its development over almost three decades into occam-π (which blends in the concurrency dynamics of Milner’s π-calculus). Also presented will be an urgent need for rationalisation – occam-π is an experiment that has demonstrated significant results, but now needs time to be spent on careful review and on implementing the conclusions of that review. Finally, the future is considered. In particular, is there a future for occam-π?

    Garbage collection auto-tuning for Java MapReduce on Multi-Cores

    MapReduce has been widely accepted as a simple programming pattern that can form the basis for efficient, large-scale, distributed data processing. The success of the MapReduce pattern has led to a variety of implementations for different computational scenarios. In this paper we present MRJ, a MapReduce Java framework for multi-core architectures. We evaluate its scalability on a four-core, hyperthreaded Intel Core i7 processor, using a set of standard MapReduce benchmarks. We investigate the significant impact that Java runtime garbage collection has on the performance and scalability of MRJ. We propose the use of memory management auto-tuning techniques based on machine learning. With our auto-tuning approach, we are able to achieve MRJ performance within 10% of optimal on 75% of our benchmark tests.
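
    MRJ's own API is not shown in the abstract, so the following is only a minimal sketch of the MapReduce pattern it implements, expressed with standard Java parallel streams: the map phase splits lines into words across cores and the reduce phase aggregates counts. Workloads like this allocate many short-lived objects, which is why garbage-collection behaviour matters so much for scalability.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Minimal word-count sketch of the MapReduce pattern on a multi-core JVM
// using parallel streams (illustrative only; this is not MRJ's API).
public class WordCountSketch {
    public static Map<String, Long> wordCount(List<String> lines) {
        return lines.parallelStream()                                // map: split work across cores
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\W+")))
                .filter(word -> !word.isEmpty())
                .collect(Collectors.groupingBy(Function.identity(),  // reduce: group and count
                        Collectors.counting()));
    }

    public static void main(String[] args) {
        // Prints counts such as be=2, to=2, or=1, not=1 (map order unspecified).
        System.out.println(wordCount(List.of("to be or not to be")));
    }
}
```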

    Inferring Types to Eliminate Ownership Checks in an Intentional JavaScript Compiler

    Concurrent programs are notoriously difficult to develop due to the non-deterministic nature of thread scheduling. It is desirable to have a programming language that makes such development easier, and Tscript is such a system. Tscript is an extension of JavaScript that provides multithreading support along with intent specification. These intents allow a programmer to specify how parts of the program interact in a multithreaded context. However, enforcing intents requires run-time memory checks, which can be inefficient. This thesis implements an optimization in the Tscript compiler that reduces this overhead through static analysis. Our approach uses both type inference and dataflow analysis to eliminate unnecessary run-time checks.
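
    The Tscript mechanisms themselves are not detailed in the abstract, so the sketch below is only a hypothetical Java analogue of the idea: a run-time check guards every access to a value that might be touched by the wrong thread, and the check becomes redundant, and can be eliminated, whenever analysis proves the value never escapes its creating thread.

```java
// Hypothetical analogue (not Tscript's implementation) of a run-time ownership
// check and of the access that remains once static analysis removes the check.
public class OwnershipCheckSketch {
    static final class Owned<T> {
        private final T value;
        private final Thread owner = Thread.currentThread();

        Owned(T value) { this.value = value; }

        T get() {
            // Inserted run-time check: fails if another thread reads the value.
            if (Thread.currentThread() != owner) {
                throw new IllegalStateException("thread does not own this object");
            }
            return value;
        }

        T getUnchecked() {
            // What the access amounts to once the check is proven unnecessary.
            return value;
        }
    }

    public static void main(String[] args) {
        Owned<int[]> local = new Owned<>(new int[] {1, 2, 3});
        // 'local' never escapes main(), so type inference plus dataflow
        // analysis could safely rewrite this call to getUnchecked().
        System.out.println(local.get()[0]);
    }
}
```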

    FlashLight: A Dynamic Detector of Shared State, Race Conditions, and Locking Models in Concurrent Java Programs

    Concurrent Java programs are difficult to understand and implement correctly. This difficulty leads to code faults that are the source of many real-world reliability and security problems. Many factors contribute to concurrency faults in Java code; for example, programmers may not understand Java language semantics or, when using a Java library or framework, may not understand that their resulting program is concurrent. This thesis describes a dynamic analysis tool, named FlashLight, that detects shared state and possible race conditions within a program. FlashLight illuminates the concurrency within a program for programmers who are wholly or partially in the dark about their software's concurrency. FlashLight also works in concert with the Fluid assurance tool to propose Greenhouse-style lock policy models based on a program's observed locking behavior. After review by a programmer to ensure reasonableness, these models can be verified by the Fluid assurance tool. The author's combination of a dynamic tool with a program verification system focused on concurrency fault detection and repair is the primary contribution of this research. He applied FlashLight to several concurrent Java programs, including a large commercial web application server. His case study experiences induced him to improve FlashLight to do the following: (1) allow the programmer to specify interesting time quanta (e.g., the start-up phase of the program), and (2) support the common Java programming idiom of not locking shared state during object construction. Both improvements help to reduce false positives. FlashLight introduces an overhead of roughly 1.7 times the original execution time of the program. The most significant limitation of FlashLight is that it is not fully integrated into the Fluid assurance tool with respect to the user experience.
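
    As a concrete illustration of the kind of fault such a dynamic detector observes, the hypothetical sketch below (not FlashLight output) races two threads on an unlocked shared field; updates are lost because the increment is not atomic, and the fix, guarding the field with one consistent lock, is the sort of locking model a tool in this space would propose.

```java
// Hypothetical sketch of a data race a dynamic shared-state detector would
// flag: two threads update the same field without a common lock.
public class RaceSketch {
    static int counter = 0;                    // shared state, no locking policy

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++;                     // read-modify-write: not atomic
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but usually prints less; consistently guarding
        // counter with one lock (or using AtomicInteger) removes the race.
        System.out.println(counter);
    }
}
```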

    Logic programming in the context of multiparadigm programming: the Oz experience

    Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, resulting in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature.
    Comment: 48 pages, to appear in the journal "Theory and Practice of Logic Programming".