9 research outputs found

    Lock Inference Proven Correct

    With the introduction of multi-core CPUs, multi-threaded programming is becoming significantly more popular. Unfortunately, it is difficult for programmers to ensure their code is correct because current languages are too low-level. Atomic sections are a recent language primitive that exposes a higher-level interface to programmers, making concurrent programming more straightforward. Atomic sections can be compiled using transactional memory or lock inference, but ensuring correctness and good performance is a challenge. Transactional memory has problems with I/O and contention, whereas lock inference algorithms are often too imprecise, which translates to a loss of parallelism at runtime. We define a lock inference algorithm with good precision. We give the operational semantics of a model OO language, define a notion of correctness for our algorithm, and then prove correctness using Isabelle/HOL.
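    For illustration, here is a minimal Java sketch (not from the paper; the Account class and id-based lock ordering are assumptions) of why precision matters: a precise lock inference can protect an atomic transfer with just the two accounts it touches, whereas an imprecise analysis degenerates to a single global lock and serialises unrelated transfers.

```java
import java.util.concurrent.locks.ReentrantLock;

class Account {
    final int id;                       // used to order lock acquisition and avoid deadlock
    final ReentrantLock lock = new ReentrantLock();
    long balance;
    Account(int id, long balance) { this.id = id; this.balance = balance; }
}

class Transfers {
    // What a precise lock inference could emit for "atomic { from.balance -= n; to.balance += n; }":
    // only the two accounts named in the section are locked, in a fixed order.
    static void transfer(Account from, Account to, long n) {
        Account first  = from.id < to.id ? from : to;
        Account second = first == from ? to : from;
        first.lock.lock();
        second.lock.lock();
        try {
            from.balance -= n;
            to.balance += n;
        } finally {
            second.lock.unlock();
            first.lock.unlock();
        }
    }

    // An imprecise analysis falls back to one global lock, so disjoint transfers no longer run in parallel.
    static final ReentrantLock GLOBAL = new ReentrantLock();
    static void transferCoarse(Account from, Account to, long n) {
        GLOBAL.lock();
        try { from.balance -= n; to.balance += n; } finally { GLOBAL.unlock(); }
    }
}
```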

    Concurrent data representation synthesis

    We describe an approach for synthesizing data representations for concurrent programs. Our compiler takes as input a program written using concurrent relations and synthesizes a representation of the relations as sets of cooperating data structures, as well as the placement and acquisition of locks to synchronize concurrent access to those data structures. The resulting code is correct by construction: individual relational operations are implemented correctly, and the aggregate set of operations is serializable and deadlock free. The relational specification also permits a high-level optimizer to choose the best performing of many possible legal data representations and locking strategies, which we demonstrate with an experiment autotuning a graph benchmark.
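    As a rough illustration (a hand-written stand-in, not the paper's compiler input or output), a concurrent relation such as edges(src, dst) can be realised as cooperating data structures — here a map from source to its set of destinations — together with a chosen lock placement; the synthesizer's job is to pick both the structures and the locking strategy automatically.

```java
import java.util.*;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// One possible representation of the relation edges(src, dst):
// a map from src to its set of dsts, guarded by a single read-write lock.
class EdgeRelation {
    private final Map<Integer, Set<Integer>> bySrc = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    void insert(int src, int dst) {
        lock.writeLock().lock();
        try {
            bySrc.computeIfAbsent(src, k -> new HashSet<>()).add(dst);
        } finally {
            lock.writeLock().unlock();
        }
    }

    Set<Integer> successors(int src) {
        lock.readLock().lock();
        try {
            return new HashSet<>(bySrc.getOrDefault(src, Set.of()));
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

    A different legal choice — per-source locks, or a concurrent map — would also implement the relation; the point of the approach is that an optimizer can compare such candidates automatically.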

    Reasoning About Lock Placements

    A lock placement describes, for each heap location, which lock guards the location and under what circumstances. We formalize methods for reasoning about lock placements, making precise the interacting obligations between the program, the organization of the heap, and the placement of locks. Our methods capture realistic and subtle situations, such as the placement and correct use of speculative locks and lock assignments that change dynamically with updates to the heap. We present results for flat heaps with no structure, tree-structured heaps, and a language of DAG-shaped heaps with bounded in-degree.
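    A minimal Java sketch of the simplest, static case (illustrative only; the class and field names are assumptions, not the paper's formalism): the placement maps both fields x and y to the same lock, so every access to either field carries the obligation to hold that lock.

```java
import java.util.concurrent.locks.ReentrantLock;

class Point {
    private final ReentrantLock xyLock = new ReentrantLock();  // placement: xyLock guards x and y
    private int x, y;

    void moveBy(int dx, int dy) {
        xyLock.lock();
        try {
            x += dx;   // access to x is permitted: its guarding lock is held
            y += dy;   // the same lock also guards y
        } finally {
            xyLock.unlock();
        }
    }

    int getX() {
        xyLock.lock();
        try { return x; } finally { xyLock.unlock(); }
    }
}
```

    A placement that instead stores the guarding lock in another heap object — say, a tree node's lock guarding its children — changes as the heap is updated, which is the dynamic situation the paper's reasoning methods address.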

    Nested pessimistic transactions for both atomicity and synchronization in concurrent software

    Existing atomic section interface proposals have, thus far, tended only to isolate transactions from each other. Less considered is the coordination of threads performing transactions with respect to one another. Synchronization of nested sections is typically relegated to outside of, and among, the top-level flattened sections. However, existing models do not permit the composition of even simple synchronization constructs such as barriers. The proposed model integrates synchronization as a first-class construct in a truly nested atomic block implementation. The implementation is evaluated on quantitative benchmarks, with qualitative examples of the atomic section interface's expressive power compared with conventional transactional memory implementations.
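    A minimal Java sketch (an assumed setup, not the paper's implementation) of why barriers do not compose with atomic sections implemented as mutually exclusive regions: both threads below enter their "atomic section" (modelled here by one global lock) and then wait for each other at a barrier that can never be passed, because only one thread can be inside a section at a time.

```java
import java.util.concurrent.*;
import java.util.concurrent.locks.ReentrantLock;

public class BarrierInAtomic {
    static final ReentrantLock atomicSection = new ReentrantLock(); // stand-in for a flattened atomic block
    static final CyclicBarrier barrier = new CyclicBarrier(2);

    static void worker() {
        atomicSection.lock();            // "enter atomic section"
        try {
            // Barrier inside the section: it needs both threads, but mutual exclusion
            // guarantees the second thread can never get here while the first waits.
            barrier.await(2, TimeUnit.SECONDS);
            System.out.println("passed barrier");
        } catch (TimeoutException | InterruptedException | BrokenBarrierException e) {
            System.out.println("barrier never passed: " + e);
        } finally {
            atomicSection.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(BarrierInAtomic::worker);
        Thread b = new Thread(BarrierInAtomic::worker);
        a.start(); b.start();
        a.join(); b.join();
    }
}
```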

    Distributed Systems on the .NET Framework Platform

    With the expansion of Internet communication and the related availability of an increasing number of services built on different technologies, distributed systems offer a way to integrate these network-accessible services and provide them to users in a coherent form. The .NET Framework, which provides an environment for application development in the highly distributed setting of the Internet and intranets, can be used for this purpose. This PhD thesis deals with access to shared resources in distributed systems built on the .NET platform. The first part of the work describes the basic principles of distributed systems and the .NET techniques that can be used to implement them. For processing requests of an asynchronous nature, not only in distributed systems, a universal interface for describing asynchronous operations was designed and implemented, extending the standard asynchronous techniques of the .NET platform. To address access to shared resources, a model was designed based on the principles of object-oriented programming, together with a basic algorithm for avoiding deadlock when resources are used by multiple processes (threads) simultaneously. This extensible model has been successfully implemented and its functionality verified on basic scenarios of access to shared resources. After an initial definition of resources and their dependencies, the implemented model lets them be used like any other objects on the .NET platform, with the synchronization mechanisms running transparently in the background.
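    The deadlock-avoidance idea mentioned above is commonly realised by imposing a total order on resources and always acquiring them in that order; a minimal sketch follows (in Java rather than .NET, for consistency with the other examples here, and not the thesis's actual model).

```java
import java.util.*;
import java.util.concurrent.locks.ReentrantLock;

// A shared resource with a total order given by its id.
class Resource implements Comparable<Resource> {
    final long id;
    final ReentrantLock lock = new ReentrantLock();
    Resource(long id) { this.id = id; }
    public int compareTo(Resource other) { return Long.compare(id, other.id); }
}

class OrderedAcquisition {
    // Acquire every requested resource in ascending id order, so no two threads
    // can each hold a resource the other is waiting for (no cycle, hence no deadlock).
    static void withResources(Collection<Resource> requested, Runnable action) {
        List<Resource> ordered = new ArrayList<>(requested);
        Collections.sort(ordered);
        for (Resource r : ordered) r.lock.lock();
        try {
            action.run();
        } finally {
            for (int i = ordered.size() - 1; i >= 0; i--) ordered.get(i).lock.unlock();
        }
    }
}
```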

    Lock Inference for Java

    Atomicity is an important property for concurrent software, as it provides a stronger guarantee against errors caused by unanticipated thread interactions than race-freedom does. However, concurrency control in general is tricky to get right because current techniques are too low-level and error-prone. With the introduction of multicore processors, the problems are compounded. Consequently, a new software abstraction is gaining popularity to take care of concurrency control and the enforcing of atomicity properties, called atomic sections. One possible implementation of their semantics is to acquire a global lock upon entry to each atomic section, ensuring that they execute in mutual exclusion. However, this cripples concurrency, as non-interfering atomic sections cannot run in parallel. Transactional memory is another automated technique for providing atomicity, but it relies on the ability to roll back conflicting atomic sections and thus places restrictions on the use of irreversible operations, such as I/O and system calls, or serialises all sections that use such features. Therefore, from a language designer's point of view, the challenge is to implement atomic sections without compromising performance or expressivity. This thesis explores the technique of lock inference, which infers a set of locks for each atomic section, while attempting to balance the requirements of maximal concurrency, minimal locking overhead and freedom from deadlock. We focus on lock-inference techniques for tackling large Java programs that make use of mature libraries. This improves upon existing work, which either (i) ignores libraries, (ii) requires library implementors to annotate which locks to take, or (iii) only considers accesses performed up to one level deep in library call chains. Each of these prior approaches can therefore lead to atomicity violations. This is a problem because even simple uses of I/O in Java programs can involve large amounts of library code. Our approach is the first to analyse library methods in full and is thus able to soundly handle atomic sections involving complicated real-world side effects, while still permitting atomic sections to run concurrently in cases where their lock sets are disjoint. To validate our claims, we have implemented our techniques in Lockguard, a fully automatic tool that translates Java bytecode containing atomic sections to an equivalent program that uses locks instead. We show that our techniques scale well and, despite protecting all library accesses, we obtain performance comparable to the original locking policy of our benchmarks.
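    The contrast between the naive global-lock compilation and an inferred per-section lock set can be sketched as follows (hypothetical names and translation; this is not Lockguard's actual output). Two atomic sections that reach disjoint data — including the data reached inside the library calls they make — can be given disjoint lock sets and so run in parallel, whereas the global-lock scheme serialises them.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

class InferredLocks {
    static final List<String>  auditLog = new ArrayList<>();
    static final List<Integer> results  = new ArrayList<>();

    // Naive compilation: every atomic section takes the same global lock,
    // so these two sections never run in parallel even though they touch disjoint data.
    static final ReentrantLock GLOBAL = new ReentrantLock();

    // Lock inference instead assigns each section the locks guarding what it (and the
    // library code it calls, e.g. ArrayList.add here) actually reaches.
    static final ReentrantLock auditLock  = new ReentrantLock();
    static final ReentrantLock resultLock = new ReentrantLock();

    static void recordAudit(String msg) {        // source: atomic { auditLog.add(msg); }
        auditLock.lock();
        try { auditLog.add(msg); } finally { auditLock.unlock(); }
    }

    static void recordResult(int value) {        // source: atomic { results.add(value); }
        resultLock.lock();
        try { results.add(value); } finally { resultLock.unlock(); }
    }
}
```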

    Software Transactional Memory Building Blocks

    Exploiting thread-level parallelism has become a part of mainstream programming in recent years. Many approaches to parallelization require threads executing in parallel to also synchronize occasionally (i.e., coordinate concurrent accesses to shared state). Transactional Memory (TM) is a programming abstraction that provides the concept of database transactions in the context of programming languages such as C/C++. This allows programmers to only declare which pieces of a program synchronize, without requiring them to actually implement synchronization and tune its performance, which in turn typically makes TM easier to use than other abstractions such as locks. I have investigated and implemented the building blocks that are required for a high-performance, practical, and realistic TM. They host several novel algorithms and optimizations for TM implementations, both for current hardware and for future hardware extensions for TM, and are being used in or have influenced commercial TM implementations such as the TM support in GCC.
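    The programming-model difference can be sketched with a tiny stand-in (an assumed Stm.atomic helper in Java; GCC's actual TM support is a C/C++ language extension): the programmer only marks the block that must appear atomic, and how it is synchronized — here a single lock, in a real STM versioned reads, write buffering and rollback — is left entirely to the runtime.

```java
import java.util.concurrent.locks.ReentrantLock;

// Toy stand-in for a TM runtime: the programmer declares *what* must be atomic,
// not *how* it is synchronized. A real STM would use per-location metadata,
// conflict detection and rollback instead of one global lock.
final class Stm {
    private static final ReentrantLock runtimeLock = new ReentrantLock();

    static void atomic(Runnable block) {
        runtimeLock.lock();
        try {
            block.run();
        } finally {
            runtimeLock.unlock();
        }
    }
}

class Counter {
    private int value;

    void increment() {
        // Application code names no lock; it only declares the atomic block.
        Stm.atomic(() -> value++);
    }
}
```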

    Component-Based Lock Allocation

    The allocation of lock objects to critical sections in concurrent programs affects both performance and correctness. Recent work explores automatic lock allocation, aiming primarily to minimize conflicts and maximize parallelism by allocating locks to individual critical section interferences. We investigate component-based lock allocation, which allocates locks to entire groups of interfering critical sections. Our allocator depends on a thread-based side effect analysis, and benefits from precise points-to and may-happen-in-parallel information. Thread-local object information has a small impact, and dynamic locks do not improve significantly on static locks. We experiment with a range of small and large Java benchmarks on 2-way, 4-way, and 8-way machines, and find that a single static lock is sufficient for mtrt, that performance degrades by 10% for hsqldb, that jbb2000 becomes mostly serialized, and that for lusearch, xalan, and jbb2005, component-based lock allocation recovers the performance of the original program.
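    The core idea — one lock per connected component of the interference graph of critical sections — can be sketched with a small union-find (illustrative code, not the paper's allocator; the string keys naming critical sections are an assumption).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Group interfering critical sections with union-find, then hand out one lock per group.
class ComponentLockAllocator {
    private final Map<String, String> parent = new HashMap<>();          // critical section -> representative
    private final Map<String, ReentrantLock> lockOf = new HashMap<>();   // representative -> its lock

    private String find(String s) {
        parent.putIfAbsent(s, s);
        String p = parent.get(s);
        if (!p.equals(s)) {
            p = find(p);
            parent.put(s, p);   // path compression
        }
        return p;
    }

    // Record that two critical sections may conflict (touch overlapping shared state in parallel).
    void interferes(String a, String b) {
        parent.put(find(a), find(b));
    }

    // All sections in one component share a single static lock.
    ReentrantLock lockFor(String section) {
        return lockOf.computeIfAbsent(find(section), r -> new ReentrantLock());
    }
}
```

    A dynamic refinement — keying the lock on, say, the receiver object rather than the component — is the variant the paper finds to offer little improvement over such static per-component locks.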