1,258 research outputs found

    Scalable RDF Data Compression using X10

    The Semantic Web comprises enormous volumes of semi-structured data elements. For interoperability, these elements are represented by long strings. Such representations are not efficient for the purposes of Semantic Web applications that perform computations over large volumes of information. A typical method for alleviating the impact of this problem is through the use of compression methods that produce more compact representations of the data. The use of dictionary encoding for this purpose is particularly prevalent in Semantic Web database systems. However, centralized implementations present performance bottlenecks, giving rise to the need for scalable, efficient distributed encoding schemes. In this paper, we describe an encoding implementation based on the asynchronous partitioned global address space (APGAS) parallel programming model. We evaluate performance on a cluster of up to 384 cores and datasets of up to 11 billion triples (1.9 TB). Compared to the state-of-the-art MapReduce algorithm, we demonstrate a speedup of 2.6-7.4x and excellent scalability. These results illustrate the strong potential of the APGAS model for efficient implementation of dictionary encoding and contribute to the engineering of larger-scale Semantic Web applications.
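
    Dictionary encoding as used here boils down to assigning each distinct RDF term a compact integer ID and storing triples as triples of IDs. The sketch below is a minimal, single-node illustration in Rust (the `Dictionary` type and its methods are hypothetical names); the paper's APGAS/X10-based distribution across hundreds of cores is not reproduced.

```rust
use std::collections::HashMap;

/// Minimal RDF-term dictionary: maps each distinct term string to a
/// compact integer ID, so triples can be stored as triples of IDs.
/// Hypothetical single-node sketch; the paper distributes this step
/// with APGAS/X10, which is not modelled here.
struct Dictionary {
    term_to_id: HashMap<String, u64>,
    id_to_term: Vec<String>,
}

impl Dictionary {
    fn new() -> Self {
        Dictionary { term_to_id: HashMap::new(), id_to_term: Vec::new() }
    }

    /// Return the existing ID for `term`, or assign the next free one.
    fn encode(&mut self, term: &str) -> u64 {
        if let Some(&id) = self.term_to_id.get(term) {
            return id;
        }
        let id = self.id_to_term.len() as u64;
        self.term_to_id.insert(term.to_string(), id);
        self.id_to_term.push(term.to_string());
        id
    }

    /// Reverse lookup, needed when query results are decoded back to terms.
    fn decode(&self, id: u64) -> Option<&str> {
        self.id_to_term.get(id as usize).map(String::as_str)
    }
}

fn main() {
    let mut dict = Dictionary::new();
    // One triple: subject, predicate, object as long IRI strings.
    let triple = [
        "http://example.org/alice",
        "http://xmlns.com/foaf/0.1/knows",
        "http://example.org/bob",
    ];
    let encoded: Vec<u64> = triple.iter().map(|t| dict.encode(t)).collect();
    println!("{:?}", encoded);                 // e.g. [0, 1, 2]
    println!("{:?}", dict.decode(encoded[0])); // Some("http://example.org/alice")
}
```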

    Self-adjusting multi-granularity locking protocol for object-oriented databases

    Object-oriented databases have the potential to be used for data-intensive, multi-user applications that are not well served by traditional databases. Although extensive concurrency control research has been done for relational databases, many of those approaches are not suitable for the complex data model of object-oriented databases. This thesis presents a self-adjusting multi-granularity locking protocol (SAML) that chooses an appropriate locking granule according to the requirements of the transactions, incurs less overhead, and provides better concurrency than some of the existing protocols. Although another adaptive multi-granularity protocol, AMGL [1], provides the same degree of concurrency as SAML, SAML has been shown to significantly reduce the number of locks, and hence the locking overhead, compared to AMGL. Experimental results show that SAML performs best when the system workload is high and transactions are long-lived.
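
    For context, multi-granularity locking conventionally relies on intention lock modes and a compatibility matrix over a granule hierarchy (database, class, object). The Rust sketch below shows that standard matrix as an assumed baseline; it is not SAML's self-adjusting policy, which the thesis defines on top of such a scheme.

```rust
/// Standard multi-granularity lock modes; SAML's adaptive granule selection
/// is layered on top of a scheme like this, but the adaptation policy itself
/// is not reproduced here.
#[derive(Clone, Copy)]
enum Mode { IS, IX, S, SIX, X }

/// Classic compatibility matrix: may a lock in `requested` mode be granted
/// on a node already held in `held` mode by another transaction?
fn compatible(held: Mode, requested: Mode) -> bool {
    use Mode::*;
    match (held, requested) {
        (IS, X) | (X, IS) => false, // exclusive excludes even intention-shared
        (IS, _) | (_, IS) => true,  // IS is compatible with IS, IX, S, SIX
        (IX, IX) => true,
        (IX, _) | (_, IX) => false,
        (S, S) => true,
        (S, _) | (_, S) => false,
        (SIX, _) | (_, SIX) => false,
        (X, _) => false,
    }
}

fn main() {
    // Two readers can share a subtree (IS/IS), but writers exclude them.
    assert!(compatible(Mode::IS, Mode::IS));
    assert!(!compatible(Mode::IX, Mode::S));
    assert!(!compatible(Mode::X, Mode::IS));
    println!("compatibility checks passed");
}
```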

    A Concurrency Control Algorithm for an Open and Safe Nested Transaction Model

    We present a concurrency control algorithm for an open and safe nested transaction model. We use prewrite operations in our model to increase concurrency. Prewrite operations are modelled as subtransactions in the nested transaction tree. The subtransaction that initiates prewrite subtransactions is modelled as a recovery point subtransaction. A recovery point subtransaction can release its locks before its ancestors commit. Thus, our model increases concurrency in comparison to other nested transaction models. Our model is useful in an environment of long-running transactions, common in object-oriented databases, computer-aided design, and the software development process.
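
    A toy illustration of the prewrite idea follows: a prewrite announces the value an eventual write will install, so other transactions can read it before the final write, which is where the extra concurrency comes from. The Rust types below are hypothetical and omit locking and recovery entirely; they are not the paper's algorithm.

```rust
/// A data item under a prewrite scheme of the kind sketched in the abstract:
/// a prewrite announces the value a transaction intends to write, so other
/// transactions can read it before the final write is installed.
#[derive(Default)]
struct Item {
    committed: i64,          // last committed (written) value
    prewritten: Option<i64>, // value announced by a prewrite, if any
}

impl Item {
    /// Prewrite: declare the value the eventual write will install. After
    /// this point the recovery point subtransaction may release its locks.
    fn prewrite(&mut self, v: i64) {
        self.prewritten = Some(v);
    }

    /// Readers see the prewritten value if one exists, else the committed one.
    fn read(&self) -> i64 {
        self.prewritten.unwrap_or(self.committed)
    }

    /// The actual write, performed when the ancestors commit.
    fn write(&mut self) {
        if let Some(v) = self.prewritten.take() {
            self.committed = v;
        }
    }
}

fn main() {
    let mut x = Item::default();
    x.prewrite(42);           // subtransaction announces its write
    assert_eq!(x.read(), 42); // other transactions already see 42
    x.write();                // final write at (ancestor) commit time
    assert_eq!(x.committed, 42);
}
```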

    A speculative execution approach to provide semantically aware contention management for concurrent systems

    PhD Thesis. Most modern platforms offer ample potential for parallel execution of concurrent programs, yet concurrency control is required to exploit parallelism while maintaining program correctness. Pessimistic concurrency control, featuring blocking synchronization and mutual exclusion, has given way to transactional memory, which allows the composition of concurrent code in a manner more intuitive for the application programmer. An important component in any transactional memory technique, however, is the policy for resolving conflicts on shared data, commonly referred to as the contention management policy. In this thesis, a Universal Construction is described which provides contention management for software transactional memory. The technique differs from existing approaches in that multiple execution paths are explored speculatively and in parallel. By resolving conflicts through state space exploration, we demonstrate that both concurrent conflicts and semantic conflicts can be solved, promoting multi-threaded program progression. We define a model of computation called Many Systems, which defines the execution of concurrent threads as a state space management problem. An implementation is then presented based on concepts from the model, and we extend the implementation to incorporate nested transactions. Results are provided which compare the performance of our approach with an established contention management policy, under varying degrees of concurrent and semantic conflicts. Finally, we provide performance results from a number of search strategies when nested transactions are introduced.
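
    As a rough illustration of resolving conflicts by exploring the state space, the Rust sketch below tries two conflicting operations in both orders on copies of the state and keeps the first order that satisfies a semantic check. It is a hand-written toy under assumed names, not the thesis' Many Systems model or Universal Construction.

```rust
/// Toy account state; cloned so each speculative path runs on its own copy.
#[derive(Clone, Debug)]
struct Account { balance: i64 }

/// Run one candidate ordering speculatively; discard it on a semantic
/// conflict (here: the balance must never be observed as negative).
fn try_order(initial: &Account, ops: &[fn(&mut Account)]) -> Option<Account> {
    let mut speculative = initial.clone();
    for op in ops {
        op(&mut speculative);
        if speculative.balance < 0 {
            return None; // semantic conflict on this path: discard it
        }
    }
    Some(speculative)
}

fn withdraw(a: &mut Account) { a.balance -= 15; }
fn deposit(a: &mut Account) { a.balance += 20; }

fn main() {
    let initial = Account { balance: 10 };
    // Explore both interleavings of two conflicting operations in turn.
    let orders: [(&str, [fn(&mut Account); 2]); 2] = [
        ("withdraw;deposit", [withdraw, deposit]),
        ("deposit;withdraw", [deposit, withdraw]),
    ];
    for (name, ops) in &orders {
        match try_order(&initial, ops) {
            Some(state) => {
                println!("committing order {name}: {:?}", state);
                break;
            }
            None => println!("discarding order {name}"),
        }
    }
}
```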

    ConDRust: Scalable Deterministic Concurrency from Verifiable Rust Programs

    SAT/SMT solvers and model checkers automate formal verification of sequential programs. Formal reasoning about scalable concurrent programs is still manual and requires expert knowledge. But scalability is a fundamental requirement of current and future programs. Sequential imperative programs compose statements, function/method calls and control flow constructs. Concurrent programming models provide constructs for concurrent composition. Concurrency abstractions such as threads and synchronization primitives such as locks compose the individual parts of a concurrent program that are meant to execute in parallel. We propose instead to compose the individual parts again using sequential composition and to compile this sequential composition into a concurrent one. The developer can use existing tools to formally verify the sequential program, while the translated concurrent program provides the much-needed scalability. Following this insight, we present ConDRust, a new programming model and compiler for Rust programs. The ConDRust compiler translates sequential composition into a concurrent composition based on threads and message-passing channels. During compilation, the compiler preserves the semantics of the sequential program along with much-desired properties such as determinism. Our evaluation shows that our ConDRust compiler generates concurrent deterministic code that can outperform even non-deterministic programs by up to a factor of three for irregular algorithms that are particularly hard to parallelize.
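
    The translation scheme described above lends itself to a small illustration: a three-stage sequential composition h(g(f(x))) rewritten by hand into one thread per stage connected by mpsc channels, which keeps the output deterministic. This is a sketch of the kind of code such a translation could emit, not output of the actual ConDRust compiler.

```rust
use std::sync::mpsc;
use std::thread;

// Sequential composition: out = h(g(f(x))) for each input x.
fn f(x: i64) -> i64 { x + 1 }
fn g(x: i64) -> i64 { x * 2 }
fn h(x: i64) -> i64 { x - 3 }

fn main() {
    let inputs: Vec<i64> = (0..8).collect();

    // Sequential version: the one the developer writes and verifies.
    let sequential: Vec<i64> = inputs.iter().map(|&x| h(g(f(x)))).collect();

    // Hand-written sketch of a concurrent composition such a translation
    // could emit: one thread per stage, connected by FIFO channels, so the
    // output order (and hence the result) stays deterministic.
    let (tx_in, rx_f) = mpsc::channel::<i64>();
    let (tx_f, rx_g) = mpsc::channel::<i64>();
    let (tx_g, rx_h) = mpsc::channel::<i64>();
    let (tx_h, rx_out) = mpsc::channel::<i64>();

    let stage_f = thread::spawn(move || for x in rx_f { tx_f.send(f(x)).unwrap(); });
    let stage_g = thread::spawn(move || for x in rx_g { tx_g.send(g(x)).unwrap(); });
    let stage_h = thread::spawn(move || for x in rx_h { tx_h.send(h(x)).unwrap(); });

    for &x in &inputs { tx_in.send(x).unwrap(); }
    drop(tx_in); // close the pipeline so the stage loops terminate

    let concurrent: Vec<i64> = rx_out.iter().take(inputs.len()).collect();
    for s in [stage_f, stage_g, stage_h] { s.join().unwrap(); }

    // Same semantics as the sequential program, now pipelined.
    assert_eq!(sequential, concurrent);
}
```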

    Parallel run-time for CO-OPN

    Dissertation for the degree of Master in Informatics Engineering. Domain Specific Modeling (DSM) is a methodology for specifying programs or systems at a higher level of abstraction, using domain concepts instead of low-level programming details. To support this approach, we need enough expressive power in terms of those domain concepts, which means we need to develop new languages, usually termed Domain Specific Languages (DSLs). One approach to executing specifications developed with DSLs is to apply a model transformation technique that produces a specification in another language. These transformations are applied successively until the specification reaches a language with an implemented run-time. The language named Concurrent Object-Oriented Petri Nets (CO-OPN) is being used successfully as a target language for such model transformations. CO-OPN is an object-oriented formal language for specifying concurrent systems that separates coordination from computational tasks. CO-OPN offers mechanisms to define a system's structure and behavior and, like DSLs, relieves the developer from stipulating how that structure and behavior are attained by the underlying system. The currently available code generator for CO-OPN produces only sequential code, despite the language's potential for expressing specifications rich in concurrent behavior. The generated sequential code can be executed either in a Sequential Run-Time or in the step simulator, which is part of the CO-OPN Builder IDE. The generation of only sequential code is an obstacle to CO-OPN's adoption, since concurrent specifications cannot be executed in parallel and the language's potential is therefore not fully exploited. This dissertation aims to fill this execution gap by developing a Parallel Run-Time for CO-OPN. The new Run-Time is achieved by adapting the sequential code generator and the existing execution support mechanisms. In this manner, all concurrent specifications that target CO-OPN benefit from thread-safe code, ready for execution in parallel and distributed environments, relieving the developer from delving into parallel programming details. By guaranteeing a safe execution environment, CO-OPN becomes an alternative to the way parallel software is developed today.
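
    As a loose illustration of the "coordination separated from computation" idea and of what thread-safe generated code buys the developer, the Rust sketch below wraps a pure computation in a mutex-protected coordinator so it can be fired from several threads. The names are hypothetical and the code is not produced by the CO-OPN generator.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

/// Pure computational part: no synchronization concerns at all.
fn compute(step: u64) -> u64 {
    step * step
}

/// Coordination part: a thread-safe wrapper that serializes state updates,
/// in the spirit of "coordination separated from computation". A minimal
/// hand-written illustration, not code emitted by the CO-OPN generator.
#[derive(Default)]
struct Coordinator {
    total: Mutex<u64>,
}

impl Coordinator {
    fn fire(&self, step: u64) {
        let result = compute(step);            // computation, lock-free
        *self.total.lock().unwrap() += result; // coordination, under the lock
    }
}

fn main() {
    let coord = Arc::new(Coordinator::default());
    let handles: Vec<_> = (1..=4)
        .map(|step| {
            let coord = Arc::clone(&coord);
            thread::spawn(move || coord.fire(step))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // 1 + 4 + 9 + 16 = 30, regardless of thread interleaving.
    assert_eq!(*coord.total.lock().unwrap(), 30);
}
```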

    Exploiting semantic commutativity in hardware speculation

    Hardware speculative execution schemes such as hardware transactional memory (HTM) enjoy low run-time overheads but suffer from limited concurrency because they rely on reads and writes to detect conflicts. By contrast, software speculation schemes can exploit semantic knowledge of concurrent operations to reduce conflicts. In particular, they often exploit the fact that many operations on shared data, like insertions into sets, are semantically commutative: they produce semantically equivalent results when reordered. However, software techniques often incur unacceptable run-time overheads. To resolve this dichotomy, we present CommTM, an HTM that exploits semantic commutativity. CommTM extends the coherence protocol and conflict detection scheme to support user-defined commutative operations. Multiple cores can perform commutative operations on the same data concurrently and without conflicts. CommTM preserves transactional guarantees and can be applied to arbitrary HTMs. CommTM scales on many operations that serialize in conventional HTMs, like set insertions, reference counting, and top-K insertions, and retains the low overhead of HTMs. As a result, at 128 cores, CommTM outperforms a conventional eager-lazy HTM by up to 3.4x and reduces or eliminates aborts. National Science Foundation (U.S.) (Grant CAREER-1452994)
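
    A software analogy of the commutativity argument: because increments commute, concurrent increments need no read-then-write conflict detection and never have to serialize. The Rust sketch below shows this with a relaxed atomic reference count; CommTM enforces the same idea in the coherence protocol in hardware, which the sketch does not model.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Software analogy of the commutativity argument in the abstract: an
// increment is semantically commutative, so concurrent increments can be
// applied without any read-then-write conflict. This only illustrates why
// the reordering is safe; it is not a model of CommTM's hardware mechanism.
fn main() {
    let refcount = Arc::new(AtomicU64::new(0));

    let handles: Vec<_> = (0..8)
        .map(|_| {
            let rc = Arc::clone(&refcount);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    // A commutative update: no thread reads the current value,
                    // so the order of the increments does not matter.
                    rc.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    // Any interleaving of the 8 * 1000 increments yields the same total.
    assert_eq!(refcount.load(Ordering::Relaxed), 8_000);
}
```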

    A semantically aware transactional concurrency control for GPGPU computing

    PhD Thesis. The increasingly parallel nature of the GPU affords an opportunity for the exploration of multi-threaded computing. As the availability of the GPU has expanded into the area of general-purpose programming, concurrency control is required to exploit parallelism as well as to maintain program correctness. Transactional memory, a generalised solution for concurrent conflicts that allows application programmers to develop concurrent code in a more intuitive manner, is superior to pessimistic concurrency control for general use. The most important component in any transactional memory technique is the policy for resolving conflicts on shared data, namely the contention management policy. The work presented in this thesis aims to develop a software transactional memory model for the GPU that can resolve both concurrent conflicts and semantic conflicts at the same time. The technique differs from existing CPU approaches on account of the different essential execution paths and hardware basis, plus the much greater parallel resources available on the GPU. We demonstrate that both concurrent conflicts and semantic conflicts can be resolved by a particular contention management policy for the GPU, with a different application of locks and priorities from the CPU. The basic problem and a software transactional memory solution idea are proposed first. An implementation is then presented based on the execution mode of this model. After that, we extend the system to also resolve semantic conflicts. Finally, results are provided which compare the performance of our solution with an established GPU software transactional memory and a well-known CPU transactional memory, at varying levels of concurrent and semantic conflicts.
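
    As a minimal, CPU-side illustration of a priority-based contention management decision of the general kind discussed here, the Rust sketch below aborts the lower-priority transaction when two transactions conflict. The types and policy are assumptions for illustration; they are neither the thesis' GPU policy nor GPU code.

```rust
/// Toy transaction descriptor for a priority-based contention manager.
#[derive(Clone, Copy)]
struct Tx {
    id: u32,
    priority: u32, // e.g. grows with age or with work already performed
}

/// Decide which of two conflicting transactions may continue; the other
/// must abort and retry. Ties favour the first argument.
fn resolve_conflict(a: Tx, b: Tx) -> (Tx, Tx) {
    if a.priority >= b.priority { (a, b) } else { (b, a) }
}

fn main() {
    let t1 = Tx { id: 1, priority: 3 };
    let t2 = Tx { id: 2, priority: 7 };
    let (winner, loser) = resolve_conflict(t1, t2);
    println!("tx {} continues, tx {} aborts and retries", winner.id, loser.id);
}
```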