
    Speculative Concurrency Control for Real-Time Databases

    In this paper, we propose a new class of concurrency control algorithms that is especially suited for real-time database applications. Our approach relies on the use of (potentially) redundant computations to ensure that serializable schedules are found and executed as early as possible, thus increasing the chances of a timely commitment of transactions with strict timing constraints. Due to this nature, we term our concurrency control algorithms Speculative, and we refer to them collectively as Speculative Concurrency Control (SCC) algorithms. SCC algorithms combine the advantages of both Pessimistic and Optimistic Concurrency Control (PCC and OCC) algorithms, while avoiding their disadvantages. On the one hand, SCC resembles PCC in that conflicts are detected as early as possible, thus making alternative schedules available in a timely fashion in case they are needed. On the other hand, SCC resembles OCC in that it allows conflicting transactions to proceed concurrently, thus avoiding unnecessary delays that may jeopardize their timely commitment.
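
    Below is a minimal, illustrative C++ sketch of the speculative idea only, not the SCC algorithms proposed in the paper: while an optimistic execution proceeds against a snapshot, a redundant shadow computation is launched as soon as a conflicting update is detected, so a usable result is ready earlier than an abort-and-restart would allow. All names (Versioned, expensive_update, etc.) are hypothetical.

```cpp
#include <atomic>
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

struct Versioned {
    std::atomic<int> value{100};
    std::atomic<int> version{0};
};

// Stand-in for a transaction's computation over the value it read.
int expensive_update(int input) {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return input + 10;
}

int main() {
    Versioned x;

    // Take a snapshot and start the optimistic shadow against it.
    int snap_ver = x.version.load();
    int snap_val = x.value.load();
    auto optimistic = std::async(std::launch::async, expensive_update, snap_val);

    // A concurrent writer creates a read-write conflict.
    std::thread writer([&] {
        x.value.store(250);
        x.version.fetch_add(1);
    });
    writer.join();

    // Conflict detected early: launch a redundant speculative shadow now,
    // instead of discovering the conflict only at commit-time validation.
    bool conflict = (x.version.load() != snap_ver);
    std::future<int> speculative;
    if (conflict)
        speculative = std::async(std::launch::async, expensive_update, x.value.load());

    // Commit whichever shadow corresponds to a serializable outcome.
    int result = conflict ? speculative.get() : optimistic.get();
    std::cout << "committed result: " << result << "\n";
}
```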

    Saturation Effects and the Concurrency Hypothesis: Insights from an Analytic Model

    Sexual partnerships that overlap in time (concurrent relationships) may play a significant role in the HIV epidemic, but the precise effect is unclear. We derive edge-based compartmental models of disease spread in idealized dynamic populations with and without concurrency to allow for an investigation of its effects. Our models assume that partnerships change in time and individuals enter and leave the at-risk population. Infected individuals transmit at a constant per-partnership rate to their susceptible partners. In our idealized populations we find regions of parameter space where the existence of concurrent partnerships leads to substantially faster growth and higher equilibrium levels, but also regions in which it has very little impact on either the growth or the equilibrium. Additionally, we find mixed regimes in which concurrency significantly increases the early growth but has little effect on the ultimate equilibrium level. Guided by model predictions, we discuss general conditions under which concurrent relationships would be expected to have large or small effects in real-world settings. Our observation that the impact of concurrency saturates suggests that concurrency-reducing interventions may be most effective in populations with low to moderate concurrency.
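
    As a quick illustration of the constant per-partnership transmission assumption (the notation below is assumed, not necessarily the paper's): if transmission within a partnership occurs as a Poisson process at rate beta, a susceptible partner escapes infection over a contact of duration tau with probability e^(-beta*tau), so

```latex
% Illustrative only; \beta and \tau are assumed notation, not taken from the paper.
P(\text{transmission} \mid \text{duration } \tau) \;=\; 1 - e^{-\beta \tau},
\qquad
P(\text{transmission}) \;=\; 1 - \mathbb{E}\!\left[e^{-\beta T}\right],
```

    where T is the (random) partnership duration.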

    Tuning the Level of Concurrency in Software Transactional Memory: An Overview of Recent Analytical, Machine Learning and Mixed Approaches

    Synchronization transparency offered by Software Transactional Memory (STM) must not come at the expense of run-time efficiency, which demands that the STM designer include mechanisms oriented to performance and other quality indexes. In particular, one core issue in STM is exploiting parallelism while avoiding thrashing phenomena due to excessive transaction rollbacks, caused by excessively high contention on logical resources, namely concurrently accessed data portions. One means to address run-time efficiency is to dynamically determine the best-suited level of concurrency (number of threads) to be employed for running the application (or specific application phases) on top of the STM layer. If the concurrency level is too low, parallelism is hampered; conversely, over-dimensioning it may give rise to the aforementioned thrashing caused by excessive data contention, which also degrades energy efficiency. In this chapter we overview a set of recent techniques aimed at building “application-specific” performance models that can be exploited to dynamically tune the level of concurrency to the best-suited value. Although they share some base concepts in modeling system performance versus the degree of concurrency, these techniques rely on disparate methods, such as machine learning or analytic methods (or combinations of the two), and achieve different tradeoffs between the precision of the performance model and the latency of model instantiation. Implications of the different tradeoffs in real-life scenarios are also discussed.
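
    The following C++ sketch shows the general idea of on-line concurrency-level tuning by hill climbing on measured throughput; it is not any specific analytical or machine-learning technique surveyed in the chapter, and measured_throughput() is a synthetic stand-in for sampling a real STM workload.

```cpp
#include <algorithm>
#include <iostream>
#include <random>

// Hypothetical measurement: commit throughput observed at a given thread count.
// The concave shape mimics parallelism helping until contention-induced
// rollbacks dominate; noise mimics measurement variability.
double measured_throughput(int threads) {
    static std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 0.5);
    double ideal = threads * 10.0 - 0.8 * threads * threads;
    return std::max(0.0, ideal + noise(rng));
}

int main() {
    const int max_threads = 16;
    int level = 1;  // current concurrency level (number of active threads)

    for (int epoch = 0; epoch < 30; ++epoch) {
        // Probe a neighbouring level and keep whichever performs better.
        int candidate = std::clamp(level + ((epoch % 2) ? 1 : -1), 1, max_threads);
        if (measured_throughput(candidate) > measured_throughput(level))
            level = candidate;
        std::cout << "epoch " << epoch << ": concurrency level = " << level << "\n";
    }
}
```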

    Clarifying and compiling C/C++ concurrency: from C++11 to POWER

    The upcoming C and C++ revised standards add concurrency to the languages, for the first time, in the form of a subtle *relaxed memory model* (the *C++11 model*). This aims to permit compiler optimisation and to accommodate the differing relaxed-memory behaviours of mainstream multiprocessors, combining simple semantics for most code with high-performance *low-level atomics* for concurrency libraries. In this paper, we first establish two simpler but provably equivalent models for C++11, one for the full language and another for the subset without consume operations. Subsetting further to the fragment without low-level atomics, we identify a subtlety arising from atomic initialisation and prove that, under an additional condition, the model is equivalent to sequential consistency for race-free programs.
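
    A small C++11 example of the low-level atomics mentioned above: release/acquire message passing. The acquire load that observes ready == true synchronizes-with the release store, so the non-atomic read of data is race-free and must see 42. (With only the default memory_order_seq_cst accesses and no data races, observable behaviour is sequentially consistent, which is the kind of guarantee the fragment-level equivalence result concerns.)

```cpp
#include <atomic>
#include <cassert>
#include <thread>

int data = 0;                     // ordinary (non-atomic) shared variable
std::atomic<bool> ready{false};   // low-level atomic used as a publication flag

void producer() {
    data = 42;                                        // plain write
    ready.store(true, std::memory_order_release);     // publish
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {} // spin until published
    assert(data == 42);                               // guaranteed by happens-before
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```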

    Four domains for concurrency

    We give four domains for concurrency in a uniform way by means of domain equations. The domains are intended for modelling the four possible combinations of linear time versus branching time, and of interleaving versus noninterleaving concurrency. We use the linear-time, noninterleaved domain to give operational and denotational semantics for a simple concurrent language with recursion, and prove that O = D.
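
    For readers unfamiliar with the style, domain equations of this kind typically have the following shape (illustrative only; these are not necessarily the equations used in the paper): a linear-time domain collects sets of action sequences, while a branching-time domain keeps the branching structure by recursing under a powerdomain. The statement O = D says that the operational semantics O and the denotational semantics D assign the same meaning to every program.

```latex
% Illustrative shape only: not the paper's definitions.
P_{\mathrm{lin}} \;\cong\; \mathcal{P}\!\left(A^{\infty}\right)
\quad\text{(linear time)}
\qquad
P_{\mathrm{br}} \;\cong\; \mathcal{P}\!\left(A \times P_{\mathrm{br}}\right)
\quad\text{(branching time)}
```

    where A is a set of atomic actions and A^infinity denotes the finite and infinite sequences over A.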
