636 research outputs found

    Tuning the Level of Concurrency in Software Transactional Memory: An Overview of Recent Analytical, Machine Learning and Mixed Approaches

    The synchronization transparency offered by Software Transactional Memory (STM) must not come at the expense of run-time efficiency, which demands that the STM designer include mechanisms oriented to performance and other quality indexes. In particular, one core issue in STM is exploiting parallelism while avoiding thrashing caused by excessive transaction rollbacks, which arise when contention on logical resources (concurrently accessed data portions) becomes too high. One way to address run-time efficiency is to dynamically determine the best-suited level of concurrency (number of threads) for running the application, or specific application phases, on top of the STM layer. If the concurrency level is too low, parallelism is hampered; if it is over-dimensioned, the aforementioned thrashing due to data contention arises, which also reduces energy efficiency. In this chapter we overview a set of recent techniques for building “application-specific” performance models that can be exploited to dynamically tune the level of concurrency to the best-suited value. Although these techniques share some base concepts in modeling system performance versus the degree of concurrency, they rely on disparate methods, such as machine learning, analytic methods, or combinations of the two, and achieve different tradeoffs between the precision of the performance model and the latency of model instantiation. Implications of the different tradeoffs in real-life scenarios are also discussed.
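
    As a rough illustration of what dynamically tuning the level of concurrency can look like operationally, the following minimal Java sketch hill-climbs on the thread count. It is not one of the analytical or machine-learning models surveyed in the chapter; the throughputProbe hook (committed transactions per second observed at a given thread count) is an assumed measurement callback.

    // Minimal hill-climbing sketch for concurrency-level tuning (illustrative only,
    // not one of the chapter's models). throughputProbe is a hypothetical hook that
    // reports committed transactions per second at the requested thread count.
    import java.util.function.IntToDoubleFunction;

    public class ConcurrencyTuner {
        private final int maxThreads;
        private final IntToDoubleFunction throughputProbe; // assumed measurement hook

        public ConcurrencyTuner(int maxThreads, IntToDoubleFunction throughputProbe) {
            this.maxThreads = maxThreads;
            this.throughputProbe = throughputProbe;
        }

        /** Greedily adjusts the thread count until throughput stops improving. */
        public int tune(int initialThreads) {
            int threads = initialThreads;
            double best = throughputProbe.applyAsDouble(threads);
            int step = 1;
            while (true) {
                int candidate = Math.max(1, Math.min(maxThreads, threads + step));
                if (candidate == threads) break;      // hit the lower or upper bound
                double observed = throughputProbe.applyAsDouble(candidate);
                if (observed > best) {                // more (or fewer) threads still pay off
                    best = observed;
                    threads = candidate;
                } else if (step > 0) {                // passed the peak: try shrinking instead
                    step = -1;
                } else {
                    break;                            // shrinking does not help either: stop
                }
            }
            return threads;
        }
    }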

    Quiescent consistency: Defining and verifying relaxed linearizability

    Concurrent data structures like stacks, sets or queues need to be highly optimized to provide large degrees of parallelism with reduced contention. Linearizability, a key consistency condition for concurrent objects, sometimes limits the potential for optimization. Hence algorithm designers have started to build concurrent data structures that are not linearizable but only satisfy relaxed consistency requirements. In this paper, we study quiescent consistency as proposed by Shavit and Herlihy, which is one such relaxed condition. More precisely, we give the first formal definition of quiescent consistency, investigate its relationship with linearizability, and provide a proof technique for it based on (coupled) simulations. We demonstrate our proof technique by verifying quiescent consistency of a (non-linearizable) FIFO queue built using a diffraction tree. © 2014 Springer International Publishing Switzerland
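
    To make the gap between the two conditions concrete, the following small Java sketch (my own illustration, not the paper’s formal definition or proof technique) checks the one ordering constraint quiescent consistency imposes on a FIFO queue: values enqueued before a quiescent point must leave the queue before values enqueued after it, while operations never separated by a quiescent point may be reordered even if they do not overlap in real time, which linearizability forbids.

    // Sketch of a cross-quiescence ordering check for a FIFO queue history
    // (an informal illustration of quiescent consistency, not the paper's definition).
    import java.util.List;

    public class QuiescentQueueCheck {

        /**
         * segments.get(s) holds the values enqueued during the s-th quiescence-free
         * burst; dequeued is the order in which values later come out. Only ordering
         * across segments is enforced; order inside a segment is unconstrained.
         */
        static boolean respectsQuiescentOrder(List<List<Integer>> segments,
                                              List<Integer> dequeued) {
            int[] segmentOf = new int[dequeued.size()];
            for (int i = 0; i < dequeued.size(); i++) {
                for (int s = 0; s < segments.size(); s++) {
                    if (segments.get(s).contains(dequeued.get(i))) segmentOf[i] = s;
                }
            }
            for (int i = 1; i < dequeued.size(); i++) {
                if (segmentOf[i] < segmentOf[i - 1]) return false; // earlier burst after later one
            }
            return true;
        }

        public static void main(String[] args) {
            // enq(1), enq(2), enq(3) form one quiescence-free burst (say enq(3) overlaps
            // both others), after which the queue is quiescent; dequeues return 2, 3, 1.
            // If enq(1) finished before enq(2) started, linearizability forbids dequeuing
            // 2 before 1, yet the cross-quiescence check accepts the history.
            System.out.println(respectsQuiescentOrder(
                    List.of(List.of(1, 2, 3)), List.of(2, 3, 1))); // prints true
        }
    }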

    A Topological Treatment of Early-Deciding Set-Agreement

    This paper considers the k-set-agreement problem in a synchronous message passing distributed system where up to t processes can fail by crashing. We determine the number of communication rounds needed for all correct processes to reach a decision in a given run, as a function of k, the degree of coordination, and of f ≤ t, the number of processes that actually fail in the run. We prove a lower bound of min(⌊f/k⌋ + 2, ⌊t/k⌋ + 1) rounds. Our proof uses simple topological tools to reason about runs of a full information set-agreement protocol. In particular, we introduce a new topological operator, which we call the early deciding operator, to capture rounds where k processes fail but correct processes see only k-1 failures.
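
    As a concrete instantiation of the bound (with illustrative numbers not taken from the paper): for t = 5 crash-prone processes, coordination degree k = 2, and f = 3 actual crashes in a run,

    \[
      \min\!\left(\left\lfloor \tfrac{f}{k} \right\rfloor + 2,\;
                  \left\lfloor \tfrac{t}{k} \right\rfloor + 1\right)
      = \min\!\left(\left\lfloor \tfrac{3}{2} \right\rfloor + 2,\;
                    \left\lfloor \tfrac{5}{2} \right\rfloor + 1\right)
      = \min(3, 3) = 3,
    \]

    so at least three rounds are needed before all correct processes can decide in such a run.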

    The Impending Wave of Legal Malpractice Litigation - Predictions, Analysis, and Proposals for Change.

    Attorneys tend to be viewed antithetically, at once greedy and manipulative, but also respected and admired. Given this odd mixture of respect and disdain, attorneys are fortunate to have generally avoided being targets as potential defendants. Nevertheless, circumstances in Texas have changed, creating a new legal climate wherein attorneys may soon become defendants of choice. Attorneys in Texas are at a significantly greater risk of becoming the subject of a malpractice suit than they were in the past. Yet the fact that statistics indicate an increase in the number of malpractice claims does not mean that more malpractice is being committed or that attorneys are less competent than in previous years. A variety of factors can explain the statistics; these include the disappearance of the traditional congeniality of the bar and the willingness of lawyers to bring suit against each other. Furthermore, these figures show plaintiffs’ claims today are more fact-specific and based on a myriad of legal theories spanning the entire spectrum of the attorney’s representation. Little argument can be made that the number of suits against attorneys will not increase dramatically in the next few years. For lawyers to continue to play the role of advocates in the justice system, establishing safeguards is crucial to prevent every unhappy outcome for a litigant from turning into a subsequent malpractice claim. Rather than reacting after the inundation of malpractice claims is underway, Texas and the Texas bar would be better served if proactive measures were taken. Such measures must be carefully drafted not only to provide attorneys with protection from unwarranted claims, but also to promote the public interest in ensuring truly egregious malpractice claims are brought to the attention of the bar grievance committee.

    This study applied the Model of Acidification

    of Groundwater in Catchments (MAGIC) to estimate the sensitivity of 66 watersheds in the Southern Blue Ridge Province of the Southern Appalachian Mountains, United States, to changes in atmospheric sulfur (S) deposition. MAGIC predicted that stream acid neutralizing capacity (ANC) values were above 20 μeq/L in all modeled watersheds in 1860. Hindcast simulations suggested that the media

    On Correctness of Data Structures under Reads-Write Concurrency

    We study the correctness of shared data structures under reads-write concurrency. A popular approach to ensuring correctness of read-only operations in the presence of concurrent updates is read-set validation, which checks that all read variables have not changed since they were first read. In practice, this approach is often too conservative, which adversely affects performance. In this paper, we introduce a new framework for reasoning about correctness of data structures under reads-write concurrency, which replaces validation of the entire read-set with more general criteria. Namely, instead of verifying that all read variables remain unchanged, we verify conditions over the shared variables, which we call base conditions. We show that reading values that satisfy some base condition at every point in time implies correctness of read-only operations executing in parallel with updates. Somewhat surprisingly, the resulting correctness guarantee is not equivalent to linearizability, and is instead captured through two new conditions: validity and regularity. Roughly speaking, the former requires that a read-only operation never reaches a state unreachable in a sequential execution; the latter generalizes Lamport’s notion of regularity for arbitrary data structures, and is weaker than linearizability. We further extend our framework to capture also linearizability. We illustrate how our framework can be applied for reasoning about correctness of a variety of implementations of data structures such as linked lists.
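
    As a toy illustration of reading under a base condition rather than full read-set validation (my own example, not taken from the paper’s framework), consider two balances that change only through transfers, so every sequential state satisfies balance[0] + balance[1] == TOTAL. A read-only operation can accept any pair of values it read that satisfies this invariant, without re-checking that the individual variables are unchanged.

    // Toy sketch in the spirit of base conditions (not the paper's formal definitions).
    import java.util.concurrent.atomic.AtomicLongArray;

    public class BaseConditionRead {
        static final long TOTAL = 100;
        static final AtomicLongArray balance = new AtomicLongArray(new long[] {60, 40});

        /** Updater: moves amount from account 0 to account 1 in two non-atomic steps. */
        static void transfer(long amount) {
            balance.addAndGet(0, -amount);   // a concurrent reader may observe this...
            balance.addAndGet(1, amount);    // ...without yet observing this
        }

        /** Read-only query validated by a base condition instead of read-set checks. */
        static long readAccountZero() {
            while (true) {
                long a = balance.get(0);
                long b = balance.get(1);
                if (a + b == TOTAL) {        // base condition: the pair read is consistent
                    return a;                // no need to verify that a and b are unchanged
                }
                // inconsistent mix caught mid-transfer: retry the read-only operation
            }
        }

        public static void main(String[] args) {
            new Thread(() -> transfer(25)).start();
            System.out.println(readAccountZero()); // prints 60 or 35, never a torn value
        }
    }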

    Parallelizing Deadlock Resolution in Symbolic Synthesis of Distributed Programs

    Previous work has shown that there are two major complexity barriers in the synthesis of fault-tolerant distributed programs: (1) generation of the fault-span, the set of states reachable in the presence of faults, and (2) resolving deadlock states, from which the program has no outgoing transitions. Of these, the former closely resembles model checking and, hence, techniques for efficient verification are directly applicable to it. We therefore focus on expediting the latter with the use of multi-core technology. We present two approaches for parallelization based on different design choices. The first approach is based on the computation of equivalence classes of program transitions (called group computation) that are needed due to the issue of distribution (i.e., the inability of processes to atomically read and write all program variables). We show that in most cases the speedup of this approach is close to the ideal speedup, and in some cases it is superlinear. The second approach uses the traditional technique of partitioning deadlock states among multiple threads. However, our experiments show that the speedup for this approach is small. Consequently, our analysis demonstrates that the simple approach of parallelizing the group computation is likely to be the more effective method for using multi-core computing in the context of deadlock resolution.
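
    The first approach amounts to farming out independent group expansions to worker threads and merging the results. The sketch below shows only that generic pattern with java.util.concurrent; groupOf is a hypothetical stand-in for the program-specific expansion of a transition into its equivalence class, and the paper’s actual implementation operates on symbolic representations rather than explicit sets.

    // Generic worker-pool sketch of parallel group computation (illustrative only).
    import java.util.*;
    import java.util.concurrent.*;

    public class ParallelGroupComputation {
        // Hypothetical placeholder: expand one transition into its equivalence class.
        static Set<String> groupOf(String transition) {
            return Set.of(transition, transition + "'");
        }

        static Set<String> computeGroups(List<String> candidates, int threads)
                throws InterruptedException, ExecutionException {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            try {
                List<Callable<Set<String>>> tasks = new ArrayList<>();
                for (String t : candidates) {
                    tasks.add(() -> groupOf(t));            // one group expansion per task
                }
                Set<String> result = ConcurrentHashMap.newKeySet();
                for (Future<Set<String>> f : pool.invokeAll(tasks)) {
                    result.addAll(f.get());                 // merge the per-transition groups
                }
                return result;
            } finally {
                pool.shutdown();
            }
        }

        public static void main(String[] args) throws Exception {
            System.out.println(computeGroups(List.of("t1", "t2", "t3"), 4));
        }
    }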