
    AJITTS: adaptive just-in-time transaction scheduling

    Lecture Notes in Computer Science 7891, 2013. Distributed transaction processing has benefited greatly from optimistic concurrency control protocols, which avoid costly fine-grained synchronization. However, the performance of these protocols degrades significantly when the workload increases, namely by leading to a substantial number of aborted transactions due to concurrency conflicts. Our approach stems from the observation that the abort rate increases with the load because already executed transactions queue for longer periods of time waiting for their turn to be certified and committed. We thus propose an adaptive algorithm for judiciously scheduling transactions to minimize the time during which they are vulnerable to being aborted by concurrent transactions, thereby reducing the overall abort rate. We do so by throttling transaction execution with an adaptive mechanism based on the locally known state of globally executing transactions, which includes out-of-order execution. Our evaluation using traces from the industry-standard TPC-E workload shows that the number of aborted transactions can be kept bounded as system load increases, while fully utilizing system resources and thus scaling transaction processing throughput.
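    The abstract's core mechanism, a feedback loop that throttles transaction admission based on observed aborts, can be illustrated with a minimal sketch. This is not the AJITTS algorithm itself: the `AdaptiveThrottle` class, its parameters, and the additive adjustment rule are all illustrative assumptions standing in for the paper's adaptive, state-based scheduler.

```python
import time

class AdaptiveThrottle:
    """Hypothetical abort-rate-driven admission throttle (not AJITTS itself)."""

    def __init__(self, target_abort_rate=0.05, step=0.001):
        self.target = target_abort_rate  # acceptable fraction of aborts
        self.delay = 0.0                 # seconds each transaction is held back
        self.step = step                 # additive adjustment per observed outcome

    def on_outcome(self, aborted: bool) -> None:
        # Feedback: slow admissions down when aborts are observed,
        # creep back toward full speed while the system is conflict-free.
        if aborted:
            self.delay += self.step
        else:
            self.delay = max(0.0, self.delay - self.step * self.target)

    def admit(self) -> None:
        # Delaying execution shrinks the window between a transaction
        # finishing and its certification turn, which is when it is
        # vulnerable to being aborted by concurrent commits.
        if self.delay > 0.0:
            time.sleep(self.delay)
```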

    Using lightweight modeling to understand Chord


    Transactional failure recovery for a distributed key-value store

    With the advent of cloud computing, many applications have embraced the ensuing paradigm shift towards modern distributed key-value data stores, like HBase, in order to benefit from the elastic scalability on offer. However, many applications still hesitate to make the leap from the traditional relational database model simply because they cannot compromise on the standard transactional guarantees of atomicity, isolation, and durability. To get the best of both worlds, one option is to integrate an independent transaction management component with a distributed key-value store. In this paper, we discuss the implications of this approach for durability. In particular, if the transaction manager provides durability (e.g., through logging), then we can relax durability constraints in the key-value store. However, if a component fails (e.g., a client or a key-value server), then we need a coordinated recovery procedure to ensure that commits are persisted correctly. In our research, we integrate an independent transaction manager with HBase. Our main contribution is a failure recovery middleware for the integrated system, which tracks the progress of each commit as it is flushed down by the client and persisted within HBase, so that we can recover reliably from failures. During recovery, commits that were interrupted by the failure are replayed from the transaction management log. Importantly, the recovery process does not interrupt transaction processing on the available servers. Using a benchmark, we evaluate the impact of component failure, and subsequent recovery, on application performance.
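    The recovery step described above, replaying interrupted commits from the transaction management log, might look roughly like the following sketch. The log record format and the `kv` interface (`is_persisted`, `put`, `mark_persisted`) are invented for illustration; they are not HBase APIs or the paper's actual middleware.

```python
def recover(tm_log, kv):
    """Hypothetical replay of commits that the transaction manager logged
    as durable but that were not fully persisted in the key-value store
    before the failure. All interfaces here are assumptions."""
    for record in tm_log:                    # records ordered by commit timestamp
        txn_id, writes = record["txn"], record["writes"]
        if kv.is_persisted(txn_id):          # per-commit progress tracking
            continue                         # this commit was flushed before the crash
        for key, value in writes.items():
            kv.put(key, value)               # replaying the write set is idempotent
        kv.mark_persisted(txn_id)            # record progress so replay itself can resume
```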

    Does Simple Binary Crossover Hasten RCGA Convergence?

    A Real Coded Genetic Algorithm (RCGA) is a type of GA that operates on chromosomes with real-valued parameters. Different mutation and crossover operations are defined for RCGAs. One usable crossover for this kind of GA is to treat its chromosomes simply as bit strings and apply the same operations as a Binary Coded GA. In this paper, we attempt to show that this kind of crossover cannot hasten the convergence process unless the break points fall at the boundaries of parameters in the chromosome.
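    The claim is easy to reproduce numerically. The sketch below (an illustration, not the paper's experimental setup) applies one-point binary crossover to two real-valued genes in IEEE-754 encoding; a break point inside a parameter's bits, here in the exponent field, produces offspring whose magnitudes are unrelated to either parent.

```python
import struct

def float_to_bits(x: float) -> str:
    """32-bit IEEE-754 encoding of x as a bit string."""
    return format(struct.unpack(">I", struct.pack(">f", x))[0], "032b")

def bits_to_float(bits: str) -> float:
    return struct.unpack(">f", struct.pack(">I", int(bits, 2)))[0]

def one_point_crossover(a: float, b: float, point: int):
    """Simple binary crossover applied blindly to two real-valued genes."""
    ba, bb = float_to_bits(a), float_to_bits(b)
    return (bits_to_float(ba[:point] + bb[point:]),
            bits_to_float(bb[:point] + ba[point:]))

# Breaking inside the exponent bits scatters the offspring, so such
# crossovers do not move the population toward convergence:
print(one_point_crossover(1.5, 2.5, 4))   # roughly (5.8e-10, 6.4e+09)
```

    Only when the break point coincides with a parameter boundary does the operation reduce to swapping whole parameters between the parents.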

    Investigation on some maternal factors affecting the birth of preterm infants: a case-control study

    Background: Infant mortality is considered a key healthcare index in every country. The outcomes of preterm birth are among the main and direct causes of neonatal mortality. Therefore, the present research aims to investigate some maternal factors influencing preterm birth. Materials and Methods: This observational case-control study compared preterm infants as the case group with 100 term babies as the control group. The questionnaires were completed through interviews with mothers or by reviewing hospital files. Results: The results of this study showed markedly higher odds of premature birth in women with multiple pregnancies, smoking, placenta previa, uterine problems, and placental abruption compared to mothers with no history of such problems. In mothers with cervical incompetence, the odds of delivering a preterm baby are 11 times as high as in mothers without this problem. Similarly, the odds are 9.33 times as high among mothers with a history of placenta previa. Conclusion: Identifying the maternal factors that influence preterm birth, together with attentive care during pregnancy, can significantly reduce preterm births.
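    For readers unfamiliar with case-control statistics, the reported "times as high" figures are odds ratios. The counts below are made up purely to show the computation; only the control-group size (100) comes from the abstract.

```python
# Hypothetical 2x2 table for one factor (e.g., a history of placenta previa).
exposed_cases, unexposed_cases = 11, 89        # preterm (case) group, invented counts
exposed_controls, unexposed_controls = 1, 99   # term (control) group, n = 100 as reported

# Odds ratio: odds of exposure among cases over odds of exposure among controls.
odds_ratio = (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)
print(f"OR = {odds_ratio:.2f}")                # OR = 12.24 with these invented counts
```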

    Towards Communication-Based Steering of Complex Distributed Systems

    Quantitative verification is an established automated technique that can ensure the predictability and dependability of software systems which exhibit probabilistic behaviour. Since offline usage of quantitative verification is infeasible for large-scale complex systems that continuously adapt to a changing environment, quantitative runtime verification was proposed as an alternative. Using an illustrative case study of communicating, distributed probabilistic processes, we formulate the problem of quantitative steering, a runtime technique that involves system monitoring, prediction of future errors, and enforcement that steers the system's behaviour away from error states. We consider a communication-based variant of steering in which enforcement is achieved by modifying the contents of communication channels. Our approach is based on stochastic games, where one player is the system and the other players assume the role of the controller; steering thus reduces to finding a controller strategy that meets the given quantitative goal. We discuss the solution to the quantitative steering problem and its extensions, inspired by complex real-world scenarios.
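    As a rough intuition for communication-based steering, the toy loop below monitors a channel and rewrites messages whose predicted error probability exceeds the quantitative goal. The message types, the fixed error model, and the `noop` replacement are all invented for illustration; the paper's actual formulation solves a stochastic game rather than applying a fixed threshold.

```python
# Assumed per-message error model and quantitative goal (illustrative only).
ERROR_PROB = {"retry": 0.4, "data": 0.05}
THRESHOLD = 0.2   # goal: keep predicted P(error) at or below 0.2

def steer(channel):
    """Monitor channel contents and enforce the goal by rewriting
    messages predicted to drive the receiver toward an error state."""
    steered = []
    for msg in channel:
        if ERROR_PROB.get(msg, 0.0) > THRESHOLD:
            steered.append("noop")   # controller move: neutralize the risky message
        else:
            steered.append(msg)
    return steered

print(steer(["data", "retry", "data"]))   # ['data', 'noop', 'data']
```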

    Verifying Systems Rules Using Rule-Directed Symbolic Execution

    Systems code must obey many rules, such as “opened files must be closed.” One approach to verifying rules is static analysis, but this technique cannot infer precise runtime effects of code, often emitting many false positives. An alternative is symbolic execution, a technique that verifies program paths over all inputs up to a bounded size. However, when applied to verify rules, existing symbolic execution systems often blindly explore many redundant program paths while missing relevant ones that may contain bugs. Our key insight is that only a small portion of paths are relevant to rules, and the rest (the majority) of paths are irrelevant and do not need to be verified. Based on this insight, we create WOODPECKER, a new symbolic execution system for effectively checking rules on systems programs. It provides a set of built-in checkers for common rules, and an interface for users to easily check new rules. It directs symbolic execution toward the program paths relevant to a checked rule, and soundly prunes redundant paths, exponentially speeding up symbolic execution. It is designed to be heuristic-agnostic, enabling users to leverage existing powerful search heuristics. Evaluation on 136 systems programs totaling 545K lines of code, including some of the most widely used programs, shows that, with a time limit of typically just one hour for each verification run, WOODPECKER effectively verifies 28.7% of the program and rule combinations over bounded input, whereas KLEE, an existing symbolic execution system, verifies only 8.5%. For the remaining combinations, WOODPECKER verifies 4.6 times as many relevant paths as KLEE. With a longer time limit, WOODPECKER verifies many more paths than KLEE, e.g., 17 times as many with a four-hour limit. WOODPECKER detects 113 rule violations, including 10 serious data loss errors, with the 2 most serious ones already confirmed by the corresponding developers.
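    The pruning insight, that branches touching no rule-relevant events cannot change the verdict, can be shown on a toy path explorer. Everything here (the event-list program encoding, the `relevant` test, the open/close rule checker) is a simplification invented for illustration; WOODPECKER itself checks real programs via symbolic execution.

```python
RULE_EVENTS = {"open", "close"}   # events the checked rule cares about

def relevant(alternatives):
    """True if any alternative of this branch emits a rule event."""
    return any(ev in RULE_EVENTS for choice in alternatives for ev in choice)

def explore(prefix, branches, out):
    """Enumerate paths, but take one representative choice through
    branches that are irrelevant to the rule; such choices cannot
    affect the open/close balance, so the pruning is sound here."""
    if not branches:
        out.append(prefix)
        return
    head, tail = branches[0], branches[1:]
    for choice in (head if relevant(head) else head[:1]):
        explore(prefix + choice, tail, out)

# Program: open a file, log something (irrelevant branch), then maybe close.
branches = [
    [["open"]],
    [["log_a"], ["log_b"]],   # pruned: one representative choice suffices
    [["close"], []],          # relevant: both alternatives explored
]
paths = []
explore([], branches, paths)
for p in paths:
    print(p, "ok" if p.count("open") == p.count("close") else "VIOLATION")
```

    With the logging branch pruned, the explorer checks 2 paths instead of 4 and still finds the path that leaves the file open.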