
    Nucleotide Excision Repair, Genome Stability, and Human Disease: New Insight from Model Systems

    Nucleotide excision repair (NER) is one of several DNA repair pathways that are universal throughout phylogeny. NER has a broad substrate specificity and is capable of removing several classes of DNA lesions, including those that accumulate upon exposure to UV radiation. The loss of this activity in NER-defective mutants gives rise to a characteristic sensitivity to UV that, in humans, is manifested as a greatly elevated sensitivity to sun exposure. Xeroderma pigmentosum (XP), Cockayne syndrome (CS), and trichothiodystrophy (TTD) are three rare, recessively inherited human diseases that are linked to these defects. Interestingly, some of the symptoms in afflicted individuals appear to be due to defects in transcription, the result of the dual functionality of several components of the NER apparatus as parts of transcription factor IIH (TFIIH). Studies with several model systems have revealed that the genetic and biochemical features of NER are extraordinarily conserved in eukaryotes. One system that has been studied very closely is the budding yeast Saccharomyces cerevisiae. While many yeast NER mutants display the expected increases in UV sensitivity and defective transcription, other interesting phenotypes have also been observed: elevated mutation and recombination rates, as well as increased frequencies of genome rearrangement by retrotransposon movement and recombination between short genomic sequences. The potential relevance of these novel phenotypes to disease in humans is discussed.

    Quantifying Eventual Consistency with PBS

    Data replication results in a fundamental trade-off between operation latency and consistency. At the weak end of the spectrum of possible consistency models is eventual consistency, which places no limit on the staleness of the data returned. However, anecdotally, eventual consistency is often “good enough” for practitioners given its latency and availability benefits. In this work, we explain this phenomenon and demonstrate that, despite their weak guarantees, eventually consistent systems regularly return consistent data while providing lower latency than their strongly consistent counterparts. To quantify the behavior of eventually consistent stores, we introduce Probabilistically Bounded Staleness (PBS), a consistency model that provides expected bounds on data staleness with respect to both versions and wall-clock time. We derive a closed-form solution for version-based staleness and model real-time staleness for a large class of quorum-replicated, Dynamo-style stores. Using PBS, we measure the trade-off between latency and consistency for partial, non-overlapping quorum systems under Internet production workloads. We quantitatively demonstrate how and why eventually consistent systems frequently return consistent data within tens of milliseconds while offering large latency benefits.
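
    As a rough companion to the version-based ("k-staleness") part of the model, the Python sketch below computes the combinatorial bound on the probability that a read from a partial quorum is at most k versions stale. It assumes the simplest setting (each write resides only on its W acknowledging replicas, reads contact R of the N replicas chosen uniformly at random, no read repair or anti-entropy); it is an illustration of the idea, not the paper's full derivation or its real-time model.

```python
from math import comb

def prob_within_k_versions(n: int, r: int, w: int, k: int) -> float:
    """Bound on the probability that a partial-quorum read returns a value
    no more than k versions stale, under simple independence assumptions:
    each of the last k writes sits only on its W acknowledging replicas,
    and the read contacts R of the N replicas chosen uniformly at random.
    A read misses one particular version with probability C(N-W, R)/C(N, R),
    so it misses all of the last k versions with that probability to the k."""
    p_miss_one = comb(n - w, r) / comb(n, r)
    return 1.0 - p_miss_one ** k

# Dynamo-style N=3, R=1, W=1 (non-overlapping quorums): roughly a third of
# reads see the very latest version, ~70% see something at most 3 versions old.
print(prob_within_k_versions(3, 1, 1, 1))  # ~0.333
print(prob_within_k_versions(3, 1, 1, 3))  # ~0.704
# With R=2, W=2 the quorums overlap, so reads are always fresh.
print(prob_within_k_versions(3, 2, 2, 1))  # 1.0
```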

    Formal Availability Analysis using Theorem Proving

    Availability analysis is used to assess the possible failures of a given system and their restoration process. It involves calculating the instantaneous and steady-state availabilities of the individual system components and combining this information with commonly used availability modeling techniques, such as Availability Block Diagrams (ABD) and Fault Trees (FTs), to determine system-level availability. Traditionally, availability analyses are conducted using paper-and-pencil methods and simulation tools, but these cannot ascertain absolute correctness because of their inherent inaccuracies. As a complementary approach, we propose to use the higher-order-logic theorem prover HOL4 to conduct the availability analysis of safety-critical systems. For this purpose, we present a higher-order-logic formalization of instantaneous and steady-state availability, ABD configurations, and generic unavailability FT gates. For illustration, these formalizations are used to conduct a formal availability analysis of a satellite solar array, which serves as the main power source for the Dong Fang Hong-3 (DFH-3) satellite.
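
    For orientation, the sketch below evaluates the standard textbook quantities that the paper formalizes in HOL4: instantaneous and steady-state availability of a repairable component with constant failure and repair rates, plus series/parallel ABD composition. It is a plain numeric illustration with hypothetical rates, not the HOL4 development itself.

```python
import math

def instantaneous_availability(lam: float, mu: float, t: float) -> float:
    """Textbook availability of a repairable component with constant failure
    rate lam and repair rate mu: A(t) = mu/(lam+mu) + lam/(lam+mu)*exp(-(lam+mu)t)."""
    return mu / (lam + mu) + lam / (lam + mu) * math.exp(-(lam + mu) * t)

def steady_state_availability(lam: float, mu: float) -> float:
    """Limit of A(t) as t grows: mu / (lam + mu)."""
    return mu / (lam + mu)

def series_abd(avails):
    """ABD series composition: the system is up only if every block is up."""
    out = 1.0
    for a in avails:
        out *= a
    return out

def parallel_abd(avails):
    """ABD parallel (redundant) composition: down only if all blocks are down."""
    down = 1.0
    for a in avails:
        down *= 1.0 - a
    return 1.0 - down

# Hypothetical numbers (not from the paper): three redundant solar-array
# strings feeding a series power bus.
string_a = steady_state_availability(1e-4, 1e-2)
bus_a = steady_state_availability(1e-5, 1e-2)
print(series_abd([parallel_abd([string_a] * 3), bus_a]))
```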

    On the Behaviour of General-Purpose Applications on Cloud Storages

    Managing data over cloud infrastructures raises novel challenges with respect to existing, well-studied approaches such as ACID and long-running transactions. One of the main requirements is to provide availability and partition tolerance in a scenario with replicas and distributed control. This comes at the price of weaker consistency, usually called eventual consistency. These weak memory models have proved to be suitable in a number of scenarios, such as the analysis of large data with Map-Reduce. However, given the widespread availability of cloud infrastructures, weak stores are used not only by specialised applications but also by general-purpose applications. We provide a formal approach, based on process calculi, to reason about the behaviour of programs that rely on cloud stores. For instance, one can check that the composition of a process with a cloud store ensures 'strong' properties through a wise usage of asynchronous message passing.
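
    As a loose, informal analogue of the kind of property one might check (written in Python rather than the paper's process calculus, with every name below invented for illustration), the sketch composes a client with a toy eventually consistent store: by carrying the version token returned by its own writes through the asynchronous exchange, the client obtains read-your-writes behaviour on top of a weak store.

```python
import random

class Replica:
    def __init__(self):
        self.data = {}  # key -> (version, value)

    def apply(self, key, version, value):
        if version > self.data.get(key, (0, None))[0]:
            self.data[key] = (version, value)

class EventualStore:
    """Toy weak store: writes land on one replica and spread later via gossip."""
    def __init__(self, n=3):
        self.replicas = [Replica() for _ in range(n)]
        self.version = 0

    def write(self, key, value):
        self.version += 1
        random.choice(self.replicas).apply(key, self.version, value)
        return self.version  # version token returned to the client

    def read(self, key, min_version=0):
        """Read from a random replica, gossiping until it has caught up
        to min_version (a blunt stand-in for waiting on async messages)."""
        while True:
            version, value = random.choice(self.replicas).data.get(key, (0, None))
            if version >= min_version:
                return value
            self.anti_entropy()

    def anti_entropy(self):
        """Gossip: spread the newest version of every key to all replicas."""
        merged = {}
        for r in self.replicas:
            for k, (v, val) in r.data.items():
                if v > merged.get(k, (0, None))[0]:
                    merged[k] = (v, val)
        for r in self.replicas:
            r.data.update(merged)

store = EventualStore()
token = store.write("x", 42)
assert store.read("x", min_version=token) == 42  # read-your-writes holds
```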

    Making Operation-based CRDTs Operation-based

    Conflict-free Replicated Datatypes can simplify the design of predictable eventual consistency. They can be classified into state-based or operation-based. Operation-based approaches have the potential to allow compact designs in both the message sent and the object state size, but current approaches are still far from this objective. Here we explore the design space for operation-based solutions, and we leverage the interaction with the middleware by offering a technique that delivers very compact solutions while only broadcasting operation names and arguments.
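
    To make the idea concrete, here is a minimal hypothetical sketch (not the paper's framework) of an operation-based counter that ships only the operation name and its arguments and leaves delivery to the middleware. Increments and decrements commute, so this particular datatype is safe under plain exactly-once reliable broadcast; richer types additionally lean on the causal delivery order provided by the middleware, which is the interaction the paper exploits.

```python
class ReliableBroadcast:
    """Stand-in for the middleware: delivers each message exactly once
    to every registered replica (including the sender)."""
    def __init__(self):
        self.replicas = []

    def register(self, replica):
        self.replicas.append(replica)

    def broadcast(self, op_name, *args):
        for r in self.replicas:
            r.deliver(op_name, *args)

class OpBasedCounter:
    def __init__(self, network: ReliableBroadcast):
        self.value = 0
        self.network = network
        network.register(self)

    # local interface: ship only the operation name and its argument
    def increment(self, amount=1):
        self.network.broadcast("inc", amount)

    def decrement(self, amount=1):
        self.network.broadcast("dec", amount)

    # effector applied at every replica on delivery
    def deliver(self, op_name, amount):
        if op_name == "inc":
            self.value += amount
        elif op_name == "dec":
            self.value -= amount

net = ReliableBroadcast()
a, b = OpBasedCounter(net), OpBasedCounter(net)
a.increment(3)
b.decrement(1)
assert a.value == b.value == 2  # replicas converge
```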

    Automated Validation of State-Based Client-Centric Isolation with TLA+

    Clear consistency guarantees on data are paramount for the design and implementation of distributed systems. When implementing distributed applications, developers require approaches to verify the data consistency guarantees of an implementation choice. Crooks et al. define a state-based and client-centric model of database isolation. This paper formalizes this state-based model in TLA+, reproduces their examples, and shows how to model check runtime traces and algorithms with this formalization. The formalized model enables semi-automatic model checking for different implementation alternatives for transactional operations and allows checking of conformance to isolation levels. We reproduce examples of the original paper and confirm the isolation guarantees of the combination of the well-known two-phase locking and two-phase commit algorithms. Using model checking, this formalization can also help to find bugs in incorrect specifications. This improves the feasibility of automated checking of isolation guarantees in synthesized synchronization implementations, and it provides an environment for experimenting with new designs.
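
    As an informal illustration of the state-based, client-centric view (a Python toy, not the paper's TLA+ module, and only an approximation of Crooks et al.'s definitions), the sketch below brute-forces a serializability check over a small trace: an execution of committed transactions passes if some ordering lets every transaction read each key from the state it executes against.

```python
from itertools import permutations

def serializable(init_state, transactions):
    """transactions: list of (reads, writes) dicts mapping keys to the
    values observed / installed by each committed transaction."""
    for order in permutations(transactions):
        state = dict(init_state)
        ok = True
        for reads, writes in order:
            # every read must be satisfied by the current (parent) state
            if any(state.get(k) != v for k, v in reads.items()):
                ok = False
                break
            state.update(writes)  # commit: install the writes
        if ok:
            return True
    return False

# Classic write skew: each transaction reads the other's "before" value
# and then overwrites a different key -- no serial order explains it.
t1 = ({"x": 0}, {"y": 1})
t2 = ({"y": 0}, {"x": 1})
print(serializable({"x": 0, "y": 0}, [t1, t2]))  # False
```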