
    Relating L-Resilience and Wait-Freedom via Hitting Sets

    The condition of t-resilience stipulates that an n-process program is only obliged to make progress when at least n-t processes are correct. Put another way, the live sets, the collection of process sets such that progress is required if all the processes in one of these sets are correct, are all sets with at least n-t processes. We show that the ability of an arbitrary collection of live sets L to solve distributed tasks is tightly related to the minimum hitting set of L, a minimum-cardinality subset of processes that has a non-empty intersection with every live set. Thus, finding the computing power of L is NP-complete. For the special case of colorless tasks, which allow participating processes to adopt each other's input or output values, we use a simple simulation to show that a task can be solved L-resiliently if and only if it can be solved (h-1)-resiliently, where h is the size of the minimum hitting set of L. For general tasks, we characterize L-resilient solvability with respect to a limited notion of weak solvability: in every execution where all processes in some set in L are correct, outputs must be produced for every process in some (possibly different) participating set in L. Given a task T, we construct another task T_L such that T is solvable weakly L-resiliently if and only if T_L is solvable weakly wait-free.
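    As a concrete illustration of the hitting-set characterization, the Python sketch below brute-forces a minimum hitting set for a small, made-up collection of live sets. The process ids and the collection L are hypothetical, and the exhaustive search is only practical for toy instances, consistent with the NP-completeness noted above.

```python
from itertools import combinations

def minimum_hitting_set(live_sets, universe):
    """Return one minimum-cardinality set of processes intersecting every live set.

    Brute force over candidate subsets in increasing size; suitable only for
    small, illustrative examples.
    """
    for size in range(len(universe) + 1):
        for candidate in combinations(sorted(universe), size):
            if all(set(candidate) & s for s in live_sets):
                return set(candidate)
    return set(universe)  # unreachable when every live set is non-empty

# Hypothetical example: 4 processes, live sets L = {{0,1}, {1,2}, {2,3}}.
# Any size-2 hitting set gives h = 2, so colorless tasks solvable
# L-resiliently are exactly those solvable 1-resiliently (h - 1).
L = [{0, 1}, {1, 2}, {2, 3}]
print(minimum_hitting_set(L, universe={0, 1, 2, 3}))  # -> {0, 2}, size 2
```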

    Strong Equivalence Relations for Iterated Models

    The Iterated Immediate Snapshot model (IIS), due to its elegant geometrical representation, has become standard for applying topological reasoning to distributed computing. Its modular structure makes it easier to analyze than the more realistic (non-iterated) read-write Atomic-Snapshot memory model (AS). It is known that AS and IIS are equivalent with respect to wait-free task computability: a distributed task is solvable in AS if and only if it is solvable in IIS. We observe, however, that this equivalence is not sufficient for exploring solvability of tasks in sub-models of AS (i.e., proper subsets of its runs) or computability of long-lived objects, and a stronger equivalence relation is needed. In this paper, we consider adversarial sub-models of AS and IIS specified by the sets of processes that can be correct in a model run. We show that AS and IIS are equivalent in a strong way: a (possibly long-lived) object is implementable in AS under a given adversary if and only if it is implementable in IIS under the same adversary. This holds whether the object is one-shot or long-lived. Therefore, the computability of any object in shared memory under an adversarial AS scheduler can be equivalently investigated in IIS.
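    For intuition about the IIS side of the equivalence, here is a small Python sketch of a single immediate-snapshot round: participants are arranged into ordered blocks (concurrency classes), and each block writes its values and then snapshots everything written so far. The function and the example schedule are illustrative assumptions, not the paper's construction.

```python
import random

def immediate_snapshot_round(inputs, schedule=None, seed=None):
    """One round of the Iterated Immediate Snapshot model, as a sketch.

    `inputs` maps process id -> value written this round. `schedule` is an
    ordered partition of the participants into blocks; if omitted, a random
    one is drawn. Returns process id -> snapshot (dict of writes seen).
    """
    procs = list(inputs)
    if schedule is None:
        rng = random.Random(seed)
        rng.shuffle(procs)
        schedule, block = [], []
        for p in procs:                # random ordered partition
            block.append(p)
            if rng.random() < 0.5:
                schedule.append(block)
                block = []
        if block:
            schedule.append(block)
    memory, views = {}, {}
    for block in schedule:
        for p in block:                # the whole block writes first ...
            memory[p] = inputs[p]
        snap = dict(memory)            # ... then the whole block snapshots
        for p in block:
            views[p] = snap
    return views

# Hypothetical run with 3 processes and schedule [[1], [0, 2]]:
# process 1 sees only its own write; processes 0 and 2 see all three writes.
print(immediate_snapshot_round({0: "a", 1: "b", 2: "c"}, schedule=[[1], [0, 2]]))
```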

    Admit your weakness: Verifying correctness on TSO architectures

    The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-319-15317-9_22. Linearizability has become the standard correctness criterion for fine-grained non-atomic concurrent algorithms; however, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we study the correctness of concurrent algorithms on a weak memory model: the TSO (Total Store Order) memory model, which is commonly implemented by multicore architectures. Here, linearizability is often too strict, and hence we prove a weaker criterion, quiescent consistency, instead. Like linearizability, quiescent consistency is compositional, making it an ideal correctness criterion in a component-based context. We demonstrate how to model a typical concurrent algorithm, seqlock, and prove it quiescent consistent using a simulation-based approach. Previous approaches to proving correctness on TSO architectures have been based on linearizability, which makes it necessary to modify the algorithm's high-level requirements. Our approach is the first, to our knowledge, for proving correctness without the need for such a modification.
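    To make the running example concrete, the sketch below models the core idea of a seqlock in plain Python: the writer makes a sequence counter odd while it updates the data, and a reader retries until it observes the same even counter before and after its reads. This is a simplified, effectively sequentially consistent rendition for intuition only; the paper's contribution is analysing such an algorithm under TSO store buffering, which this sketch does not model.

```python
import threading

class SeqLock:
    """Minimal seqlock sketch (not the paper's formal model)."""

    def __init__(self):
        self._seq = 0
        self._writer_lock = threading.Lock()  # serialises writers only
        self._x = 0
        self._y = 0

    def write(self, x, y):
        with self._writer_lock:
            self._seq += 1                    # odd: write in progress
            self._x, self._y = x, y
            self._seq += 1                    # even again: write complete

    def read(self):
        while True:
            before = self._seq
            if before % 2:                    # writer in flight, retry
                continue
            x, y = self._x, self._y
            if self._seq == before:           # no writer intervened
                return x, y

lock = SeqLock()
lock.write(1, 2)
print(lock.read())   # (1, 2)
```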

    Sustained E2F-Dependent Transcription Is a Key Mechanism to Prevent Replication-Stress-Induced DNA Damage

    Recent work established DNA replication stress as a crucial driver of genomic instability and a key event at the onset of cancer. Post-translational modifications play an important role in the cellular response to replication stress by regulating the activity of key components to prevent replication-stress-induced DNA damage. Here, we establish a far greater role for transcriptional control in determining the outcome of replication-stress-induced events than previously suspected. Sustained E2F-dependent transcription is both required and sufficient for many crucial checkpoint functions, including fork stalling, stabilization, and resolution. Importantly, we also find that, in the context of oncogene-induced replication stress, where increased E2F activity is thought to cause replication stress, E2F activity is required to limit levels of DNA damage. These data suggest a model in which cells experiencing oncogene-induced replication stress through deregulation of E2F-dependent transcription become addicted to E2F activity to cope with high levels of replication stress.

    The effect of exogenous glucose infusion on early embryonic development in lactating dairy cows

    The objective of this study was to examine the effect of intravenous infusion of glucose on early embryonic development in lactating dairy cows. Nonpregnant, lactating dairy cows (n = 12) were enrolled in the study (276 ± 17 d in milk). On d 7 after a synchronized estrus, cows were randomly assigned to receive an intravenous infusion of either 750 g/d of exogenous glucose (GLUC; 78 mL/h of 40% glucose wt/vol) or saline (CTRL; 78 mL/h of 0.9% saline solution). The infusion period lasted 7 d and cows were confined to metabolism stalls for the duration of the study. Coincident with the commencement of the infusion on d 7 after estrus, 15 in vitro-produced grade 1 blastocysts were transferred into the uterine horn ipsilateral to the corpus luteum. All animals were slaughtered on d 14 to recover conceptuses, uterine fluid, and endometrial tissue. Glucose infusion increased circulating glucose concentrations (4.70 ± 0.12 vs. 4.15 ± 0.12 mmol/L) but did not affect milk production or dry matter intake. Circulating β-hydroxybutyrate concentrations were decreased (0.51 ± 0.01 vs. 0.70 ± 0.01 mmol/L for GLUC vs. CTRL, respectively) but plasma fatty acids, progesterone, and insulin concentrations were unaffected by treatment. Treatment did not affect either uterine lumen fluid glucose concentration or the mRNA abundance of specific glucose transporters in the endometrium. Mean conceptus length, width, and area on d 14 were reduced in the GLUC treatment compared with the CTRL treatment. A greater proportion of embryos in the CTRL group had elongated to all length cut-off measurements between 11 and 20 mm (measured in 1-mm increments) compared with the GLUC treatment. In conclusion, infusion of glucose into lactating dairy cows from d 7 to d 14 post-estrus during the critical period of conceptus elongation had an adverse impact on early embryonic development.
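    As a quick check of the stated dose, using only the figures given in the abstract (40% wt/vol means 40 g glucose per 100 mL, i.e. 0.40 g/mL), an infusion of 78 mL/h works out to roughly 750 g/d:

```python
# Sanity check of the reported infusion rate from the abstract's own figures.
ml_per_hour = 78
grams_per_ml = 0.40           # 40% wt/vol = 40 g per 100 mL
grams_per_day = ml_per_hour * 24 * grams_per_ml
print(grams_per_day)          # 748.8 g/d, consistent with the stated 750 g/d
```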

    Verifying linearizability on TSO architectures

    Linearizability is the standard correctness criterion for fine-grained, non-atomic concurrent algorithms, and a variety of methods for verifying linearizability have been developed. However, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we define linearizability on a weak memory model: the TSO (Total Store Order) memory model, which is implemented in the x86 multicore architecture. We also show how a simulation-based proof method can be adapted to verify linearizability for algorithms running on TSO architectures. We demonstrate our approach on a typical concurrent algorithm, spinlock, and prove it linearizable using our simulation-based approach. Previous approaches to proving linearizability on TSO architectures have required a modification to the algorithm's natural abstract specification. Our proof method is the first, to our knowledge, for proving correctness without the need for such modification.
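    For intuition about why TSO complicates linearizability arguments, the sketch below models a per-thread FIFO store buffer and shows a spinlock release that other threads do not observe until the buffer is flushed. The class and scenario are illustrative assumptions, not the paper's formal model or proof method.

```python
class TSOThreadView:
    """Sketch of a thread's view of shared memory under TSO: writes enter a
    per-thread FIFO store buffer and only become globally visible on flush.
    Reads consult the local buffer first (store-to-load forwarding)."""

    def __init__(self, shared):
        self.shared = shared
        self.buffer = []                      # FIFO of (addr, value)

    def write(self, addr, value):
        self.buffer.append((addr, value))

    def read(self, addr):
        for a, v in reversed(self.buffer):    # newest buffered write wins
            if a == addr:
                return v
        return self.shared.get(addr, 0)

    def flush_one(self):
        if self.buffer:
            addr, value = self.buffer.pop(0)
            self.shared[addr] = value

# Hypothetical spinlock release under TSO: thread 0 releases the lock, but
# until its store buffer is flushed, thread 1 still observes the lock held.
shared = {"lock": 1}
t0, t1 = TSOThreadView(shared), TSOThreadView(shared)
t0.write("lock", 0)                # release: buffered, not yet globally visible
print(t1.read("lock"))             # 1 -- other threads still see the lock held
t0.flush_one()
print(t1.read("lock"))             # 0 -- release becomes visible after the flush
```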

    On Correctness of Data Structures under Reads-Write Concurrency

    We study the correctness of shared data structures under reads-write concurrency. A popular approach to ensuring correctness of read-only operations in the presence of concurrent updates is read-set validation, which checks that all read variables have not changed since they were first read. In practice, this approach is often too conservative, which adversely affects performance. In this paper, we introduce a new framework for reasoning about correctness of data structures under reads-write concurrency, which replaces validation of the entire read-set with more general criteria. Namely, instead of verifying that all read variables have not changed, we only require the values read to satisfy certain conditions over the shared variables, which we call base conditions. We show that reading values that satisfy some base condition at every point in time implies correctness of read-only operations executing in parallel with updates. Somewhat surprisingly, the resulting correctness guarantee is not equivalent to linearizability, and is instead captured through two new conditions: validity and regularity. Roughly speaking, the former requires that a read-only operation never reaches a state unreachable in a sequential execution; the latter generalizes Lamport's notion of regularity for arbitrary data structures, and is weaker than linearizability. We further extend our framework to also capture linearizability. We illustrate how our framework can be applied for reasoning about correctness of a variety of implementations of data structures such as linked lists.
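    As a rough illustration of replacing full read-set validation with a base condition, the sketch below searches a sorted linked list and, at each step, checks only a local condition (keys strictly increase along the traversed path), restarting if it is violated. The data structure and the particular condition are hypothetical examples, not the implementations or conditions verified in the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: float
    next: Optional["Node"] = None

def contains(head: Node, key: float) -> bool:
    """Read-only search over a sorted linked list.

    Instead of validating the entire read-set after the traversal, each step
    checks a local base condition: the keys seen along the path strictly
    increase. If the condition is violated (possible only under concurrent
    updates), the search restarts.
    """
    prev_key = float("-inf")
    node = head.next                      # head is a sentinel node
    while node is not None:
        if not (node.key > prev_key):     # base condition violated: restart
            return contains(head, key)
        if node.key >= key:
            return node.key == key
        prev_key = node.key
        node = node.next
    return False

# Hypothetical single-threaded usage: sentinel head followed by 3 -> 7 -> 9.
head = Node(float("-inf"), Node(3, Node(7, Node(9))))
print(contains(head, 7), contains(head, 8))   # True False
```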

    A comparison of transient elastography with acoustic radiation force impulse elastography for the assessment of liver health in patients with chronic hepatitis C: Baseline results from the TRACER study

    BACKGROUND: Liver stiffness measurements can be used to assess liver fibrosis and can be acquired by transient elastography using FibroScan® and with Acoustic Radiation Force Impulse imaging. The study aimed to establish liver stiffness measurement scores using FibroScan® and Acoustic Radiation Force Impulse in a chronic hepatitis C cohort and to explore the correlation and agreement between the scores and the factors influencing agreement. METHODS: Patients had liver stiffness measurements acquired with FibroScan® (right lobe of liver) and Acoustic Radiation Force Impulse (right and left lobes of liver). We used Spearman's correlation to explore the relationship between FibroScan® and Acoustic Radiation Force Impulse scores. A Bland-Altman plot was used to evaluate bias between the mean percentage differences of FibroScan® and Acoustic Radiation Force Impulse scores. Univariable and multivariable analyses were used to assess how factors such as body mass index, age and gender influenced the agreement between liver stiffness measurements. RESULTS: The Bland-Altman analysis showed that the average (95% CI) percentage difference between FibroScan® and Acoustic Radiation Force Impulse scores was 27.5% (17.8, 37.2), p < 0.001. There was a negative correlation between the average and the percentage difference of the FibroScan® and Acoustic Radiation Force Impulse scores (r (95% CI) = −0.41 (−0.57, −0.21), p < 0.001), showing that the percentage difference gets smaller for greater FibroScan® and Acoustic Radiation Force Impulse scores. Body mass index was the biggest influencing factor on differences between FibroScan® and Acoustic Radiation Force Impulse (r = 0.12 (0.01, 0.23), p = 0.05). Acoustic Radiation Force Impulse scores at segment 5/8 and the left lobe showed good correlation (r (95% CI) = 0.83 (0.75, 0.89), p < 0.001). CONCLUSION: FibroScan® and Acoustic Radiation Force Impulse had similar predictive values for the assessment of liver stiffness in patients with chronic hepatitis C infection; however, the level of agreement varied across lower and higher scores.
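    For readers who want to reproduce the style of analysis, the snippet below (assuming numpy and scipy are available) computes a Spearman correlation and a Bland-Altman percentage-difference summary on synthetic data; the arrays are made up and are not the TRACER study measurements.

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for paired stiffness scores (kPa); not study data.
rng = np.random.default_rng(0)
fibroscan = rng.gamma(shape=3.0, scale=3.0, size=50)
arfi = fibroscan * rng.normal(0.8, 0.15, size=50)

# Spearman's rank correlation between the two modalities.
rho, p = stats.spearmanr(fibroscan, arfi)

# Bland-Altman on percentage differences: each pair's difference expressed
# as a percentage of the pair's mean, then summarised by bias and limits.
mean_pair = (fibroscan + arfi) / 2
pct_diff = 100 * (fibroscan - arfi) / mean_pair
bias = pct_diff.mean()
loa = 1.96 * pct_diff.std(ddof=1)          # 95% limits of agreement

print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
print(f"Bland-Altman bias = {bias:.1f}%, limits of agreement = +/-{loa:.1f}%")
```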