606 research outputs found

    Relating L-Resilience and Wait-Freedom via Hitting Sets

    Full text link
    The condition of t-resilience stipulates that an n-process program is only obliged to make progress when at least n-t processes are correct. Put another way, the live sets, the collection of process sets such that progress is required if all the processes in one of these sets are correct, are all sets with at least n-t processes. We show that the ability of an arbitrary collection of live sets L to solve distributed tasks is tightly related to the minimum hitting set of L, a minimum cardinality subset of processes that has a non-empty intersection with every live set. Thus, finding the computing power of L is NP-complete. For the special case of colorless tasks, which allow participating processes to adopt each other's input or output values, we use a simple simulation to show that a task can be solved L-resiliently if and only if it can be solved (h-1)-resiliently, where h is the size of the minimum hitting set of L. For general tasks, we characterize L-resilient solvability of tasks with respect to a limited notion of weak solvability: in every execution where all processes in some set in L are correct, outputs must be produced for every process in some (possibly different) participating set in L. Given a task T, we construct another task T_L such that T is solvable weakly L-resiliently if and only if T_L is solvable weakly wait-free.
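    The minimum hitting set at the heart of this characterization is easy to illustrate concretely. The brute-force sketch below is illustrative only (the function name and inputs are invented here, and the paper's point is precisely that determining the computing power of L is NP-complete in general):

```python
from itertools import combinations

def min_hitting_set(processes, live_sets):
    """Smallest set of processes intersecting every live set (brute force).

    Exponential search, only meant for tiny, illustrative inputs.
    """
    if any(not s for s in live_sets):
        raise ValueError("an empty live set cannot be hit")
    for size in range(1, len(processes) + 1):
        for candidate in combinations(processes, size):
            if all(set(candidate) & live_set for live_set in live_sets):
                return set(candidate)
    return set(processes)

# Example: n = 4 processes and 1-resilience, i.e. every set of
# n - t = 3 processes is a live set.
processes = [1, 2, 3, 4]
live_sets = [set(c) for c in combinations(processes, 3)]
h = min_hitting_set(processes, live_sets)
print(h, len(h))  # a hitting set of size 2, so h = 2 and h - 1 = t = 1
```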

    Strong Equivalence Relations for Iterated Models

    Full text link
    The Iterated Immediate Snapshot model (IIS), due to its elegant geometrical representation, has become standard for applying topological reasoning to distributed computing. Its modular structure makes it easier to analyze than the more realistic (non-iterated) read-write Atomic-Snapshot memory model (AS). It is known that AS and IIS are equivalent with respect to wait-free task computability: a distributed task is solvable in AS if and only if it is solvable in IIS. We observe, however, that this equivalence is not sufficient to explore solvability of tasks in sub-models of AS (i.e., proper subsets of its runs) or computability of long-lived objects, and a stronger equivalence relation is needed. In this paper, we consider adversarial sub-models of AS and IIS, specified by the sets of processes that can be correct in a model run. We show that AS and IIS are equivalent in a strong way: a (possibly long-lived) object is implementable in AS under a given adversary if and only if it is implementable in IIS under the same adversary. Therefore, the computability of any object in shared memory under an adversarial AS scheduler can be equivalently investigated in IIS.
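    As a rough aid to reading the abstract, an adversary of the kind considered here can be thought of as the family of process sets that are allowed to be exactly the correct processes of a run. The sketch below is a hypothetical formalization for illustration, not notation from the paper:

```python
from itertools import combinations

class Adversary:
    """Toy model of an adversarial sub-model: the family of process sets
    that may be exactly the set of correct processes in a run."""
    def __init__(self, allowed_correct_sets):
        self.allowed = {frozenset(s) for s in allowed_correct_sets}

    def permits(self, correct_processes):
        return frozenset(correct_processes) in self.allowed

# Example: the t-resilient adversary for n = 3, t = 1 allows any set of
# at least n - t = 2 correct processes.
n, t = 3, 1
t_resilient = Adversary(
    c for k in range(n - t, n + 1) for c in combinations(range(n), k)
)
print(t_resilient.permits({0, 1}), t_resilient.permits({2}))  # True False
```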

    Admit your weakness: Verifying correctness on TSO architectures

    Get PDF
    The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-319-15317-9_22. Linearizability has become the standard correctness criterion for fine-grained, non-atomic concurrent algorithms; however, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we study the correctness of concurrent algorithms on a weak memory model: the TSO (Total Store Order) memory model, which is commonly implemented by multicore architectures. Here, linearizability is often too strict; hence, we prove a weaker criterion, quiescent consistency, instead. Like linearizability, quiescent consistency is compositional, making it an ideal correctness criterion in a component-based context. We demonstrate how to model a typical concurrent algorithm, seqlock, and prove it quiescent consistent using a simulation-based approach. Previous approaches to proving correctness on TSO architectures have been based on linearizability, which makes it necessary to modify the algorithm's high-level requirements. Our approach is the first, to our knowledge, for proving correctness without the need for such a modification.
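    For readers unfamiliar with the running example, the seqlock idiom is easy to sketch: a writer makes a sequence counter odd while it updates, and a reader retries until it sees the same even count before and after its reads. The following is a minimal, hypothetical Python rendering for illustration; the paper analyses seqlock on the TSO memory model, which Python's runtime does not expose:

```python
import threading

class SeqLockPair:
    """Toy seqlock protecting two values that must be read consistently."""
    def __init__(self):
        self._seq = 0          # even: stable, odd: write in progress
        self._x = 0
        self._y = 0
        self._writer = threading.Lock()   # serialize writers only

    def write(self, x, y):
        with self._writer:
            self._seq += 1     # becomes odd: readers will retry
            self._x, self._y = x, y
            self._seq += 1     # becomes even again

    def read(self):
        while True:
            before = self._seq
            if before % 2:     # write in progress, retry
                continue
            x, y = self._x, self._y
            if self._seq == before:   # nothing changed while reading
                return x, y

pair = SeqLockPair()
pair.write(1, 2)
print(pair.read())  # (1, 2)
```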

    The effect of exogenous glucose infusion on early embryonic development in lactating dairy cows

    Get PDF
    The objective of this study was to examine the effect of intravenous infusion of glucose on early embryonic development in lactating dairy cows. Nonpregnant, lactating dairy cows (n = 12) were enrolled in the study (276 ± 17 d in milk). On d 7 after a synchronized estrus, cows were randomly assigned to receive an intravenous infusion of either 750 g/d of exogenous glucose (GLUC; 78 mL/h of 40% glucose wt/vol) or saline (CTRL; 78 mL/h of 0.9% saline solution). The infusion period lasted 7 d and cows were confined to metabolism stalls for the duration of the study. Coincident with the commencement of the infusion on d 7 after estrus, 15 in vitro-produced grade 1 blastocysts were transferred into the uterine horn ipsilateral to the corpus luteum. All animals were slaughtered on d 14 to recover conceptuses, uterine fluid, and endometrial tissue. Glucose infusion increased circulating glucose concentrations (4.70 ± 0.12 vs. 4.15 ± 0.12 mmol/L) but did not affect milk production or dry matter intake. Circulating β-hydroxybutyrate concentrations were decreased (0.51 ± 0.01 vs. 0.70 ± 0.01 mmol/L for GLUC vs. CTRL, respectively) but plasma fatty acids, progesterone, and insulin concentrations were unaffected by treatment. Treatment did not affect either uterine lumen fluid glucose concentration or the mRNA abundance of specific glucose transporters in the endometrium. Mean conceptus length, width, and area on d 14 were reduced in the GLUC treatment compared with the CTRL treatment. A greater proportion of embryos in the CTRL group had elongated to all length cut-off measurements between 11 and 20 mm (measured in 1-mm increments) compared with the GLUC treatment. In conclusion, infusion of glucose into lactating dairy cows from d 7 to d 14 post-estrus during the critical period of conceptus elongation had an adverse impact on early embryonic development.

    Verifying linearizability on TSO architectures

    Get PDF
    Linearizability is the standard correctness criterion for fine-grained, non-atomic concurrent algorithms, and a variety of methods for verifying linearizability have been developed. However, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we define linearizability on a weak memory model: the TSO (Total Store Order) memory model, which is implemented in the x86 multicore architecture. We also show how a simulation-based proof method can be adapted to verify linearizability for algorithms running on TSO architectures. We demonstrate our approach on a typical concurrent algorithm, spinlock, and prove it linearizable using our simulation-based method. Previous approaches to proving linearizability on TSO architectures have required a modification to the algorithm's natural abstract specification. Our proof method is the first, to our knowledge, for proving correctness without the need for such a modification.
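    A test-and-set spinlock, the running example here, can be sketched in a few lines. This is an illustrative, hypothetical rendering only, not the x86-TSO-level model verified in the paper, where the unlocking store may sit in a store buffer before becoming visible:

```python
import threading

class SpinLock:
    """Toy test-and-set spinlock (illustrative only)."""
    def __init__(self):
        self._held = False
        self._cas = threading.Lock()   # stands in for an atomic test-and-set

    def acquire(self):
        while True:
            with self._cas:            # atomic read-modify-write
                if not self._held:
                    self._held = True
                    return
            # otherwise spin (busy-wait) until the lock is released

    def release(self):
        self._held = False             # plain store; on TSO this can be delayed

lock = SpinLock()
lock.acquire()
lock.release()
```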

    Constraining Absolute Plate Motions Since the Triassic

    Get PDF
    The absolute motion of tectonic plates since Pangea can be derived from observations of hotspot trails, paleomagnetism, or seismic tomography. However, fitting observations is typically carried out in isolation without consideration for the fit to unused data or whether the resulting plate motions are geodynamically plausible. Through the joint evaluation of global hotspot track observations (for times <80 Ma), first-order estimates of net lithospheric rotation (NLR), and parameter estimation for paleo-trench migration (TM), we present a suite of geodynamically consistent, data-optimized global absolute reference frames from 220 Ma to the present. Each absolute plate motion (APM) model was evaluated against six published APM models, together incorporating the full range of primary data constraints. Model performance for published and new models was quantified through standard statistical analyses using three key diagnostic global metrics: root-mean-square plate velocities, NLR characteristics, and TM behavior. Additionally, models were assessed for consistency with published global paleomagnetic data and, for ages <80 Ma, for predicted relative hotspot motion, track geometry, and time dependence. Optimized APM models demonstrated significantly improved global fit with geological and geophysical observations while performing consistently with geodynamic constraints. Critically, APM models derived by limiting average rates of NLR to ~0.05°/Myr and absolute TM velocities to ~27 mm/year fit geological observations including hotspot tracks. This suggests that this range of NLR and TM estimates may be appropriate for Earth over the last 220 Myr, providing a key step toward the practical integration of numerical geodynamics into plate tectonic reconstructions.

    On Correctness of Data Structures under Reads-Write Concurrency

    Get PDF
    We study the correctness of shared data structures under reads-write concurrency. A popular approach to ensuring correctness of read-only operations in the presence of concurrent updates is read-set validation, which checks that all read variables have not changed since they were first read. In practice, this approach is often too conservative, which adversely affects performance. In this paper, we introduce a new framework for reasoning about correctness of data structures under reads-write concurrency, which replaces validation of the entire read-set with more general criteria. Namely, instead of verifying that the entire read-set has remained unchanged, we verify conditions over the shared variables, which we call base conditions. We show that reading values that satisfy some base condition at every point in time implies correctness of read-only operations executing in parallel with updates. Somewhat surprisingly, the resulting correctness guarantee is not equivalent to linearizability, and is instead captured through two new conditions: validity and regularity. Roughly speaking, the former requires that a read-only operation never reaches a state unreachable in a sequential execution; the latter generalizes Lamport's notion of regularity to arbitrary data structures, and is weaker than linearizability. We further extend our framework to also capture linearizability. We illustrate how our framework can be applied for reasoning about correctness of a variety of implementations of data structures such as linked lists.
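    To make the distinction concrete, here is a small hypothetical example, not taken from the paper: a reader that validates its entire read-set by re-checking the values it read, versus one that only checks a base condition, an invariant the updater preserves, over those values:

```python
# Hypothetical shared "statistics" object: updates preserve the
# invariant (the base condition) count >= errors at all times.
class Stats:
    def __init__(self):
        self.count = 0
        self.errors = 0

    def record(self, failed):
        self.count += 1            # count grows first...
        if failed:
            self.errors += 1       # ...so count >= errors always holds

def success_rate_validating(s):
    """Read-set validation: retry unless both variables are unchanged."""
    while True:
        c, e = s.count, s.errors
        if (c, e) == (s.count, s.errors):
            return (c - e) / c if c else 1.0

def success_rate_base_condition(s):
    """Weaker check: accept any values satisfying the base condition
    count >= errors, even if the exact pair never coexisted in memory."""
    while True:
        c, e = s.count, s.errors
        if c >= e:                 # base condition over the values read
            return (c - e) / c if c else 1.0
```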

    A comparison of transient elastography with acoustic radiation force impulse elastography for the assessment of liver health in patients with chronic hepatitis C: Baseline results from the TRACER study

    Get PDF
    BACKGROUND: Liver stiffness measurements can be used to assess liver fibrosis and can be acquired by transient elastography using FibroScan® and with Acoustic Radiation Force Impulse imaging. The study aimed to establish liver stiffness measurement scores using FibroScan® and Acoustic Radiation Force Impulse in a chronic hepatitis C cohort and to explore the correlation and agreement between the scores and the factors influencing agreement. METHODS: Patients had liver stiffness measurements acquired with FibroScan® (right lobe of liver) and Acoustic Radiation Force Impulse (right and left lobe of liver). We used Spearman’s correlation to explore the relationship between FibroScan® and Acoustic Radiation Force Impulse scores. A Bland-Altman plot was used to evaluate bias between the mean percentage differences of FibroScan® and Acoustic Radiation Force Impulse scores. Univariable and multivariable analyses were used to assess how factors such as body mass index, age and gender influenced the agreement between liver stiffness measurements. RESULTS: Bland-Altman analysis showed the average (95% CI) percentage difference between FibroScan® and Acoustic Radiation Force Impulse scores was 27.5% (17.8, 37.2), p < 0.001. There was a negative correlation between the average and percentage difference of the FibroScan® and Acoustic Radiation Force Impulse scores (r (95% CI) = −0.41 (−0.57, −0.21), p < 0.001), showing that the percentage difference gets smaller for greater FibroScan® and Acoustic Radiation Force Impulse scores. Body mass index had the greatest influence on differences between FibroScan® and Acoustic Radiation Force Impulse (r = 0.12 (0.01, 0.23), p = 0.05). Acoustic Radiation Force Impulse scores at segment 5/8 and the left lobe showed good correlation (r (95% CI) = 0.83 (0.75, 0.89), p < 0.001). CONCLUSION: FibroScan® and Acoustic Radiation Force Impulse had similar predictive values for the assessment of liver stiffness in patients with chronic hepatitis C infection; however, the level of agreement varied across lower and higher scores.
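    The comparison of paired scores described here follows a standard recipe: Spearman correlation plus a Bland-Altman analysis of percentage differences. The sketch below is a generic illustration with invented numbers, not the study's data or analysis code:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired liver-stiffness scores (one value per patient).
fibroscan = np.array([5.1, 7.4, 9.8, 12.0, 20.5, 33.0])
arfi      = np.array([4.8, 6.9, 8.5, 10.1, 15.2, 22.4])

# Spearman's rank correlation between the two devices.
rho, p = spearmanr(fibroscan, arfi)

# Bland-Altman on percentage differences: each difference expressed
# relative to the pairwise mean of the two scores.
mean_pair = (fibroscan + arfi) / 2
pct_diff = 100 * (fibroscan - arfi) / mean_pair
bias = pct_diff.mean()
loa = 1.96 * pct_diff.std(ddof=1)   # half-width of the limits of agreement

print(f"Spearman rho={rho:.2f} (p={p:.3f})")
print(f"Bland-Altman bias={bias:.1f}%, "
      f"limits of agreement ({bias - loa:.1f}%, {bias + loa:.1f}%)")
```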

    New Mandates and Imperatives in the Revised ACA Code of Ethics

    Get PDF
    The first major revision of the ACA Code of Ethics in a decade occurred in late 2005, with the updated edition containing important new mandates and imperatives. This article provides interviews with members of the Ethics Revision Task Force that flesh out seminal changes in the revised ACA Code of Ethics in the areas of confidentiality, romantic and sexual interactions, dual relationships, end-of-life care for terminally ill clients, cultural sensitivity, diagnosis, interventions, practice termination, technology, and deceased clients.

    On the expressiveness and trade-offs of large scale tuple stores

    Get PDF
    Proceedings of On the Move to Meaningful Internet Systems (OTM). Massive-scale distributed computing is a challenge at our doorstep. The current exponential growth of data calls for massive-scale capabilities of storage and processing. This is being acknowledged by several major Internet players embracing the cloud computing model and offering first-generation distributed tuple stores. Having all started from similar requirements, these systems ended up providing a similar service: a simple tuple store interface that allows applications to insert, query, and remove individual elements. Furthermore, while availability is commonly assumed to be sustained by the massive scale itself, data consistency and freshness are usually severely hindered. By doing so, these services focus on a specific narrow trade-off between consistency, availability, performance, scale, and migration cost that is much less attractive to common business needs. In this paper we introduce DataDroplets, a novel tuple store that shifts the current trade-off towards the needs of common business users, providing additional consistency guarantees and higher-level data processing primitives, smoothing the migration path for existing applications. We present a detailed comparison between DataDroplets and existing systems regarding their data model, architecture, and trade-offs. Preliminary results of the system's performance under a realistic workload are also presented.
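    The "simple tuple store interface" that these first-generation systems converge on can be sketched abstractly. The class below is an illustrative guess at that minimal contract (insert, query, and remove on individual elements); it is not DataDroplets' actual API, which the paper extends with stronger consistency guarantees and higher-level processing primitives:

```python
from typing import Dict, Optional, Tuple

class SimpleTupleStore:
    """Illustrative minimal tuple-store contract: operations on individual
    key-tuple pairs, with no multi-item transactions or ordered scans."""
    def __init__(self):
        self._data: Dict[str, Tuple] = {}

    def insert(self, key: str, value: Tuple) -> None:
        self._data[key] = value

    def query(self, key: str) -> Optional[Tuple]:
        return self._data.get(key)

    def remove(self, key: str) -> None:
        self._data.pop(key, None)

store = SimpleTupleStore()
store.insert("user:42", ("alice", "alice@example.org"))
print(store.query("user:42"))
store.remove("user:42")
```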