Randomized Two-Process Wait-Free Test-and-Set
We present the first explicit, and currently simplest, randomized algorithm
for 2-process wait-free test-and-set. It is implemented with two 4-valued
single writer single reader atomic variables. A test-and-set takes at most 11
expected elementary steps, while a reset takes exactly 1 elementary step. Based
on a finite-state analysis, the proofs of correctness and expected length are
compressed into one table.
Comment: 9 pages, 4 figures, LaTeX source; submitted
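The test-and-set object above has a simple sequential specification: the first test-and-set returns 0 (that caller wins) and every later one returns 1 until a reset. A minimal sketch of that specification only, not the paper's randomized two-process construction from 4-valued single-writer single-reader variables:

```python
class TestAndSet:
    """Sequential specification of a test-and-set object.

    The paper implements this behaviour wait-free for two processes
    from 4-valued SWSR atomic variables; here we model only the
    object's semantics, with no concurrency.
    """

    def __init__(self):
        self._set = False

    def test_and_set(self):
        # Returns 0 to the first (winning) caller, 1 to later callers.
        old = self._set
        self._set = True
        return 1 if old else 0

    def reset(self):
        # In the paper, a reset takes exactly one elementary step.
        self._set = False


tas = TestAndSet()
print(tas.test_and_set())  # first caller wins -> 0
print(tas.test_and_set())  # subsequent caller loses -> 1
tas.reset()
print(tas.test_and_set())  # winnable again after reset -> 0
```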
Randomized protocols for asynchronous consensus
The famous Fischer, Lynch, and Paterson impossibility proof shows that it is
impossible to solve the consensus problem in a natural model of an asynchronous
distributed system if even a single process can fail. Since its publication,
two decades of work on fault-tolerant asynchronous consensus algorithms have
evaded this impossibility result by using extended models that provide (a)
randomization, (b) additional timing assumptions, (c) failure detectors, or (d)
stronger synchronization mechanisms than are available in the basic model.
Concentrating on the first of these approaches, we illustrate the history and
structure of randomized asynchronous consensus protocols by giving detailed
descriptions of several such protocols.
Comment: 29 pages; survey paper written for the PODC 20th anniversary issue of Distributed Computing
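The consensus object at the center of this survey has a simple sequential specification that every protocol must emulate under concurrency: each process proposes a value, and all processes decide the same proposed value. A toy sketch of that specification (semantics only; the surveyed randomized protocols are what make it implementable despite asynchrony and failures):

```python
class Consensus:
    """Sequential specification of consensus: the first proposed value
    is decided by everyone (agreement + validity). The FLP result says
    this cannot be implemented wait-free from read/write registers
    alone in an asynchronous system with even one crash failure; the
    surveyed protocols add randomization to circumvent that.
    """

    _UNDECIDED = object()

    def __init__(self):
        self._decision = self._UNDECIDED

    def propose(self, value):
        if self._decision is self._UNDECIDED:
            self._decision = value      # first proposal wins
        return self._decision           # everyone learns the decision


c = Consensus()
assert c.propose("a") == "a"   # first proposer fixes the decision
assert c.propose("b") == "a"   # later proposers adopt it
```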
A Complexity-Based Hierarchy for Multiprocessor Synchronization
For many years, Herlihy's elegant computability based Consensus Hierarchy has
been our best explanation of the relative power of various types of
multiprocessor synchronization objects when used in deterministic algorithms.
However, key to this hierarchy is treating synchronization instructions as
distinct objects, an approach that is far from the real-world, where
multiprocessor programs apply synchronization instructions to collections of
arbitrary memory locations. We were surprised to realize that, when considering
instructions applied to memory locations, the computability based hierarchy
collapses. This leaves open the question of how to better capture the power of
various synchronization instructions.
In this paper, we provide an approach to answering this question. We present
a hierarchy of synchronization instructions, classified by their space
complexity in solving obstruction-free consensus. Our hierarchy provides a
classification of combinations of known instructions that seems to fit with our
intuition of how useful some are in practice, while questioning the
effectiveness of others. We prove an essentially tight characterization of the
power of buffered read and write instructions. Interestingly, we show a similar
result for multi-location atomic assignments.
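As a concrete illustration of the instructions-applied-to-locations view: a single memory location supporting compare-and-swap solves consensus among any number of processes. A hedged Python simulation, with a lock standing in for the hardware atomicity of the instruction:

```python
import threading

class CASLocation:
    """One memory location with an atomic compare-and-swap; the lock
    simulates what is a single instruction on real hardware."""

    def __init__(self, initial=None):
        self._value = initial
        self._lock = threading.Lock()

    def cas(self, expected, new):
        with self._lock:
            old = self._value
            if old == expected:
                self._value = new
            return old

def propose(loc, value):
    # Consensus from one CAS location: try to install our value into
    # the initially empty location; whoever installed first wins.
    old = loc.cas(None, value)
    return value if old is None else old

loc = CASLocation()
results = []
threads = [threading.Thread(target=lambda v=v: results.append(propose(loc, v)))
           for v in range(8)]
for t in threads: t.start()
for t in threads: t.join()
assert len(set(results)) == 1   # agreement: all 8 threads decide one value
```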
On the Optimal Space Complexity of Consensus for Anonymous Processes
The optimal space complexity of consensus in shared memory is a decades-old
open problem. For a system of n processes, no algorithm is known that uses a
sublinear number of registers. However, the best known lower bound, due to Fich,
Herlihy, and Shavit, requires only Omega(sqrt n) registers.
The special symmetric case of the problem where processes are anonymous (run
the same algorithm) has also attracted attention. Even in this case, the best
lower and upper bounds are still Omega(sqrt n) and O(n). Moreover, Fich,
Herlihy, and Shavit first proved their lower bound for anonymous processes, and
then extended it to the general case. As such, resolving the anonymous case
might be a significant step towards understanding and solving the general
problem.
In this work, we show that in a system of anonymous processes, any consensus
algorithm satisfying nondeterministic solo termination has to use Omega(n)
read-write registers in some execution. This implies an Omega(n) lower bound
on the space complexity of deterministic obstruction-free and randomized
wait-free consensus, matching the upper bound and closing the symmetric case of
the open problem.
Consensus with Max Registers
We consider the problem of implementing randomized wait-free consensus from max registers under the assumption of an oblivious adversary. We show that max registers solve m-valued consensus for arbitrary m in expected O(log^* n) steps per process, beating the Omega(log m/log log m) lower bound for ordinary registers when m is large and the best previously known O(log log n) upper bound when m is small. A simple max-register implementation based on double-collect snapshots translates this result into an O(n log n) expected step implementation of m-valued consensus from n single-writer registers, improving on the best previously-known bound of O(n log^2 n) for single-writer registers
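A max register supports two operations, WriteMax(v) and ReadMax(), where ReadMax returns the largest value previously written. A minimal sketch of this sequential specification (the paper's shared-memory construction from single-writer registers via double-collect snapshots is not reproduced here):

```python
class MaxRegister:
    """Sequential specification of a max register: ReadMax returns the
    largest value ever written. The paper implements this object in
    shared memory from n single-writer registers using double-collect
    snapshots; this sketch models only the semantics."""

    def __init__(self, initial=0):
        self._max = initial

    def write_max(self, v):
        if v > self._max:       # smaller writes are absorbed
            self._max = v

    def read_max(self):
        return self._max


r = MaxRegister()
r.write_max(5)
r.write_max(3)
assert r.read_max() == 5   # the write of 3 left no trace
```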
Complete recovery from anxiety disorders following Cognitive Behavioural Therapy in children and adolescents: a meta analysis
Cognitive Behaviour Therapy (CBT) is a well-established treatment for childhood anxiety disorders. Meta-analyses have concluded that approximately 60% of children recover following treatment; however, these estimates include studies using a broad range of diagnostic indices to assess outcomes, including whether children are free of the one anxiety disorder that causes the most interference (i.e. the primary anxiety disorder) or whether children are free of all anxiety disorders. We conducted a meta-analysis to establish the efficacy of CBT in terms of absence of all anxiety disorders. Where available, we compared this rate to outcomes based on absence of the primary disorder. Of 56 published randomized controlled trials, 19 provided data on recovery from all anxiety disorders (n = 635 CBT, n = 450 control participants). There was significant heterogeneity across those studies with available data, and full recovery rates varied from 47.6 to 66.4% among children without autistic spectrum conditions (ASC) and from 12.2 to 36.7% for children with ASC following treatment, compared to up to 20.6% and 21.3% recovery in waitlist and active treatment comparisons. The lack of consistency in diagnostic outcomes highlights the urgent need for consensus on reporting in future RCTs of childhood anxiety disorders for the meaningful synthesis of data going forward.
On the Importance of Registers for Computability
All consensus hierarchies in the literature assume that we have, in addition
to copies of a given object, an unbounded number of registers. But why do we
really need these registers?
This paper considers what would happen if one attempts to solve consensus
using various objects but without any registers. We show that under a
reasonable assumption, objects like queues and stacks cannot emulate the
missing registers. We also show that, perhaps surprisingly, initialization,
shown to have no computational consequences when registers are readily
available, is crucial in determining the synchronization power of objects when
no registers are allowed. Finally, we show that without registers, the number
of available objects affects the level of consensus that can be solved.
Our work thus raises the question of whether consensus hierarchies which
assume an unbounded number of registers truly capture synchronization power,
and begins a line of research aimed at better understanding the interaction
between read-write memory and the powerful synchronization operations available
on modern architectures.
Comment: 12 pages, 0 figures
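The role the registers play is visible in the classic textbook construction of two-process consensus from a queue: each process announces its input in a read/write register, then dequeues from a suitably initialized queue; the loser needs the winner's register to learn the decided value. A hedged sketch, with a lock standing in for the queue's atomicity:

```python
from collections import deque
import threading

# Classic two-process consensus from one queue *plus* two registers.
# The registers are essential: without them the loser cannot learn the
# winner's input, which is exactly the issue the abstract raises.

queue = deque(["winner", "loser"])   # queue initialized before the run
registers = [None, None]             # one read/write register per process
qlock = threading.Lock()             # simulates the queue's atomic dequeue

def propose(pid, value):
    registers[pid] = value           # announce my input first
    with qlock:
        token = queue.popleft()      # atomic dequeue
    if token == "winner":
        return value                 # I dequeued first: decide my value
    # The winner wrote its register before dequeuing, so it is visible.
    return registers[1 - pid]

decisions = []
threads = [threading.Thread(target=lambda p=p: decisions.append(propose(p, f"v{p}")))
           for p in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
assert decisions[0] == decisions[1]   # agreement on one proposed value
```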
An electronic family health history tool to identify and manage patients at increased risk for colorectal cancer: protocol for a randomized controlled trial.
Background: Colorectal cancer is the fourth most commonly diagnosed cancer in the United States. Approximately 3-10% of the population has an increased risk for colorectal cancer due to family history and warrants more frequent or intensive screening. Yet, < 50% of that high-risk population receives guideline-concordant care. Systematic collection of family health history and decision support may improve guideline-concordant screening for patients at increased risk of colorectal cancer. We seek to test the effectiveness of a web-based, systematic family health history collection tool and decision support platform (MeTree) to improve risk assessment and appropriate management of colorectal cancer risk among patients in the Department of Veterans Affairs primary care practices.
Methods: In this ongoing randomized controlled trial, primary care providers at the Durham Veterans Affairs Health Care System and the Madison VA Medical Center are randomized to immediate intervention or wait-list control. Veterans are eligible if they are assigned to enrolled providers, have an upcoming primary care appointment, and have no conditions that would place them at increased risk for colorectal cancer (such as personal history, adenomatous polyps, or inflammatory bowel disease). Those with a recent lower endoscopy (e.g. colonoscopy, sigmoidoscopy) are excluded. Immediate-intervention patients enter their family health history information into a web-based platform, MeTree, which provides both patient- and provider-facing decision support reports. Wait-list control patients access MeTree 12 months post-consent. The primary outcome is the risk-concordant colorectal cancer screening referral rate, obtained via chart review. Secondary outcomes include patient completion of risk management recommendations (e.g. colonoscopy) and referral for genetic consultation. We will also conduct an economic analysis and an assessment of providers' experience with MeTree clinical decision support recommendations to inform future implementation efforts if the intervention is found to be effective.
Discussion: This trial will assess the feasibility and effectiveness of patient-collected family health history linked to decision support to promote risk-appropriate screening in a large healthcare system such as the Department of Veterans Affairs.
Trial registration: ClinicalTrials.gov, NCT02247336. Registered on 25 September 2014.
A Posteriori Probabilistic Bounds of Convex Scenario Programs with Validation Tests
Scenario programs have established themselves as efficient tools towards
decision-making under uncertainty. To assess the quality of scenario-based
solutions a posteriori, validation tests based on Bernoulli trials have been
widely adopted in practice. However, to reach a theoretically reliable
judgement of risk, one typically needs to collect massive validation samples.
In this work, we propose new a posteriori bounds for convex scenario programs
with validation tests, which are dependent on both realizations of support
constraints and performance on out-of-sample validation data. The proposed
bounds enjoy wide generality in that many existing theoretical results can be
incorporated as particular cases. To facilitate practical use, a systematic
approach for parameterizing a posteriori probability bounds is also developed,
which is shown to possess a variety of desirable properties allowing for easy
implementations and clear interpretations. By synthesizing comprehensive
information about support constraints and validation tests, improved risk
evaluation can be achieved for randomized solutions in comparison with existing
a posteriori bounds. Case studies on controller design of aircraft lateral
motion are presented to validate the effectiveness of the proposed a posteriori
bounds.
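The validation tests mentioned above are typically Bernoulli trials: draw N fresh independent samples, count the constraint violations k, and convert (k, N) into a high-confidence upper bound on the violation probability via a binomial tail bound. A hedged sketch using a Clopper-Pearson-style upper bound and a toy constraint (the function names and the example constraint are illustrative, not from the paper):

```python
import math
import random

def clopper_pearson_upper(k, n, beta=1e-3):
    """Smallest eps with P[Binomial(n, eps) <= k] <= beta, found by
    bisection on the binomial CDF. With confidence 1 - beta, eps
    upper-bounds the true violation probability."""
    def binom_cdf(k, n, p):
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k + 1))
    lo, hi = k / n, 1.0
    for _ in range(60):                   # bisection to high precision
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) > beta:   # eps too small: tail still heavy
            lo = mid
        else:
            hi = mid
    return hi

# Toy validation test: candidate solution x should satisfy x >= d for a
# random disturbance d (an illustrative constraint, not the case study).
random.seed(0)
n_val = 500
x = 0.9
violations = sum(random.random() > x for _ in range(n_val))
eps = clopper_pearson_upper(violations, n_val)
print(f"{violations} violations in {n_val} trials; "
      f"violation probability <= {eps:.3f} with confidence 0.999")
```

The paper's contribution is a tighter bound that also folds in the number of support constraints of the scenario program; the pure out-of-sample test above is the baseline it improves on.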