Fast Genuine Generalized Consensus
Consensus (agreeing on a sequence of commands) is central to the operation and performance of distributed systems. A well-known solution to consensus is Fast Paxos. In a recent paper, Lamport enhances Fast Paxos by leveraging the commutativity of concurrent commands. The new primitive, called Generalized Paxos, reduces the collision rate, and thus the latency, of Fast Paxos. However, if a collision occurs, Generalized Paxos needs four communication steps to recover, which is slower than Fast Paxos. This paper presents FGGC, a novel consensus algorithm that reduces the recovery delay after a collision to a single step. FGGC tolerates the crash of up to f < n/2 replicas, and during failure-free runs, processes learn commands in two steps if all commands commute, and three steps otherwise; this is optimal. Moreover, as long as no fault occurs, FGGC needs only f + 1 replicas to progress.
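Generalized consensus exploits command commutativity: replicas may learn commuting commands in different orders and still converge on the same state. A minimal sketch of one common commutativity model (key-level conflicts; the `(op, key)` command representation is a hypothetical illustration, not FGGC's actual interface):

```python
def commutes(cmd_a, cmd_b):
    """Two commands commute if reordering them cannot change any result.
    Commands are modeled as (op, key) pairs; only accesses to the same
    key can conflict, and two reads never do."""
    op_a, key_a = cmd_a
    op_b, key_b = cmd_b
    if key_a != key_b:
        return True  # disjoint keys: order is irrelevant
    return op_a == "read" and op_b == "read"  # read/read commutes
```

On the fast path, a replica that receives commands in a different order than its peers can still learn the same state whenever every pair of reordered commands commutes; only conflicting pairs force the (slower) recovery path.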
MDCC: Multi-Data Center Consistency
Replicating data across multiple data centers not only allows moving the data
closer to the user and, thus, reduces latency for applications, but also
increases the availability in the event of a data center failure. Therefore, it
is not surprising that companies like Google, Yahoo, and Netflix already
replicate user data across geographically different regions.
However, replication across data centers is expensive. Inter-data center
network delays are in the hundreds of milliseconds and vary significantly.
Synchronous wide-area replication is therefore considered infeasible with
strong consistency, and current solutions either settle for asynchronous
replication, which risks losing data in the event of failures,
restrict consistency to small partitions, or give up consistency entirely. With
MDCC (Multi-Data Center Consistency), we describe the first optimistic commit
protocol that does not require a master or partitioning and is strongly
consistent at a cost similar to eventually consistent protocols. MDCC can
commit transactions in a single round-trip across data centers in the normal
operational case. We further propose a new programming model which empowers the
application developer to handle longer and unpredictable latencies caused by
inter-data center communication. Our evaluation using the TPC-W benchmark with
MDCC deployed across 5 geographically diverse data centers shows that MDCC is
able to achieve throughput and latency similar to eventually consistent quorum
protocols and that MDCC is able to sustain a data center outage without a
significant impact on response times while guaranteeing strong consistency.
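MDCC's single round-trip commit relies on Fast Paxos-style fast quorums, which are larger than classic majorities. A minimal sketch of that quorum arithmetic (the condition 2·qf + qc > 2n is the standard Fast Paxos safety requirement; applying it to MDCC's deployment is an assumption here, not a statement of MDCC's exact parameters):

```python
def classic_quorum(n):
    """Majority quorum over n replicas (tolerates f = (n - 1) // 2 crashes)."""
    return n // 2 + 1

def fast_quorum(n):
    """Smallest fast quorum satisfying the Fast Paxos safety condition:
    any two fast quorums and any classic quorum must share a replica,
    i.e. 2 * qf + qc > 2 * n."""
    qc = classic_quorum(n)
    qf = n
    # Shrink while the next-smaller size still satisfies the condition.
    while 2 * (qf - 1) + qc > 2 * n:
        qf -= 1
    return qf

# Across 5 data centers (as in the TPC-W evaluation), a single
# round-trip commit needs 4 of 5 acceptances; the classic path needs 3.
```

The larger fast quorum is the price of skipping the coordination round: because any two fast quorums overlap in a majority of a classic quorum, a recovering leader can always determine which value may have been fast-committed.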
Loopholes in Bell Inequality Tests of Local Realism
Bell inequalities are intended to show that local realist theories cannot
describe the world. A local realist theory is one where physical properties are
defined prior to and independent of measurement, and no physical influence can
propagate faster than the speed of light. Quantum-mechanical predictions for
certain experiments violate the Bell inequality, which no local realist theory
can do; this shows that local realist theories cannot reproduce those
quantum-mechanical predictions. However, because of unexpected circumstances or
"loopholes" in available experimental tests, local realist theories can still reproduce
the data from these experiments. This paper reviews such loopholes, the effect
they have on Bell inequality tests, and how to avoid them in experiments.
Avoiding all these simultaneously in one experiment, usually called a
"loophole-free" or "definitive" Bell test, remains an open task, but is very
important for technological tasks such as device-independent security of
quantum cryptography, and ultimately for our understanding of the world.
Comment: 42 pages, 2 figures
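The CHSH form of the Bell inequality bounds a particular combination of correlations by 2 for any local realist theory, while quantum mechanics predicts values up to 2√2. A small sketch using the singlet-state prediction E(a, b) = -cos(a - b) and the standard textbook angle choice (the angles are an illustrative assumption, not tied to any specific experiment in the review):

```python
import math

def singlet_correlation(a, b):
    """Quantum-mechanical correlation E(a, b) for a spin-1/2 singlet
    measured along directions at angles a and b."""
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    """CHSH combination |E(a,b) - E(a,b') + E(a',b) + E(a',b')|."""
    E = singlet_correlation
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# Standard angle choice that maximizes the quantum value:
S = chsh(0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
# Local realism requires S <= 2; the quantum prediction here is 2 * sqrt(2).
```

Loopholes matter precisely because they let a local realist model evade this bound in practice: for example, if detection efficiency is low enough, a local model can post-select events and mimic S > 2 on the recorded data.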