Responsibility and blame: a structural-model approach
Causality is typically treated as an all-or-nothing concept; either A is a cause
of B or it is not. We extend the definition of causality introduced by Halpern
and Pearl [2001] to take into account the degree of responsibility of A for B.
For example, if someone wins an election 11--0, then each person who votes for
him is less responsible for the victory than if he had won 6--5. We then define
a notion of degree of blame, which takes into account an agent's epistemic
state. Roughly speaking, the degree of blame of A for B is the expected degree
of responsibility of A for B, taken over the epistemic state of an agent.
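The voting example can be made concrete with a small sketch. This is an illustration, not the paper's formal structural-model definitions: it assumes the Chockler-Halpern degree of responsibility 1/(k+1), where k is the minimal number of other votes that must change before A's own vote becomes critical to the outcome.

```python
def responsibility_of_yes_voter(yes: int, no: int) -> float:
    """Degree of responsibility of one 'yes' voter for a majority win.

    Illustrative sketch: responsibility = 1/(k+1), where k is the minimal
    number of other 'yes' votes that must flip before this voter's own
    vote becomes pivotal.
    """
    assert yes > no, "the 'yes' side must have won"
    total = yes + no
    majority = total // 2 + 1
    # k = how many other 'yes' votes must flip before this voter is critical
    # (i.e., until the 'yes' count drops to the bare majority).
    k = yes - majority
    return 1.0 / (k + 1)

print(responsibility_of_yes_voter(11, 0))  # 11--0 win: 1/6 per voter
print(responsibility_of_yes_voter(6, 5))   # 6--5 win: 1.0 per voter
```

Matching the abstract: in an 11--0 win each voter bears responsibility 1/6 (five other votes must flip before theirs is pivotal), whereas in a 6--5 win every "yes" voter is already critical and bears responsibility 1.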
Multi-Shot Distributed Transaction Commit
The Atomic Commit Problem (ACP) is a single-shot agreement problem similar to consensus, meant to model the properties of transaction commit protocols in fault-prone distributed systems. We argue that ACP is too restrictive to capture the complexities of modern transactional data stores, where commit protocols are integrated with concurrency control, and their executions for different transactions are interdependent. As an alternative, we introduce the Transaction Certification Service (TCS), a new formal problem that captures safety guarantees of multi-shot transaction commit protocols with integrated concurrency control. TCS is parameterized by a certification function that can be instantiated to support common isolation levels, such as serializability and snapshot isolation. We then derive a provably correct crash-resilient protocol for implementing TCS through successive refinement. Our protocol achieves better time complexity than mainstream approaches that layer two-phase commit on top of Paxos-style replication.
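To give a feel for what a certification function might look like, here is a minimal sketch. The names and signature are assumptions for illustration, not the paper's formal TCS definition: it shows backward validation in the style of optimistic concurrency control, where a transaction commits only if no previously certified transaction wrote to a key it read (ignoring timestamps and concurrency windows that a real protocol would track).

```python
def certify(committed_writes: set[str], read_set: set[str]) -> bool:
    """Illustrative certification function (assumed, not the paper's):
    commit iff the read set does not conflict with prior writes."""
    return committed_writes.isdisjoint(read_set)

committed: set[str] = set()  # keys written by certified transactions

def try_commit(read_set: set[str], write_set: set[str]) -> str:
    """Certify one transaction and record its writes on commit."""
    if certify(committed, read_set):
        committed.update(write_set)
        return "commit"
    return "abort"

print(try_commit({"x"}, {"y"}))  # commit: nothing certified yet
print(try_commit({"y"}, {"z"}))  # abort: 'y' was written by txn 1
```

Different instantiations of the certification function would yield different isolation levels; e.g., checking write-write rather than read-write conflicts moves toward snapshot isolation.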
Space Complexity of Fault-Tolerant Register Emulations
Driven by the rising popularity of cloud storage, the costs associated with
implementing reliable storage services from a collection of fault-prone servers
have recently become an actively studied question. The well-known ABD result
shows that an f-tolerant register can be emulated using a collection of 2f + 1
fault-prone servers each storing a single read-modify-write object type, which
is known to be optimal. In this paper we generalize this bound: we investigate
the inherent space complexity of emulating reliable multi-writer registers as a
function of the type of the base objects exposed by the underlying servers, the
number of writers to the emulated register, the number of available servers,
and the failure threshold. We establish a sharp separation between registers,
and both max-registers (the base object types assumed by ABD) and CAS in terms
of the resources (i.e., the number of base objects of the respective types)
required to support the emulation; we show that no such separation exists
between max-registers and CAS. Our main technical contribution is lower and
upper bounds on the resources required in case the underlying base objects are
fault-prone read/write registers. We show that the number of required registers
is directly proportional to the number of writers and inversely proportional to
the number of servers.
Comment: Conference version appears in Proceedings of PODC '1