Safe Speculative Replication

Abstract

This article argues that commonly studied techniques for speculative replication—such as prefetching or prepushing content to a location before it is requested there—are inherently unsafe: they pose unacceptable risks of catastrophic overload and they may introduce bugs into systems by weakening consistency guarantees. To address these problems, the article introduces SSR, a new, general architecture for Safe Speculative Replication. SSR specifies the mechanisms that control how data flows through the system and leaves as a policy choice the question of what data to replicate to what nodes. SSR’s mechanisms (1) separate invalidations, demand updates, and speculative updates into three logical flows, (2) use per-resource schedulers that prioritize invalidations and demand updates over speculative updates, and (3) use a novel scheduler at the receiver to integrate these three flows in a way that maximizes performance and availability while meeting specified consistency constraints. We demonstrate the SSR architecture via two extensive case studies and show that SSR makes speculative replication practical for widespread adoption by (1) enabling self-tuning speculative replication, (2) cleanly integrating …
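The per-resource scheduling idea in the abstract—always serving invalidations and demand updates before speculative updates—can be sketched with a simple priority queue. This is an illustrative toy, not SSR's actual implementation; the class and message names below are assumptions for the example.

```python
import heapq
from itertools import count

# Priority classes for the three logical flows named in the abstract
# (assumed ordering: invalidations before demand updates before
# speculative updates; the dict keys are illustrative, not SSR's API).
PRIORITY = {"invalidation": 0, "demand": 1, "speculative": 2}

class PerResourceScheduler:
    """Toy per-resource scheduler: higher-priority traffic
    (invalidations, demand updates) is always dequeued before
    speculative updates, so prefetch traffic cannot starve
    correctness-critical messages."""

    def __init__(self):
        self._heap = []
        self._seq = count()  # FIFO tie-break within a priority class

    def enqueue(self, kind, message):
        heapq.heappush(self._heap, (PRIORITY[kind], next(self._seq), message))

    def dequeue(self):
        if not self._heap:
            return None
        _, _, message = heapq.heappop(self._heap)
        return message

# Usage: speculative traffic queued first is still served last.
sched = PerResourceScheduler()
sched.enqueue("speculative", "push object A")
sched.enqueue("demand", "fetch object B")
sched.enqueue("invalidation", "invalidate object C")
order = [sched.dequeue() for _ in range(3)]
```

With the entries above, `order` comes out as the invalidation, then the demand fetch, then the speculative push, mirroring the priority rule the architecture specifies.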
