LERC: Coordinated Cache Management for Data-Parallel Systems
Memory caches are being aggressively used in today's data-parallel frameworks
such as Spark, Tez and Storm. By caching input and intermediate data in memory,
compute tasks can be sped up by orders of magnitude. To maximize the
chance of in-memory data access, existing caching algorithms, whether recency-
or frequency-based, take the cache hit ratio as their optimization objective.
However, contrary to conventional belief, we show in this paper that simply
pursuing a higher cache hit ratio of individual data blocks does not
necessarily translate into faster task completion in data-parallel
environments. A data-parallel task typically depends on multiple input data
blocks. Unless all of these blocks are cached in memory, no speedup will
result. To capture this all-or-nothing property, we propose a more relevant
metric, called effective cache hit ratio. Specifically, a cache hit of a data
block is said to be effective if it can speed up a compute task. In order to
optimize the effective cache hit ratio, we propose the Least Effective
Reference Count (LERC) policy, which persists the dependent blocks of a compute
task in memory as a whole. We have implemented LERC as a memory manager in
Spark and evaluated its performance in an Amazon EC2 deployment.
Evaluation results demonstrate that LERC helps speed up data-parallel jobs by
up to 37% compared with the widely employed least-recently-used (LRU) policy
- …
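As a rough illustration of the all-or-nothing property described in the abstract, the sketch below contrasts the raw cache hit ratio with the effective one. This is not the paper's implementation; the function and variable names are invented for illustration. The key assumption, taken from the abstract, is that a block hit only counts as effective when every block the task depends on is resident in memory.

```python
def hit_ratios(task_deps, cache):
    """Compare the raw cache hit ratio with the effective one.

    task_deps: iterable of per-task dependency lists (block ids).
    cache:     set of block ids currently resident in memory.
    """
    raw_hits = effective_hits = total = 0
    for deps in task_deps:
        hits = sum(1 for b in deps if b in cache)
        total += len(deps)
        raw_hits += hits
        # All-or-nothing: a task is only sped up if *every* dependent
        # block is cached, so only then do its hits count as effective.
        if hits == len(deps):
            effective_hits += hits
    return raw_hits / total, effective_hits / total

# Task 0 needs blocks a and b; task 1 needs block c; only a and c are cached.
raw, eff = hit_ratios([["a", "b"], ["c"]], {"a", "c"})
print(raw, eff)  # 0.667 vs 0.333: the hit on a is wasted because b misses
```

In this toy scenario a recency- or frequency-based policy would credit both hits equally, even though caching block a alone buys task 0 nothing; LERC's whole-gang approach of keeping or evicting a task's dependent blocks together is aimed at exactly this gap.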