ReStore: Reusing Results of MapReduce Jobs
Analyzing large-scale data has emerged as an important activity for many
organizations in the past few years. This large-scale data analysis is
facilitated by the MapReduce programming and execution model and its
implementations, most notably Hadoop. Users of MapReduce often have analysis
tasks that are too complex to express as individual MapReduce jobs. Instead,
they use high-level query languages such as Pig, Hive, or Jaql to express their
complex tasks. The compilers of these languages translate queries into
workflows of MapReduce jobs. Each job in these workflows reads its input from
the distributed file system used by the MapReduce system and produces output
that is stored in this distributed file system and read as input by the next
job in the workflow. The current practice is to delete these intermediate
results from the distributed file system at the end of executing the workflow.
One way to improve the performance of workflows of MapReduce jobs is to keep
these intermediate results and reuse them for future workflows submitted to the
system. In this paper, we present ReStore, a system that manages the storage
and reuse of such intermediate results. ReStore can reuse the output of whole
MapReduce jobs that are part of a workflow, and it can also create additional
reuse opportunities by materializing and storing the output of query execution
operators that are executed within a MapReduce job. We have implemented ReStore
as an extension to the Pig dataflow system on top of Hadoop, and we
experimentally demonstrate significant speedups on queries from the PigMix
benchmark.

Comment: VLDB201
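The reuse mechanism the abstract describes can be pictured as a lookup keyed by what a job computes. A minimal sketch, assuming (hypothetically) that a job is identified by its input path plus a canonical string for its operator plan, and that matching jobs can read a previously stored output from the distributed file system instead of recomputing it:

```python
# Hypothetical simplification of ReStore-style reuse: intermediate job
# outputs are kept in the DFS and indexed by a signature of
# (input dataset, operator plan); a future workflow job with a matching
# signature reuses the stored output instead of re-executing.
import hashlib

class ReuseStore:
    def __init__(self):
        self.index = {}  # signature -> path of stored output in the DFS

    @staticmethod
    def signature(input_path, operator_plan):
        # operator_plan: a canonical string describing the job's operators
        key = f"{input_path}|{operator_plan}"
        return hashlib.sha256(key.encode()).hexdigest()

    def lookup(self, input_path, operator_plan):
        return self.index.get(self.signature(input_path, operator_plan))

    def store(self, input_path, operator_plan, output_path):
        self.index[self.signature(input_path, operator_plan)] = output_path

def run_job(store, input_path, operator_plan, execute):
    """Reuse a stored result if one matches; otherwise run and register."""
    cached = store.lookup(input_path, operator_plan)
    if cached is not None:
        return cached  # reuse: skip re-executing the MapReduce job
    output_path = execute(input_path)
    store.store(input_path, operator_plan, output_path)
    return output_path
```

The real system additionally materializes sub-job operator outputs to create more match points; this sketch only captures the whole-job case.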
Enhancing Computation Pushdown for Cloud OLAP Databases
Network is a major bottleneck in modern cloud databases that adopt a
storage-disaggregation architecture. Computation pushdown is a promising
solution to tackle this issue, which offloads some computation tasks to the
storage layer to reduce network traffic. Existing cloud OLAP systems statically
decide whether to push down computation during the query optimization phase and
do not consider the storage layer's computational capacity and load. Moreover,
there is no general principle for determining which operators are amenable to
pushdown: existing systems design and implement pushdown features empirically,
and each ends up supporting only a limited set of pushdown operators.
In this paper, we first design Adaptive pushdown as a new mechanism to avoid
throttling the storage-layer computation during pushdown, which pushes the
request back to the computation layer at runtime if the storage-layer
computational resource is insufficient. Moreover, we derive a general principle
to identify pushdown-amenable computational tasks, by summarizing common
patterns of pushdown capabilities in existing systems. We propose two new
pushdown operators, namely, selection bitmap and distributed data shuffle.
Evaluation results on TPC-H show that Adaptive pushdown can achieve up to 1.9x
speedup over both No pushdown and Eager pushdown baselines, and the new
pushdown operators can further accelerate query execution by up to 3.0x.

Comment: 13 pages, 15 figures
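The runtime pushback behavior described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the load threshold, the pushback flag, and the function names are all hypothetical, and a real system would operate on column chunks over the network rather than Python lists:

```python
# Sketch of the adaptive-pushdown idea: the storage layer accepts a
# pushed-down filter only when it has spare computational capacity;
# otherwise it returns the raw rows with a "pushed back" flag and the
# computation layer evaluates the predicate itself.

def storage_scan(rows, predicate, cpu_load, load_threshold=0.8):
    """Storage layer: filter locally when load permits, else push back."""
    if cpu_load < load_threshold:
        return [r for r in rows if predicate(r)], False  # pushdown executed
    return rows, True                                    # pushed back

def compute_query(rows, predicate, cpu_load):
    """Computation layer: fall back to local filtering on pushback."""
    result, pushed_back = storage_scan(rows, predicate, cpu_load)
    if pushed_back:
        result = [r for r in result if predicate(r)]     # filter here instead
    return result
```

Either path yields the same query result; what changes at runtime is which layer spends the CPU and how much data crosses the network.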