Scalable Data Cube Analysis over Big Data
Data cubes are widely used as a powerful tool to provide multidimensional
views in data warehousing and On-Line Analytical Processing (OLAP). However,
with increasing data sizes, it is becoming computationally expensive to perform
data cube analysis. The problem is exacerbated by the demand to support more
complex aggregate functions (e.g., CORRELATION and other statistical analyses)
as well as frequent view updates in data cubes. This calls for new
scalable and efficient data cube analysis systems. In this paper, we introduce
HaCube, an extension of MapReduce designed for efficient parallel data cube
analysis on large-scale data, which takes advantage of both MapReduce (in terms
of scalability) and parallel DBMSs (in terms of efficiency). We also provide a
general data cube materialization algorithm that exploits the features of
MapReduce-like systems for efficient data cube computation.
Furthermore, we demonstrate how HaCube supports view maintenance through either
incremental computation (e.g., for SUM or COUNT) or recomputation (e.g., for
MEDIAN or CORRELATION). We implement HaCube by extending Hadoop and
evaluate it based on the TPC-D benchmark over billions of tuples on a cluster
with over 320 cores. The experimental results demonstrate the efficiency,
scalability and practicality of HaCube for cube analysis over a large amount of
data in a distributed environment.
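
As a rough illustration of the two maintenance strategies the abstract
distinguishes (a minimal sketch, not HaCube's actual implementation; the
CubeCell class and its methods are invented for this example), the following
Python snippet contrasts a distributive aggregate such as SUM, which can fold a
delta batch into the stored value, with a holistic aggregate such as MEDIAN,
which must be recomputed from retained inputs:

from statistics import median

class CubeCell:
    """Toy cube cell illustrating two view-maintenance strategies.

    - SUM/COUNT are self-maintainable: a delta batch can be folded into
      the stored aggregate without touching old inputs (incremental).
    - MEDIAN/CORRELATION are not: the cell must keep (or re-read) the
      underlying tuples and recompute the aggregate (recomputation).
    """

    def __init__(self):
        self.sum = 0          # incrementally maintained
        self.count = 0        # incrementally maintained
        self.values = []      # raw values retained for recomputation

    def apply_delta(self, new_values):
        # Incremental maintenance: O(|delta|) work, no access to history.
        self.sum += sum(new_values)
        self.count += len(new_values)
        # Recomputation-based maintenance: history must be kept/merged.
        self.values.extend(new_values)

    def median(self):
        # Holistic aggregate: recomputed from the full value list.
        return median(self.values)

cell = CubeCell()
cell.apply_delta([3, 1, 4])
cell.apply_delta([1, 5])
print(cell.sum, cell.count, cell.median())   # 14 5 3

This asymmetry is the crux of the maintenance problem: SUM and COUNT can be
updated from deltas alone, while MEDIAN and CORRELATION require access to the
underlying tuples.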
Computing Marginals Using MapReduce
We consider the problem of computing the data-cube marginals of a fixed order k
(i.e., all marginals that aggregate over exactly k dimensions), using a single
round of MapReduce. The focus is on the relationship between the reducer size
(number of inputs allowed at a single reducer) and the replication rate (number
of reducers to which an input is sent). We show that the replication rate is
minimized when the reducers receive all the inputs necessary to compute one
marginal of higher order. That observation lets us view the problem as one of
covering sets of k dimensions with sets of a larger size m, a problem that
has been studied under the name "covering numbers." We offer a number of
constructions that, for different values of k and m, meet or come close to
yielding the minimum possible replication rate for a given reducer size.
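
To make the construction concrete, here is a hedged Python sketch of the
one-round scheme the abstract describes, for n = 4 dimensions, order k = 2, and
reducers sized for order-m = 3 marginals. The data and the hand-picked cover
are assumptions made for this illustration; the paper's constructions aim to
minimize the cover size. Each input is replicated to one reducer per m-set in
the cover, so the replication rate equals the cover size, here 3:

from collections import defaultdict
from itertools import combinations

# Toy instance: tuples over n = 4 dimensions plus a measure to SUM.
n, k, m = 4, 2, 3
data = [((0, 1, 0, 1), 5), ((0, 1, 1, 1), 7), ((1, 0, 0, 0), 2),
        ((0, 0, 0, 1), 4), ((1, 1, 1, 0), 3)]

# A cover of all k-subsets of the dimensions by m-subsets.  Its size is
# the replication rate: every input is sent to len(cover) reducers.
cover = [frozenset({0, 1, 2}), frozenset({0, 1, 3}), frozenset({1, 2, 3})]

# Assign each k-set to exactly one covering m-set so that no order-k
# marginal is produced twice.
owner = {ks: next(S for S in cover if ks <= S)
         for ks in map(frozenset, combinations(range(n), k))}

# Map phase: an order-m marginal is identified by the m-set S it
# aggregates over plus the values of the remaining dimensions, so key
# each tuple by (S, values outside S), once per m-set in the cover.
reducers = defaultdict(list)
for dims, val in data:
    for S in cover:
        fixed = tuple((i, dims[i]) for i in range(n) if i not in S)
        reducers[(S, fixed)].append((dims, val))

# Reduce phase: each reducer holds every input of one order-m marginal,
# which is enough to compute any order-k marginal whose k-set lies in S.
marginals = defaultdict(int)
for (S, fixed), tuples in reducers.items():
    for ks in owner:
        if owner[ks] != S:
            continue
        for dims, val in tuples:
            cell = tuple((i, dims[i]) for i in range(n) if i not in ks)
            marginals[(ks, cell)] += val

for key in sorted(marginals, key=str):
    print(sorted(key[0]), key[1], "->", marginals[key])

In this instance the six 2-subsets of four dimensions are covered by three
3-subsets, and no two 3-subsets can cover all six pairs, so the replication
rate of 3 matches the covering number C(4,3,2) = 3.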