Towards Efficient and Scalable Acceleration of Online Decision Tree Learning on FPGA
Decision trees are machine learning models commonly used in various
application scenarios. In the era of big data, traditional decision tree
induction algorithms are not suitable for learning large-scale datasets due to
their stringent data storage requirement. Online decision tree learning
algorithms have been devised to tackle this problem by concurrently training
with incoming samples and providing inference results. However, even the most
up-to-date online tree learning algorithms still suffer from either high memory
usage or high computational intensity with dependency and long latency, making
them challenging to implement in hardware. To overcome these difficulties, we
introduce a new quantile-based algorithm to improve the induction of the
Hoeffding tree, one of the state-of-the-art online learning models. The
proposed algorithm is lightweight in terms of both memory and computational
demand, while still maintaining high generalization ability. A series of
optimization techniques dedicated to the proposed algorithm have been
investigated from the hardware perspective, including coarse-grained and
fine-grained parallelism, dynamic and memory-based resource sharing, and
pipelining with data forwarding. We further present a high-performance, hardware-efficient
and scalable online decision tree learning system on a field-programmable gate
array (FPGA) with system-level optimization techniques. Experimental results
show that our proposed algorithm outperforms the state-of-the-art Hoeffding
tree learning method, leading to 0.05% to 12.3% improvement in inference
accuracy. Real implementation of the complete learning system on the FPGA
demonstrates a 384x to 1581x speedup in execution time over the
state-of-the-art design.
Comment: appears as a conference paper in FCCM 201
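For context, a standard Hoeffding tree decides when to split a leaf by comparing the information-gain advantage of the best attribute over the runner-up against the Hoeffding bound. The sketch below illustrates that classic split test only; it is not the paper's quantile-based variant, and all names and thresholds here are illustrative assumptions.

```python
import math

def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
    """Hoeffding bound: with probability 1 - delta, the true mean of a
    random variable with range `value_range` lies within this distance of
    the sample mean after n observations."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_split(best_gain: float, second_gain: float,
                 value_range: float, delta: float, n: int) -> bool:
    """Split a Hoeffding-tree leaf when the observed gain advantage of the
    best attribute over the runner-up exceeds the Hoeffding bound."""
    return (best_gain - second_gain) > hoeffding_bound(value_range, delta, n)

# After 1000 samples: gains 0.30 vs 0.20, gain range 1.0, delta = 1e-7.
# The bound is about 0.09, so the 0.10 advantage justifies a split.
print(should_split(0.30, 0.20, 1.0, 1e-7, 1000))  # → True
```

As more samples arrive, the bound shrinks as 1/sqrt(n), so ties between closely matched attributes are eventually resolved without storing the raw data, which is what makes the model attractive for streaming and hardware settings.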
DAMOV: A New Methodology and Benchmark Suite for Evaluating Data Movement Bottlenecks
Data movement between the CPU and main memory is a first-order obstacle
against improving performance, scalability, and energy efficiency in modern
systems. Computer systems employ a range of techniques to reduce overheads tied
to data movement, spanning from traditional mechanisms (e.g., deep multi-level
cache hierarchies, aggressive hardware prefetchers) to emerging techniques such
as Near-Data Processing (NDP), where some computation is moved close to memory.
Our goal is to methodically identify potential sources of data movement over a
broad set of applications and to comprehensively compare traditional
compute-centric data movement mitigation techniques to more memory-centric
techniques, thereby developing a rigorous understanding of the best techniques
to mitigate each source of data movement.
With this goal in mind, we perform the first large-scale characterization of
a wide variety of applications, across a wide range of application domains, to
identify fundamental program properties that lead to data movement to/from main
memory. We develop the first systematic methodology to classify applications
based on the sources contributing to data movement bottlenecks. From our
large-scale characterization of 77K functions across 345 applications, we
select 144 functions to form the first open-source benchmark suite (DAMOV) for
main memory data movement studies. We select a diverse range of functions that
(1) represent different types of data movement bottlenecks, and (2) come from a
wide range of application domains. Using NDP as a case study, we identify new
insights about the different data movement bottlenecks and use these insights
to determine the most suitable data movement mitigation mechanism for a
particular application. We open-source DAMOV and the complete source code for
our new characterization methodology at https://github.com/CMU-SAFARI/DAMOV.
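To make the classification idea concrete, here is a toy sketch in the spirit of the methodology described above: bucketing a function by simple counters such as arithmetic intensity (operations per byte of memory traffic) and last-level-cache misses per kilo-instruction (MPKI). The metric names, thresholds, and labels are illustrative assumptions, not the paper's actual decision procedure.

```python
def classify_bottleneck(arithmetic_intensity: float, llc_mpki: float) -> str:
    """Toy bottleneck classifier: low arithmetic intensity combined with
    high last-level-cache MPKI suggests a memory-bound function that may
    benefit from Near-Data Processing; the opposite profile suggests a
    compute-bound function. Thresholds are illustrative only."""
    if arithmetic_intensity < 1.0 and llc_mpki > 10.0:
        return "memory-bound (NDP candidate)"
    if arithmetic_intensity >= 1.0 and llc_mpki <= 10.0:
        return "compute-bound"
    return "mixed"

# A streaming-style kernel: 0.2 ops/byte, 35 LLC misses per kilo-instruction.
print(classify_bottleneck(0.2, 35.0))  # → memory-bound (NDP candidate)
```

In practice such counters would come from hardware performance monitoring units or a simulator; the point of the sketch is only that a small number of memory-behavior metrics can place a function into a bottleneck class that predicts which mitigation (deeper caching, prefetching, or NDP) is likely to help.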