A performance comparison of Dask and Apache Spark for data-intensive neuroimaging pipelines
In the past few years, neuroimaging has entered the Big Data era due to the
joint increase in image resolution, data sharing, and study sizes. However, no
particular Big Data engines have emerged in this field, and several
alternatives remain available. We compare two popular Big Data engines with
Python APIs, Apache Spark and Dask, for their runtime performance in processing
neuroimaging pipelines. Our evaluation uses two synthetic pipelines processing
the 81GB BigBrain image, and a real pipeline processing anatomical data from
more than 1,000 subjects. We benchmark these pipelines using various
combinations of task durations, data sizes, and numbers of workers, deployed on
an 8-node (8 cores ea.) compute cluster in Compute Canada's Arbutus cloud. We
evaluate PySpark's RDD API against Dask's Bag, Delayed, and Futures APIs. Results
show that despite slight differences between Spark and Dask, both engines
perform comparably. However, Dask pipelines risk being limited by Python's GIL
depending on task type and cluster configuration. In all cases, the major
limiting factor was data transfer. While either engine is suitable for
neuroimaging pipelines, more effort needs to be placed in reducing data
transfer time.

Comment: 10 pages, 15 figures, 1 table. To appear in the proceedings of the 14th WORKS Workshop on Topics in Workflows in Support of Large-Scale Science, 17 November 2019, Denver, CO, US
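The GIL limitation the abstract mentions can be illustrated without Dask itself. The sketch below, a hypothetical stand-in using only Python's standard library, mirrors the shape of Dask's Futures interface with `concurrent.futures`: a pure-Python, CPU-bound task holds the GIL while it runs, so thread-based workers (the default for some Dask deployments) cannot execute such tasks in parallel, whereas process-based workers can. The task function and sizes here are illustrative, not from the paper.

```python
from concurrent.futures import ThreadPoolExecutor

def cpu_bound(n):
    # Pure-Python arithmetic loop: it holds the GIL for its entire
    # duration, so concurrent threads running it are serialized.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_with_threads(inputs, workers=4):
    # Thread-based pool, analogous to a Dask worker with threads:
    # correct results, but no CPU parallelism for GIL-bound tasks.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(cpu_bound, inputs))

if __name__ == "__main__":
    results = run_with_threads([100_000] * 4)
    print(results)
```

Swapping in `ProcessPoolExecutor` (or, in Dask, configuring process-based workers) sidesteps the GIL at the cost of inter-process data transfer, which is consistent with the abstract's finding that data transfer, not compute, was the dominant cost.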