2 research outputs found

    Low Overhead Scalable Sorting for DNA Analysis using Apache Arrow Flight

    No full text
    Variant calling is a classic example of DNA data analysis, in which variants are identified from DNA sequence data. In recent years, a number of frameworks have been developed for implementing the variant calling pipeline, and deeper scientific insights have been gained into genomics as a result. From a computational point of view, however, these frameworks suffer from either excessive serialization and deserialization or a lack of scalability. This work proposes using Apache Arrow as the unified in-memory data format and employing its associated communication protocol, Flight, to enable horizontal scalability for the sorting stage of a variant calling pipeline. Compared to other protocols, Arrow Flight features high throughput, low overhead, and compatibility with the Arrow format. For example, 43% of the data shuffling time in a standalone distributed sorting application can be saved by replacing the original protocol with Arrow Flight. In addition, several techniques, including parallelization, hashing the sort keys to integers, and leveraging zero-copy conversion, are proven useful for performance optimization under certain conditions. Together they reduce the execution time of the distributed sorting application from 71 seconds to only 12 seconds, a 5.9x speedup. Linear scalability is also observed when the input data consists of integer keys. After integrating this sorting application into the variant calling pipeline, its sorting stage outperforms the counterpart step in the state-of-the-art SparkGA2: a speedup of 21x is achieved with 4 nodes and 10x with 12 nodes.
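    The key-hashing optimization mentioned in the abstract can be illustrated with a minimal sketch: composite genomic sort keys (chromosome, position) are packed into single integers so that plain integer comparison yields the desired ordering. The function and field names here are illustrative assumptions, not taken from the paper's code.

    ```python
    def encode_key(chrom_index: int, position: int) -> int:
        # Pack the chromosome index into the high bits and the position into
        # the low 32 bits, so a single integer comparison orders records by
        # (chromosome, position).
        return (chrom_index << 32) | position

    # Toy records as (chromosome index, position) pairs.
    reads = [(2, 150), (1, 900), (1, 10), (3, 5)]
    reads.sort(key=lambda r: encode_key(*r))
    # reads is now ordered by chromosome, then position
    ```

    Sorting on one precomputed integer avoids repeated tuple or string comparisons, which is why this pays off for large read sets.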

    Communication-Efficient Cluster Scalable Genomics Data Processing Using Apache Arrow Flight

    No full text
    Current cluster-scale genomics data processing solutions rely on big data frameworks like Apache Spark, Hadoop, and HDFS for data scheduling, processing, and storage. These frameworks come with additional computation and memory overheads by default, and it has been observed that scaling genomics dataset processing beyond 32 nodes is not efficient on them. To overcome the inefficiencies of big data frameworks for processing genomics data on clusters, we introduce a low-overhead and highly scalable solution on a SLURM-based HPC batch system. This solution uses Apache Arrow as an in-memory columnar data format to store genomics data efficiently, and Arrow Flight as a network protocol to move and schedule this data across the HPC nodes with low communication overhead. As a use case, we apply NGS short-read DNA sequencing data to pre-processing and variant calling applications. This solution outperforms existing Apache Spark based big data solutions in terms of both computation time (2x) and communication overhead (20-60% lower, depending on cluster size). Our solution has performance similar to MPI-based HPC solutions, with the added advantages of easy programmability and transparent big data scalability. The whole solution is Python and shell script based, which makes it flexible to update and to integrate alternative variant callers. Our solution is publicly available on GitHub at https://github.com/abs-tudelft/time-to-fly-high/tree/main/genomics
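    The data-movement pattern the abstract describes, shipping Arrow tables between nodes over Arrow Flight, can be sketched with `pyarrow.flight`. This is a toy single-process roundtrip under assumed names (`InMemoryFlightServer`, `roundtrip`), not the paper's actual SLURM scheduler:

    ```python
    import threading

    import pyarrow as pa
    import pyarrow.flight as flight


    class InMemoryFlightServer(flight.FlightServerBase):
        """Toy Flight server that stores uploaded tables in memory."""

        def __init__(self, location="grpc://0.0.0.0:0"):
            super().__init__(location)
            self._tables = {}

        def do_put(self, context, descriptor, reader, writer):
            # Receive an Arrow table from a client and keep it under the
            # name given in the descriptor.
            self._tables[descriptor.command] = reader.read_all()

        def do_get(self, context, ticket):
            # Stream the named table back in Arrow's IPC format; avoiding
            # any extra serialization step is where Flight's low overhead
            # comes from.
            return flight.RecordBatchStream(self._tables[ticket.ticket])


    def roundtrip(table: pa.Table) -> pa.Table:
        # Start the server on a free port in a background thread.
        server = InMemoryFlightServer()
        threading.Thread(target=server.serve, daemon=True).start()
        client = flight.connect(f"grpc://localhost:{server.port}")
        writer, _ = client.do_put(
            flight.FlightDescriptor.for_command(b"reads"), table.schema)
        writer.write_table(table)
        writer.close()
        result = client.do_get(flight.Ticket(b"reads")).read_all()
        server.shutdown()
        return result
    ```

    In a real deployment the server and client would run on different HPC nodes, with record batches streamed rather than materialized whole.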