C-Coll: Introducing Error-bounded Lossy Compression into MPI Collectives
With the ever-increasing computing power of supercomputers and the growing
scale of scientific applications, the efficiency of MPI collective
communications turns out to be a critical bottleneck in large-scale distributed
and parallel processing. Large message size in MPI collectives is a
particularly big concern because it may significantly delay the overall
parallel performance. To address this issue, prior research simply applies
off-the-shelf fixed-rate lossy compressors within MPI collectives, leading to
suboptimal performance, limited generalizability, and unbounded errors. In this
paper, we propose a novel solution, called C-Coll, which leverages
error-bounded lossy compression to significantly reduce the message size,
resulting in a substantial reduction in communication cost. The key
contributions are three-fold. (1) We develop two general, optimized
lossy-compression-based frameworks for both types of MPI collectives
(collective data movement as well as collective computation), based on their
particular characteristics. Our framework not only reduces communication cost
but also preserves data accuracy. (2) We customize an optimized version based
on SZx, an ultra-fast error-bounded lossy compressor, which can meet the
specific needs of collective communication. (3) We integrate C-Coll into
multiple collectives, such as MPI_Allreduce, MPI_Scatter, and MPI_Bcast, and
perform a comprehensive evaluation based on real-world scientific datasets.
Experiments show that our solution outperforms the original MPI collectives as
well as multiple baselines and related efforts by 3.5-9.7X.
Comment: 12 pages, 15 figures, 5 tables, submitted to SC '2
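The core idea, replacing full-precision messages with error-bounded compressed payloads, can be illustrated with a toy linear-scaling quantizer. This is a conceptual sketch only, not SZx or the actual C-Coll implementation; the function names are hypothetical, and a real compressor would additionally entropy-code the integer codes to shrink the message.

```python
import numpy as np

def compress(data, err_bound):
    """Linear-scaling quantization: map each value to an integer code
    so that the reconstruction error is at most err_bound (absolute)."""
    return np.round(data / (2.0 * err_bound)).astype(np.int64)

def decompress(codes, err_bound):
    """Invert the quantization; each value lands within err_bound of the
    original."""
    return codes * (2.0 * err_bound)

# Simulated message payload for a collective (e.g. data to broadcast).
rng = np.random.default_rng(0)
data = rng.normal(size=1000)
eb = 1e-3  # user-specified absolute error bound

rec = decompress(compress(data, eb), eb)
assert np.max(np.abs(rec - data)) <= eb  # error stays bounded
```

In a collective such as MPI_Bcast, the sender would compress before transmission and each receiver would decompress afterward, trading a small bounded error for a much smaller message.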
Unveiling the frontiers of deep learning: innovations shaping diverse domains
Deep learning (DL) enables the development of computer models that are
capable of learning, visualizing, optimizing, refining, and predicting data. In
recent years, DL has been applied in a range of fields, including audio-visual
data processing, agriculture, transportation prediction, natural language,
biomedicine, disaster management, bioinformatics, drug design, genomics, face
recognition, and ecology. To explore the current state of deep learning, it is
necessary to investigate the latest developments and applications of deep
learning in these disciplines. However, the literature is lacking in exploring
the applications of deep learning in all potential sectors. This paper thus
extensively investigates the potential applications of deep learning across all
major fields of study as well as the associated benefits and challenges. As
evidenced in the literature, DL's accuracy in prediction and analysis makes it
a powerful computational tool, and its ability to learn and optimize
representations on its own makes it effective at processing data with little
prior feature engineering. At the same time, deep learning necessitates
massive amounts of data for effective analysis and processing. To handle the
challenge of compiling huge amounts of medical,
scientific, healthcare, and environmental data for use in deep learning, gated
architectures like LSTMs and GRUs can be utilized. For multimodal learning,
the network requires neurons shared across all tasks alongside neurons
specialized for particular tasks.
Comment: 64 pages, 3 figures, 3 tables
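The gating mechanism that lets architectures like GRUs retain information over long sequences can be sketched in a few lines. This is a minimal NumPy illustration of a standard GRU cell (following the common convention h' = (1-z)*h + z*h_tilde), not code from the surveyed paper; the shapes and parameter layout are assumptions for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU step: the gates decide how much of the past state to keep,
    which is what lets the network handle long sequences of data."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(x @ Wz + h @ Uz + bz)                # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)                # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh + bh)    # candidate state
    return (1 - z) * h + z * h_tilde                 # blend old and new

rng = np.random.default_rng(0)
d_in, d_h = 4, 8
# Parameters in the order unpacked above: (Wz, Uz, bz) * 3 gate groups.
params = [rng.normal(size=s)
          for s in [(d_in, d_h), (d_h, d_h), (d_h,)] * 3]

h = np.zeros(d_h)
for x in rng.normal(size=(10, d_in)):  # run over a length-10 sequence
    h = gru_step(x, h, params)
```

Since the candidate state is tanh-bounded and the update gate only interpolates, the hidden state stays in [-1, 1] regardless of sequence length, which is part of why gated cells train more stably than plain RNNs.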