Optimal, scalable forward models for computing gravity anomalies
We describe three approaches for computing a gravity signal from a density
anomaly. The first approach consists of the classical "summation" technique,
whilst the remaining two methods solve the Poisson problem for the
gravitational potential using either a Finite Element (FE) discretization
employing a multilevel preconditioner, or a Green's function evaluated with the
Fast Multipole Method (FMM). The methods utilizing the PDE formulation
described here differ from previously published approaches used in gravity
modeling in that they are optimal, implying that both the memory and
computational time required scale linearly with respect to the number of
unknowns in the potential field. Additionally, all of the implementations
presented here are developed such that the computations can be performed in a
massively parallel, distributed memory computing environment. Through numerical
experiments, we compare the methods on the basis of their discretization error,
CPU time and parallel scalability. We demonstrate the parallel scalability of
all these techniques by running forward models with up to voxels on
thousands of cores.
Comment: 38 pages, 13 figures; accepted by Geophysical Journal International
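To make the contrast concrete, a minimal sketch of the classical summation technique is given below. It assumes a voxelised density-anomaly model and standard point-mass Newtonian attraction; the function name, array layout, and lack of any parallelism are simplifications of my own, not the paper's implementation, and the explicit loop over observation points shows the O(N_obs x N_cells) cost that the linearly scaling FE/multigrid and FMM formulations avoid.

```python
import numpy as np

# Gravitational constant [m^3 kg^-1 s^-2]
G = 6.674e-11

def gravity_summation(obs_xyz, cell_xyz, d_rho, cell_vol):
    """Classical direct summation of point-mass contributions.

    obs_xyz  : (N_obs, 3) observation coordinates [m]
    cell_xyz : (N_cells, 3) voxel centre coordinates [m]
    d_rho    : (N_cells,) density anomaly per voxel [kg/m^3]
    cell_vol : (N_cells,) voxel volumes [m^3]
    Returns the vertical gravity anomaly g_z at each observation point.
    """
    gz = np.empty(len(obs_xyz))
    for k, r in enumerate(obs_xyz):
        dx = cell_xyz - r                        # vectors from obs point to voxels
        dist3 = np.linalg.norm(dx, axis=1) ** 3  # |r_i - r|^3
        gz[k] = G * np.sum(d_rho * cell_vol * dx[:, 2] / dist3)
    return gz
```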
Checkpointing algorithms and fault prediction
This paper deals with the impact of fault prediction techniques on
checkpointing strategies. We extend the classical first-order analysis of Young
and Daly in the presence of a fault prediction system, characterized by its
recall and its precision. In this framework, we provide an optimal algorithm to
decide when to take predictions into account, and we derive the optimal value
of the checkpointing period. These results allow us to assess analytically the key
parameters that impact the performance of fault predictors at very large scale.
Comment: Supported in part by ANR Rescue. Published in the Journal of Parallel and Distributed Computing. arXiv admin note: text overlap with arXiv:1207.693
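For context, the classical Young/Daly first-order estimate that the paper extends can be computed as below. This is only the well-known baseline formula; the paper's refinement in the presence of a predictor characterized by its recall and precision is not reproduced here, and the function name and example values are illustrative assumptions.

```python
from math import sqrt

def young_daly_period(checkpoint_cost: float, mtbf: float) -> float:
    """Classical Young/Daly estimate T ~ sqrt(2 * C * mu), where C is the
    time to write one checkpoint and mu is the platform MTBF (both in
    seconds). The paper refines this analysis when a fault predictor with
    a given recall and precision is available; that refinement is not
    shown here.
    """
    return sqrt(2.0 * checkpoint_cost * mtbf)

# Illustrative values: a 60 s checkpoint cost and a one-day platform MTBF
# give a period of roughly 3200 s, i.e. a checkpoint about every 54 minutes.
if __name__ == "__main__":
    print(young_daly_period(60.0, 24 * 3600.0))
```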
Parallel and Distributed Machine Learning Algorithms for Scalable Big Data Analytics
This editorial is for the Special Issue of the journal Future Generation Computing Systems, consisting of selected papers from the 6th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics (ParLearning 2017). In this editorial, we give a high-level overview of the four papers contained in the special issue, along with references to some of the related work.
Fast matrix multiplication techniques based on the Adleman-Lipton model
On distributed memory electronic computers, the implementation and
association of fast parallel matrix multiplication algorithms has yielded
astounding results and insights. In this discourse, we use the tools of
molecular biology to demonstrate the theoretical encoding of Strassen's fast
matrix multiplication algorithm with DNA based on an n-moduli set in the
residue number system, thereby demonstrating the viability of computational
mathematics with DNA. As a result, a general scalable implementation of this
model in the DNA computing paradigm is presented and can be generalized to the
application of all fast matrix multiplication algorithms on a DNA
computer. We also discuss the practical capabilities and issues of this
scalable implementation. Fast methods of matrix computations with DNA are
important because they also allow for the efficient implementation of other
algorithms (e.g. inversion, computing determinants, and graph algorithms) with DNA.
Comment: To appear in the International Journal of Computer Engineering Research. Minor changes made to make the preprint as similar as possible to the published version.
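For readers unfamiliar with the algorithm being encoded, here is a conventional (electronic, NumPy-based) sketch of Strassen's seven-multiplication recursion. It illustrates only the arithmetic structure that the DNA encoding targets and says nothing about the Adleman-Lipton or residue-number-system construction itself; the cutoff parameter and fallback to ordinary multiplication are assumptions of my own.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices whose size is a power of two using
    Strassen's seven-product recursion."""
    n = A.shape[0]
    if n <= cutoff:                       # fall back to ordinary multiplication
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven recursive products that replace eight block multiplications.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # Recombine the products into the four blocks of the result.
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```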