Large Scale Parallel Computations in R through Elemental
Even though in recent years the scale of statistical analysis problems has
increased tremendously, many statistical software tools are still limited to
single-node computations. However, statistical analyses are largely based on
dense linear algebra operations, which have been deeply studied, optimized and
parallelized in the high-performance-computing community. To make
high-performance distributed computations available for statistical analysis,
and thus enable large scale statistical computations, we introduce RElem, an
open source package that integrates the distributed dense linear algebra
library Elemental into R. On the one hand, RElem provides direct wrappers of
Elemental's routines; on the other, it overloads various operators and
functions to provide an entirely native R experience for distributed
computations. We showcase how simple it is to port existing R programs to RElem
and demonstrate that RElem indeed scales beyond the single-node limitation of
R, delivering the full performance of Elemental without overhead.
Comment: 16 pages, 5 figures
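RElem itself is an R package, but the core idea the abstract describes — overloading operators so distributed objects compose like native ones — can be sketched in a few lines. The class and method names below are illustrative stand-ins, not RElem's API; a real backend would dispatch `+` and `%*%` to Elemental's distributed routines rather than to a local array library.

```python
import numpy as np

class DistMatrix:
    """Toy stand-in for a distributed matrix handle (illustrative only)."""
    def __init__(self, data):
        # A real implementation would hold a handle to blocks spread across
        # processes; here we simply wrap a local NumPy array.
        self.data = np.asarray(data, dtype=float)

    def __add__(self, other):
        # Operator overloading: the user writes `a + b` and the backend
        # decides how to execute it (locally here, distributed in RElem).
        return DistMatrix(self.data + other.data)

    def __matmul__(self, other):
        return DistMatrix(self.data @ other.data)

a = DistMatrix([[1.0, 2.0], [3.0, 4.0]])
b = DistMatrix([[1.0, 0.0], [0.0, 1.0]])
c = a @ b + a          # reads like ordinary matrix code
print(c.data.tolist())  # → [[2.0, 4.0], [6.0, 8.0]]
```

The point of the pattern is that existing programs need not change: only the construction of the matrix objects differs, while the arithmetic expressions stay as written.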
Deductive Verification of Parallel Programs Using Why3
The Message Passing Interface specification (MPI) defines a portable
message-passing API used to program parallel computers. MPI programs present a
number of correctness challenges: sent and expected values in communications
may not match, resulting in incorrect computations and possibly crashes; and
programs may deadlock, wasting resources. Existing tools are not completely
satisfactory: model checking does not scale with the number of processes, and
testing techniques waste resources and are highly dependent on the quality of
the test set.
As an alternative, we present a prototype for a type-based approach to
programming and verifying MPI-like programs against protocols. Protocols are
written in a dependent type language designed so as to capture the most common
primitives in MPI, incorporating, in addition, a form of primitive recursion
and collective choice. Protocols are then translated into Why3, a deductive
software verification tool. Source code, in turn, is written in WhyML, the
language of the Why3 platform, and checked against the protocol. Programs that
pass verification are guaranteed to be communication safe and free from
deadlocks.
We verified several parallel programs from textbooks using our approach, and
report on the outcome.
Comment: In Proceedings ICE 2015, arXiv:1508.0459
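The paper's pipeline — dependent-type protocols translated to Why3 and checked against WhyML code — cannot be reproduced in a few lines, but the underlying idea of checking a program's communications against a declared protocol can be sketched. The encoding and function below are hypothetical simplifications, not the Why3/WhyML formalism; note also that the real approach verifies all executions statically, whereas this sketch checks a single runtime trace.

```python
# A protocol as a sequence of expected (operation, payload type) steps,
# checked against the trace of communications a program actually produces.
protocol = [("send", int), ("recv", int), ("send", float)]

def conforms(trace, protocol):
    """Return True iff the trace matches the protocol step by step."""
    if len(trace) != len(protocol):
        return False  # missing or extra communications (e.g. a deadlock)
    return all(op == p_op and isinstance(val, p_ty)
               for (op, val), (p_op, p_ty) in zip(trace, protocol))

good = [("send", 42), ("recv", 7), ("send", 3.14)]
bad  = [("send", 42), ("recv", "seven"), ("send", 3.14)]  # type mismatch
print(conforms(good, protocol))  # → True
print(conforms(bad, protocol))   # → False
```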
Implementation of novel methods of global and nonsmooth optimization: GANSO programming library
We discuss the implementation of a number of modern methods of global and nonsmooth continuous optimization, based on the ideas of Rubinov, in a programming library GANSO. GANSO implements the derivative-free bundle method, the extended cutting angle method, dynamical system-based optimization and their various combinations and heuristics. We outline the main ideas behind each method, and report on the interfacing with Matlab and Maple packages.
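GANSO's methods (derivative-free bundle, extended cutting angle, dynamical system-based optimization) are well beyond a short sketch, but the problem setting — minimizing a nonsmooth function without gradients — can be illustrated with a basic compass search. This is a generic direct-search method chosen for brevity, not one of GANSO's algorithms.

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Minimize f without derivatives: probe +/- step along each
    coordinate, move on improvement, otherwise halve the step."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = x[:]
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5  # refine the mesh when no probe helps
            if step < tol:
                break
    return x, fx

# A nonsmooth test function: f(x, y) = |x - 1| + |y + 2|, minimum 0 at (1, -2).
# Gradient-based methods struggle at the kink; direct search does not need one.
f = lambda v: abs(v[0] - 1) + abs(v[1] + 2)
x, fx = compass_search(f, [0.0, 0.0])
print([round(c, 3) for c in x], round(fx, 6))  # → [1.0, -2.0] 0.0
```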
Learning from the Success of MPI
The Message Passing Interface (MPI) has been extremely successful as a
portable way to program high-performance parallel computers. This success has
occurred in spite of the view of many that message passing is difficult and
that other approaches, including automatic parallelization and directive-based
parallelism, are easier to use. This paper argues that MPI has succeeded
because it addresses all of the important issues in providing a parallel
programming model.
Comment: 12 pages, 1 figure