Deterministic Consistency: A Programming Model for Shared Memory Parallelism
The difficulty of developing reliable parallel software is generating
interest in deterministic environments, where a given program and input can
yield only one possible result. Languages or type systems can enforce
determinism in new code, and runtime systems can impose synthetic schedules on
legacy parallel code. To parallelize existing serial code, however, we would
like a programming model that is naturally deterministic without language
restrictions or artificial scheduling. We propose "deterministic consistency",
a parallel programming model as easy to understand as the "parallel assignment"
construct in sequential languages such as Perl and JavaScript, where concurrent
threads always read their inputs before writing shared outputs. DC supports
common data- and task-parallel synchronization abstractions such as fork/join
and barriers, as well as non-hierarchical structures such as producer/consumer
pipelines and futures. A preliminary prototype suggests that software-only
implementations of DC can run applications written for popular parallel
environments such as OpenMP with low (<10%) overhead for some applications.
Comment: 7 pages, 3 figures
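The read-before-write discipline behind deterministic consistency can be mimicked with an explicit snapshot of shared state, much like a sequential parallel assignment. A minimal sketch, assuming a hypothetical `dc_step` helper (not the paper's implementation, which works on unmodified OpenMP code):

```python
# Hypothetical illustration of the DC idea: every thread reads a frozen
# snapshot of shared state and only afterwards are writes published, so
# one program + one input can yield only one result.
from concurrent.futures import ThreadPoolExecutor

def dc_step(state, updates):
    """Apply all update functions against a frozen snapshot of `state`.

    Each update reads only the snapshot, never another thread's fresh
    write, so the outcome is independent of thread scheduling.
    """
    snapshot = dict(state)  # all reads happen against this frozen copy
    with ThreadPoolExecutor() as pool:
        # Each thread computes its new value from the snapshot only.
        futures = {k: pool.submit(fn, snapshot) for k, fn in updates.items()}
    # Writes are published only after every read has completed.
    return {**state, **{k: f.result() for k, f in futures.items()}}

# Swap two variables "in parallel", like Perl's ($a, $b) = ($b, $a):
new = dc_step({"a": 1, "b": 2},
              {"a": lambda s: s["b"], "b": lambda s: s["a"]})
# new is {"a": 2, "b": 1} regardless of thread interleaving
```

The snapshot plays the role of the deterministic schedule: because no update observes another update's output, any interleaving of the threads produces the same final state.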
Multiprocess parallel antithetic coupling for backward and forward Markov Chain Monte Carlo
Antithetic coupling is a general stratification strategy for reducing Monte
Carlo variance without increasing the simulation size. The use of the
antithetic principle in the Monte Carlo literature typically employs two strata
via antithetic quantile coupling. We demonstrate here that further
stratification, obtained by using k>2 (e.g., k=3-10) antithetically coupled
variates, can offer substantial additional gain in Monte Carlo efficiency, in
terms of both variance and bias. The reason for reduced bias is that
antithetically coupled chains can provide a more dispersed search of the state
space than multiple independent chains. The emerging area of perfect simulation
provides a perfect setting for implementing the k-process parallel antithetic
coupling for MCMC because, without antithetic coupling, this class of methods
delivers genuine independent draws. Furthermore, antithetic backward coupling
provides a very convenient theoretical tool for investigating antithetic
forward coupling. However, the generation of k>2 antithetic variates that are
negatively associated, that is, they preserve negative correlation under
monotone transformations, and extremely antithetic, that is, they are as
negatively correlated as possible, is more complicated compared to the case
with k=2. In this paper, we establish a theoretical framework for investigating
such issues. Among the generating methods that we compare, Latin hypercube
sampling and its iterative extension appear to be general-purpose choices,
making another direct link between Monte Carlo and quasi-Monte Carlo.
Comment: Published at http://dx.doi.org/10.1214/009053604000001075 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
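The one-dimensional Latin hypercube construction the abstract recommends can be sketched directly; the function name and interface below are assumptions for illustration, not the paper's code:

```python
# Hedged sketch: one-dimensional Latin hypercube sampling as a generator
# of k antithetically coupled uniforms. For k=2 this recovers the classic
# antithetic-quantile pair: the two draws land in complementary halves
# of [0, 1).
import random

def lhs_antithetic(k, rng=random):
    """Return k uniforms, exactly one per stratum [j/k, (j+1)/k).

    Forcing one draw into each stratum makes the variates negatively
    associated: if one draw lands in a high stratum, the others are
    confined to lower ones, which is the source of the variance
    reduction the abstract describes.
    """
    strata = list(range(k))
    rng.shuffle(strata)                        # random stratum labels
    return [(j + rng.random()) / k for j in strata]

# Pushing the k coupled uniforms through the same inverse CDF yields
# k antithetically coupled innovations for the parallel chains.
draws = lhs_antithetic(5)
```

Negative association (preservation of negative correlation under monotone maps) is exactly why the coupling survives the inverse-CDF transform into the chains' state space.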
Fault tolerant architectures for integrated aircraft electronics systems
Work on possible architectures for future flight control computer systems is described. Topics covered include Ada for fault-tolerant systems, the NETS (Network Error-Tolerant System) architecture, and voting in asynchronous systems.
08371 Abstracts Collection -- Fault-Tolerant Distributed Algorithms on VLSI Chips
In September 2008, the Dagstuhl Seminar 08371 ``Fault-Tolerant
Distributed Algorithms on VLSI Chips'' was held in Schloss
Dagstuhl~--~Leibniz Center for Informatics. The seminar was devoted to
exploring whether the wealth of existing fault-tolerant distributed
algorithms research can be utilized for meeting the challenges of
future-generation VLSI chips. During the seminar, several participants
from both the VLSI and distributed algorithms disciplines presented
their current research, and ongoing work and possibilities for
collaboration were discussed. Abstracts of the presentations given
during the seminar as well as abstracts of seminar results and ideas
are put together in this paper. The first section describes the
seminar topics and goals in general. Links to extended abstracts or
full papers are provided where available.
Enabling distributed analysis for ALICE Run 3
The ALICE Collaboration has just finished a major detector upgrade that
increases the data-taking rate capability by two orders of magnitude and will
allow the collection of unprecedented data samples. For example, the analysis input
for 1 month of Pb-Pb collisions amounts to about 5 PB. In order to enable
analysis on such large data samples, the ALICE distributed infrastructure was
revised and dedicated tools for Run 3 analysis were created. These are, firstly,
the analysis framework, implemented in C++, which builds on a multi-process
architecture exchanging a flat data format through shared memory; secondly, the
Hyperloop train system for distributed analysis on the Grid and on dedicated
analysis facilities, implemented in Java/JavaScript/React.
These systems have been commissioned with converted Run 2 data and with the
recent LHC pilot beam and are ready for data analysis for the start of Run 3.
This contribution discusses the requirements and the concepts used, providing
details on the actual implementation. The status of operation, in particular
with respect to the LHC pilot beam, is also discussed.
Comment: Contribution to the proceedings of the 41st International Conference
on High Energy Physics (ICHEP2022), 6-13 July 2022, Bologna, Italy. Contains:
6 pages, 7 figures
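The pattern of exchanging flat, pointer-free columns through shared memory can be illustrated in Python with `multiprocessing.shared_memory`; the real framework is C++, so every name below is a hypothetical stand-in, not ALICE code:

```python
# Hedged sketch: a producer process fills a flat float64 column in a
# shared-memory segment; the parent reads it back without any
# serialization, the general idea behind flat-format data exchange
# between analysis processes.
from multiprocessing import Process, shared_memory

def producer(name, n):
    shm = shared_memory.SharedMemory(name=name)
    col = memoryview(shm.buf)[: 8 * n].cast("d")  # flat column, no pointers
    for i in range(n):
        col[i] = 0.5 * i                          # e.g. a per-track quantity
    del col                                        # release view before close
    shm.close()

def exchange(n=8):
    shm = shared_memory.SharedMemory(create=True, size=8 * n)
    p = Process(target=producer, args=(shm.name, n))
    p.start()
    p.join()
    view = memoryview(shm.buf)[: 8 * n].cast("d")
    col = list(view)                               # zero-copy until this read
    del view
    shm.close()
    shm.unlink()
    return col

if __name__ == "__main__":
    print(exchange())
```

Because the column is a contiguous block of plain doubles, any process attached to the segment can interpret it directly; nothing has to be packed or unpacked at the process boundary.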