SAT-Based Synthesis Methods for Safety Specs
Automatic synthesis of hardware components from declarative specifications is
an ambitious endeavor in computer aided design. Existing synthesis algorithms
are often implemented with Binary Decision Diagrams (BDDs), inheriting their
scalability limitations. Instead of BDDs, we propose several new methods to
synthesize finite-state systems from safety specifications using decision
procedures for the satisfiability of quantified and unquantified Boolean
formulas (SAT-, QBF- and EPR-solvers). The presented approaches are based on
computational learning, templates, or reduction to first-order logic. We also
present an efficient parallelization, and optimizations to utilize reachability
information and incremental solving. Finally, we compare all methods in an
extensive case study. Our new methods outperform BDDs and other existing work
on some classes of benchmarks, and our parallelization achieves a super-linear
speedup. This is an extended version of [5], featuring an additional appendix.
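To give a flavor of the template-based route (a minimal sketch under assumptions, not the paper's implementation): fix a small Boolean template for the controller and ask a solver for parameter values that preserve safety in every step. The toy plant, the two-parameter template, and the use of the Z3 solver below are all invented for illustration.

# Hedged sketch of template-based safety synthesis: a one-bit toy plant,
# a two-parameter controller template, and a single exists-forall query.
from z3 import Bools, ForAll, Implies, And, Not, Xor, Solver, sat

x, u = Bools('x u')        # state bit and uncontrollable input
a0, a1 = Bools('a0 a1')    # unknown template parameters to synthesize

control = Xor(a0, And(a1, u))   # candidate controller: c = a0 XOR (a1 AND u)
next_x = And(u, control)        # toy transition relation: x' = u AND c
safe = Not(x)                   # safety specification: the state bit stays 0

s = Solver()
# exists a0, a1 . forall x, u : safe(x) implies safe(x')
s.add(ForAll([x, u], Implies(safe, Not(next_x))))

if s.check() == sat:
    print('template instantiation found:', s.model())
else:
    print('no controller fits this template')

A single quantified query of this shape naturally suits a QBF solver; the learning-based routes mentioned in the abstract would instead issue a sequence of unquantified SAT queries and generalize from counterexamples.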
Parallelization Strategies for Density Matrix Renormalization Group Algorithms on Shared-Memory Systems
Shared-memory parallelization (SMP) strategies for density matrix
renormalization group (DMRG) algorithms enable the treatment of complex systems
in solid state physics. We present two different approaches by which
parallelization of the standard DMRG algorithm can be accomplished in an
efficient way. The methods are illustrated with DMRG calculations of the
two-dimensional Hubbard model and the one-dimensional Holstein-Hubbard model on
contemporary SMP architectures. The parallelized code shows good scalability up
to at least eight processors and allows us to solve problems which exceed the
capability of sequential DMRG calculations.
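To make the shared-memory idea concrete (a hedged sketch, not the authors' code, with invented block sizes): the dominant cost in a DMRG sweep is applying the superblock Hamiltonian to the wavefunction, which splits into independent dense products per quantum-number sector and can therefore be handed to a thread pool. NumPy releases the GIL inside the underlying BLAS calls, so even this naive Python version gets genuine multi-core use.

# Hedged sketch: thread-parallel application of a block-diagonal operator
# to a block wavefunction, mimicking the per-sector structure of a DMRG
# superblock multiplication. Block sizes are made up for illustration.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

sizes = (400, 300, 200, 100)
rng = np.random.default_rng(0)
h_blocks = [rng.standard_normal((n, n)) for n in sizes]   # Hamiltonian blocks
psi = [rng.standard_normal(n) for n in sizes]             # wavefunction blocks

def apply_block(i):
    # Each task touches only its own sector, so no locking is needed;
    # the BLAS call behind '@' runs outside the GIL.
    return h_blocks[i] @ psi[i]

with ThreadPoolExecutor(max_workers=8) as pool:
    h_psi = list(pool.map(apply_block, range(len(sizes))))

print([v.shape for v in h_psi])    # one result vector per symmetry sector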
Stashing And Parallelization Pentagons
Parallelization is an algebraic operation that lifts problems to sequences in
a natural way. Given a sequence as an instance of the parallelized problem,
another sequence is a solution of this problem if every component is
instance-wise a solution of the original problem. In the Weihrauch lattice
parallelization is a closure operator. Here we introduce a dual operation that
we call stashing and that also lifts problems to sequences, but such that only
some component has to be an instance-wise solution. In this case the solution
is stashed away in the sequence. This operation, if properly defined, induces
an interior operator in the Weihrauch lattice. We also study the action of the
monoid induced by stashing and parallelization on the Weihrauch lattice, and we
prove that it leads to at most five distinct degrees, which (in the maximal
case) are always organized in pentagons. We also introduce another closely
related interior operator in the Weihrauch lattice that replaces solutions of
problems by upper Turing cones that are strong enough to compute solutions. It
turns out that on parallelizable degrees this interior operator corresponds to
stashing. This implies that, somewhat surprisingly, all problems which are
simultaneously parallelizable and stashable have computability-theoretic
characterizations. Finally, we apply all these results in order to study the
recently introduced discontinuity problem, which appears as the bottom of a
number of natural stashing-parallelization pentagons. The discontinuity problem
is not only the stashing of several variants of the lesser limited principle of
omniscience, but it also parallelizes to the non-computability problem. This
supports the slogan that "non-computability is the parallelization of
discontinuity"
SKIRT: hybrid parallelization of radiative transfer simulations
We describe the design, implementation and performance of the new hybrid
parallelization scheme in our Monte Carlo radiative transfer code SKIRT, which
has been used extensively for modeling the continuum radiation of dusty
astrophysical systems including late-type galaxies and dusty tori. The hybrid
scheme combines distributed memory parallelization, using the standard Message
Passing Interface (MPI) to communicate between processes, and shared memory
parallelization, providing multiple execution threads within each process to
avoid duplication of data structures. The synchronization between multiple
threads is accomplished through atomic operations without high-level locking
(also called lock-free programming). This improves the scaling behavior of the
code and substantially simplifies the implementation of the hybrid scheme. The
result is an extremely flexible solution that adjusts to the number of
available nodes, processors and memory, and consequently performs well on a
wide variety of computing architectures.
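The two-level layout can be sketched briefly (a hedged illustration under assumptions, not SKIRT itself, which is written in C++): MPI ranks split the photon packets across processes, and a thread pool splits each rank's share further. SKIRT's threads update one shared grid through lock-free atomic operations; pure Python has no such atomics, so this sketch falls back to per-thread buffers plus reductions, i.e. exactly the data duplication the hybrid scheme is designed to avoid.

# Hedged sketch of a hybrid MPI + threads Monte Carlo loop. Requires
# mpi4py and an MPI runtime; problem sizes are invented.
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_CELLS, N_PACKETS, N_THREADS = 1000, 1_000_000, 4
my_packets = N_PACKETS // size            # this process's share of the packets

def trace_packets(n, seed):
    # Stand-in for the radiative transfer loop: deposit each packet's
    # energy into a random cell of a thread-private absorption grid.
    rng = np.random.default_rng(seed)
    grid = np.zeros(N_CELLS)
    np.add.at(grid, rng.integers(0, N_CELLS, n), 1.0)
    return grid

with ThreadPoolExecutor(max_workers=N_THREADS) as pool:
    local = sum(pool.map(trace_packets,
                         [my_packets // N_THREADS] * N_THREADS,
                         [rank * N_THREADS + t for t in range(N_THREADS)]))

total = np.zeros(N_CELLS)
comm.Reduce(local, total, op=MPI.SUM, root=0)   # combine across processes
if rank == 0:
    print('packets deposited in total:', total.sum())

Run with, e.g., mpirun -np 4 python sketch.py (the file name is hypothetical); each process then uses four threads.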