Tethered Monte Carlo: computing the effective potential without critical slowing down
We present Tethered Monte Carlo, a simple, general purpose method of
computing the effective potential of the order parameter (Helmholtz free
energy). This formalism is based on a new statistical ensemble, closely related
to the micromagnetic one, but with an extended configuration space (through
Creutz-like demons). Canonical averages for arbitrary values of the external
magnetic field are computed without additional simulations. The method is put
to work in the two dimensional Ising model, where the existence of exact
results enables us to perform high precision checks. A rather peculiar feature
of our implementation, which employs a local Metropolis algorithm, is the total
absence, within errors, of critical slowing down for magnetic observables.
Indeed, high accuracy results are presented for lattices as large as L=1024.
Comment: 32 pages, 8 eps figures. Corrected Eq. (36), which is wrong in the published paper.
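The local-update idea behind the method can be illustrated with a minimal sketch: a standard Metropolis simulation of the 2D Ising model in which a smooth weight "tethers" the magnetization density near a chosen value, so the effective potential can later be reconstructed by scanning that value. This is only a simplified stand-in for the abstract's ensemble (which is built with Creutz-like demons, not a Gaussian weight), and all names and parameter values (`kappa`, `beta`, the lattice size) are illustrative, not taken from the paper.

```python
import numpy as np

def tethered_metropolis_sweep(spins, beta, m_tether, kappa, rng):
    """One Metropolis sweep of a 2D Ising lattice with a Gaussian
    'tether' on the magnetization density m = M/N. The tether term is
    a simplified stand-in for the demon-based tethered ensemble."""
    L = spins.shape[0]
    N = L * L
    M = spins.sum()                       # running total magnetization
    for _ in range(N):
        i, j = rng.integers(0, L, size=2)
        s = spins[i, j]
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * s * nb                 # Ising energy change of the flip
        m_old = M / N
        m_new = (M - 2 * s) / N
        # Gaussian tether penalty keeps m close to m_tether
        dW = 0.5 * kappa * ((m_new - m_tether) ** 2
                            - (m_old - m_tether) ** 2)
        if rng.random() < np.exp(-beta * dE - dW):
            spins[i, j] = -s
            M -= 2 * s
    return spins

rng = np.random.default_rng(0)
L = 16
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(50):
    spins = tethered_metropolis_sweep(spins, beta=0.44, m_tether=0.5,
                                      kappa=1000.0, rng=rng)
m = spins.mean()                          # should fluctuate near m_tether
```

Repeating the run for a grid of `m_tether` values and integrating the mean tether force over m is the generic route from such constrained simulations to an effective potential.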
The self distributing virtual machine (SDVM): making computer clusters adaptive
The Self Distributing Virtual Machine (SDVM) is a middleware concept to form a parallel computing machine consisting of any set of processing units, such as functional units in a processor or FPGA, processing units in a multiprocessor chip, or computers in a computer cluster. Its structure and functionality are biologically inspired, aiming at forming a combined workforce of independent units ("sites"), each acting on the same set of simple rules.
The SDVM supports growing and shrinking the cluster at runtime as well as heterogeneous clusters. It uses the work-stealing principle to dynamically distribute the workload among all sites. The SDVM’s energy management targets the health of all sites by adjusting their power states according to workload and temperature. Dynamic reassignment of the current workload facilitates a new energy policy which focuses on increasing the reliability of each site.
This paper presents the structure and the functionality of the SDVM.
1st IFIP International Conference on Biologically Inspired Cooperative Computing - Mechatronics and Computer Clusters
Red de Universidades con Carreras en Informática (RedUNCI)
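The work-stealing principle the SDVM uses for load distribution can be sketched in a few lines: each site keeps its own task queue, and an idle site steals from the back of a busy site's queue. This is a generic toy illustration, not the SDVM's API or scheduling policy; all names here are invented for the example.

```python
import collections
import random

def run_work_stealing(tasks, n_sites, seed=0):
    """Toy work-stealing scheduler: sites work on the front of their
    own deque and steal from the back of a random busy site's deque."""
    rng = random.Random(seed)
    deques = [collections.deque() for _ in range(n_sites)]
    for t in tasks:                  # deliberately uneven start:
        deques[0].append(t)          # everything begins on site 0
    done = []                        # (site, task) completion log
    steals = 0
    while any(deques):
        for site, dq in enumerate(deques):
            if dq:
                done.append((site, dq.popleft()))    # work on own front
            else:
                victims = [v for v in range(n_sites) if deques[v]]
                if victims:
                    victim = rng.choice(victims)
                    dq.append(deques[victim].pop())  # steal from the back
                    steals += 1
    return done, steals

done, steals = run_work_stealing(list(range(20)), n_sites=4)
```

Even with all work placed on one site initially, the other sites quickly acquire tasks by stealing, which is the dynamic-distribution behavior the abstract describes.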
Compressive Wave Computation
This paper considers large-scale simulations of wave propagation phenomena.
We argue that it is possible to accurately compute a wavefield by decomposing
it onto a largely incomplete set of eigenfunctions of the Helmholtz operator,
chosen at random, and that this provides a natural way of parallelizing wave
simulations for memory-intensive applications.
This paper shows that L1-Helmholtz recovery makes sense for wave computation,
and identifies a regime in which it is provably effective: the one-dimensional
wave equation with coefficients of small bounded variation. Under suitable
assumptions we show that the number of eigenfunctions needed to evolve a sparse
wavefield defined on N points, accurately with very high probability, is
bounded by C log(N) log(log(N)), where C is related to the desired accuracy and
can be made to grow at a much slower rate than N when the solution is sparse.
The PDE estimates that underlie this result are new to the authors' knowledge
and may be of independent mathematical interest; they include an L1 estimate
for the wave equation, an estimate of extension of eigenfunctions, and a bound
for eigenvalue gaps in Sturm-Liouville problems.
Numerical examples are presented in one spatial dimension and show that as
few as 10 percent of all eigenfunctions can suffice for accurate results.
Finally, we argue that the compressive viewpoint suggests a competitive
parallel algorithm for an adjoint-state inversion method in reflection
seismology.
Comment: 45 pages, 4 figures.
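The core compressibility idea can be demonstrated with a small numerical sketch: evolve the 1D wave equation spectrally in the basis of Laplacian (Helmholtz) eigenfunctions, keeping only 10% of the modes, and check that a smooth wavefield is still reproduced accurately. Note this sketch selects the largest coefficients directly; the paper's actual method recovers the coefficients by L1 minimization from randomly chosen eigenfunctions, which is not reproduced here.

```python
import numpy as np

# 1D wave equation u_tt = u_xx on (0, pi), Dirichlet BCs, sampled on N
# interior points. The Laplacian eigenvectors are discrete sine modes.
N = 256
x = np.pi * np.arange(1, N + 1) / (N + 1)
k = np.arange(1, N + 1)
V = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(x, k))  # orthonormal columns
omega = k.astype(float)                              # continuous frequencies

u0 = np.exp(-40.0 * (x - np.pi / 2) ** 2)            # smooth initial bump
c = V.T @ u0                                         # spectral coefficients

t = 0.7
u_full = V @ (c * np.cos(omega * t))                 # full spectral solution

keep = np.argsort(np.abs(c))[::-1][: N // 10]        # largest 10% of modes
c_sparse = np.zeros_like(c)
c_sparse[keep] = c[keep]
u_sparse = V @ (c_sparse * np.cos(omega * t))        # truncated evolution

rel_err = np.linalg.norm(u_full - u_sparse) / np.linalg.norm(u_full)
```

Because the smooth bump is highly compressible in the eigenbasis, the 10%-of-modes solution matches the full one to well below a percent, which is the regime the abstract's result addresses.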
Dynamical systems and forward-backward algorithms associated with the sum of a convex subdifferential and a monotone cocoercive operator
In a Hilbert framework, we introduce continuous and discrete dynamical
systems which aim at solving inclusions governed by structured monotone
operators A + B, where A is the subdifferential of a
convex lower semicontinuous function f, and B is a monotone cocoercive
operator. We first consider the extension to this setting of the regularized
Newton dynamic with two potentials. Then, we revisit some related dynamical
systems, namely the semigroup of contractions generated by A + B, and the
continuous gradient projection dynamic. By a Lyapunov analysis, we show the
convergence properties of the orbits of these systems.
The time discretization of these dynamics gives various forward-backward
splitting methods (some new) for solving structured monotone inclusions
involving non-potential terms. The convergence of these algorithms is obtained
under classical step size limitation. Perspectives are given in the field of
numerical splitting methods for optimization, and multi-criteria decision
processes.
Comment: 25 pages.
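The standard forward-backward iteration for such structured inclusions, x+ = prox_{step·f}(x − step·B(x)) for solving 0 ∈ ∂f(x) + B(x), can be sketched on a small lasso-type example, where B(x) = Aᵀ(Ax − b) is cocoercive and the prox of the l1 term is soft-thresholding. This is the classical scheme the abstract's discretizations build on, not the paper's new variants; the problem data are made up for illustration.

```python
import numpy as np

def forward_backward(prox_f, B, x0, step, n_iter):
    """Forward-backward splitting: forward (explicit) step on the
    cocoercive operator B, backward (proximal) step on f."""
    x = x0
    for _ in range(n_iter):
        x = prox_f(x - step * B(x), step)
    return x

# Example: min_x 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam = 0.5

def B(x):
    # gradient of the smooth term; Lipschitz, hence cocoercive
    return A.T @ (A @ x - b)

def prox_f(v, step):
    # proximal map of step*lam*||.||_1 is soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)

lip = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of B
x = forward_backward(prox_f, B, np.zeros(10), step=1.0 / lip, n_iter=2000)
```

The classical step-size limitation mentioned in the abstract corresponds here to taking `step` below a threshold set by the cocoercivity (here, the inverse Lipschitz) constant of B; at convergence, x is a fixed point of the iteration.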
Polarized Scattering in the Vicinity of Galaxies
Some bright cD galaxies in cluster cooling flows have Thomson optical depths
exceeding 0.01. A few percent of their luminosity is scattered and appears as
diffuse polarized emission. We calculate the scattering process for different
geometric combinations of luminosity sources and scattering media. We apply our
results to galaxies, with and without active nuclei, immersed in cooling flows.
We model observations of NGC 1275 and M87 (without active nuclei) in the
presence of sky and galactic background fluxes which hinder the measurement of
the scattered light at optical wavelengths. Current instruments are unable to
detect the scattered light from such objects. However, when a galaxy has an
active nucleus of roughly the same luminosity as the remainder of the galaxy in
V, both the total and polarized scattered intensity should be observable on
large scales (5--30 kpc), meaning intensity levels greater than 1% of the background
level. For typical AGN and galaxy spectral distributions, the scattering is
most easily detected at short (U) wavelengths. We point out that a number of
such cases will occur. We show that the radiation pattern from the central
nuclear region can be mapped using the scattering. We also show that the
scattered light can be used to measure inhomogeneities in the cooling flow.
Comment: 29 pages of TeX, 14 figs, CRSR-1046, in ApJ Nov 20, 199
Big-Data-Driven Materials Science and its FAIR Data Infrastructure
This chapter addresses the fourth paradigm of materials research -- big-data
driven materials science. Its concepts and state of the art are described, and
its challenges and chances are discussed. For furthering the field, Open Data
and an all-embracing sharing, an efficient data infrastructure, and the rich
ecosystem of computer codes used in the community are of critical importance.
For shaping this fourth paradigm and contributing to the development or
discovery of improved and novel materials, data must be what is now called FAIR
-- Findable, Accessible, Interoperable and Re-purposable/Re-usable. This sets
the stage for advances of methods from artificial intelligence that operate on
large data sets to find trends and patterns that cannot be obtained from
individual calculations and not even directly from high-throughput studies.
Recent progress is reviewed and demonstrated, and the chapter is concluded by a
forward-looking perspective, addressing important, not yet solved, challenges.
Comment: submitted to the Handbook of Materials Modeling (eds. S. Yip and W. Andreoni), Springer 2018/201