Feedback-optimized parallel tempering Monte Carlo
We introduce an algorithm to systematically improve the efficiency of
parallel tempering Monte Carlo simulations by optimizing the simulated
temperature set. Our approach is closely related to a recently introduced
adaptive algorithm that optimizes the simulated statistical ensemble in
generalized broad-histogram Monte Carlo simulations. Conventionally, a
temperature set is chosen in such a way that the acceptance rates for replica
swaps between adjacent temperatures are independent of the temperature and
large enough to ensure frequent swaps. In this paper, we show that by choosing
the temperatures with a modified version of the optimized ensemble feedback
method we can minimize the round-trip times between the lowest and highest
temperatures, which effectively increases the efficiency of the parallel
tempering algorithm. In particular, the density of temperatures in the
optimized temperature set increases at the "bottlenecks" of the simulation,
such as phase transitions. In turn, the acceptance rates are now temperature
dependent in the optimized temperature ensemble. We illustrate the
feedback-optimized parallel tempering algorithm by studying the two-dimensional
Ising ferromagnet and the two-dimensional fully frustrated Ising model, and
briefly discuss possible feedback schemes for systems that require
configurational averages, such as spin glasses.
Comment: 12 pages, 14 figures
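The replica-swap criterion underlying the abstract's discussion of acceptance rates can be sketched as follows. This is a minimal illustration, not the authors' code: a toy one-dimensional double-well potential stands in for the Ising models, and all function names and parameters are illustrative. Swaps between adjacent inverse temperatures are accepted with probability min(1, exp[(β_cold − β_hot)(E_cold − E_hot)]), which is the quantity the feedback method ends up making temperature dependent.

```python
import math
import random

def swap_probability(beta_cold, beta_hot, e_cold, e_hot):
    """Metropolis acceptance probability for exchanging configurations
    between replicas at inverse temperatures beta_cold > beta_hot."""
    delta = (beta_cold - beta_hot) * (e_cold - e_hot)
    if delta >= 0.0:
        return 1.0
    return math.exp(delta)

def energy(x):
    """Toy double-well potential standing in for a spin-model energy."""
    return (x * x - 1.0) ** 2

def parallel_tempering(betas, n_sweeps, step=0.5, seed=0):
    """Minimal parallel tempering: local Metropolis moves in each replica,
    then swap attempts between adjacent temperatures."""
    rng = random.Random(seed)
    xs = [rng.uniform(-2.0, 2.0) for _ in betas]
    swap_accepts = [0] * (len(betas) - 1)
    for _ in range(n_sweeps):
        # Local Metropolis update within each replica.
        for i, beta in enumerate(betas):
            trial = xs[i] + rng.uniform(-step, step)
            d_e = energy(trial) - energy(xs[i])
            if d_e <= 0.0 or rng.random() < math.exp(-beta * d_e):
                xs[i] = trial
        # Attempt swaps between neighbouring temperatures.
        for i in range(len(betas) - 1):
            p = swap_probability(betas[i], betas[i + 1],
                                 energy(xs[i]), energy(xs[i + 1]))
            if rng.random() < p:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
                swap_accepts[i] += 1
    return xs, swap_accepts
```

The per-pair counts in `swap_accepts` are the raw ingredient for any feedback scheme: a temperature set is iteratively adjusted using such statistics (in the paper, the measured up/down replica flow rather than the acceptance rates themselves).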
The effects of LIGO detector noise on a 15-dimensional Markov-chain Monte-Carlo analysis of gravitational-wave signals
Gravitational-wave signals from inspirals of binary compact objects (black
holes and neutron stars) are primary targets of the ongoing searches by
ground-based gravitational-wave (GW) interferometers (LIGO, Virgo, and
GEO-600). We present parameter-estimation results from our Markov-chain
Monte-Carlo code SPINspiral on signals from binaries with precessing spins. Two
data sets are created by injecting simulated GW signals into either synthetic
Gaussian noise or into LIGO detector data. We compute the 15-dimensional
probability-density functions (PDFs) for both data sets, as well as for a data
set containing LIGO data with a known, loud artefact ("glitch"). We show that
the analysis of the signal in detector noise yields accuracies similar to those
obtained using simulated Gaussian noise. We also find that while the Markov
chains from the glitch do not converge, the PDFs would look consistent with a
GW signal present in the data. While our parameter-estimation results are
encouraging, further investigations into how to differentiate an actual GW
signal from noise are necessary.
Comment: 11 pages, 2 figures, NRDA09 proceedings
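The core of such a Markov-chain Monte-Carlo analysis, reduced to one dimension, can be sketched as below. This is not SPINspiral and not a 15-dimensional sampler; it is a minimal, assumption-laden illustration in which a single location parameter is recovered from synthetic noisy "measurements" via a Metropolis random walk, and the histogram of accepted samples plays the role of the marginal PDF. All names, step sizes, and the Gaussian noise model are illustrative.

```python
import math
import random

def log_likelihood(mu, data, sigma=1.0):
    """Gaussian log-likelihood of the data given location parameter mu
    (normalization constant dropped; it cancels in the Metropolis ratio)."""
    return -0.5 * sum(((d - mu) / sigma) ** 2 for d in data)

def metropolis(data, n_steps=20000, step=0.3, seed=1):
    """Metropolis sampler for the posterior of mu under a flat prior."""
    rng = random.Random(seed)
    mu = 0.0
    logl = log_likelihood(mu, data)
    samples = []
    for _ in range(n_steps):
        trial = mu + rng.uniform(-step, step)
        logl_trial = log_likelihood(trial, data)
        if logl_trial >= logl or rng.random() < math.exp(logl_trial - logl):
            mu, logl = trial, logl_trial
        samples.append(mu)
    return samples

# Synthetic "signal": 100 noisy measurements of a true value mu = 2.0.
rng = random.Random(42)
data = [2.0 + rng.gauss(0.0, 1.0) for _ in range(100)]
samples = metropolis(data)
burned = samples[5000:]  # discard burn-in before forming the PDF
posterior_mean = sum(burned) / len(burned)
```

The glitch result in the abstract is precisely the failure mode this sketch cannot show: non-converging chains whose samples nevertheless yield plausible-looking PDFs, which is why convergence diagnostics matter alongside the posteriors themselves.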
High-Throughput Characterization of Porous Materials Using Graphics Processing Units
We have developed a high-throughput graphics processing unit (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations, where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH4 and CO2) and the material's framework atoms. Using a parallel flood fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than those considered in earlier studies. For structures selected from such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
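The Widom insertion step the abstract describes can be sketched on the CPU as follows. This is a deliberately simplified illustration, not the authors' GPU code: it omits the precomputed energy grid, the Coulomb term, and the flood-fill accessibility mask, and all names and parameters are invented. A probe particle is inserted at random positions in a box of framework atoms, and the average Boltzmann factor ⟨exp(−βU)⟩ is accumulated; the Henry coefficient is proportional to this average, up to prefactors.

```python
import math
import random

def lj_energy(r2, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy as a function of squared distance."""
    r2 = max(r2, 1e-12)  # avoid overflow for near-overlapping positions
    s6 = (sigma * sigma / r2) ** 3
    return 4.0 * epsilon * (s6 * s6 - s6)

def widom_boltzmann_average(framework, box, beta, n_insert=10000, seed=7):
    """Average Boltzmann factor <exp(-beta*U)> over random test insertions
    of a probe particle; U sums pair interactions with the framework atoms.
    The Henry coefficient is proportional to this average."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_insert):
        x, y, z = (rng.uniform(0.0, box) for _ in range(3))
        u = 0.0
        for (fx, fy, fz) in framework:
            r2 = (x - fx) ** 2 + (y - fy) ** 2 + (z - fz) ** 2
            u += lj_energy(r2)
        total += math.exp(-beta * u)  # underflows harmlessly to 0 deep in cores
    return total / n_insert
```

In the paper's pipeline, the expensive inner loop over framework atoms is what the GPU energy grid replaces with a cheap lookup, and insertions falling in flood-fill-blocked regions are simply skipped.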