Automatic Markov Chain Monte Carlo Procedures for Sampling from Multivariate Distributions
Generating samples from multivariate distributions efficiently is an important task in Monte Carlo integration and many other stochastic simulation problems. Markov chain Monte Carlo has been shown to be very efficient compared to "conventional methods", especially when many dimensions are involved. In this article we propose a Hit-and-Run sampler in combination with the Ratio-of-Uniforms method. We show that it is well suited for an algorithm to generate points from quite arbitrary distributions, including all log-concave distributions. The algorithm works automatically in the sense that only the mode (or an approximation of it) and an oracle are required, i.e., a subroutine that returns the value of the density function at any point x. We show that the number of evaluations of the density increases slowly with dimension. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
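Hit-and-Run itself is easy to state: from the current point, draw a uniformly random direction and move along the resulting line. The sketch below uses a simple Metropolis accept/reject step along the line rather than the Ratio-of-Uniforms construction the abstract proposes; the step half-width and toy target density are illustrative choices, not from the paper.

```python
import numpy as np

def hit_and_run(logpdf, x0, n_samples, halfwidth=5.0, rng=None):
    """Metropolis-within-Hit-and-Run sketch: pick a uniformly random
    direction, propose a uniform step along that line, and accept with
    the usual Metropolis ratio. `halfwidth` is an illustrative tuning
    parameter, not a quantity from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    d = x.size
    samples = []
    for _ in range(n_samples):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                 # uniform random direction
        t = rng.uniform(-halfwidth, halfwidth)  # symmetric proposal on the line
        y = x + t * u
        if np.log(rng.uniform()) < logpdf(y) - logpdf(x):
            x = y                               # accept the move
        samples.append(x.copy())
    return np.array(samples)

# Toy usage: 2-D standard Gaussian, a log-concave target
rng = np.random.default_rng(0)
chain = hit_and_run(lambda z: -0.5 * z @ z, np.zeros(2), 3000, rng=rng)
```

Note that, as in the abstract, the only problem-specific input is an oracle for the (log-)density value at a point.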
Fast MCMC sampling algorithms on polytopes
We propose and analyze two new MCMC sampling algorithms, the Vaidya walk and
the John walk, for generating samples from the uniform distribution over a
polytope. Both random walks are sampling algorithms derived from interior point
methods. The former is based on the volumetric-logarithmic barrier introduced by
Vaidya whereas the latter uses John's ellipsoids. We show that the Vaidya walk
mixes in significantly fewer steps than the logarithmic-barrier based Dikin
walk studied in past work. For a polytope in R^d defined by m
linear constraints, we show that the mixing time from a warm start is bounded
as O(m^{1/2} d^{3/2}), compared to the O(md) mixing time
bound for the Dikin walk. The cost of each step of the Vaidya walk is of the
same order as the Dikin walk, and at most twice as large in terms of constant
pre-factors. For the John walk, we prove an O(d^{2.5} log^4(m/d))
bound on its mixing time and conjecture
that an improved variant of it could achieve a mixing time of
O(d^2 polylog(m/d)). Additionally, we propose variants
of the Vaidya and John walks that mix in polynomial time from a deterministic
starting point. The speed-up of the Vaidya walk over the Dikin walk is
illustrated in numerical examples.
Comment: 86 pages, 9 figures. First two authors contributed equally
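The Dikin walk that both new walks are compared against can be sketched compactly: propose from a Gaussian shaped by the Hessian of the logarithmic barrier at the current point, then Metropolis-correct with the ratio of proposal densities. The sketch below is a generic Dikin step under assumed notation (polytope {z : A z <= b}, step radius r), not the Vaidya or John variants the paper introduces.

```python
import numpy as np

def dikin_step(x, A, b, r=0.5, rng=None):
    """One Dikin-walk step for uniform sampling over {z : A z <= b}.
    Proposal: y ~ N(x, (r^2/d) H(x)^{-1}), where H is the Hessian of the
    logarithmic barrier. The step radius r is an illustrative choice."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size

    def barrier_hessian(z):
        s = b - A @ z                        # constraint slacks, positive inside
        return (A / s[:, None] ** 2).T @ A   # sum_i a_i a_i^T / s_i^2

    Hx = barrier_hessian(x)
    L = np.linalg.cholesky(np.linalg.inv(Hx))
    y = x + (r / np.sqrt(d)) * (L @ rng.normal(size=d))
    if np.any(A @ y >= b):                   # proposal left the polytope: stay
        return x
    Hy = barrier_hessian(y)
    # log proposal densities q(y|x) and q(x|y), up to a shared constant
    log_fwd = 0.5 * np.linalg.slogdet(Hx)[1] - (d / (2 * r**2)) * ((y - x) @ Hx @ (y - x))
    log_bwd = 0.5 * np.linalg.slogdet(Hy)[1] - (d / (2 * r**2)) * ((x - y) @ Hy @ (x - y))
    if np.log(rng.uniform()) < log_bwd - log_fwd:  # uniform target: only q ratio
        return y
    return x

# Toy usage: uniform sampling over the square [-1, 1]^2
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.ones(4)
rng = np.random.default_rng(0)
x = np.zeros(2)
points = []
for _ in range(2000):
    x = dikin_step(x, A, b, rng=rng)
    points.append(x)
points = np.asarray(points)
```

The Vaidya and John walks replace the log-barrier Hessian here with ellipsoids built from the volumetric-logarithmic barrier and John's ellipsoids, respectively, which is what changes the mixing-time bounds.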
HAPPY: Hybrid Address-based Page Policy in DRAMs
Memory controllers have used static page closure policies to decide whether a
row should be left open (open-page policy) or closed immediately (close-page
policy) after the row has been accessed. The appropriate choice for a
particular access can reduce the average memory latency. However, since
application access patterns change at run time, static page policies cannot
guarantee to deliver optimum execution time. Hybrid page policies have been
investigated as a means of covering these dynamic scenarios and are now
implemented in state-of-the-art processors. Hybrid page policies switch between
open-page and close-page policies while the application is running, by
monitoring the access pattern of row hits/conflicts and predicting future
behavior. Unfortunately, as the size of DRAM memory increases, fine-grain
tracking and analysis of memory access patterns does not remain practical. We
propose a compact memory address-based encoding technique which can improve or
maintain the performance of DRAMs page closure predictors while reducing the
hardware overhead in comparison with state-of-the-art techniques. As a case
study, we integrate our technique, HAPPY, with a state-of-the-art monitor, the
Intel-adaptive open-page policy predictor employed by the Intel Xeon X5650, and
a traditional hybrid page policy. We evaluate them across 70 memory-intensive
workload mixes consisting of single-thread and multi-thread applications. The
experimental results show that using the HAPPY encoding applied to the
Intel-adaptive page closure policy can reduce the hardware overhead by 5X for
the evaluated 64 GB memory (up to 40X for a 512 GB memory) while maintaining
the prediction accuracy.
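The general shape of such a predictor can be illustrated with a toy model: a small table of saturating counters indexed by a compact hash of the memory address, biased toward open-page on row-buffer hits and toward close-page on conflicts. The table size, hash, and counter widths below are illustrative stand-ins, not HAPPY's actual encoding.

```python
class PagePolicyPredictor:
    """Toy hybrid page-policy predictor: 2-bit saturating counters
    indexed by a compact address hash (both illustrative choices)."""

    def __init__(self, table_bits=10):
        self.mask = (1 << table_bits) - 1
        self.table = [1] * (1 << table_bits)  # start weakly biased to close-page

    def _index(self, addr):
        # Compact address-based index: fold the address down by XOR.
        return (addr ^ (addr >> 10) ^ (addr >> 20)) & self.mask

    def predict_leave_open(self, addr):
        # Counter >= 2 means the row is predicted to be re-used: leave it open.
        return self.table[self._index(addr)] >= 2

    def update(self, addr, row_hit):
        # Row-buffer hit strengthens the open-page bias; a conflict weakens it.
        i = self._index(addr)
        if row_hit:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)
```

The hardware-overhead saving claimed in the abstract comes from indexing such a table with a compact encoding of the address instead of tracking every row individually.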
Dark matter search in a Beam-Dump eXperiment (BDX) at Jefferson Lab
MeV-GeV dark matter (DM) is theoretically well motivated but remarkably
unexplored. This Letter of Intent presents the MeV-GeV DM discovery potential
for a 1 m³ segmented plastic scintillator detector placed downstream of the
beam-dump at one of the high-intensity JLab experimental Halls, receiving up to
10^22 electrons-on-target (EOT) in a one-year period. This experiment
(Beam-Dump eXperiment or BDX) is sensitive to DM-nucleon elastic scattering at
the level of a thousand counts per year, with very low threshold recoil
energies (~1 MeV), and limited only by reducible cosmogenic backgrounds.
Sensitivity to DM-electron elastic scattering and/or inelastic DM would be
below 10 counts per year after requiring all electromagnetic showers in the
detector to exceed a few-hundred MeV, which dramatically reduces or altogether
eliminates all backgrounds. Detailed Monte Carlo simulations are in progress to
finalize the detector design and experimental set up. An existing 0.036 m³
prototype based on the same technology will be used to validate simulations
with background rate estimates, driving the necessary R&D towards an
optimized detector. The final detector design and experimental set up will be
presented in a full proposal to be submitted to the next JLab PAC. A fully
realized experiment would be sensitive to large regions of DM parameter space,
exceeding the discovery potential of existing and planned experiments by two
orders of magnitude in the MeV-GeV DM mass range.
Comment: 28 pages, 17 figures, submitted to JLab PAC 4