A statistical test for Nested Sampling algorithms
Nested sampling is an iterative integration procedure that shrinks the prior
volume towards higher likelihoods by removing one "live" point at a time. A
replacement point is drawn uniformly from the prior above an ever-increasing
likelihood threshold. Thus, the problem of drawing from a space above a certain
likelihood value arises naturally in nested sampling, making algorithms that
solve this problem a key ingredient of the nested sampling framework. If the
drawn points are distributed uniformly, the removal of a point shrinks the
volume in a well-understood way, and the integration of nested sampling is
unbiased. In this work, I develop a statistical test to check whether this is
the case. This "Shrinkage Test" is useful to verify nested sampling algorithms
in a controlled environment. I apply the shrinkage test to a test problem, and
show that some existing algorithms fail to pass it due to over-optimisation. I
then demonstrate that a simple algorithm can be constructed which is robust
against this type of problem. This RADFRIENDS algorithm is, however,
inefficient in comparison to MULTINEST.
Comment: 11 pages, 7 figures. Published in Statistics and Computing, Springer, September 201
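As a minimal illustration of the shrinkage-test idea described above, the sketch below runs nested sampling on a toy problem where the prior is uniform on a hyper-cube and the likelihood is L(x) = -max_i |x_i|, so the prior volume enclosed by any likelihood contour is known analytically; the constrained draws here come from a perfect uniform sampler, and all function and parameter names are illustrative rather than taken from the paper's code.

import numpy as np
from scipy.stats import kstest

def shrinkage_test(d=3, n_live=400, n_iter=2000, seed=1):
    rng = np.random.default_rng(seed)
    # Prior: uniform on [-0.5, 0.5]^d; likelihood L(x) = -max_i |x_i|,
    # so the prior volume inside the contour at radius r is (2*r)**d.
    live = rng.uniform(-0.5, 0.5, size=(n_live, d))
    radii = np.abs(live).max(axis=1)
    prev_volume = 1.0
    shrinkage = []
    for _ in range(n_iter):
        worst = np.argmax(radii)          # lowest likelihood = largest radius
        r = radii[worst]
        volume = (2.0 * r) ** d           # analytic volume inside this contour
        shrinkage.append(volume / prev_volume)
        prev_volume = volume
        # A perfect constrained sampler: draw uniformly inside the contour.
        live[worst] = rng.uniform(-r, r, size=d)
        radii[worst] = np.abs(live[worst]).max()
    # If the constrained draws are truly uniform, each ratio t follows a
    # Beta(n_live, 1) distribution, so t**n_live should be Uniform(0, 1).
    u = np.asarray(shrinkage) ** n_live
    return kstest(u, "uniform")

print(shrinkage_test())   # a large p-value indicates no evidence of biased shrinkage

A sampler that over-concentrates its draws, as the abstract describes for some existing algorithms, would produce shrinkage ratios whose transformed values deviate from uniformity, which the KS test flags with a small p-value.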
OPENMENDEL: A Cooperative Programming Project for Statistical Genetics
Statistical methods for genomewide association studies (GWAS) continue to
improve. However, the increasing volume and variety of genetic and genomic data
make computational speed and ease of data manipulation mandatory in future
software. In our view, a collaborative effort of statistical geneticists is
required to develop open source software targeted to genetic epidemiology. Our
attempt to meet this need is called the OPENMENDEL project
(https://openmendel.github.io). It aims to (1) enable interactive and
reproducible analyses with informative intermediate results, (2) scale to big
data analytics, (3) embrace parallel and distributed computing, (4) adapt to
rapid hardware evolution, (5) allow cloud computing, (6) allow integration of
varied genetic data types, and (7) foster easy communication between
clinicians, geneticists, statisticians, and computer scientists. This article
reviews and makes recommendations to the genetic epidemiology community in the
context of the OPENMENDEL project.
Comment: 16 pages, 2 figures, 2 tables
A Distributed GPU-based Framework for real-time 3D Volume Rendering of Large Astronomical Data Cubes
We present a framework to interactively volume-render three-dimensional data
cubes using distributed ray-casting and volume bricking over a cluster of
workstations powered by one or more graphics processing units (GPUs) and a
multi-core CPU. The main design target for this framework is an in-core
visualization solution able to provide three-dimensional interactive
views of terabyte-sized data cubes. We tested the presented framework using a
computing cluster comprising 64 nodes with a total of 128 GPUs. The framework
proved to be scalable to render a 204 GB data cube with an average of 30 frames
per second. Our performance analyses also compare the NVIDIA Tesla 1060 and
2050 GPU architectures and examine the effect of increasing the visualization
output resolution on rendering performance. Although our initial focus and the
examples presented in this work concern volume rendering of spectral data cubes
from radio astronomy, we contend that our approach is applicable to other
disciplines where close-to-real-time volume rendering of terabyte-order 3D data
sets is a requirement.
Comment: 13 pages, 7 figures. Accepted for publication in Publications of the
Astronomical Society of Australia
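As a rough, single-process illustration of the volume-bricking and compositing idea (not the framework's GPU implementation), the sketch below splits a small NumPy cube into slabs along the viewing axis, renders each slab independently, and merges the partial images; maximum-intensity projection is used here only because its compositing step is an order-independent element-wise maximum, whereas the framework performs full distributed ray-casting. All names are illustrative.

import numpy as np

def render_slab(slab, axis=0):
    # Per-node work: maximum-intensity projection of one slab along the view axis.
    return slab.max(axis=axis)

rng = np.random.default_rng(0)
cube = rng.random((64, 128, 128), dtype=np.float32)   # stand-in for a large data cube

# "Brick" the cube along the viewing axis; each slab would sit on its own GPU/node.
slabs = np.array_split(cube, 4, axis=0)
partial_images = [render_slab(s) for s in slabs]      # rendered in parallel in practice

# Compositing: for maximum-intensity projection the merge is an element-wise
# maximum, so partial images can be combined in any order.
image = np.maximum.reduce(partial_images)
assert np.allclose(image, cube.max(axis=0))           # matches a single-node render

In the distributed setting each brick would reside in the memory of a different GPU node, with only the small per-brick images exchanged for compositing.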