17,010 research outputs found
An Extended and More Sensitive Search for Periodicities in RXTE/ASM X-ray Light Curves
We present the results of a systematic search in approximately 14 years of
Rossi X-ray Timing Explorer All-Sky Monitor data for evidence of periodicities
not reported by Wen et al. (2006). Two variations of the commonly used Fourier
analysis search method have been employed to achieve significant improvements
in sensitivity. The use of these methods and the accumulation of additional
data have resulted in the detection of the signatures of the orbital periods of
eight low-mass X-ray binary systems and of ten high-mass X-ray binaries not
listed in the tables of Wen et al.
Comment: 20 pages, 22 figures, in emulateapj format; submitted to ApJ
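As a toy illustration of the kind of Fourier periodicity search the abstract describes (not the authors' actual pipeline; real ASM dwell data is unevenly sampled and needs more careful treatment), a minimal power-spectrum period search on an evenly sampled light curve might look like the sketch below. The daily cadence, modulation amplitude, and 30-day test period are invented for the demo:

```python
import numpy as np

def find_period(times, flux, max_period=200.0):
    """Return the strongest period (in the units of `times`) in an evenly
    sampled light curve, using the classical Fourier power spectrum."""
    flux = flux - flux.mean()                      # remove the DC level
    power = np.abs(np.fft.rfft(flux)) ** 2
    freqs = np.fft.rfftfreq(len(flux), d=times[1] - times[0])
    mask = freqs > 1.0 / max_period                # ignore very slow trends
    return 1.0 / freqs[mask][np.argmax(power[mask])]

# ~14 years of daily samples with a weak 30-day modulation plus noise
rng = np.random.default_rng(0)
t = np.arange(0.0, 5100.0)                         # days
y = 1.0 + 0.3 * np.sin(2.0 * np.pi * t / 30.0) + 0.1 * rng.standard_normal(t.size)
print(round(find_period(t, y), 1))                 # prints 30.0
```

The two sensitivity-improving variations mentioned in the abstract are not reproduced here; this shows only the baseline Fourier search they build on.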
Introduction to fMRI: experimental design and data analysis
This provides an introduction to functional MRI, covering experimental design and data analysis procedures using the statistical parametric mapping approach.
Multiscale adaptive smoothing models for the hemodynamic response function in fMRI
In the event-related functional magnetic resonance imaging (fMRI) data
analysis, there is an extensive interest in accurately and robustly estimating
the hemodynamic response function (HRF) and its associated statistics (e.g.,
the magnitude and duration of the activation). Most methods to date are
developed in the time domain and they have utilized almost exclusively the
temporal information of fMRI data without accounting for the spatial
information. The aim of this paper is to develop a multiscale adaptive
smoothing model (MASM) in the frequency domain by integrating the spatial and
frequency information to adaptively and accurately estimate HRFs pertaining to
each stimulus sequence across all voxels in a three-dimensional (3D) volume. We
use two sets of simulation studies and a real data set to examine the finite
sample performance of MASM in estimating HRFs. Our real and simulated data
analyses confirm that MASM outperforms several other state-of-the-art methods,
such as the smooth finite impulse response (sFIR) model.
Comment: Published at http://dx.doi.org/10.1214/12-AOAS609 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
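The purely temporal baseline that the abstract contrasts with MASM fits the HRF sample by sample from each voxel's time series alone. A minimal (unsmoothed) finite impulse response sketch, with an invented ground-truth HRF and made-up event onsets, might look like this:

```python
import numpy as np

def fir_design(onsets, n_scans, hrf_len):
    """Build a finite impulse response (FIR) design matrix: column k is the
    event sequence shifted by k scans, so the fitted coefficients trace out
    the HRF one sample at a time."""
    X = np.zeros((n_scans, hrf_len))
    for t in onsets:
        for k in range(hrf_len):
            if t + k < n_scans:
                X[t + k, k] = 1.0
    return X

# Simulated voxel: known HRF convolved with a sparse event sequence plus noise
rng = np.random.default_rng(1)
true_hrf = np.array([0.0, 0.4, 1.0, 0.8, 0.3, 0.0, -0.1, 0.0])
onsets = rng.choice(np.arange(0, 400, 4), size=60, replace=False)
X = fir_design(onsets, 400, len(true_hrf))
y = X @ true_hrf + 0.05 * rng.standard_normal(400)
hrf_hat, *_ = np.linalg.lstsq(X, y, rcond=None)    # voxel-wise least squares
print(np.round(hrf_hat, 2))
```

MASM differs from this baseline by working in the frequency domain and borrowing strength from spatially neighboring voxels; neither feature is reproduced in this sketch.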
Ten simple rules for reporting voxel-based morphometry studies
Voxel-based morphometry [Ashburner, J. and Friston, K.J., 2000. Voxel-based morphometry—the methods. NeuroImage 11(6 Pt 1), 805–821] is a commonly used tool for studying patterns of brain change in development or disease and neuroanatomical correlates of subject characteristics. In performing a VBM study, many methodological options are available; if the study is to be easily interpretable and repeatable, the processing steps and decisions must be clearly described. Similarly, unusual methods and parameter choices should be justified in order to aid readers in judging the importance of such options or in comparing the work with other studies. This editorial suggests core principles that should be followed and information that should be included when reporting a VBM study in order to make it transparent, replicable and useful.
Parallel HOP: A Scalable Halo Finder for Massive Cosmological Data Sets
Modern N-body cosmological simulations contain billions of dark
matter particles. These simulations require hundreds to thousands of gigabytes
of memory, and employ hundreds to tens of thousands of processing cores on many
compute nodes. In order to study the distribution of dark matter in a
cosmological simulation, the dark matter halos must be identified using a halo
finder, which establishes the halo membership of every particle in the
simulation. The resources required for halo finding are similar to the
requirements for the simulation itself. In particular, simulations have grown
too large for commonly employed serial halo finders, so the computational work
of identifying halos must now be spread across multiple nodes and cores. Here
we present a scalable, parallel halo-finding method called
Parallel HOP for large-scale cosmological simulation data. Based on the halo
finder HOP, it utilizes MPI and domain decomposition to distribute the halo
finding workload across multiple compute nodes, enabling analysis of much
larger datasets than is possible with the strictly serial or previous parallel
implementations of HOP. We provide a reference implementation of this method as
a part of the toolkit yt, an analysis toolkit for Adaptive Mesh Refinement
(AMR) data that includes complementary analysis modules. Additionally, we
discuss a suite of benchmarks that demonstrate that this method scales well up
to several hundred tasks and datasets containing billions of particles. The
Parallel HOP method and our implementation can be readily applied to any kind
of N-body simulation data and are therefore widely applicable.
Comment: 29 pages, 11 figures, 2 tables
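The HOP idea underlying the method estimates a local density for each particle, then "hops" every particle to the densest particle in its neighborhood until a local density maximum is reached; all particles that flow to the same maximum form one halo. The toy serial sketch below illustrates only that grouping step (the density estimator, neighbor count, and clump data are invented, and the MPI/domain-decomposition layer that makes Parallel HOP scalable is omitted):

```python
import numpy as np

def hop_groups(pos, n_neighbors=8):
    """Toy serial HOP-style grouping: estimate a crude local density from the
    distance to the k-th nearest neighbor, hop each particle toward its
    densest near neighbor, and return the density maximum each particle
    flows to (particles sharing a maximum belong to the same group)."""
    n = len(pos)
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)   # pairwise dist^2
    order = np.argsort(d2, axis=1)[:, :n_neighbors + 1]       # self + k nearest
    density = 1.0 / (d2[np.arange(n), order[:, -1]] + 1e-12)  # inverse kth dist^2
    # each particle points at the densest particle in its neighborhood
    hop = np.array([nbrs[np.argmax(density[nbrs])] for nbrs in order])
    # follow the hop pointers to a fixed point (a local density maximum)
    root = hop.copy()
    for _ in range(n):
        nxt = hop[root]
        if np.array_equal(nxt, root):
            break
        root = nxt
    return root

# Two well-separated clumps: particles from different clumps should never
# end up flowing to the same density maximum
rng = np.random.default_rng(2)
clump_a = rng.normal(0.0, 0.05, size=(50, 3))
clump_b = rng.normal(1.0, 0.05, size=(50, 3))
roots = hop_groups(np.vstack([clump_a, clump_b]))
print(len(np.unique(roots)))                      # number of groups found
```

The brute-force O(n^2) distance matrix here is exactly what stops scaling at large n; the paper's contribution is distributing this workload across nodes via MPI and domain decomposition.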