    Meta-analysis of functional neuroimaging data using Bayesian nonparametric binary regression

    In this work we perform a meta-analysis of neuroimaging data, consisting of locations of peak activations identified in 162 separate studies on emotion. Neuroimaging meta-analyses are typically performed using kernel-based methods. However, these methods require the width of the kernel to be set a priori and to be constant across the brain. To address these issues, we propose a fully Bayesian nonparametric binary regression method to perform neuroimaging meta-analyses. In our method, each location (or voxel) has a probability of being a peak activation, and the corresponding probability function is based on a spatially adaptive Gaussian Markov random field (GMRF). We also include parameters in the model to robustify the procedure against miscoding of the voxel response. Posterior inference is implemented using efficient MCMC algorithms extended from those introduced in Holmes and Held [Bayesian Anal. 1 (2006) 145--168]. Our method allows the probability function to be locally adaptive with respect to the covariates, that is, to be smooth in one region of the covariate space and wiggly or even discontinuous in another. Posterior miscoding probabilities for each of the identified voxels can also be obtained, identifying voxels that may have been falsely classified as activated. Simulation studies and application to the emotion neuroimaging data indicate that our method is superior to standard kernel-based methods.
    Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/11-AOAS523.
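    The Holmes-and-Held auxiliary-variable machinery and the GMRF prior are beyond a short sketch, but the basic ingredient of the paper — MCMC over the parameters of a Bayesian binary-regression model — can be illustrated with a toy random-walk Metropolis sampler for logistic regression. Everything below (the simple Gaussian prior, step size, data) is an illustrative assumption, not the paper's model:

```python
import numpy as np

def log_posterior(beta, X, y, tau=10.0):
    # Logistic log-likelihood plus an independent N(0, tau^2) prior per coefficient.
    eta = X @ beta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    logprior = -0.5 * np.sum(beta ** 2) / tau ** 2
    return loglik + logprior

def metropolis_logit(X, y, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis over the regression coefficients."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])
    lp = log_posterior(beta, X, y)
    samples = []
    for _ in range(n_iter):
        prop = beta + step * rng.standard_normal(beta.shape)
        lp_prop = log_posterior(prop, X, y)
        if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            beta, lp = prop, lp_prop
        samples.append(beta.copy())
    return np.array(samples)

# Toy data: a binary "activation" indicator driven by one covariate.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
true_beta = np.array([-0.5, 2.0])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

samples = metropolis_logit(X, y)
post_mean = samples[2000:].mean(axis=0)   # discard burn-in
```

    The paper replaces this fixed-dimensional coefficient vector with a spatially adaptive latent surface and uses far more efficient auxiliary-variable updates, but the accept/reject logic is the same.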

    Learning Sparse High Dimensional Filters: Image Filtering, Dense CRFs and Bilateral Neural Networks

    Bilateral filters have widespread use due to their edge-preserving properties. The common use case is to manually choose a parametric filter type, usually a Gaussian filter. In this paper, we generalize the parametrization and in particular derive a gradient descent algorithm so the filter parameters can be learned from data. This derivation allows us to learn high dimensional linear filters that operate in sparsely populated feature spaces. We build on the permutohedral lattice construction for efficient filtering. The ability to learn more general forms of high-dimensional filters can be used in several diverse applications. First, we demonstrate the use in applications where single filter applications are desired for runtime reasons. Further, we show how this algorithm can be used to learn the pairwise potentials in densely connected conditional random fields and apply these to different image segmentation tasks. Finally, we introduce layers of bilateral filters in CNNs and propose bilateral neural networks for processing high-dimensional, sparse data. This view provides new ways to encode model structure into network architectures. A diverse set of experiments empirically validates the usage of general forms of filters.
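    For readers unfamiliar with the baseline being generalized, here is a minimal 1-D bilateral filter: a hand-set Gaussian kernel in both the spatial and the range (intensity) domain, i.e., exactly the fixed parametrization that the paper learns from data instead. All parameter values are illustrative:

```python
import numpy as np

def bilateral_filter_1d(signal, sigma_s=2.0, sigma_r=0.2, radius=5):
    """Edge-preserving smoothing: each output is a weighted average whose
    weights combine a spatial Gaussian and a range (intensity) Gaussian."""
    out = np.empty_like(signal, dtype=float)
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        idx = np.arange(lo, hi)
        spatial = np.exp(-0.5 * ((idx - i) / sigma_s) ** 2)
        rangew = np.exp(-0.5 * ((signal[idx] - signal[i]) / sigma_r) ** 2)
        w = spatial * rangew
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

# A noisy step edge: the filter smooths the flat parts but keeps the edge,
# because cross-edge samples get near-zero range weights.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
y = bilateral_filter_1d(x)
```

    The paper's contribution is to replace the two hand-chosen Gaussians with a general learned filter on a permutohedral lattice and to backpropagate through it.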

    Ray-tracing through the Millennium Simulation: Born corrections and lens-lens coupling in cosmic shear and galaxy-galaxy lensing

    (abridged) We study the accuracy of various approximations to cosmic shear and weak galaxy-galaxy lensing and investigate effects of Born corrections and lens-lens coupling. We use ray-tracing through the Millennium Simulation to calculate various cosmic-shear and galaxy-galaxy-lensing statistics. We compare the results from ray-tracing to semi-analytic predictions. We find: (i) The linear approximation provides an excellent fit to cosmic-shear power spectra as long as the actual matter power spectrum is used as input. Common fitting formulae, however, strongly underestimate the cosmic-shear power spectra. Halo models provide a better fit to cosmic-shear power spectra, but there are still noticeable deviations. (ii) Cosmic-shear B-modes induced by Born corrections and lens-lens coupling are at least three orders of magnitude smaller than cosmic-shear E-modes. Semi-analytic extensions to the linear approximation predict the right order of magnitude for the B-mode. Compared to the ray-tracing results, however, the semi-analytic predictions may differ by a factor of two on small scales and also show a different scale dependence. (iii) The linear approximation may under- or overestimate the galaxy-galaxy-lensing shear signal by several percent due to the neglect of magnification bias, which may lead to a correlation between the shear and the observed number density of lenses. We conclude: (i) Current semi-analytic models need to be improved in order to match the degree of statistical accuracy expected for future weak-lensing surveys. (ii) Shear B-modes induced by corrections to the linear approximation are not important for future cosmic-shear surveys. (iii) Magnification bias can be important for galaxy-galaxy-lensing surveys.
    Comment: version taking comments into account

    MCMC with Strings and Branes: The Suburban Algorithm (Extended Version)

    Motivated by the physics of strings and branes, we develop a class of Markov chain Monte Carlo (MCMC) algorithms involving extended objects. Starting from a collection of parallel Metropolis-Hastings (MH) samplers, we place them on an auxiliary grid, and couple them together via nearest neighbor interactions. This leads to a class of "suburban samplers" (i.e., spread out Metropolis). Coupling the samplers in this way modifies the mixing rate and speed of convergence for the Markov chain, and can in many cases allow a sampler to more easily overcome free energy barriers in a target distribution. We test these general theoretical considerations by performing several numerical experiments. For suburban samplers with a fluctuating grid topology, performance is strongly correlated with the average number of neighbors. Increasing the average number of neighbors above zero initially leads to an increase in performance, though there is a critical connectivity with effective dimension d_eff ~ 1, above which "groupthink" takes over, and the performance of the sampler declines.
    Comment: v2: 55 pages, 13 figures, references and clarifications added. Published version. This article is an extended version of "MCMC with Strings and Branes: The Suburban Algorithm".
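    A stripped-down version of the idea — parallel MH samplers on a ring, coupled to their nearest neighbors through a quadratic interaction added to the joint log density — might look as follows. The coupling form, constants, and bimodal target here are illustrative guesses for a toy, not the paper's action:

```python
import numpy as np

def log_target(x):
    # Bimodal 1-D target: equal-weight mixture of unit Gaussians at +3 and -3.
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def suburban_sweep(xs, kappa, step, rng):
    """One sweep of single-site Metropolis updates for the coupled chain.

    Joint (unnormalized) log density on the ring of samplers:
        sum_i log p(x_i) - (kappa/2) * sum_i (x_i - x_{i+1})^2
    """
    n = len(xs)
    for i in range(n):
        left, right = xs[(i - 1) % n], xs[(i + 1) % n]
        prop = xs[i] + step * rng.standard_normal()
        dlog = (log_target(prop) - log_target(xs[i])
                - 0.5 * kappa * ((prop - left) ** 2 + (prop - right) ** 2
                                 - (xs[i] - left) ** 2 - (xs[i] - right) ** 2))
        if np.log(rng.random()) < dlog:
            xs[i] = prop
    return xs

rng = np.random.default_rng(0)
xs = np.zeros(8)                 # eight coupled samplers on a ring
samples = []
for _ in range(4000):
    xs = suburban_sweep(xs, kappa=0.05, step=1.0, rng=rng)
    samples.append(xs.copy())
samples = np.array(samples)
```

    With kappa = 0 this reduces to independent MH chains; increasing kappa trades independence for coordination, which is the regime the paper explores.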

    Topological susceptibility and the sampling of field space in N_f = 2 lattice QCD simulations

    We present a measurement of the topological susceptibility in two-flavor QCD. This observable suffers from large autocorrelations, and sizable cutoff effects have to be faced in the continuum extrapolation. Within the statistical accuracy of the computation, the result agrees with the expectation from leading-order chiral perturbation theory.
    Comment: 22 pages, 7 figures; references added, minor clarifications in the text, results unchanged
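    Large autocorrelations of the kind mentioned above are usually quantified through the integrated autocorrelation time tau_int. A standard windowed estimator (using the common W >= c * tau stopping rule; this is a generic sketch, not necessarily the analysis performed in the paper) can be written as:

```python
import numpy as np

def integrated_autocorr_time(series, c=5.0):
    """Estimate tau_int = 1 + 2 * sum_t rho(t), truncating the sum at the
    first window W with W >= c * tau (the usual self-consistent window)."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Autocovariance via FFT with zero padding (O(n log n)).
    f = np.fft.rfft(x, n=2 * n)
    acov = np.fft.irfft(f * np.conj(f))[:n] / n
    rho = acov / acov[0]
    tau = 1.0
    for w in range(1, n):
        tau = 1.0 + 2.0 * np.sum(rho[1:w + 1])
        if w >= c * tau:
            break
    return tau

# Check on an AR(1) process, where theory gives tau_int = (1 + a) / (1 - a).
rng = np.random.default_rng(0)
a = 0.9                                    # theory: tau_int = 19
eps = rng.standard_normal(200_000)
x = np.empty(200_000)
x[0] = 0.0
for t in range(1, len(x)):
    x[t] = a * x[t - 1] + eps[t]
tau = integrated_autocorr_time(x)
```

    The statistical error of a Monte Carlo average then scales like sqrt(2 * tau_int / N), which is why large autocorrelations directly inflate the error bars of the susceptibility.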

    Topological critical slowing down: variations on a toy model

    Numerical simulations of lattice quantum field theories whose continuum counterparts possess classical solutions with non-trivial topology face a severe critical slowing down as the continuum limit is approached. Standard Monte Carlo algorithms develop a loss of ergodicity, with the system remaining frozen in configurations with fixed topology. We analyze the problem in a simple toy model, consisting of the path integral formulation of a quantum mechanical particle constrained to move on a circle. More specifically, we implement for this toy model various techniques that have been proposed to solve or alleviate the problem for more complex systems, like non-abelian gauge theories, and compare them both in the regime of low temperature and in that of very high temperature. Among these techniques we also consider a new algorithm that completely solves the freezing problem, but which is unfortunately tailored specifically to this particular model and not easily exportable to more complex systems.
    Comment: 18 pages, 14 eps figures. Some changes and references added. To be published in Phys. Rev.
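    The toy model itself is easy to simulate. Below is a minimal local-Metropolis sketch of the lattice rotor (a particle on a circle with periodic Euclidean time), with the winding number as the topological charge. The discretization and all parameter choices are illustrative, not those of the paper; the coupling is chosen coarse enough that tunneling between winding sectors is still frequent, whereas pushing toward the continuum limit (larger beta) is exactly where the freezing described above sets in:

```python
import numpy as np

def wrap(dphi):
    # Map angle differences into (-pi, pi].
    return (dphi + np.pi) % (2 * np.pi) - np.pi

def top_charge(phi):
    # Winding number of the closed path: sum of wrapped nearest-neighbor jumps.
    return np.rint(np.sum(wrap(np.roll(phi, -1) - phi)) / (2 * np.pi))

def metropolis_rotor(n_sites=50, beta=1.0, n_sweeps=2000, step=1.0, seed=0):
    """Local Metropolis for the lattice rotor with periodic time:
    action S = (beta/2) * sum_t wrap(phi_{t+1} - phi_t)^2."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0, 2 * np.pi, n_sites)
    charges = []
    for _ in range(n_sweeps):
        for t in range(n_sites):
            nxt = (t + 1) % n_sites
            prop = phi[t] + step * rng.uniform(-1.0, 1.0)
            dS = 0.5 * beta * (
                wrap(prop - phi[t - 1]) ** 2 + wrap(phi[nxt] - prop) ** 2
                - wrap(phi[t] - phi[t - 1]) ** 2 - wrap(phi[nxt] - phi[t]) ** 2)
            if np.log(rng.random()) < -dS:
                phi[t] = prop
        charges.append(top_charge(phi))
    return np.array(charges)

charges = metropolis_rotor()
chi_t = np.mean(charges[500:] ** 2)   # <Q^2> after a burn-in of 500 sweeps
```

    Tracking the history of Q at increasing beta makes the loss of ergodicity visible directly: the charge stops changing even though local observables still fluctuate.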

    Adding Long Wavelength Modes to an N-Body Simulation

    We present a new method to add long-wavelength power to an evolved N-body simulation, making use of the Zel'dovich (1970) approximation to change positions and velocities of particles. We describe the theoretical framework of our technique and apply it to a P^3M cosmological simulation performed on a cube of 100 Mpc on a side, obtaining a new "simulation" of 800 Mpc on a side. We study the effect of the power added by long waves by means of several statistics of the density and velocity fields, and suggest possible applications of our method to the study of the large-scale structure of the universe.
    Comment: Revised version, shortened. 15 pages without figures. Accepted for publication in the Astrophysical Journal. Paper and 11 figures available as .ps.gz files by anonymous ftp at ftp://ftp.mpa-garching.mpg.de/pub/bepi/MA
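    The core of the Zel'dovich step — displace particles by a displacement field derived from the long-wavelength density, so that the Eulerian density reproduces the added mode at linear order — can be sketched in one dimension. Grid sizes, mode amplitude, and wavenumber below are illustrative, not the paper's setup:

```python
import numpy as np

box = 100.0                                # toy 1-D box, in Mpc
n = 65536
q = (np.arange(n) + 0.5) * box / n         # unperturbed (Lagrangian) positions

# One long-wavelength density mode, small enough to stay in the linear regime.
amp, k = 0.05, 2 * np.pi / box
delta_L = amp * np.sin(k * q)

# Zel'dovich / linear continuity: d psi / dq = -delta_L, so for this mode
# psi = (amp / k) * cos(k q). Displaced (Eulerian) positions: x = q + psi.
psi = (amp / k) * np.cos(k * q)
x = (q + psi) % box

# Measure the resulting density contrast on a coarse grid (nearest-grid-point
# assignment is enough here; CIC would reduce binning artifacts).
nbins = 64
counts, _ = np.histogram(x, bins=nbins, range=(0.0, box))
delta_measured = counts / counts.mean() - 1.0
centers = (np.arange(nbins) + 0.5) * box / nbins
delta_expected = amp * np.sin(k * centers)
```

    In the full method, particle velocities receive a matching kick proportional to the same displacement field (through the growth rate), which is how both the density and the velocity statistics of the enlarged box acquire the added large-scale power.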

    Stationary Statistics of Turbulence as an Attractor

    A calculational approach to fluid turbulence is presented. Use is made of the attracting nature of the fluid-dynamical system: an approach is offered that effectively propagates the statistics in time. Loss of sensitivity to the initial probability density functional and the emergence of stationary statistics are conjectured.
    Comment: A correction to the integration measure on page 6 has been inserted