
    Quicksilver: Fast Predictive Image Registration - a Deep Learning Approach

    This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registers an image pair by patch-wise prediction of a deformation model directly from image appearance. A deep encoder-decoder network is used as the prediction model. While the prediction strategy is general, we focus on predictions for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the momentum-parameterization of LDDMM, which facilitates a patch-wise prediction strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our prediction network, which can be sampled at test time to quantify uncertainty in the predicted deformations. Finally, we introduce a new correction network which greatly increases the prediction accuracy of an existing prediction network. We show experimental results for uni-modal atlas-to-image as well as uni-/multi-modal image-to-image registrations. These experiments demonstrate that our method accurately predicts registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration results on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as open-source software.
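
    As a rough illustration of the patch-wise prediction strategy, the sketch below maps a pair of 3D image patches to a momentum patch with a toy encoder-decoder. This is a minimal sketch, not the released Quicksilver code; the network depth, patch size, and the name MomentumNet are assumptions made for illustration.

    # Minimal sketch of patch-wise momentum prediction (assumed toy
    # architecture, not the released Quicksilver implementation).
    import torch
    import torch.nn as nn

    class MomentumNet(nn.Module):
        """Toy encoder-decoder: maps a pair of image patches to a momentum patch."""
        def __init__(self, ch=16):
            super().__init__()
            # Encoder: source and target patches stacked as 2 input channels.
            self.enc = nn.Sequential(
                nn.Conv3d(2, ch, 3, stride=2, padding=1), nn.PReLU(),
                nn.Conv3d(ch, 2 * ch, 3, stride=2, padding=1), nn.PReLU(),
            )
            # Decoder: upsample back to a 3-channel momentum patch (one per axis).
            self.dec = nn.Sequential(
                nn.ConvTranspose3d(2 * ch, ch, 4, stride=2, padding=1), nn.PReLU(),
                nn.ConvTranspose3d(ch, 3, 4, stride=2, padding=1),
            )

        def forward(self, src_patch, tgt_patch):
            x = torch.cat([src_patch, tgt_patch], dim=1)  # (B, 2, D, H, W)
            return self.dec(self.enc(x))                  # (B, 3, D, H, W)

    net = MomentumNet()
    src = torch.randn(4, 1, 16, 16, 16)   # batch of source-image patches
    tgt = torch.randn(4, 1, 16, 16, 16)   # corresponding target-image patches
    momentum = net(src, tgt)              # predicted momentum, one patch each
    print(momentum.shape)                 # torch.Size([4, 3, 16, 16, 16])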

    Robust Bayesian inference via coarsening

    The standard approach to Bayesian inference is based on the assumption that the distribution of the data belongs to the chosen model class. However, even a small violation of this assumption can have a large impact on the outcome of a Bayesian procedure. We introduce a simple, coherent approach to Bayesian inference that improves robustness to perturbations from the model: rather than condition on the data exactly, one conditions on a neighborhood of the empirical distribution. When using neighborhoods based on relative entropy estimates, the resulting "coarsened" posterior can be approximated by simply tempering the likelihood, that is, by raising it to a fractional power. Thus, inference is often easily implemented with standard methods, and one can even obtain analytical solutions when using conjugate priors. Some theoretical properties are derived, and we illustrate the approach with real and simulated data, using mixture models, autoregressive models of unknown order, and variable selection in linear regression.
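
    The tempering approximation lends itself to a one-line implementation under conjugacy: raising the likelihood to a fractional power simply scales the sufficient statistics. Below is a minimal sketch for a Beta-Bernoulli model; the function name and the choice zeta=0.1 are illustrative assumptions (in the paper the exponent follows from the neighborhood size), not the authors' code.

    # Minimal sketch of a coarsened ("tempered") posterior for a
    # Beta-Bernoulli model: raise the likelihood to a fractional power zeta,
    # which under conjugacy just scales the observed counts.
    import numpy as np

    def tempered_beta_posterior(x, a0=1.0, b0=1.0, zeta=0.1):
        """Conjugate update with likelihood**zeta: counts are scaled by zeta."""
        x = np.asarray(x)
        a = a0 + zeta * x.sum()              # scaled success count
        b = b0 + zeta * (x.size - x.sum())   # scaled failure count
        return a, b

    rng = np.random.default_rng(0)
    data = rng.binomial(1, 0.7, size=1000)   # possibly misspecified data
    a, b = tempered_beta_posterior(data, zeta=0.1)
    print(a / (a + b))   # posterior mean under the coarsened posterior
    print(a + b)         # effective sample size is ~zeta * n, so the
                         # coarsened posterior stays appropriately diffuse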

    Biases in estimates of air pollution impacts: the role of omitted variables and measurement errors

    Observational studies often use linear regression to assess the effect of ambient air pollution on outcomes of interest, such as human health outcomes or crop yields. Yet pollution datasets are typically noisy and include only a subset of potentially relevant pollutants, giving rise to both measurement error bias (MEB) and omitted variable bias (OVB). While it is well understood that these biases exist, less is understood about their direction, even though it is sometimes incorrectly claimed that measurement error simply biases regression coefficient estimates towards zero. In this paper, we show that more can be said about the direction of these biases under the realistic assumptions that the concentrations of different types of air pollutants are positively correlated with each other and that each type of pollutant has a nonpositive association with the outcome variable. In particular, we demonstrate both theoretically and through simulations that, under these two assumptions, the OVB will typically be negative and that, more often than not, the MEB for null pollutants or for perfectly measured pollutants will also be negative. We also provide precise conditions, consistent with these assumptions, under which we prove that the biases are guaranteed to be negative. While the discussion in this paper is motivated by studies assessing the effect of air pollutants on crop yields, the findings are also relevant to regression-based studies assessing the effect of air pollutants on human health outcomes.
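
    The claimed OVB direction follows from the classical omitted-variable formula: the short-regression slope equals beta_1 + beta_2 * cov(x_1, x_2) / var(x_1), which lies below beta_1 whenever beta_2 <= 0 and the pollutants are positively correlated. A minimal simulation of this effect (the coefficients and correlation below are illustrative choices, not the paper's setup):

    # Two positively correlated pollutants, each with a nonpositive effect;
    # omitting one biases the other's coefficient downward (negative OVB).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    x1 = rng.normal(size=n)
    x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)   # corr(x1, x2) > 0
    beta1, beta2 = -1.0, -0.5                  # nonpositive true effects
    y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)

    # Short regression omitting x2: slope = cov(y, x1) / var(x1)
    b_short = np.cov(y, x1)[0, 1] / np.var(x1)
    print(b_short)          # ~ -1.3: more negative than beta1 = -1.0
    print(b_short - beta1)  # OVB ~ beta2 * cov(x1, x2) / var(x1) = -0.3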

    Multi-Level Shape Representation Using Global Deformations and Locally Adaptive Finite Elements

    We present a model-based method for multi-level shape estimation, pose estimation, and abstraction of an object’s surface from range data. The surface shape is estimated based on the parameters of a superquadric that is subjected to global deformations (tapering and bending) and a varying number of levels of local deformations. Local deformations are implemented using locally adaptive finite elements whose shape functions are piecewise cubic functions with C1 continuity. The surface pose is estimated based on the model's translational and rotational degrees of freedom. The algorithm first performs a coarse fit, solving for a first approximation to the translation, rotation, and global deformation parameters, and then performs several passes of mesh refinement, locally subdividing triangles based on the distance between the given data points and the model. The adaptive finite element algorithm ensures that during subdivision the desirable finite element mesh generation properties of conformity, non-degeneracy, and smoothness are maintained. Each pass of the algorithm uses physics-based modeling techniques to iteratively adjust the global and local parameters of the model in response to forces computed from approximation errors between the model and the data. We present results demonstrating the multi-level shape representation for both sparse and dense range data.
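
    For intuition on the global-deformation part of the model, the sketch below generates a superquadric surface and applies a linear tapering along its z-axis. The parameterization is the standard signed-power form; the parameter names and the tapering formula are illustrative assumptions, not the paper's exact formulation.

    # Minimal sketch of a globally deformed superquadric surface: the base
    # shape plus a linear tapering deformation along z.
    import numpy as np

    def _spow(v, eps):
        """Signed power: sign(v) * |v|**eps (standard superquadric form)."""
        return np.sign(v) * np.abs(v) ** eps

    def tapered_superquadric(a=(1.0, 1.0, 2.0), eps=(0.5, 1.0),
                             taper=(0.3, 0.3), n=64):
        eta = np.linspace(-np.pi / 2, np.pi / 2, n)    # latitude
        omega = np.linspace(-np.pi, np.pi, n)          # longitude
        E, W = np.meshgrid(eta, omega)
        x = a[0] * _spow(np.cos(E), eps[0]) * _spow(np.cos(W), eps[1])
        y = a[1] * _spow(np.cos(E), eps[0]) * _spow(np.sin(W), eps[1])
        z = a[2] * _spow(np.sin(E), eps[0])
        # Linear tapering along z (global deformation): scale x, y with z.
        fx = taper[0] * z / a[2] + 1.0
        fy = taper[1] * z / a[2] + 1.0
        return fx * x, fy * y, z

    x, y, z = tapered_superquadric()
    print(x.shape, y.shape, z.shape)   # (64, 64) each: a surface mesh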

    High-dimensional Log-Error-in-Variable Regression with Applications to Microbial Compositional Data Analysis

    In microbiome and genomic studies, regression of compositional data has been a crucial tool for identifying microbial taxa or genes that are associated with clinical phenotypes. To account for variation in sequencing depth, the classic log-contrast model is often used, where read counts are normalized into compositions. However, zero read counts and randomness in the covariates remain critical issues. In this article, we introduce a surprisingly simple, interpretable, and efficient method for estimating compositional data regression through the lens of a novel high-dimensional log-error-in-variable regression model. The proposed method corrects for possible overdispersion in the sequencing data while avoiding any subjective imputation of zero read counts. We provide theoretical justifications with matching upper and lower bounds for the estimation error. We also consider a general log-error-in-variable regression model with a corresponding estimation method to accommodate broader situations. The merit of the procedure is illustrated through real data analysis and simulation studies.
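
    For context, the classic log-contrast baseline the paper improves upon can be sketched as follows: counts are normalized into compositions, logged, and fit under a zero-sum constraint on the coefficients. The pseudocount used here to sidestep zero counts is precisely the kind of subjective imputation the proposed method avoids. Function names and parameter values are illustrative, not from the paper.

    # Minimal sketch of classic log-contrast regression on compositions.
    import numpy as np

    def log_contrast_fit(counts, y, pseudocount=0.5):
        X = counts + pseudocount                     # crude zero handling
        comp = X / X.sum(axis=1, keepdims=True)      # compositions
        L = np.log(comp)
        # Enforce sum(beta) = 0 by contrasting against the last taxon.
        Z = L[:, :-1] - L[:, [-1]]
        b, *_ = np.linalg.lstsq(Z, y, rcond=None)
        return np.append(b, -b.sum())                # recover constrained beta

    rng = np.random.default_rng(2)
    n, p = 200, 5
    comp_true = rng.dirichlet(np.ones(p) * 5, size=n)       # true compositions
    counts = np.vstack([rng.multinomial(500, c) for c in comp_true])
    beta_true = np.array([1.0, -1.0, 0.5, -0.5, 0.0])       # sums to zero
    y = np.log(comp_true) @ beta_true + 0.1 * rng.normal(size=n)
    print(np.round(log_contrast_fit(counts, y), 2))  # roughly beta_true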

    Boolean difference-making: a modern regularity theory of causation

    A regularity theory of causation analyses type-level causation in terms of Boolean difference-making. The essential ingredient that helps this theoretical framework overcome the well-known problems of Hume's and Mill's classical regularity theoretic proposals is a principle of non-redundancy: only redundancy-free Boolean dependency structures track causation. The first part of this paper argues that the recent regularity theoretic literature has not consistently implemented this principle, for it has disregarded two important types of redundancies: componential and structural redundancies. The second part then develops a new variant of a regularity theory that does justice to all types of redundancies and thereby provides the first all-inclusive notion of Boolean difference-making.
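
    As a toy rendering of the non-redundancy principle, the sketch below checks whether a Boolean dependency in disjunctive normal form is redundancy-free: dropping any disjunct, or any literal within a disjunct, must change the expressed truth function. The DNF encoding and function names are illustrative assumptions, not drawn from the paper.

    # Minimal redundancy-freeness check over a DNF represented as a list of
    # conjuncts, each a frozenset of (variable, value) literals.
    from itertools import product

    def evaluate(dnf, assignment):
        """True iff some conjunct has all its literals satisfied."""
        return any(all(assignment[v] == val for v, val in conj) for conj in dnf)

    def is_redundancy_free(dnf, variables):
        assignments = [dict(zip(variables, bits))
                       for bits in product([0, 1], repeat=len(variables))]
        truth = [evaluate(dnf, a) for a in assignments]
        # Dropping any whole disjunct must change the truth function ...
        for i in range(len(dnf)):
            reduced = dnf[:i] + dnf[i + 1:]
            if [evaluate(reduced, a) for a in assignments] == truth:
                return False
        # ... and so must dropping any single literal within a disjunct.
        for i, conj in enumerate(dnf):
            for lit in conj:
                reduced = dnf[:i] + [conj - {lit}] + dnf[i + 1:]
                if [evaluate(reduced, a) for a in assignments] == truth:
                    return False
        return True

    # A*B + C is redundancy-free: every literal makes a difference.
    # A*B + A is not: the disjunct A alone already covers A*B.
    variables = ["A", "B", "C"]
    print(is_redundancy_free([frozenset({("A", 1), ("B", 1)}),
                              frozenset({("C", 1)})], variables))   # True
    print(is_redundancy_free([frozenset({("A", 1), ("B", 1)}),
                              frozenset({("A", 1)})], variables))   # False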