    Random action of compact Lie groups and minimax estimation of a mean pattern

    This paper considers the problem of estimating a mean pattern in the setting of Grenander's pattern theory. Shape variability in a data set of curves or images is modeled by the random action of elements of a compact Lie group on an infinite-dimensional space. In the case of observations contaminated by additive Gaussian white noise, it is shown that estimating a reference template in this setting falls into the category of deconvolution problems over Lie groups. To obtain this result, we build an estimator of the mean pattern using Fourier deconvolution and harmonic analysis on compact Lie groups. In an asymptotic setting where the number of observed curves or images tends to infinity, we derive upper and lower bounds for the minimax quadratic risk over Sobolev balls. The resulting minimax rate depends on the smoothness of the density of the random Lie group elements that model shape variability in the data, which connects the estimation of a mean pattern to standard deconvolution problems in nonparametric statistics.
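
    A minimal numpy sketch of the deconvolution idea on the simplest compact Lie group, SO(2), acting by circular shifts on curves. This is an illustration rather than the paper's estimator: the shift density is assumed known (Gaussian, so its characteristic function is available in closed form), and the template, noise level, and frequency cutoff are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting on SO(2): curves on the circle, shape variability = random circular shifts.
T = 256
t = np.linspace(0, 2 * np.pi, T, endpoint=False)
f_true = np.exp(np.cos(t))                      # hypothetical mean pattern (template)

n = 500                                         # number of observed curves
sigma_shift = 0.3                               # spread of the random group elements
shifts = rng.normal(0.0, sigma_shift, n)

# Observations: randomly shifted copies of the template plus additive white noise.
Y = np.stack([np.interp((t - s) % (2 * np.pi), t, f_true, period=2 * np.pi)
              for s in shifts])
Y += 0.2 * rng.standard_normal(Y.shape)

# Averaging the empirical Fourier coefficients gives F_k * E[exp(-i k shift)].
k = np.fft.fftfreq(T, d=1.0 / T)                # integer frequencies
mean_coeffs = np.fft.fft(Y, axis=1).mean(axis=0)

# Characteristic function of the (assumed known) Gaussian shift density.
char_fn = np.exp(-0.5 * (sigma_shift * k) ** 2)

# Fourier deconvolution with a low-pass cutoff to stabilise the ill-posed division.
keep = np.abs(k) <= 6
f_est = np.real(np.fft.ifft(np.where(keep, mean_coeffs / char_fn, 0.0)))

print("max abs error:", np.abs(f_est - f_true).max())
```

    The division by the characteristic function is the deconvolution step; the cutoff plays the role of the regularization parameter whose tuning, together with the smoothness of the shift density, governs the rate discussed in the abstract.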

    Improving Fiber Alignment in HARDI by Combining Contextual PDE Flow with Constrained Spherical Deconvolution

    We propose two strategies to improve the quality of tractography results computed from diffusion-weighted magnetic resonance imaging (DW-MRI) data. Both methods are based on the same PDE framework, defined on the coupled space of positions and orientations and associated with a stochastic process describing the enhancement of elongated structures while preserving crossing structures. In the first method we use the enhancement PDE for contextual regularization of a fiber orientation distribution (FOD) that is obtained in individual voxels from high angular resolution diffusion imaging (HARDI) data via constrained spherical deconvolution (CSD), thereby improving the FOD as input for subsequent tractography. In the second method we introduce the fiber-to-bundle coherence (FBC), a measure for quantifying fiber alignment; the FBC is computed from a tractography result using the same PDE framework and provides a criterion for removing spurious fibers. We validate the proposed combination of CSD and enhancement on phantom data and on human data acquired with different scanning protocols. On the phantom data we find that the PDE enhancements improve both local and global metrics of the tractography results compared to CSD without enhancements. On the human data we show that the enhancements allow a better reconstruction of crossing fiber bundles and reduce the variability of the tractography output with respect to the acquisition parameters. Finally, we show that both the enhancement of the FODs and the use of the FBC measure on the tractography improve stability with respect to different stochastic realizations of probabilistic tractography. This is shown in a clinical application: the reconstruction of the optic radiation for epilepsy surgery planning.
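
    As a rough illustration of the constrained-deconvolution step only (neither the PDE enhancement nor the FBC measure), the toy below recovers a non-negative fiber orientation distribution for a single simulated voxel by non-negative least squares, with orientations discretised on a circle instead of the sphere used in HARDI. The response kernel, angles, and noise level are made up for the example.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Orientations discretised on a half-circle (a 2D stand-in for the sphere used in HARDI).
n_dir = 90
angles = np.linspace(0, np.pi, n_dir, endpoint=False)

# Hypothetical single-fiber response: signal as a function of the angle to the fiber.
def response(delta):
    return np.exp(-8.0 * np.sin(delta) ** 2)

# Convolution matrix: each column is the response of a unit-mass fiber at one orientation.
R = response(angles[:, None] - angles[None, :])

# Ground-truth FOD for a voxel containing two crossing fiber populations.
fod_true = np.zeros(n_dir)
fod_true[15] = 0.6    # fiber at 30 degrees
fod_true[60] = 0.4    # fiber at 120 degrees

# Simulated diffusion signal for this voxel, with measurement noise.
signal = R @ fod_true + 0.02 * rng.standard_normal(n_dir)

# Constrained deconvolution: recover a non-negative FOD via non-negative least squares.
fod_est, _ = nnls(R, signal)

print("estimated peaks near (deg):",
      np.degrees(angles[fod_est > 0.1 * fod_est.max()]).round(1))
```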

    Reference-less detection, astrometry, and photometry of faint companions with adaptive optics

    We propose a complete framework for the detection, astrometry, and photometry of faint companions from a sequence of adaptive optics corrected short exposures. The algorithms exploit the difference in statistics between the on-axis and off-axis intensity. Using moderate-Strehl-ratio data obtained with the natural guide star adaptive optics system on the Lick Observatory's 3-m Shane Telescope, we compare these methods to the standard approach of PSF fitting. We give detection limits for the Lick system, as well as a first guide to the expected accuracy of differential photometry and astrometry with the new techniques. The proposed approach to detection offers a new way of determining dynamic range, while the new algorithms for differential photometry and astrometry yield accurate results for very faint and close-in companions where PSF fitting fails. All three proposed algorithms are self-calibrating, i.e., they do not require observation of a calibration star, thus improving observing efficiency. Comment: Astrophysical Journal 698 (2009) 28-4
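
    The paper builds dedicated detectors from the statistics of on-axis versus off-axis intensity in short-exposure data; the toy below only illustrates that underlying idea with a generic two-sample test on simulated per-frame intensities and is not the proposed method. The exponential speckle model, companion flux, and frame count are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

n_frames = 2000   # number of short exposures

# Off-axis (speckle-only) pixel: exponential-like intensity fluctuations.
speckle_only = rng.exponential(scale=1.0, size=n_frames)

# Candidate pixel hosting a faint companion: speckles plus a small constant companion flux.
candidate = rng.exponential(scale=1.0, size=n_frames) + 0.3

# Does the candidate's intensity distribution differ from the speckle-only reference?
stat, p_value = ks_2samp(candidate, speckle_only)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.2e}")
# A small p-value flags the candidate as inconsistent with pure off-axis statistics.
```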

    On random tomography with unobservable projection angles

    We formulate and investigate a statistical inverse problem of a random tomographic nature, where a probability density function on $\mathbb{R}^3$ is to be recovered from observation of finitely many of its two-dimensional projections in random and unobservable directions. Such a problem is distinct from the classic problem of tomography, where both the projections and the unit vectors normal to the projection planes are observable. The problem arises in single-particle electron microscopy, a powerful method that biophysicists employ to learn the structure of biological macromolecules. Strictly speaking, the problem is unidentifiable, and an appropriate reformulation is suggested, hinging on ideas from Kendall's theory of shape. Within this setup, we demonstrate that a consistent solution to the problem may be derived, without attempting to estimate the unknown angles, if the density is assumed to admit a mixture representation. Comment: Published at http://dx.doi.org/10.1214/08-AOS673 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
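
    A minimal sketch of the observation model only (not of the shape-theoretic estimator): points are drawn from a hypothetical density on R^3, rotated by random rotations that are never recorded, and projected onto a plane. The inverse problem described above is to recover the density from such projections alone.

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(3)

# A hypothetical "particle": points sampled from a density on R^3
# (an anisotropic Gaussian blob standing in for a macromolecule).
n_points = 5000
particle = rng.standard_normal((n_points, 3)) * np.array([3.0, 1.0, 0.5])

# Each image is a projection of the particle after a random, unrecorded rotation.
n_projections = 4
random_rotations = Rotation.random(n_projections)   # uniform on SO(3)

projections = []
for i in range(n_projections):
    rotated = random_rotations[i].apply(particle)   # the rotation is never observed
    projections.append(rotated[:, :2])              # project along the viewing axis

# The statistical problem: recover the 3D density from such 2D projections alone.
print([p.shape for p in projections])
```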

    A stochastic model dissects cell states in biological transition processes

    Many biological processes, including differentiation, reprogramming, and disease transformations, involve transitions of cells through distinct states. Direct, unbiased investigation of cell states and their transitions is challenging due to several factors, including the limitations of single-cell assays. Here we present a stochastic model of cellular transitions that allows underlying single-cell information, including cell-state-specific parameters and the rates governing transitions between states, to be estimated from genome-wide, population-averaged time-course data. The key novelty of our approach lies in specifying latent stochastic models at the single-cell level and then aggregating these models to give a likelihood that links parameters at the single-cell level to observables at the population level. We apply our approach in the context of reprogramming to pluripotency. This yields new insights, including profiles of two intermediate cell states that are supported by independent single-cell studies. Our model provides a general conceptual framework for the study of cell transitions, including epigenetic transformations.
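
    The modelling idea can be sketched with a latent Markov chain per cell whose state is hidden in bulk measurements; aggregating many such cells yields the population-averaged time course that the method fits. The three states, transition probabilities, and marker levels below are hypothetical, and the paper's likelihood-based parameter estimation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

# Three latent cell states (illustrative labels: somatic, intermediate, reprogrammed)
# with hypothetical per-step transition probabilities; rows sum to 1.
P = np.array([[0.95, 0.05, 0.00],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])

# Hypothetical state-specific mean expression of one marker gene.
marker_mean = np.array([1.0, 4.0, 10.0])

n_cells, n_steps = 5000, 60
state = np.zeros(n_cells, dtype=int)   # all cells start in the somatic state
population_average = []

for _ in range(n_steps):
    # Population-averaged observable: the bulk measurement hides the single-cell states.
    expression = marker_mean[state] + 0.5 * rng.standard_normal(n_cells)
    population_average.append(expression.mean())

    # Latent single-cell dynamics: each cell jumps between states stochastically.
    u = rng.random(n_cells)
    cumulative = P[state].cumsum(axis=1)
    state = (u[:, None] < cumulative).argmax(axis=1)

print(np.round(population_average[::10], 2))   # population-level time course
```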

    Variational semi-blind sparse deconvolution with orthogonal kernel bases and its application to MRFM

    We present a variational Bayesian method for joint image reconstruction and point spread function (PSF) estimation when the PSF of the imaging device is only partially known. To solve this semi-blind deconvolution problem, prior distributions are specified for the PSF and the 3D image. Joint image reconstruction and PSF estimation are then performed within a Bayesian framework, using a variational algorithm to estimate the posterior distribution. The image prior distribution imposes an explicit atomic measure that corresponds to image sparsity. Importantly, the proposed Bayesian deconvolution algorithm does not require hand tuning. Simulation results clearly demonstrate that the semi-blind deconvolution algorithm compares favorably with a previous Markov chain Monte Carlo (MCMC) version of myopic sparse reconstruction. It significantly outperforms mismatched non-blind algorithms that rely on the assumption of perfect knowledge of the PSF. The algorithm is illustrated on real data from magnetic resonance force microscopy (MRFM).
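
    The paper's estimator is variational Bayesian; as a simplified, non-Bayesian stand-in, the 1D toy below alternates sparse deconvolution (iterative soft thresholding) with a least-squares update of the PSF coefficients in an orthogonal kernel basis, which mirrors the "semi-blind with orthogonal kernel bases" structure. Signal length, basis, sparsity, and noise level are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# --- Ground truth (all values illustrative) ---------------------------------
N, M = 200, 11                                   # signal length, odd PSF length
support = np.arange(M) - M // 2

psf_nominal = np.exp(-0.5 * (support / 1.5) ** 2)
psf_nominal /= psf_nominal.sum()                 # the partially known (nominal) PSF

# Orthogonal basis for PSF perturbations (orthonormalised low-order polynomials).
raw = np.vander(support / (M // 2), 4, increasing=True)[:, 1:]
B, _ = np.linalg.qr(raw)                         # columns: orthonormal kernel-basis vectors

c_true = np.array([0.02, -0.015, 0.01])          # unknown deviation of the true PSF
psf_true = psf_nominal + B @ c_true

x_true = np.zeros(N)                             # sparse, spike-like image
x_true[[30, 80, 81, 140]] = [1.0, 0.7, 0.5, 1.2]

y = np.convolve(x_true, psf_true, mode="same") + 0.01 * rng.standard_normal(N)

# --- Alternating estimation --------------------------------------------------
def ista(y, psf, lam=0.02, n_iter=300):
    """Sparse deconvolution for a fixed PSF via iterative soft thresholding."""
    x = np.zeros_like(y)
    step = 1.0 / np.sum(np.abs(psf)) ** 2        # 1 / Lipschitz bound of the gradient
    for _ in range(n_iter):
        resid = np.convolve(x, psf, mode="same") - y
        grad = np.convolve(resid, psf[::-1], mode="same")   # adjoint of the convolution
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x

psf_est = psf_nominal.copy()
for _ in range(10):
    # (1) Sparse image estimate with the current PSF.
    x_est = ista(y, psf_est)
    # (2) PSF-basis coefficients by least squares, since y is linear in them.
    design = np.column_stack([np.convolve(x_est, B[:, j], mode="same") for j in range(3)])
    rhs = y - np.convolve(x_est, psf_nominal, mode="same")
    c_est, *_ = np.linalg.lstsq(design, rhs, rcond=None)
    psf_est = psf_nominal + B @ c_est

print("true PSF coefficients:     ", c_true)
print("estimated PSF coefficients:", np.round(c_est, 4))
```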