
    Point spread function estimation and uncertainty quantification

    An important component of analyzing images quantitatively is modeling image blur due to effects from the image capture system. When the effect of image blur is assumed to be translation invariant and isotropic, it can generally be modeled as convolution with a radially symmetric kernel, called the point spread function (PSF). Standard techniques for estimating the PSF involve imaging a bright point source, but this is not always feasible (e.g. high-energy radiography). This work provides a novel non-parametric approach to estimating the PSF from a calibration image of a vertical edge. Moreover, the approach is set within a hierarchical Bayesian framework that, in addition to providing a method for estimation, also gives a quantification of uncertainty in the estimate via Markov chain Monte Carlo (MCMC) methods. In the development, we employ a recently developed enhancement to Gibbs sampling referred to as partial collapse. The improved algorithm has been independently derived in several other works; however, it has been shown that partial collapse may be improperly implemented, resulting in a sampling algorithm that no longer converges to the desired posterior. The algorithm we present is proven to satisfy invariance with respect to the target density. This work and its implementation on radiographic data from the U.S. Department of Energy's Cygnus high-energy X-ray diagnostic system have culminated in a paper titled "Partially Collapsed Gibbs Samplers for Linear Inverse Problems and Applications to X-ray Imaging." The other component of this work is mainly theoretical and develops the requisite functional analysis to make the integration-based model derived in the first chapter rigorous. The literature source is functional analysis related to distribution theory for linear partial differential equations, and it briefly addresses infinite-dimensional probability theory for Hilbert space-valued stochastic processes, a burgeoning and very active research area for the analysis of inverse problems. To our knowledge, this provides a new development of a notion of radial symmetry for L2-based distributions. This work results in defining an L2-complete space of radially symmetric distributions, which is an important step toward rigorously placing the PSF estimation problem in the infinite-dimensional framework and is part of ongoing work toward that end.
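
    For concreteness, the sketch below is a minimal standard (fully conditional) hierarchical Gibbs sampler for a toy 1D linear inverse problem y = Ax + e of the kind the abstract describes. It is not the partially collapsed variant the thesis develops, and the blur matrix, Gamma hyperparameters, and prior precision structure are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem y = A x + e, with A a Gaussian blur matrix.
n = 64
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)           # rows sum to 1 (flux preserving)
x_true = (np.abs(t - n // 2) < 10).astype(float)
y = A @ x_true + 0.01 * rng.standard_normal(n)

L = np.eye(n)              # prior precision structure (identity, assumed)
a = b = c = d = 1e-2       # Gamma hyperparameters (assumed values)
lam, delta = 1.0, 1.0      # initial noise / prior precisions
samples = []

for it in range(2000):
    # x | lam, delta, y ~ N(mu, Q^{-1}) with Q = lam A^T A + delta L
    Q = lam * A.T @ A + delta * L
    R = np.linalg.cholesky(Q)                        # Q = R R^T
    mu = np.linalg.solve(Q, lam * A.T @ y)
    x = mu + np.linalg.solve(R.T, rng.standard_normal(n))
    # lam | x, y ~ Gamma(a + n/2, b + ||y - A x||^2 / 2)
    lam = rng.gamma(a + n / 2, 1.0 / (b + 0.5 * np.sum((y - A @ x) ** 2)))
    # delta | x ~ Gamma(c + n/2, d + x^T L x / 2)
    delta = rng.gamma(c + n / 2, 1.0 / (d + 0.5 * x @ L @ x))
    samples.append(x)

post_mean = np.mean(samples[500:], axis=0)   # posterior mean after burn-in
```

    The chain of MCMC samples yields not just the posterior mean but pointwise credible intervals, which is the uncertainty quantification the abstract refers to.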

    Bayesian image restoration and bacteria detection in optical endomicroscopy

    Optical microscopy systems can be used to obtain high-resolution microscopic images of tissue cultures and ex vivo tissue samples. This imaging technique can be translated to in vivo, in situ applications by using optical fibres and miniature optics. Fibred optical endomicroscopy (OEM) can enable optical biopsy in organs inaccessible to any other imaging system, and hence can provide rapid and accurate diagnosis in a short time. The raw data the system produces are difficult to interpret, as they are modulated by a fibre bundle pattern, producing what is called the “honeycomb effect”. Moreover, the data are further degraded by the fibre-core cross-coupling problem. On the other hand, there is an unmet clinical need for automatic tools that can help clinicians detect fluorescently labelled bacteria in distal lung images. The aim of this thesis is to develop advanced image processing algorithms that can address the above-mentioned problems. First, we provide a statistical model for the fibre-core cross-coupling problem and the sparse sampling by imaging fibre bundles (honeycomb artefact), which are formulated here as a restoration problem for the first time in the literature. We then introduce a non-linear interpolation method, based on Gaussian process regression, in order to recover an interpretable scene from the deconvolved data. Second, we develop two bacteria detection algorithms, each of which provides different characteristics. The first approach considers a joint formulation of the sparse coding and anomaly detection problems. The anomalies here are considered candidate bacteria, which are annotated with the help of a trained clinician. Although this approach provides good detection performance and outperforms existing methods in the literature, the user has to carefully tune some crucial model parameters. Hence, we propose a more adaptive approach, for which a Bayesian framework is adopted. This approach not only outperforms the proposed supervised approach and existing methods in the literature but also provides computation times that compete with optimization-based methods.
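
    To illustrate the interpolation step, the sketch below uses Gaussian process regression (via scikit-learn, with an assumed RBF-plus-noise kernel and synthetic core positions and intensities) to map scattered fibre-core measurements onto a regular grid; the thesis's actual statistical model and deconvolution pipeline are more involved:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Scattered "fibre core" centres and measured intensities (synthetic stand-in).
cores = rng.uniform(0, 1, size=(400, 2))
scene = lambda p: np.sin(4 * np.pi * p[:, 0]) * np.cos(4 * np.pi * p[:, 1])
intensities = scene(cores) + 0.05 * rng.standard_normal(len(cores))

# RBF kernel models smooth spatial correlation; WhiteKernel absorbs sensor noise.
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=0.1) + WhiteKernel(noise_level=0.01),
    normalize_y=True,
)
gp.fit(cores, intensities)

# Evaluate the posterior mean on a regular grid: an interpolated,
# honeycomb-free image, with a standard deviation map as a byproduct.
g = np.linspace(0, 1, 128)
gx, gy = np.meshgrid(g, g)
grid = np.column_stack([gx.ravel(), gy.ravel()])
image, std = gp.predict(grid, return_std=True)
image = image.reshape(128, 128)
```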

    Bayesian estimation of luminosity distributions and model based classification of astrophysical sources

    The distribution of the flux (observed luminosity) of astrophysical objects is of great interest as a measure of the evolution of various types of astronomical source populations and for testing theoretical assumptions about the Universe. This distribution is examined using the cumulative distribution of the number of sources (N) detected at a given flux (S), known to astronomers as the log(N)−log(S) curve. Estimating the log(N)−log(S) curve from observational data can be quite challenging, though, since statistical fluctuations in the measurements and detector biases often lead to measurement uncertainties. Moreover, the location of the source with respect to the centre of observation and the background contamination can lead to non-detection of sources (missing data). This phenomenon becomes more apparent for low-flux objects, indicating that the missing-data mechanism is non-ignorable. In order to avoid inferential biases, it is vital that the different sources of uncertainty, potential bias, and the missing-data mechanism be properly accounted for. However, the majority of the methods in the relevant literature for estimating the log(N)−log(S) curve are based on the assumption of complete surveys with no missing data. In this thesis, we present a Bayesian hierarchical model that properly accounts for the missing-data mechanism and the other sources of uncertainty. More specifically, we model the joint distribution of the complete data and model parameters and then derive the posterior distribution of the model parameters marginalised across all missing-data information. We utilise a blocked Gibbs sampler in order to extract samples from the joint posterior distribution of the parameters of interest. By using a Bayesian approach, we produce a posterior distribution for the log(N)−log(S) curve instead of a best-fit estimate. We apply this method to the Chandra Deep Field South (CDFS) dataset. Furthermore, approaching this complicated problem from a fully Bayesian angle enables us to appropriately model the uncertainty about the conversion factor between observed source photon counts and observed luminosity. Using relevant spectral data for the observed sources, the uncertainty about the flux-to-count conversion factor γ for each observed source is expressed through MCMC draws from the posterior distribution of γ for each source. In order to account for this uncertainty in the non-detected sources, we develop a novel statistical approach for fitting a hierarchical prior on the flux-to-count conversion factor based on the MCMC samples from the observed sources (a statistical approach that can be used in many modelling problems of a similar nature). We derive in a similar manner the posterior distribution of the model parameters, marginalised across the missing data, and we explore the impact on our posterior estimates of the parameters of interest in the CDFS dataset. Studying the log(N)−log(S) relationship for different source populations can give us further insight into the differences between the various types of astronomical populations. Hence, we propose a new soft-clustering scheme for classifying galaxies into different activity classes (Star Forming Galaxies, LINERs, Seyferts and Composites) using 4 optical emission-line ratios simultaneously ([NII]/Hα, [SII]/Hα, [OI]/Hα and [OIII]/Hβ). The most widely used classification approach is based on 3 diagnostic diagrams, which are 2-dimensional projections of those emission-line ratios. Those diagnostics assume fixed classification boundaries, which are developed through theoretical models. However, the use of multiple diagnostic diagrams independently of one another often gives contradictory classifications for the same galaxy, and the fact that those diagrams are 2-dimensional projections of a complex multi-dimensional space limits the power of those diagnostics. In contrast, we present a data-driven soft-clustering scheme that estimates the posterior probability of each galaxy belonging to each activity class. More specifically, we fit a large number of multivariate Gaussian distributions to the Sloan Digital Sky Survey (SDSS) dataset in order to capture local structures, and subsequently group the multivariate Gaussian distributions to represent the complex multi-dimensional structure of the joint distribution of the 4 galaxy activity classes. Finally, we discuss how this soft clustering can lead to estimates of population-specific log(N)−log(S) relationships.
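
    A minimal sketch of this kind of mixture-based soft classification follows. It fits a separate multi-component Gaussian mixture per activity class and applies Bayes' rule over the class densities, which is a simplification of the grouping scheme described above; the arrays x, labels, and priors are hypothetical placeholders:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# x: (n_galaxies, 4) array of log emission-line ratios
#    ([NII]/Ha, [SII]/Ha, [OI]/Ha, [OIII]/Hb); labels: training-class labels.
classes = ["SF", "LINER", "Seyfert", "Composite"]

def fit_class_mixtures(x, labels, n_components=10):
    """Fit a multi-component Gaussian mixture to each activity class."""
    return {c: GaussianMixture(n_components, covariance_type="full",
                               random_state=0).fit(x[labels == c])
            for c in classes}

def soft_classify(mixtures, x, priors):
    """Posterior P(class | x) via Bayes' rule over per-class mixture densities."""
    logp = np.column_stack([np.log(priors[c]) + mixtures[c].score_samples(x)
                            for c in classes])
    logp -= logp.max(axis=1, keepdims=True)    # stabilise the normalisation
    p = np.exp(logp)
    return p / p.sum(axis=1, keepdims=True)    # each row sums to 1
```

    The returned row of four probabilities is the soft classification: a galaxy near a boundary gets appreciable weight in several classes instead of a single contradictory hard label.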

    High-resolution diffusion-weighted brain MRI under motion

    Magnetic resonance imaging is one of the fastest developing medical imaging techniques. It provides excellent soft-tissue contrast and has been a leading tool for neuroradiology and neuroscience research over the last decades. One of the possible MR imaging contrasts is the ability to visualize diffusion processes. The method, referred to as diffusion-weighted imaging, is one of the most common clinical contrasts, but it is prone to artifacts and is challenging to acquire at high resolutions. This thesis aimed to improve the resolution of diffusion-weighted imaging, both in a clinical and in a research context. While diffusion-weighted imaging has traditionally been considered a 2D technique, the manuscripts and methods presented here explore 3D diffusion acquisitions with isotropic resolution. Acquiring multiple small 3D volumes, or slabs, which are combined into one full volume, has been the method of choice in this work. The first paper explores a parallel-imaging-driven multi-echo EPI readout to enable high resolution with reduced geometric distortions. The work performed on diffusion phase correction led to an understanding that was used for the subsequent multi-slab papers. The second and third papers introduce the diffusion-weighted 3D multi-slab echo-planar imaging technique and explore its advantages and performance. As the method requires a slightly increased acquisition time, the need for prospective motion correction became apparent. The fourth paper proposes a new motion navigator, dubbed FatNav, which uses the subcutaneous fat surrounding the skull for rigid-body head motion estimation. The spatially sparse representation of the fat signal allows for high parallel imaging acceleration factors, short acquisition times, and reduced geometric distortions of the navigator. The fifth manuscript presents a combination of the high-resolution 3D multi-slab technique and a modified FatNav module. Unlike our first FatNav implementation, which used a single sagittal slab, this modified navigator acquires orthogonal projections of the head using the fat signal alone. The combined use of both presented methods provides a promising start for a fully motion-corrected high-resolution diffusion acquisition in a clinical setting.
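
    The sketch below illustrates the kind of rigid-body registration a motion navigator enables: it estimates six motion parameters (three rotations, three translations) by aligning a moving navigator volume to a reference. It is a retrospective toy under assumed choices (least-squares metric, Powell optimizer), not the prospective FatNav implementation:

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def apply_rigid(vol, params):
    """Rigid transform: 3 Euler angles (degrees) and 3 translations (voxels)."""
    R = Rotation.from_euler("xyz", params[:3], degrees=True).as_matrix()
    centre = (np.array(vol.shape) - 1) / 2
    # affine_transform maps output coords to input coords:
    # input = R @ (output - centre) + centre + t  (rotation about the centre)
    offset = centre - R @ centre + params[3:]
    return affine_transform(vol, R, offset=offset, order=1)

def estimate_motion(ref, moving):
    """Estimate rigid parameters aligning `moving` to `ref` by least squares."""
    cost = lambda p: np.mean((apply_rigid(moving, p) - ref) ** 2)
    res = minimize(cost, x0=np.zeros(6), method="Powell")
    return res.x   # estimated [rx, ry, rz, tx, ty, tz]
```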

    An iterative method to deconvolve coded-mask images

    Efficient astronomical imaging at energies greater than 20 keV is mainly achieved through modulation techniques, either temporal (e.g. HXMT) or spatial (e.g. IBIS/INTEGRAL). Currently, the coded-mask technique is widely used, with the true spatial intensity distribution reconstructed from the data by the cross-correlation (CC) method. As the sensitivity of instruments increases, so must the angular resolution, in order to avoid problems with source confusion. The IBIS 12’ angular resolution is clearly not sufficient to distinguish all the sources in the crowded field of the Galactic Centre. One possibility to overcome this problem is to change the deconvolution method. The objective of this thesis is to evaluate the real imaging capability of the Direct Demodulation (DD) method, which deconvolves incomplete and noisy data by iteratively solving the image formation equation under physical constraints. With the goal of exploiting the DD technique, the HXMT mission was designed in the early 1990s; its imaging capability is obtained through temporal modulation of the detected counts by a set of mechanical collimators. To achieve this goal, we developed a Lucy-Richardson (LR) code to reconstruct hard X-ray/soft gamma-ray images directly. It assumes that the data and the noise follow a Poisson distribution, and it guarantees the non-negativity of the restored images. For the moment, no explicit regularization or additional constraint has been implemented in the underlying optimization problem, so it remains ill-posed. Due to the general nature of DD and the fact that HXMT has yet to fly, IBIS/INTEGRAL data and its PSF were used to check our code. A purely geometrical PSF, considering only the effects due to photon propagation from the mask to the detector, was created. Our CC code implements the same balanced cross-correlation as the standard software for IBIS/INTEGRAL analysis, and the CC-deconvolved images serve as the reference for the image quality obtained with the LR method. The great improvement in theoretical angular resolution and location precision is evident, and it is independent of the source position in the total FOV, the iteration number, and the source flux. Within the parameters of the simulations used, the LR statistical uncertainty was found to be a factor of 10 smaller than that obtained with the CC. Furthermore, the LR-deconvolved images have a less fluctuating reconstructed background. The main LR drawback is the flux evaluation of the reconstructed sources, mainly due to the choice of the correct iteration number. The use of a priori information about the unknown object would allow a complete regularization of the problem, probably also solving the problem with the flux estimation. Keywords: coded mask, Lucy, Richardson, INTEGRAL, IBIS
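
    The core of the LR code described above is the classical Richardson-Lucy iteration, the maximum-likelihood fixed-point update for Poisson data. A minimal 2D sketch (assuming float arrays and a generic FFT-based convolution rather than the instrument-specific coded-mask forward model) is:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(data, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy deconvolution: the MLE iteration for Poisson data.

    `data` and `psf` are 2D float arrays. Multiplicative updates keep the
    estimate non-negative; with no explicit regularisation the problem stays
    ill-posed, so n_iter acts as an implicit regulariser (too many iterations
    amplify noise, which is the flux-evaluation drawback noted above).
    """
    psf = psf / psf.sum()                 # flux-preserving normalisation
    psf_mirror = psf[::-1, ::-1]          # adjoint of convolution with psf
    x = np.full_like(data, data.mean())   # flat, strictly positive start
    for _ in range(n_iter):
        blurred = fftconvolve(x, psf, mode="same")
        ratio = data / np.maximum(blurred, eps)   # guard against division by 0
        x *= fftconvolve(ratio, psf_mirror, mode="same")
    return x
```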

    Modeling and evaluation of new collimator geometries in SPECT
