
    Generative Models for Preprocessing of Hospital Brain Scans

    In this thesis I will present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and they present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis presents a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I will demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I will then show how the same prior can be used for within-subject, intermodal image registration, allowing large numbers of clinical scans to be registered more robustly. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. The first is a method for handling missing data, which allows entirely missing modalities to be predicted from one, or a few, MR contrasts. The second is a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn them using backpropagation. I show that this model is robust to sequence and scanner variability. Finally, I show examples of fitting a population-level, generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.
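
    To make the flavour of this generative approach concrete, the sketch below shows a minimal MAP-style super-resolution of a thick-slice profile under simplifying assumptions: a known slice-averaging forward model and a generic smoothness prior standing in for the thesis's multimodal prior. The function names, the regularisation weight, the step size and the 1-D toy data are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def slice_average(x, factor):
    """Forward model A: average groups of `factor` thin slices into one thick slice."""
    return x.reshape(-1, factor).mean(axis=1)

def slice_average_adjoint(y, factor):
    """Adjoint of A: spread each thick-slice value evenly over its thin slices."""
    return np.repeat(y, factor) / factor

def map_super_resolve(y, factor, lam=0.1, step=0.5, n_iter=500):
    """Minimise ||A x - y||^2 + lam * ||D x||^2 by gradient descent.

    D is a first-difference operator, i.e. a simple smoothness prior used here
    only as a stand-in for the multimodal prior described in the thesis.
    """
    x = np.repeat(y, factor)  # crude initialisation (nearest-neighbour upsampling)
    for _ in range(n_iter):
        grad_data = slice_average_adjoint(slice_average(x, factor) - y, factor)  # A^T (A x - y)
        grad_prior = np.convolve(x, [-1.0, 2.0, -1.0], mode="same")              # D^T D x
        x = x - step * (grad_data + lam * grad_prior)
    return x

# Toy example: recover a smooth 32-sample profile from 8 noisy thick-slice averages.
rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0.0, np.pi, 32))
observed = slice_average(truth, 4) + 0.01 * rng.standard_normal(8)
estimate = map_super_resolve(observed, 4)
```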

    Correlated Multimodal Imaging in Life Sciences: Expanding the Biomedical Horizon

    The frontiers of bioimaging are currently being pushed toward the integration and correlation of several modalities to tackle biomedical research questions holistically and across multiple scales. Correlated Multimodal Imaging (CMI) gathers information about exactly the same specimen with two or more complementary modalities that, in combination, create a composite and complementary view of the sample (including insights into structure, function, dynamics and molecular composition). CMI makes it possible to describe biomedical processes within their overall spatio-temporal context and to gain a mechanistic understanding of cells, tissues, diseases or organisms by untangling their molecular mechanisms within their native environment. The two best-established CMI implementations for small animals and model organisms are hardware-fused platforms in preclinical imaging (Hybrid Imaging) and Correlated Light and Electron Microscopy (CLEM) in biological imaging. Although the merits of Preclinical Hybrid Imaging (PHI) and CLEM are well-established, both approaches would benefit from standardization of protocols, ontologies and data handling, and from the development of optimized and advanced implementations. Specifically, CMI pipelines that aim at bridging preclinical and biological imaging beyond CLEM and PHI are rare, but they bear great potential to substantially advance both bioimaging and biomedical research. CMI faces three main…

    Projection-to-Projection Translation for Hybrid X-ray and Magnetic Resonance Imaging

    Hybrid X-ray and magnetic resonance (MR) imaging holds great potential for interventional medical imaging applications, as it combines the broad variety of contrasts offered by MRI with the fast imaging of X-ray-based modalities. To fully utilize the potential of the vast amount of existing image enhancement techniques, the corresponding information from both modalities must be present in the same domain. For image-guided interventional procedures, X-ray fluoroscopy has proven to be the modality of choice. Synthesizing one modality from the other is, in this case, an ill-posed problem due to ambiguous signal and overlapping structures in projective geometry. To take on these challenges, we present a learning-based solution to MR to X-ray projection-to-projection translation. We propose an image generator network that focuses on high representation capacity in higher-resolution layers to allow for accurate synthesis of fine details in the projection images. Additionally, a weighting scheme in the loss computation that favors high-frequency structures is proposed to focus on the important details and contours in projection imaging. The proposed extensions prove valuable in generating X-ray projection images with a natural appearance. Our approach achieves a deviation from the ground truth of only 6% and a structural similarity measure of 0.913 ± 0.005. In particular, the high-frequency weighting assists in generating projection images with a sharp appearance and reduces erroneously synthesized fine details.
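
    As a hedged illustration of the kind of loss weighting the abstract describes (the paper's exact scheme, generator architecture and training setup are not reproduced here), the snippet below re-weights a per-pixel L1 loss by the gradient magnitude of the target so that edges and fine structures contribute more. The `alpha` parameter and the forward-difference weight map are assumptions made for this example.

```python
import numpy as np

def gradient_magnitude(img):
    """Forward-difference gradient magnitude of a 2-D image."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.hypot(gx, gy)

def high_frequency_weighted_l1(pred, target, alpha=4.0):
    """Per-pixel L1 loss re-weighted towards high-frequency regions of the target.

    alpha controls how strongly edges and contours dominate; the weighting used
    in the paper may differ from this simple gradient-based choice.
    """
    grad = gradient_magnitude(target)
    weights = 1.0 + alpha * grad / (grad.max() + 1e-8)
    return float(np.mean(weights * np.abs(pred - target)))

# Toy example: errors crossing an edge are penalised more than errors in flat regions.
target = np.zeros((64, 64))
target[16:48, 16:48] = 1.0
pred = target.copy()
pred[30:34, :] *= 0.5  # synthetic error band crossing the square's edges
loss = high_frequency_weighted_l1(pred, target)
```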

    Estimation of Fiber Orientations Using Neighborhood Information

    Data from diffusion magnetic resonance imaging (dMRI) can be used to reconstruct fiber tracts, for example in muscle and white matter. Estimation of fiber orientations (FOs) is a crucial step in the reconstruction process, and these estimates can be corrupted by noise. In this paper, a new method called Fiber Orientation Reconstruction using Neighborhood Information (FORNI) is described and shown to reduce the effects of noise and improve FO estimation performance by incorporating spatial consistency. FORNI uses a fixed tensor basis to model the diffusion-weighted signals, which has the advantage of providing an explicit relationship between the basis vectors and the FOs. FO spatial coherence is encouraged using weighted ℓ1-norm regularization terms, which capture the interaction of directional information between neighboring voxels. Data fidelity is encouraged using a squared error between the observed and reconstructed diffusion-weighted signals. After appropriate weighting of these competing objectives, the resulting objective function is minimized using a block coordinate descent algorithm, and a straightforward parallelization strategy is used to speed up processing. Experiments were performed on a digital crossing phantom, ex vivo tongue dMRI data, and in vivo brain dMRI data for both qualitative and quantitative evaluation. The results demonstrate that FORNI improves the quality of FO estimation over other state-of-the-art algorithms.
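
    To make the structure of that objective concrete, here is a minimal sketch under simplifying assumptions: a prolate single-tensor basis, uniform neighborhood weights, and illustrative b-value and diffusivity parameters. FORNI's actual weights couple directional information between adjacent voxels, and the minimization uses block coordinate descent, neither of which is reproduced here.

```python
import numpy as np

def tensor_basis_signals(bvecs, basis_dirs, b=1000.0, lam_para=1.7e-3, lam_perp=0.3e-3):
    """Fixed tensor basis: predicted diffusion-weighted attenuation per basis direction.

    Column k holds exp(-b * g^T D_k g) for a prolate tensor D_k aligned with basis
    direction k (eigenvalues lam_para, lam_perp, lam_perp); b is assumed in s/mm^2.
    """
    cosines = bvecs @ basis_dirs.T                         # (n_gradients, n_basis)
    adc = lam_perp + (lam_para - lam_perp) * cosines ** 2  # apparent diffusion coefficient
    return np.exp(-b * adc)

def forni_style_objective(F, S, Phi, neighbor_weights, lam=0.5):
    """Squared-error data fidelity plus neighborhood-weighted l1 penalty on F.

    F: (n_voxels, n_basis) non-negative mixture fractions (one row per voxel)
    S: (n_voxels, n_gradients) observed diffusion-weighted attenuations
    neighbor_weights: (n_voxels, n_basis) weights that in FORNI would be derived
        from neighboring voxels' directional information (uniform here).
    """
    data_term = np.sum((F @ Phi.T - S) ** 2)
    coherence_term = np.sum(neighbor_weights * np.abs(F))
    return data_term + lam * coherence_term

# Tiny example: five voxels, 30 gradient directions, 20 candidate fiber orientations.
rng = np.random.default_rng(0)
bvecs = rng.standard_normal((30, 3)); bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
basis_dirs = rng.standard_normal((20, 3)); basis_dirs /= np.linalg.norm(basis_dirs, axis=1, keepdims=True)
Phi = tensor_basis_signals(bvecs, basis_dirs)
F = 0.05 * np.abs(rng.standard_normal((5, 20)))
S = F @ Phi.T                                   # noise-free synthetic signals
value = forni_style_objective(F, S, Phi, np.ones_like(F))
```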

    Photoacoustic tomography: fundamentals, advances and prospects

    Optical microscopy has been contributing to the development of the life sciences for more than three centuries. However, due to strong optical scattering in tissue, its in vivo imaging ability has been restricted to studies at superficial depths. Advances in photoacoustic tomography (PAT) now allow multiscale imaging at depths from sub-millimeter to several centimeters, with spatial resolutions from sub-micrometer to sub-millimeter. Because of this high scalability and its unique optical absorption contrast, PAT is capable of performing anatomical, functional, molecular and fluid-dynamic imaging at various system levels, and is playing an increasingly important role in fundamental biological research and clinical practice. This Review discusses recent technical progress in PAT and presents corresponding applications. It ends with a discussion of several prospects and their technical challenges.
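
    The optical absorption contrast mentioned above follows from the standard photoacoustic generation relation, a textbook result not stated in the abstract itself:

```latex
% Initial photoacoustic pressure rise (standard relation, under stress and thermal
% confinement; symbols are the usual textbook ones, not taken from the abstract):
\[
  p_0(\mathbf{r}) \;=\; \Gamma(\mathbf{r})\,\mu_a(\mathbf{r})\,F(\mathbf{r}),
\]
% where \Gamma is the Grueneisen parameter, \mu_a the optical absorption coefficient
% and F the local optical fluence, so the detected acoustic signal scales directly
% with optical absorption.
```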