
    An MRF-UNet Product of Experts for Image Segmentation

    While convolutional neural networks (CNNs) trained by back-propagation have seen unprecedented success at semantic segmentation tasks, they are known to struggle on out-of-distribution data. Markov random fields (MRFs), on the other hand, encode simpler distributions over labels that, although less flexible than UNets, are less prone to over-fitting. In this paper, we propose to fuse both strategies by computing the product of the distributions of a UNet and an MRF. As this product is intractable, we solve for an approximate distribution using an iterative mean-field approach. The resulting MRF-UNet is trained jointly by back-propagation. In contrast to other works using conditional random fields (CRFs), the MRF has no dependency on the imaging data, which should allow for less over-fitting. We show on 3D neuroimaging data that this novel network improves generalisation to out-of-distribution samples. Furthermore, it allows the overall number of parameters to be reduced while preserving high accuracy. These results suggest that a classic MRF smoothness prior can reduce over-fitting when integrated into a CNN model in a principled way. Our implementation is available at https://github.com/balbasty/nitorch.
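    A minimal sketch of the idea (not the authors' nitorch implementation), assuming a 6-neighbourhood pairwise term and a learned label-compatibility matrix: the UNet logits act as the unary expert, the MRF contributes a data-independent smoothness expert, and a few mean-field iterations approximate their intractable product.

    import torch
    import torch.nn.functional as F

    def mean_field_fusion(unet_logits, compat, n_iter=5):
        """unet_logits: (B, K, D, H, W) raw class scores from the UNet.
        compat: (K, K) label-compatibility matrix of the MRF (trained jointly).
        Returns the approximate posterior q of the UNet x MRF product of experts."""
        B, K, D, H, W = unet_logits.shape
        # 6-neighbourhood kernel for the pairwise MRF messages (an assumption)
        kernel = torch.zeros(K, 1, 3, 3, 3)
        kernel[:, 0, 1, 1, 0] = kernel[:, 0, 1, 1, 2] = 1.0
        kernel[:, 0, 1, 0, 1] = kernel[:, 0, 1, 2, 1] = 1.0
        kernel[:, 0, 0, 1, 1] = kernel[:, 0, 2, 1, 1] = 1.0
        q = unet_logits.softmax(dim=1)                      # initialise from the UNet expert
        for _ in range(n_iter):
            msg = F.conv3d(q, kernel, padding=1, groups=K)  # aggregate neighbouring beliefs
            pairwise = torch.einsum('kl,bldhw->bkdhw', compat, msg)
            q = (unet_logits + pairwise).softmax(dim=1)     # combine experts, renormalise
        return q

    In joint training, compat would simply be a learnable parameter, so that back-propagation flows through the unrolled mean-field iterations.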

    NeuroNorm: An R package to standardize multiple structural MRI

    Preprocessing of structural MRI involves multiple steps to clean and standardize data before further analysis. Typically, researchers use numerous tools to create tailored preprocessing workflows adjusted to their dataset. This process hinders research reproducibility and transparency. In this paper, we introduce NeuroNorm, a robust and reproducible preprocessing pipeline that addresses the challenges of preparing structural MRI data. NeuroNorm adapts its workflow to the input datasets without manual intervention and uses state-of-the-art methods to guarantee high-standard results. We demonstrate NeuroNorm’s strength by preprocessing hundreds of MRI scans from three different sources that differ in image dimensions, voxel intensity ranges, patient characteristics, acquisition protocols and scanner type. The preprocessed images can be visually and analytically compared to each other as they share the same geometrical and intensity space. NeuroNorm supports clinicians and researchers with a robust, adaptive and comprehensible preprocessing pipeline, increasing the sensitivity and validity of subsequent analyses. NeuroNorm requires minimal user input and interaction, making it a user-friendly set of tools for users with basic programming experience.

    Growing importance of brain morphometry analysis in the clinical routine: The hidden impact of MR sequence parameters.

    Volumetric assessment based on structural MRI is increasingly recognized as an auxiliary tool to visual reading, also in examinations acquired in the clinical routine. However, MRI acquisition parameters can significantly influence these measures, which must be considered when interpreting the results on an individual patient level. This Technical Note demonstrates the problem. Using data from a dedicated experiment, we show the influence of two crucial sequence parameters on the grey matter/white matter (GM/WM) contrast and their impact on the measured volumes. A simulated contrast derived from the acquisition parameters TI/TR may serve as a surrogate and is highly correlated (r = 0.96) with the measured contrast.
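    A minimal sketch of how such a surrogate contrast could be computed from TI and TR alone, assuming the simplified inversion-recovery signal equation and nominal 3 T relaxation times; the exact surrogate used in the Technical Note may differ.

    import math

    T1_GM, T1_WM = 1300.0, 850.0   # assumed longitudinal relaxation times in ms

    def ir_signal(ti, tr, t1):
        """Simplified inversion-recovery longitudinal magnetisation."""
        return 1.0 - 2.0 * math.exp(-ti / t1) + math.exp(-tr / t1)

    def simulated_contrast(ti, tr):
        """Surrogate GM/WM contrast predicted from the sequence parameters alone."""
        return ir_signal(ti, tr, T1_WM) - ir_signal(ti, tr, T1_GM)

    print(simulated_contrast(ti=900.0, tr=2300.0))   # e.g. a typical MPRAGE setting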

    Removing inter-subject technical variability in magnetic resonance imaging studies

    Magnetic resonance imaging (MRI) intensities are acquired in arbitrary units, making scans non-comparable across sites and between subjects. Intensity normalization is a first step towards improving the comparability of images across subjects. However, we show that unwanted inter-scan variability associated with imaging site, scanner effects and other technical artifacts is still present after standard intensity normalization in large multi-site neuroimaging studies. We propose RAVEL (Removal of Artificial Voxel Effect by Linear regression), a tool to remove residual technical variability after intensity normalization. Following SVA and RUV [Leek and Storey, 2007, 2008; Gagnon-Bartsch and Speed, 2012], two batch-effect correction tools widely used in genomics, we decompose the voxel intensities of images registered to a template into a biological component and an unwanted-variation component. The unwanted-variation component is estimated from a control region obtained from the cerebrospinal fluid (CSF), where intensities are known to be unassociated with disease status and other clinical covariates. We perform a singular value decomposition (SVD) of the control voxels to estimate factors of unwanted variation. We then estimate the unwanted factors using linear regression for every voxel of the brain and take the residuals as the RAVEL-corrected intensities. We assess the performance of RAVEL using T1-weighted (T1-w) images from more than 900 subjects with Alzheimer’s disease (AD) and mild cognitive impairment (MCI), as well as healthy controls, from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. We compare RAVEL to intensity-normalization-only methods, histogram matching, and White Stripe. We show that RAVEL performs best at improving the replicability of the brain regions that are empirically found to be most associated with AD, and that these regions overlap significantly with structures impacted by AD (hippocampus, amygdala, parahippocampal gyrus, entorhinal area and fornix/stria terminalis). In addition, we show that the RAVEL-corrected intensities perform best in distinguishing between MCI subjects and healthy subjects using the mean hippocampal intensity (AUC = 67%), a marked improvement compared to results from intensity normalization alone (AUC = 63% and 59% for histogram matching and White Stripe, respectively). RAVEL is generalizable to many imaging modalities and shows promise for longitudinal studies. Additionally, because the choice of the control region is left to the user, RAVEL can be applied in studies of many brain disorders.
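    A minimal sketch of the RAVEL correction step (not the authors' implementation), assuming intensity-normalised, template-registered images stacked as a voxels-by-subjects matrix; the variable names and the number of unwanted factors are illustrative.

    import numpy as np

    def ravel_correct(V, csf_mask, n_factors=1):
        """V: (n_voxels, n_subjects) intensity matrix, one column per subject.
        csf_mask: boolean array selecting the CSF control voxels.
        Returns the RAVEL-corrected intensity matrix."""
        V_control = V[csf_mask]                          # control voxels, assumed free of biology
        centred = V_control - V_control.mean(axis=1, keepdims=True)
        # SVD of the control region: right singular vectors = unwanted factors
        _, _, Vt = np.linalg.svd(centred, full_matrices=False)
        W = Vt[:n_factors].T                             # (n_subjects, n_factors)
        # voxel-wise linear regression on the unwanted factors; keep the residuals
        X = np.column_stack([np.ones(V.shape[1]), W])
        beta, *_ = np.linalg.lstsq(X, V.T, rcond=None)
        residuals = V.T - X @ beta
        return residuals.T + V.mean(axis=1, keepdims=True)   # residuals, re-centred per voxel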

    PyRaDiSe: A Python package for DICOM-RT-based auto-segmentation pipeline construction and DICOM-RT data conversion.

    BACKGROUND AND OBJECTIVE Despite fast evolution cycles in deep learning methodologies for medical imaging in radiotherapy, auto-segmentation solutions rarely run in clinics due to the lack of open-source frameworks capable of processing DICOM RT Structure Sets. In addition, available open-source DICOM RT Structure Set converters rely exclusively on 2D reconstruction approaches, leading to pixelated contours with potentially low acceptance by healthcare professionals. PyRaDiSe, an open-source, deep-learning-framework-independent Python package, addresses these issues by providing a framework for building auto-segmentation solutions that operate directly on DICOM data. In addition, PyRaDiSe provides extensive DICOM RT Structure Set conversion and processing capabilities; thus, it also applies to auto-segmentation-related tasks such as dataset construction for deep learning model training. METHODS The PyRaDiSe package follows a holistic approach and provides DICOM data handling, deep learning model inference, pre-processing, and post-processing functionalities. The DICOM data handling allows for highly automated and flexible handling of DICOM image series, DICOM RT Structure Sets, and DICOM registrations, including 2D-based and 3D-based conversion from and to DICOM RT Structure Sets. For deep learning model inference, the provided skeleton classes can be extended straightforwardly, allowing any deep learning framework to be employed. Furthermore, a comprehensive set of pre-processing and post-processing routines is included that incorporates partial invertibility for restoring spatial properties, such as image origin or orientation. RESULTS The PyRaDiSe package, characterized by its flexibility and automated routines, allows for fast deployment and prototyping, reducing the effort of auto-segmentation pipeline implementation. Furthermore, while deep learning model inference is independent of the deep learning framework, it can easily be integrated with popular frameworks such as PyTorch or TensorFlow. The developed package has successfully demonstrated its capabilities in a research project at our institution for organs-at-risk segmentation in brain tumor patients. Furthermore, PyRaDiSe has shown its conversion performance for dataset construction. CONCLUSIONS The PyRaDiSe package closes the gap between data science and clinical radiotherapy by enabling deep learning segmentation models to be easily transferred into clinical research practice. PyRaDiSe is available at https://github.com/ubern-mia/pyradise and can be installed directly from the Python Package Index using pip install pyradise.