159 research outputs found

    DTIPrep: quality control of diffusion-weighted images

    Pre-print
    In the last decade, diffusion MRI (dMRI) studies of the human and animal brain have been used to investigate a multitude of pathologies and drug-related effects in neuroscience research. Study after study identifies white matter (WM) degeneration as a crucial biomarker for these diseases. The tool of choice for studying WM is dMRI; however, dMRI has an inherently low signal-to-noise ratio, and its acquisition requires a relatively long scan time. In fact, the high loads required occasionally stress scanner hardware past the point of physical failure.

    Synergy of image analysis for animal and human neuroimaging supports translational research on drug abuse

    Pre-print
    The use of structural magnetic resonance imaging (sMRI) and diffusion tensor imaging (DTI) in animal models of neurophysiology is of increasing interest to the neuroscience community. In this work, we present our approach to creating optimal translational studies that include both animal and human neuroimaging data within the framework of a study of post-natal neuro-development in intra-uterine cocaine exposure. We propose the use of non-invasive neuroimaging to study developmental brain structural and white matter pathway abnormalities via sMRI and DTI, as advanced MR imaging technology is readily available and automated image analysis methodologies have recently been transferred from the human to the animal imaging setting. For this purpose, we developed a synergistic, parallel approach to imaging and image analysis for the human and rodent branches of our study. We propose an equivalent design in both the selection of the developmental assessment stage and the neuroimaging setup. This approach brings significant advantages for studying neurobiological features of early brain development that are common to animals and humans, while preserving analysis capabilities only possible in animal research. This paper presents the main framework and individual methods for the proposed cross-species study design, as well as preliminary DTI cross-species comparative results in the intra-uterine cocaine-exposure study.

    Groupwise shape correspondence with local features

    Statistical shape analysis of anatomical structures plays an important role in many medical image analysis applications, such as understanding structural changes in anatomy across stages of growth or disease. Establishing accurate correspondence across object populations is essential for such statistical shape analysis studies. However, anatomical correspondence is rarely a direct result of spatial proximity of sample points; rather, it depends on many other features such as local curvature, position with respect to blood vessels, or connectivity to other parts of the anatomy. This dissertation presents a novel method for computing point-based correspondence among populations of surfaces by combining the spatial location of the sample points with non-spatial local features. A framework for optimizing correspondence using arbitrary local features is developed. The performance of the correspondence algorithm is objectively assessed using a set of evaluation metrics. The main focus of this research is on correspondence across human cortical surfaces. Statistical analysis of cortical thickness, which is key to many neurological research problems, is the driving problem. I show that incorporating geometric (sulcal depth) and non-geometric (DTI connectivity) knowledge about the cortex significantly improves cortical correspondence compared to existing techniques. Furthermore, I present a framework that is the first to allow white matter fiber connectivity to be used for improving cortical correspondence.
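The core idea, comparing sample points by a mix of spatial proximity and non-spatial local features, can be sketched as a weighted combined distance. This is a minimal numpy illustration only: the weights, the feature choices, and the actual correspondence-optimization objective from the dissertation are not shown.

```python
import numpy as np

def combined_distance(p, q, w_spatial=1.0, w_feat=1.0):
    """Distance between two surface sample points that mixes spatial
    proximity with non-spatial local features (e.g. sulcal depth,
    DTI connectivity). Each point is given as (xyz, feature_vector).
    The weights are illustrative, not values from the dissertation."""
    (xyz_p, f_p), (xyz_q, f_q) = p, q
    d_spatial = np.linalg.norm(np.asarray(xyz_p) - np.asarray(xyz_q))
    d_feature = np.linalg.norm(np.asarray(f_p) - np.asarray(f_q))
    return w_spatial * d_spatial + w_feat * d_feature
```

Raising `w_feat` relative to `w_spatial` lets anatomical features such as sulcal depth dominate over raw Euclidean closeness, which is the motivation described in the abstract.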

    Self-Supervised CSF Inpainting with Synthetic Atrophy for Improved Accuracy Validation of Cortical Surface Analyses

    Accuracy validation of cortical thickness measurement is a difficult problem due to the lack of ground truth data. To address this need, many methods have been developed to synthetically induce gray matter (GM) atrophy in an MRI via deformable registration, creating a set of images with known changes in cortical thickness. However, these methods often cause blurring in atrophied regions, and cannot simulate realistic atrophy within deep sulci where cerebrospinal fluid (CSF) is obscured or absent. In this paper, we present a solution using a self-supervised inpainting model to generate CSF in these regions and create images with more plausible GM/CSF boundaries. Specifically, we introduce a novel 3D GAN model that incorporates patch-based dropout training, edge map priors, and sinusoidal positional encoding, all of which are established methods previously limited to 2D domains. We show that our framework significantly improves the quality of the resulting synthetic images and is adaptable to unseen data with fine-tuning. We also demonstrate that our resulting dataset can be employed for accuracy validation of cortical segmentation and thickness measurement.
    Comment: Accepted at Medical Imaging with Deep Learning (MIDL) 202
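Of the components listed, sinusoidal positional encoding is the most self-contained to illustrate. Below is a minimal sketch of lifting the standard 1D formulation to a 3D volume by encoding each axis separately and concatenating the per-axis channels; this concatenation convention is an assumption for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def sinusoidal_encoding_1d(positions, dim):
    """Standard sinusoidal encoding along one axis: interleaved
    sin/cos at geometrically spaced frequencies."""
    positions = np.asarray(positions, dtype=np.float64)[:, None]
    freqs = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)
    angles = positions * freqs[None, :]
    enc = np.zeros((positions.shape[0], dim))
    enc[:, 0::2] = np.sin(angles)
    enc[:, 1::2] = np.cos(angles)
    return enc

def sinusoidal_encoding_3d(shape, dim):
    """Lift to 3D by giving each axis dim // 3 channels and
    concatenating (one common convention for volumetric data)."""
    d = dim // 3
    zs, ys, xs = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]),
                             np.arange(shape[2]), indexing="ij")
    parts = [sinusoidal_encoding_1d(c.ravel(), d) for c in (zs, ys, xs)]
    return np.concatenate(parts, axis=1).reshape(*shape, 3 * d)
```

At the volume origin every angle is zero, so the sine channels are 0 and the cosine channels are 1, which is a quick sanity check on the interleaving.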

    Assessing Test-time Variability for Interactive 3D Medical Image Segmentation with Diverse Point Prompts

    Interactive segmentation models leverage prompts from users to produce robust segmentations. This advancement is facilitated by prompt engineering, where interactive prompts serve as strong priors at test-time. However, this is an inherently subjective and hard-to-reproduce process. The variability in user expertise and the inherently ambiguous boundaries in medical images can lead to inconsistent prompt selections, potentially affecting segmentation accuracy. This issue has not yet been extensively explored for medical imaging. In this paper, we assess the test-time variability of interactive medical image segmentation with diverse point prompts. For a given target region, each point is classified into one of three sub-regions: boundary, margin, and center. Our goal is to identify a straightforward and efficient approach to optimal prompt selection at test-time based on three considerations: (1) the benefits of additional prompts, (2) the effects of prompt placement, and (3) strategies for optimal prompt selection. We conduct extensive experiments on the public Medical Segmentation Decathlon dataset for the challenging colon tumor segmentation task. We suggest an optimal strategy for test-time prompt selection, supported by comprehensive results. The code is publicly available at https://github.com/MedICL-VU/variabilit
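The boundary/margin/center split of a target region can be sketched by normalizing a point's distance to the mask edge against the deepest interior distance. This is a brute-force numpy illustration; the thresholds (0.4, 0.7) and the exact criterion are assumptions, not the paper's definition.

```python
import numpy as np

def classify_prompt(mask, point):
    """Classify a point prompt inside a binary target mask as
    'boundary', 'margin', or 'center' by its distance to the mask
    edge, normalized by the deepest interior distance.
    Thresholds are illustrative only."""
    fg = np.argwhere(mask)        # foreground pixel coordinates
    bg = np.argwhere(~mask)       # background pixel coordinates

    def dist_to_bg(p):
        return np.min(np.linalg.norm(bg - p, axis=1))

    d = dist_to_bg(np.asarray(point))
    d_max = max(dist_to_bg(p) for p in fg)   # deepest interior point
    r = d / d_max
    if r < 0.4:
        return "boundary"
    if r < 0.7:
        return "margin"
    return "center"
```

For real volumes one would use a distance transform (e.g. `scipy.ndimage.distance_transform_edt`) instead of this O(N^2) loop; the brute-force form keeps the sketch dependency-free.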

    ProMISe: Prompt-driven 3D Medical Image Segmentation Using Pretrained Image Foundation Models

    To address prevalent issues in medical imaging, such as data acquisition challenges and label availability, transfer learning from the natural to the medical image domain serves as a viable strategy to produce reliable segmentation results. However, several barriers between the domains need to be broken down, including contrast discrepancies, anatomical variability, and adapting 2D pretrained models to 3D segmentation tasks. In this paper, we propose ProMISe, a prompt-driven 3D medical image segmentation model that uses only a single point prompt to leverage knowledge from a pretrained 2D image foundation model. In particular, we use the pretrained vision transformer from the Segment Anything Model (SAM) and integrate lightweight adapters to extract depth-related (3D) spatial context without updating the pretrained weights. For robust results, a hybrid network with complementary encoders is designed, and a boundary-aware loss is proposed to achieve precise boundaries. We evaluate our model on two public datasets, for colon and pancreas tumor segmentation respectively. Compared to state-of-the-art segmentation methods with and without prompt engineering, our proposed method achieves superior performance. The code is publicly available at https://github.com/MedICL-VU/ProMISe.
    Comment: updated acknowledgments and fixed typo
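The "lightweight adapter" pattern, small trainable modules added alongside a frozen pretrained transformer, is commonly a bottleneck projection with a residual connection. The numpy sketch below is a generic illustration of that pattern under those assumptions; the class, shapes, and zero-initialization are not taken from the ProMISe code.

```python
import numpy as np

rng = np.random.default_rng(0)

class Adapter:
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    residual. Only these small matrices would be trained; the
    pretrained transformer weights stay frozen. Illustrative only."""
    def __init__(self, dim, bottleneck):
        self.W_down = rng.normal(0.0, 0.02, (dim, bottleneck))
        # Zero-init the up-projection so the adapter starts as an
        # identity map and cannot disturb the pretrained features.
        self.W_up = np.zeros((bottleneck, dim))

    def __call__(self, x):
        h = np.maximum(x @ self.W_down, 0.0)   # ReLU bottleneck
        return x + h @ self.W_up               # residual connection
```

With `dim` on the order of a ViT hidden size and `bottleneck` much smaller, the trainable parameter count stays tiny relative to the frozen backbone, which is the point of the approach described in the abstract.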

    Learning Site-specific Styles for Multi-institutional Unsupervised Cross-modality Domain Adaptation

    Unsupervised cross-modality domain adaptation is a challenging task in medical image analysis, and it becomes more challenging when source and target domain data are collected from multiple institutions. In this paper, we present our solution to the multi-institutional unsupervised domain adaptation task of the crossMoDA 2023 challenge. First, we perform unpaired image translation to translate source domain images to the target domain, designing a dynamic network that generates synthetic target domain images with controllable, site-specific styles. Afterwards, we train a segmentation model on the synthetic images and further reduce the domain gap by self-training. Our solution achieved 1st place in both the validation and testing phases of the challenge. The code repository is publicly available at https://github.com/MedICL-VU/crossmoda2023.
    Comment: crossMoDA 2023 challenge 1st place solutio
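The self-training step, retraining on the model's own confident predictions for unlabeled target-domain data, can be sketched as confidence-thresholded pseudo-label selection. This is a generic self-training sketch with an illustrative threshold, not the challenge solution's actual code.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Given per-voxel class probabilities from the current model
    (shape [..., num_classes]), keep only voxels where the model is
    confident. The returned labels and mask would be added to the
    training set for the next self-training round. Threshold is
    illustrative only."""
    conf = probs.max(axis=-1)      # confidence of the top class
    labels = probs.argmax(axis=-1) # pseudo-label = top class
    keep = conf >= threshold       # mask of voxels to trust
    return labels, keep
```

Iterating this (predict on target data, keep confident voxels, retrain, repeat) is the usual way such a loop narrows the remaining domain gap after image translation.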