56 research outputs found

    Computational Analysis of Brain Images: Towards a Useful Tool in Clinical Practice


    Fast and Sequence-Adaptive Whole-Brain Segmentation Using Parametric Bayesian Modeling

    Quantitative analysis of magnetic resonance imaging (MRI) scans of the brain requires accurate automated segmentation of anatomical structures. A desirable feature for such segmentation methods is to be robust against changes in acquisition platform and imaging protocol. In this paper we validate the performance of a segmentation algorithm designed to meet these requirements, building upon generative parametric models previously used in tissue classification. The method is tested on four different datasets acquired with different scanners, field strengths and pulse sequences, demonstrating comparable accuracy to state-of-the-art methods on T1-weighted scans while being one to two orders of magnitude faster. The proposed algorithm is also shown to be robust against small training datasets, and readily handles images with different MRI contrast as well as multi-contrast data.
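The generative parametric modeling mentioned above can be illustrated with a minimal sketch: fitting a Gaussian mixture over voxel intensities with expectation-maximization, where each mixture component corresponds to a tissue class. This is only an illustration of the general idea, not the paper's algorithm, which additionally uses a deformable probabilistic atlas as a spatial prior; here the prior is just the mixture weights.

```python
import numpy as np

def em_tissue_segmentation(intensities, n_classes=3, n_iter=50):
    """Minimal EM for a 1-D Gaussian mixture over voxel intensities.

    Illustrative sketch only: real whole-brain segmentation methods add
    spatial priors, bias-field correction and 3-D neighborhood structure.
    """
    x = np.asarray(intensities, dtype=float)
    # Initialise means from quantiles so classes start well separated.
    means = np.quantile(x, np.linspace(0.2, 0.8, n_classes))
    variances = np.full(n_classes, x.var() / n_classes)
    weights = np.full(n_classes, 1.0 / n_classes)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each voxel.
        log_p = (-0.5 * (x[:, None] - means) ** 2 / variances
                 - 0.5 * np.log(2 * np.pi * variances)
                 + np.log(weights))
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the soft assignments.
        nk = resp.sum(axis=0)
        means = (resp * x[:, None]).sum(axis=0) / nk
        variances = (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk
        weights = nk / x.size
    return resp.argmax(axis=1), means
```

Because every step is a closed-form parameter update, this kind of model can be fitted quickly and re-fitted per scan, which is what makes generative approaches adaptable to new pulse sequences.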

    A Head Template for Computational Dose Modelling for Transcranial Focused Ultrasound Stimulation

    Transcranial focused Ultrasound Stimulation (TUS) at low intensities is emerging as a novel non-invasive brain stimulation method with higher spatial resolution than established transcranial stimulation methods and the ability to also selectively stimulate deep brain areas. Accurate control of the focus position and strength of the TUS acoustic waves is important to make beneficial use of the high spatial resolution and to ensure safety. As the human skull causes strong attenuation and distortion of the waves, simulations of the transmitted waves are needed to accurately determine the TUS dose distribution inside the cranial cavity. The simulations require information about the skull morphology and its acoustic properties. Ideally, they are informed by computed tomography (CT) images of the individual head. However, suitable individual imaging data is often not readily available. For this reason, we here introduce and validate a head template that can be used to estimate the average effects of the skull on the TUS acoustic wave in the population. The template was created from CT images of the heads of 29 individuals of different ages (20-50 years), genders and ethnicities using an iterative non-linear co-registration procedure. For validation, we compared acoustic and thermal simulations based on the template to the average of the simulation results of all 29 individual datasets. Acoustic simulations were performed for a model of a focused transducer driven at 500 kHz, placed at 24 standardized positions defined by the EEG 10-10 system. Additional simulations at 250 kHz and 750 kHz at 16 of the positions were used for further confirmation. The amount of ultrasound-induced heating at 500 kHz was estimated for the same 16 transducer positions. Our results show that the template represents the median of the acoustic pressure and temperature maps of the individuals reasonably well in most cases.
    This underpins the usefulness of the template for planning and optimizing TUS interventions in studies of healthy young adults. Our results further indicate that the amount of variability between the individual simulation results depends on the transducer position. Specifically, the simulated ultrasound-induced heating inside the skull exhibited strong interindividual variability for three posterior positions close to the midline, caused by high variability of the local skull shape and composition. This should be taken into account when interpreting simulation results based on the template.
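The frequency- and thickness-dependent skull attenuation described above can be sketched with a back-of-the-envelope 1-D estimate: transmission losses at the water/bone interfaces plus exponential attenuation inside the bone. This is a deliberately crude illustration, not the paper's full-wave simulation pipeline; the impedance and attenuation values are typical literature figures used here as placeholder assumptions, and effects such as oblique incidence, reverberation and heterogeneous bone are ignored.

```python
def skull_transmission(freq_hz, thickness_m,
                       z_water=1.5e6, z_bone=7.8e6,
                       atten_db_cm_mhz=6.9):
    """Rough 1-D estimate of the pressure fraction transmitted through
    a flat, homogeneous skull layer at normal incidence.

    z_water, z_bone: acoustic impedances in rayl (placeholder values).
    atten_db_cm_mhz: attenuation in dB/(cm*MHz) (placeholder value).
    """
    # Pressure transmission coefficients at the two interfaces.
    t_in = 2 * z_bone / (z_water + z_bone)
    t_out = 2 * z_water / (z_bone + z_water)
    # Attenuation inside the bone, converted from dB to a linear factor.
    atten_db = atten_db_cm_mhz * (thickness_m * 100) * (freq_hz / 1e6)
    return t_in * t_out * 10 ** (-atten_db / 20)
```

Even this toy model reproduces the qualitative trends that motivate template-based dose modelling: transmission drops as frequency or local skull thickness increases, which is why interindividual variability in skull morphology translates directly into dose variability.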

    Brain-ID: Learning Contrast-agnostic Anatomical Representations for Brain Imaging

    Recent learning-based approaches have made astonishing advances in calibrated medical imaging like computerized tomography (CT), yet they struggle to generalize in uncalibrated modalities -- notably magnetic resonance (MR) imaging, where performance is highly sensitive to the differences in MR contrast, resolution, and orientation. This prevents broad applicability to diverse real-world clinical protocols. We introduce Brain-ID, an anatomical representation learning model for brain imaging. With the proposed "mild-to-severe" intra-subject generation, Brain-ID is robust to the subject-specific brain anatomy regardless of the appearance of acquired images (e.g., contrast, deformation, resolution, artifacts). Trained entirely on synthetic data, Brain-ID readily adapts to various downstream tasks through only one layer. We present new metrics to validate the intra- and inter-subject robustness of Brain-ID features, and evaluate their performance on four downstream applications, covering contrast-independent (anatomy reconstruction/contrast synthesis, brain segmentation) and contrast-dependent (super-resolution, bias field estimation) tasks. Extensive experiments on six public datasets demonstrate that Brain-ID achieves state-of-the-art performance in all tasks on different MRI modalities and CT, and more importantly, preserves its performance on low-resolution and small datasets. Code is available at https://github.com/peirong26/Brain-ID.
    Comment: 26 pages, 11 figures
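The synthetic intra-subject generation described above can be sketched in a few lines: keep one subject's anatomy (a label map) fixed while randomizing its appearance, so that a network trained on such samples is forced to learn contrast-agnostic features. The sketch below is in the spirit of Brain-ID / SynthSeg-style synthetic training; the 2-D setting, parameter ranges and bias-field model are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def synth_contrast(label_map, rng=None):
    """Generate one random-appearance image from a fixed 2-D label map.

    Sketch of contrast-randomized synthetic training data: the anatomy
    stays constant while contrast, bias field and noise are resampled.
    All parameter ranges are illustrative assumptions.
    """
    rng = np.random.default_rng(rng)
    labels = np.unique(label_map)
    # Random mean intensity per anatomical label -> random "contrast".
    means = {lab: rng.uniform(0.0, 1.0) for lab in labels}
    image = np.vectorize(means.get)(label_map).astype(float)
    # Smooth multiplicative bias field (here: a low-order 2-D ramp).
    h, w = label_map.shape
    yy, xx = np.mgrid[0:h, 0:w] / max(h, w)
    bias = 1.0 + rng.uniform(-0.3, 0.3) * xx + rng.uniform(-0.3, 0.3) * yy
    image *= bias
    # Additive Gaussian noise.
    image += rng.normal(0.0, 0.02, image.shape)
    return image
```

Drawing many such images from the same label map yields training pairs whose target (the anatomy) is identical while the input appearance varies widely, which is the mechanism behind the intra-subject robustness the abstract claims.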