
    Cross-scanner and cross-protocol multi-shell diffusion MRI data harmonization: algorithms and results

    Cross-scanner and cross-protocol variability of diffusion magnetic resonance imaging (dMRI) data is a known major obstacle in multi-site clinical studies, since it limits the ability to aggregate dMRI data and derived measures. Computational algorithms that harmonize the data and minimize such variability are critical to reliably combine datasets acquired from different scanners and/or protocols, thus improving the statistical power and sensitivity of multi-site studies. Different computational approaches have been proposed to harmonize diffusion MRI data or remove scanner-specific differences. To date, these methods have mostly been developed for, or evaluated on, single b-value diffusion MRI data. In this work, we present the evaluation results of 19 algorithms developed to harmonize the cross-scanner and cross-protocol variability of multi-shell diffusion MRI using a benchmark database. The proposed algorithms rely on various signal representation approaches and computational tools, such as rotationally invariant spherical harmonics, deep neural networks, and hybrid biophysical and statistical approaches. The benchmark database consists of data acquired from the same subjects on two scanners with different maximum gradient strength (80 and 300 mT/m) and with two protocols. We evaluated the performance of these algorithms for mapping multi-shell diffusion MRI data across scanners and across protocols using several state-of-the-art imaging measures. The results show that data harmonization algorithms can reduce cross-scanner and cross-protocol variability to a level similar to the scan-rescan variability obtained with the same scanner and protocol. In particular, the LinearRISH algorithm, based on adaptive linear mapping of rotationally invariant spherical harmonic (RISH) features, yields the lowest variability for our data in predicting fractional anisotropy (FA), mean diffusivity (MD), mean kurtosis (MK) and the RISH features themselves, while other algorithms, such as DIAMOND, SHResNet, DIQT, and CMResNet, show further improvement in harmonizing the return-to-origin probability (RTOP). The performance of the different approaches provides useful guidelines for data harmonization in future multi-site studies.
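
    As a rough, non-authoritative sketch of the linear-mapping idea behind RISH-based harmonization (the function names, array layout, and the global per-order scaling below are simplifying assumptions, not the challenge implementation), the following Python snippet computes RISH features per harmonic order and rescales the source scanner's spherical-harmonic coefficients so that their average RISH features match a reference scanner:

        # Minimal sketch of a LinearRISH-style mapping (illustrative only).
        # Assumed inputs: `sh_src` and `sh_ref` are spherical-harmonic
        # coefficient arrays of shape (n_voxels, n_coeffs) for the same
        # anatomy on two scanners, ordered l = 0, 2, 4, ... with 2l + 1
        # coefficients per order.
        import numpy as np

        def rish(sh, orders=(0, 2, 4)):
            """RISH features: sum of squared coefficients per harmonic order."""
            feats, start = [], 0
            for l in orders:
                n = 2 * l + 1
                feats.append(np.sum(sh[:, start:start + n] ** 2, axis=1))
                start += n
            return np.stack(feats, axis=1)            # (n_voxels, n_orders)

        def linear_rish_map(sh_src, sh_ref, orders=(0, 2, 4), eps=1e-12):
            """Scale source SH coefficients so their mean RISH features
            match the reference scanner (one global scale per order)."""
            scale = np.sqrt((rish(sh_ref, orders).mean(0) + eps) /
                            (rish(sh_src, orders).mean(0) + eps))
            out, start = sh_src.copy(), 0
            for i, l in enumerate(orders):
                n = 2 * l + 1
                out[:, start:start + n] *= scale[i]
                start += n
            return out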

    The Medical Segmentation Decathlon

    International challenges have become the de facto standard for the comparative assessment of image analysis algorithms on a specific task. Segmentation is so far the most widely investigated medical image processing task, but the various segmentation challenges have typically been organized in isolation, such that algorithm development was driven by the need to tackle a single specific clinical problem. We hypothesized that a method capable of performing well on multiple tasks would generalize well to a previously unseen task and potentially outperform a custom-designed solution. To investigate this hypothesis, we organized the Medical Segmentation Decathlon (MSD), a biomedical image analysis challenge in which algorithms compete across a multitude of both tasks and modalities. The underlying data set was designed to explore the axes of difficulty typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data, and small objects. The MSD challenge confirmed that algorithms with consistently good performance on a set of tasks preserved their good average performance on a different set of previously unseen tasks. Moreover, by monitoring the MSD winner for two years, we found that this algorithm continued to generalize well to a wide range of other clinical problems, further confirming our hypothesis. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms are mature, accurate, and generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate of algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized to non-AI experts.
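
    The second conclusion can be illustrated with a toy calculation (all Dice scores below are invented for illustration; this is not the MSD ranking code): rank algorithms by their mean Dice on a development set of tasks and check whether that ranking carries over to held-out tasks.

        # Toy illustration of cross-task consistency as a surrogate for
        # generalisability. Rows are algorithms, columns are tasks; the
        # numbers are made up.
        import numpy as np
        from scipy.stats import spearmanr

        dev_tasks = np.array([[0.82, 0.78, 0.90],    # algorithm A
                              [0.75, 0.70, 0.85],    # algorithm B
                              [0.88, 0.84, 0.91]])   # algorithm C
        unseen_tasks = np.array([[0.80, 0.76],
                                 [0.68, 0.71],
                                 [0.86, 0.83]])

        rho, p = spearmanr(dev_tasks.mean(axis=1), unseen_tasks.mean(axis=1))
        print(f"rank correlation, development vs. unseen performance: {rho:.2f}")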

    NTIRE 2020 Challenge on Spectral Reconstruction from an RGB Image

    This paper reviews the second challenge on spectral reconstruction from RGB images, i.e., the recovery of whole-scene hyperspectral (HS) information from a 3-channel RGB image. As in the previous challenge, two tracks were provided: (i) a "Clean" track, in which HS images are estimated from noise-free RGB images that are themselves calculated numerically from the ground-truth HS images and supplied spectral sensitivity functions; and (ii) a "Real World" track, simulating capture by an uncalibrated and unknown camera, in which the HS images are recovered from noisy, JPEG-compressed RGB images. A new, larger-than-ever natural hyperspectral image data set is presented, containing a total of 510 HS images. The Clean and Real World tracks had 103 and 78 registered participants, respectively, with 14 teams competing in the final testing phase. A description of the proposed methods, their challenge scores, and an extensive evaluation of the top-performing methods are also provided; together these gauge the state of the art in spectral reconstruction from an RGB image.
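
    For the "Clean" track, the RGB inputs are computed numerically from the ground-truth HS images and the supplied spectral sensitivity functions. A minimal sketch of that simulation step, with assumed array shapes and illustrative names (not the official NTIRE code), might look as follows:

        # Simulate a "Clean"-track RGB image from a hyperspectral cube.
        import numpy as np

        def hs_to_rgb(hs_cube, ssf):
            """hs_cube: (H, W, B) scene radiance over B spectral bands.
               ssf:     (B, 3) spectral sensitivities of the R, G, B channels.
               Returns an (H, W, 3) linear RGB image, scaled to [0, 1]."""
            rgb = np.tensordot(hs_cube, ssf, axes=([2], [0]))  # integrate over bands
            return rgb / rgb.max()

        # Example with random data: 31 bands, e.g. 400-700 nm in 10 nm steps.
        hs = np.random.rand(64, 64, 31)
        ssf = np.random.rand(31, 3)
        rgb = hs_to_rgb(hs, ssf)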

    Angeborene Gefässanomalien in der Maulhöhle bei zwei Kälbern

    Two calves were presented with a congenital tumour of the rostral mandibular gingiva. In both cases the mass recurred after surgical removal. Histologically, both masses consisted of disorganised vascular spaces embedded in loosely arranged stroma and Alcian blue-PAS-positive ground substance. Radiographically, destruction of the alveolar sockets was seen in both cases, which in case 1 was histologically consistent with bone resorption and remodelling and with extensive ingrowth of connective tissue. A literature review revealed that there are no uniform criteria for the correct classification of such vascular tumours, which has led to comparable lesions being named differently in the past. We therefore propose that such lesions be referred to as congenital vascular anomalies until clear morphological, immunohistochemical, and molecular genetic differentiation criteria become available.

    Skeleton-based gyri sulci separation for improved assessment of cortical thickness

    To improve the classification of neurological diseases involving cortical thinning, this work proposes an approach for separating gyral and sulcal regions of the human cortex. Using data from magnetic resonance imaging, the skeleton of the brain's white matter was reconstructed and a geodesic distance measure was applied to separate gyri and sulci. Cortical thickness per subregion was measured for the entire cortex and for gyri and sulci individually in 21 patients with Alzheimer's disease, 10 patients with frontotemporal lobar degeneration (comprising two subgroups), and 13 control subjects. For discrimination using logistic regression, assessed with leave-one-out cross-validation, improved results were obtained in five out of six group comparisons when cortical thickness measurements were constrained to gyral or sulcal regions.
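
    A minimal sketch of the evaluation scheme described above, i.e. logistic regression on regional cortical thickness assessed with leave-one-out cross-validation, using synthetic placeholder features rather than the study's data:

        # Leave-one-out cross-validation of a logistic regression on
        # (synthetic) regional cortical thickness features.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)
        # e.g. mean gyral (or sulcal) thickness in 10 cortical subregions
        X = rng.normal(2.5, 0.3, size=(34, 10))   # 34 subjects, 10 features
        y = np.r_[np.ones(21), np.zeros(13)]      # 21 patients vs. 13 controls

        acc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                              cv=LeaveOneOut()).mean()
        print(f"leave-one-out accuracy: {acc:.2f}")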

    Evaluation of a human neurite growth assay as specific screen for developmental neurotoxicants

    Organ-specific in vitro toxicity assays are often highly sensitive, but they lack specificity. Here, we evaluated examples of assay features that can affect test specificity, and we suggest some general procedures for defining positive hits in complex biological assays. Differentiating human LUHMES cells were used as a potential model for developmental neurotoxicity testing. Forty candidate toxicants were screened, and several hits were obtained and confirmed. Although the cells had a definitive neuronal phenotype, the use of a general cell death endpoint in these cultures did not allow specific identification of neurotoxicants. As an alternative approach, neurite growth was measured as an organ-specific functional endpoint. We found that neurite extension of developing LUHMES cells was specifically inhibited by diverse compounds such as colchicine, vincristine, narciclasine, rotenone, cycloheximide, or diquat. These compounds reduced neurite growth at concentrations that did not compromise cell viability, and neurite growth was affected more potently than the integrity of developed neurites of mature neurons. A ratio of the EC50 values of neurite growth inhibition and cell death of >4 provided a robust classifier for compounds associated with a developmental neurotoxic hazard. Screening of unspecific toxicants in the test system always yielded ratios <4. The assay also identified compounds that accelerated neurite growth, such as the rho kinase pathway modifiers blebbistatin and thiazovivin. The negative effects of colchicine or rotenone were completely prevented by a rho kinase inhibitor. In summary, we suggest that assays using functional endpoints (neurite growth) can specifically identify and characterize (developmental) neurotoxicants.
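
    The EC50-ratio classifier can be written in a few lines; the threshold of 4 comes from the text above, while the EC50 values themselves would be fitted from concentration-response curves elsewhere (the example values are hypothetical):

        def is_specific_neurite_inhibitor(ec50_neurite_growth, ec50_viability,
                                          threshold=4.0):
            """Flag a compound as a potential developmental neurotoxicant when
            cell death requires a more than `threshold`-fold higher
            concentration than neurite growth inhibition."""
            return (ec50_viability / ec50_neurite_growth) > threshold

        # Hypothetical example (arbitrary concentration units):
        print(is_specific_neurite_inhibitor(ec50_neurite_growth=0.1,
                                            ec50_viability=1.2))   # True -> flagged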

    Optimized data preprocessing for multivariate analysis applied to 99mTc-ECD SPECT data sets of Alzheimer's patients and asymptomatic controls

    Multivariate image analysis has shown potential for classification between Alzheimer's disease (AD) patients and healthy controls with high diagnostic performance. As image analysis of positron emission tomography (PET) and single photon emission computed tomography (SPECT) data critically depends on appropriate data preprocessing, the focus of this work is to investigate the impact of data preprocessing on the outcome of the analysis and to identify an optimal data preprocessing method. In this work, technetium-99m ethyl cysteinate dimer (99mTc-ECD) SPECT data sets of 28 AD patients and 28 asymptomatic controls were used for the analysis. For a series of different data preprocessing methods, including methods for spatial normalization, smoothing, and intensity normalization, multivariate image analysis based on principal component analysis (PCA) and Fisher discriminant analysis (FDA) was applied. Bootstrap resampling was used to investigate the robustness of the analysis and the classification accuracy, depending on the data preprocessing method. Depending on the combination of preprocessing methods, significant differences in classification accuracy were observed. For 99mTc-ECD SPECT data, the optimal data preprocessing method in terms of robustness and classification accuracy is based on affine registration, smoothing with a Gaussian kernel of 12 mm full width at half maximum, and intensity normalization based on the 25% brightest voxels within the whole-brain region.
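
    The intensity normalization identified as optimal (scaling by the 25% brightest voxels within the whole-brain region) can be sketched as follows; the array shapes and the brain mask are assumed inputs, and this is not the study's actual pipeline code:

        # Normalise a SPECT volume by the mean of its 25% brightest voxels
        # inside a whole-brain mask.
        import numpy as np

        def normalise_brightest_quartile(volume, brain_mask, fraction=0.25):
            vox = volume[brain_mask > 0]
            cutoff = np.quantile(vox, 1.0 - fraction)
            return volume / vox[vox >= cutoff].mean()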

    Cross-scanner and cross-protocol diffusion MRI data harmonisation: A benchmark database and evaluation of algorithms

    Diffusion MRI is increasingly used in studies of the brain and other parts of the body for its ability to provide quantitative measures that are sensitive to changes in tissue microstructure. However, inter-scanner and inter-protocol differences are known to induce significant measurement variability, which in turn jeopardises the ability to obtain 'truly quantitative measures' and challenges the reliable combination of different datasets. Combining datasets from different scanners and/or acquired at different time points could dramatically increase the statistical power of clinical studies and facilitate multi-centre research. Even though careful harmonisation of acquisition parameters can reduce variability, inter-protocol differences become almost inevitable with improvements in hardware and sequence design over time, even within a site. In this work, we present a benchmark diffusion MRI database of the same subjects acquired on three distinct scanners with different maximum gradient strengths (40, 80, and 300 mT/m), and with 'standard' and 'state-of-the-art' protocols, where the latter have higher spatial and angular resolution. The dataset serves as a useful testbed for method development in cross-scanner/cross-protocol diffusion MRI harmonisation and quality enhancement. Using the database, we compare the performance of five different methods for estimating mappings between the scanners and protocols. The results show that cross-scanner harmonisation of single-shell diffusion data sets can reduce the variability between scanners, and they highlight the promises and shortcomings of today's data harmonisation techniques.
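
    One simple way to quantify the cross-scanner variability that such harmonisation aims to reduce is a per-voxel coefficient of variation of a derived measure (e.g. FA) across scanners, averaged over a brain or white-matter mask; the sketch below is illustrative and not the paper's evaluation code:

        # Mean coefficient of variation of a scalar map (e.g. FA) across scanners.
        import numpy as np

        def mean_cov_across_scanners(maps, mask, eps=1e-12):
            """maps: (n_scanners, X, Y, Z) array of the same measure per scanner.
               mask: (X, Y, Z) boolean mask selecting voxels to evaluate."""
            m = maps[:, mask]                          # (n_scanners, n_voxels)
            cov = m.std(axis=0) / (m.mean(axis=0) + eps)
            return float(cov.mean())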