
    Partial epilepsy: A pictorial review of 3 TESLA magnetic resonance imaging features

    Epilepsy is a disease with serious consequences for patients and society. In many cases, seizures are sufficiently disabling to justify surgical evaluation. In this context, magnetic resonance imaging (MRI) is one of the most valuable tools for the preoperative localization of epileptogenic foci. Because these lesions show a large variety of presentations (including subtle imaging characteristics), their analysis requires careful and systematic interpretation of MRI data. Several studies have shown that 3 Tesla (T) MRI provides better image quality than 1.5 T MRI for the detection and characterization of structural lesions, indicating that high-field-strength imaging should be considered for patients with intractable epilepsy who might benefit from surgery. Likewise, advanced MRI post-processing and quantitative analysis techniques, such as thickness and volume measurements of cortical gray matter, have emerged, and in the near future these techniques will routinely enable more precise evaluation of such patients. Finally, familiarity with the radiologic findings of potential epileptogenic substrates, combined with higher field strengths (3 T, 7 T, and greater) and new quantitative post-processing techniques, will improve the clinical imaging of these patients. We present a pictorial review of the major pathologies related to partial epilepsy, highlighting the key findings of 3 T MRI.

    Multi-template approaches for segmenting the hippocampus: the case of the SACHA software

    The hippocampus has been shown to play a crucial role in memory and learning. Its volumetry is a well-established biomarker of Alzheimer's disease (AD) and of hippocampal sclerosis in temporal lobe epilepsy (TLE). Because manual segmentation is time consuming and suffers from low reproducibility, robust automatic segmentation from routine T1 images is of high interest for studying large datasets. We previously proposed such an approach (SACHA; Chupin et al., 2007, 2009), based on competitive region deformation constrained by both anatomical landmarks and a single probabilistic template built from 16 young healthy subjects registered using SPM5. Because the atlas is introduced as a soft constraint, robust results have been obtained in large series of patients with various pathologies. In recent years, multi-template approaches have proven to be a powerful means of increasing segmentation robustness (Barnes et al., 2008; Aljabar et al., 2009; Heckemann et al., 2006), in particular for subjects with very large atrophy or atypical shapes such as malrotations (Bernasconi et al., 2005; Kim et al., 2012). We propose here to evaluate the introduction of multi-template constraints in SACHA.
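
    As an illustration of the multi-template idea, the sketch below shows majority-vote fusion of hippocampus masks warped from several templates into the target subject's space. This is a generic multi-atlas strategy, not SACHA's actual competitive region deformation with soft atlas constraints; the array names and file paths are hypothetical.

```python
import numpy as np

def fuse_labels_majority(template_labels: list[np.ndarray]) -> np.ndarray:
    """Majority-vote fusion of binary hippocampus masks that have already
    been registered (warped) into the target subject's space.

    template_labels : list of 3D binary arrays, one per template.
    Returns a 3D binary mask where at least half of the templates agree.
    """
    stacked = np.stack(template_labels, axis=0)   # (n_templates, X, Y, Z)
    votes = stacked.mean(axis=0)                  # fraction of templates voting "hippocampus"
    return (votes >= 0.5).astype(np.uint8)

# Hypothetical usage: masks warped with any registration tool (e.g. SPM, ANTs).
# masks = [np.load(f"template_{i}_warped_mask.npy") for i in range(16)]
# fused_prior = fuse_labels_majority(masks)  # could then act as a soft spatial constraint
```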

    Introducing Soft Topology Constraints in Deep Learning-based Segmentation using Projected Pooling Loss

    Deep learning methods have achieved impressive results for 3D medical image segmentation. However, when the network is only guided by voxel-level information, it may produce anatomically aberrant segmentations. When performing manual segmentations, experts rely heavily on prior anatomical knowledge. Topology is an important prior because of its stability across patients. Recently, several losses based on persistent homology were proposed to constrain topology. Persistent homology offers a principled way to control topology, but it is computationally expensive and complex to implement, in particular in 3D. In this paper, we propose a novel loss function to introduce topological priors in deep learning-based segmentation that is fast to compute and easy to implement. The loss performs projected pooling in two steps. We first address errors from a global perspective by using 3D MaxPooling to obtain projections of the 3D data onto three planes: axial, coronal and sagittal. Then, 2D MaxPooling layers with different kernel sizes are used to extract topological features from the multi-view projections. These two steps are combined using only MaxPooling, thus ensuring the efficiency of the loss function. Our approach was evaluated on several medical imaging datasets (spleen, heart, hippocampus, red nucleus). It reduced topological errors and, in some cases, improved voxel-level accuracy.
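
    A minimal sketch of the two-step projected pooling described above, assuming probability maps of shape (B, 1, D, H, W). The max-projection onto the three planes stands in for the 3D MaxPooling step; the particular kernel sizes and the MSE comparison between pooled features are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def projected_pooling_loss(pred: torch.Tensor, target: torch.Tensor,
                           kernel_sizes=(2, 4, 8)) -> torch.Tensor:
    """Sketch of a projected-pooling style loss on (B, 1, D, H, W) maps.

    Step 1: max-project the 3D volumes onto the three orthogonal planes
            (equivalent to 3D MaxPooling over one full axis).
    Step 2: apply 2D MaxPooling with several kernel sizes to each projection
            and penalise differences between prediction and target features.
    """
    loss = pred.new_zeros(())
    for axis in (2, 3, 4):                   # project along D, H, W in turn
        p_proj = pred.amax(dim=axis)         # (B, 1, h, w) max-intensity projection
        t_proj = target.amax(dim=axis)
        for k in kernel_sizes:               # multi-scale 2D max pooling
            p_feat = F.max_pool2d(p_proj, kernel_size=k)
            t_feat = F.max_pool2d(t_proj, kernel_size=k)
            loss = loss + F.mse_loss(p_feat, t_feat)
    return loss / (3 * len(kernel_sizes))
```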

    Transfer learning from synthetic to routine clinical data for motion artefact detection in brain T1-weighted MRI

    Clinical data warehouses (CDWs) contain the medical data of millions of patients and represent a great opportunity to develop computational tools. MRIs are particularly sensitive to patient movements during image acquisition, which result in artefacts (blurring, ghosting and ringing) in the reconstructed image. As a result, a significant number of MRIs in CDWs are unusable because they are corrupted by these artefacts. Since manual detection is impossible given the number of scans, a tool that automatically excludes images with motion is necessary to fully exploit CDWs. In this paper, we propose a CNN for the automatic detection of motion in 3D T1-weighted brain MRI. Our transfer learning approach, based on synthetic motion generation, consists of two steps: pre-training on research data using synthetic motion, followed by fine-tuning to generalise the pre-trained model to clinical data, relying on the manual labelling of 5500 images. The objectives were (1) to exclude images with severe motion and (2) to detect mild motion artefacts. Our approach achieved excellent accuracy for the first objective, with a balanced accuracy close to that of the annotators (balanced accuracy > 80%). For the second objective, however, the performance was weaker and substantially lower than that of human raters. Overall, our framework will be useful for exploiting CDWs in medical imaging, and our results highlight the importance of clinically validating models trained on research data.
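
    The fine-tuning step of such a transfer-learning recipe might look like the sketch below. The MotionCNN architecture, checkpoint name and data loader are hypothetical placeholders; only the pre-train-then-fine-tune structure is taken from the description above.

```python
import torch
import torch.nn as nn

# Hypothetical 3D CNN; the paper's exact architecture is not reproduced here.
class MotionCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def finetune(model: MotionCNN, clinical_loader, epochs: int = 5, lr: float = 1e-4):
    """Step 2 of the transfer-learning recipe: adapt a model pre-trained on
    synthetically corrupted research MRIs to manually labelled clinical scans,
    using a small learning rate."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for volumes, labels in clinical_loader:   # (B, 1, D, H, W), (B,)
            opt.zero_grad()
            loss = loss_fn(model(volumes), labels)
            loss.backward()
            opt.step()
    return model

# model = MotionCNN()
# model.load_state_dict(torch.load("pretrained_on_synthetic_motion.pt"))  # step 1 output
# model = finetune(model, clinical_loader)                                # step 2
```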

    Fourier Disentangled Multimodal Prior Knowledge Fusion for Red Nucleus Segmentation in Brain MRI

    Early and accurate diagnosis of parkinsonian syndromes is critical to provide appropriate care to patients and for inclusion in therapeutic trials. The red nucleus is a structure of the midbrain that plays an important role in these disorders. It can be visualized using iron-sensitive magnetic resonance imaging (MRI) sequences, and different iron-sensitive contrasts can be produced with MRI. Combining such multimodal data has the potential to improve segmentation of the red nucleus. Current multimodal segmentation algorithms are computationally expensive, cannot deal with missing modalities and need annotations for all modalities. In this paper, we propose a new model that integrates prior knowledge from different contrasts for red nucleus segmentation. The method consists of three main stages. First, it disentangles the image into high-frequency information representing the brain structure and low-frequency information representing the contrast. The high-frequency information is then fed into a network to learn anatomical features, while the multimodal low-frequency information is processed by another module. Finally, feature fusion is performed to complete the segmentation task. The proposed method was used with several iron-sensitive contrasts (iMag, QSM, R2*, SWI). Experiments demonstrate that our proposed model substantially outperforms a baseline UNet model when the training set size is very small.
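
    A minimal sketch of the frequency disentanglement step, assuming 2D slices and a simple circular low-pass mask in Fourier space. The cut-off radius and the split into a low-frequency "contrast" part and a high-frequency "structure" part are illustrative, not the paper's exact decomposition.

```python
import numpy as np

def fourier_split(image: np.ndarray, radius: int = 16):
    """Split a 2D slice into a low-frequency part (roughly the contrast /
    intensity appearance) and a high-frequency part (roughly the anatomical
    structure). The cut-off `radius` is an arbitrary illustration parameter."""
    spec = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    low_mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(spec * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(spec * ~low_mask)).real
    return low, high

# low_freq, high_freq = fourier_split(slice_2d)
# high_freq would feed the anatomy branch; the per-contrast low_freq
# components would feed the modality-fusion module.
```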

    Accuracy of MRI Classification Algorithms in a Tertiary Memory Center Clinical Routine Cohort

    BACKGROUND: Automated volumetry software (AVS) has recently become widely available to neuroradiologists. MRI volumetry with AVS may support the diagnosis of dementias by identifying regional atrophy. Moreover, automatic classifiers using machine learning techniques have recently emerged as promising approaches to assist diagnosis. However, the performance of both AVS and automatic classifiers has been evaluated mostly in the artificial setting of research datasets. OBJECTIVE: Our aim was to evaluate the performance of two AVS and an automatic classifier in the clinical routine conditions of a memory clinic. METHODS: We studied 239 patients with cognitive disorders from a single memory center cohort. Using clinical routine T1-weighted MRI, we evaluated the classification performance of: 1) univariate volumetry using two AVS (volBrain and Neuroreader™); 2) a Support Vector Machine (SVM) automatic classifier, using either the AVS volumes (SVM-AVS) or whole gray matter (SVM-WGM); and 3) reading by two neuroradiologists. The performance measure was balanced diagnostic accuracy. The reference standard was a consensus diagnosis by three neurologists using clinical, biological (cerebrospinal fluid) and imaging data and following international criteria. RESULTS: Univariate AVS volumetry provided only moderate accuracies (46% to 71% with hippocampal volume). Accuracy improved when using the SVM-AVS classifier (52% to 85%), becoming close to that of SVM-WGM (52% to 90%). Visual classification by neuroradiologists ranged between SVM-AVS and SVM-WGM. CONCLUSION: In the routine practice of a memory clinic, the use of volumetric measures provided by AVS yields only moderate accuracy. Automatic classifiers can improve accuracy and could be a useful tool to assist diagnosis.
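
    A hedged sketch of the SVM-AVS idea using scikit-learn: a linear SVM with class balancing applied to regional volumes, evaluated with cross-validated balanced accuracy. The feature matrix and labels below are random placeholders, not the cohort data, and the pipeline details are assumptions rather than the study's exact setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per patient, columns are regional
# volumes reported by the AVS (e.g. hippocampus, ventricles, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(239, 12))       # placeholder for AVS volumes
y = rng.integers(0, 4, size=239)     # placeholder diagnostic labels

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", class_weight="balanced"))
scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
print(f"balanced accuracy: {scores.mean():.2f}")
```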

    Predicting the Progression of Mild Cognitive Impairment Using Machine Learning: A Systematic and Quantitative Review

    Context. Automatically predicting whether a subject with Mild Cognitive Impairment (MCI) will progress to Alzheimer's disease (AD) dementia in the coming years is a relevant question for clinical practice and trial inclusion alike. A large number of articles have been published, with a wide range of algorithms, input variables, data sets and experimental designs. It is unclear which of these factors determine the prediction and affect the predictive performance that can be expected in clinical practice. We performed a systematic review of studies focusing on the automatic prediction of the progression of MCI to AD dementia, and systematically and statistically studied the influence of different factors on predictive performance. Method. The review included 172 articles, 93 of which were published after 2014. 234 experiments were extracted from these articles. For each of them, we reported the data set used, the feature types (defining 10 categories), the algorithm type (defining 12 categories), performance and potential methodological issues. The impact of the features and algorithm on performance was evaluated using t-tests on the coefficients of mixed effect linear regressions. Results. We found that using cognitive, fluorodeoxyglucose positron emission tomography or potentially electroencephalography and magnetoencephalography variables significantly improves predictive performance compared to not including them (p=0.046, 0.009 and 0.003 respectively), whereas including T1 magnetic resonance imaging, amyloid positron emission tomography or cerebrospinal fluid AD biomarkers does not show a significant effect. On the other hand, the algorithm used in the method does not have a significant impact on performance. We identified several methodological issues. Major issues, found in 23.5% of studies, include the absence of a test set, or its use for feature selection or parameter tuning. Other issues, found in 15.0% of studies, pertain to the usability of the method in clinical practice. We also highlight that short-term predictions are likely no better than predicting that subjects stay stable over time. Finally, we highlight a possible publication bias, as methods with poor performance on large data sets tend not to be published and may be censored as negative results. Conclusion. Using machine learning to predict MCI to AD dementia progression is a promising and dynamic field. Among the most predictive modalities, cognitive scores are the cheapest and least invasive, as compared to imaging. The good performance they offer questions the wide use of imaging for predicting diagnostic evolution and calls for further exploration of fine-grained cognitive assessments. The issues identified in the studies highlight the importance of establishing good practices and guidelines for the use of machine learning as a decision support system in clinical practice.
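
    The statistical analysis described above could be sketched as a mixed-effects linear regression with a random intercept per article and fixed effects for feature categories, as below. The toy data frame is purely illustrative and the variable names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table of extracted experiments: one row per experiment, with
# the reported accuracy, binary indicators for feature categories, and the
# source article used as the random-effect grouping factor.
df = pd.DataFrame({
    "accuracy":  [0.72, 0.78, 0.70, 0.81, 0.69, 0.75, 0.74, 0.80, 0.68, 0.77],
    "cognition": [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
    "fdg_pet":   [0, 0, 1, 1, 0, 0, 1, 1, 0, 0],
    "article":   ["a", "a", "b", "b", "c", "c", "d", "d", "e", "e"],
})

# Random intercept per article; the test statistics on the fixed-effect
# coefficients indicate whether including a feature category is associated
# with higher reported accuracy.
model = smf.mixedlm("accuracy ~ cognition + fdg_pet", df, groups=df["article"])
result = model.fit()
print(result.summary())
```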

    Automated Analysis of Basal Ganglia Intensity Distribution in Multisequence MRI of the Brain - Application to Creutzfeldt-Jakob Disease

    We present a method for the analysis of the basal ganglia (including the thalamus) for accurate detection of human spongiform encephalopathy in multisequence MRI of the brain. One common feature of most forms of prion protein infection is the appearance of hyperintensities in the deep grey matter of the brain in T2-weighted MR images. We employ T1, T2 and FLAIR-T2 MR sequences for the detection of intensity deviations in the internal nuclei. First, the MR data are registered to a probabilistic atlas and normalised in intensity. Then smoothing is applied with edge enhancement. The segmentation of hyperintensities is performed using a model of the human visual system. For more accurate results, a priori anatomical data from a segmented atlas are employed to refine the registration and remove false positives. The results are robust across the patient data and in accordance with the clinical ground truth. Our method further allows the quantification of intensity distributions in the basal ganglia. The caudate nuclei are highlighted as the main areas for the diagnosis of sporadic Creutzfeldt-Jakob Disease (CJD), in agreement with the histological data. The algorithm made it possible to classify the intensities of abnormal signals in FLAIR images of sporadic CJD patients, with a more significant hypersignal in the caudate nuclei (10/10) and putamen (6/10) than in the thalami. Using normalised measures of the intensity relations between the internal grey nuclei of patients, we robustly differentiate between sporadic CJD and new-variant CJD patients, as a first attempt towards an automatic classification tool for human spongiform encephalopathies.
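
    A minimal sketch of one possible normalised intensity measure per nucleus, expressed as a z-score relative to a reference region so that relative hypersignal can be compared across patients. The choice of reference region and the mask names are assumptions, not the paper's exact quantification.

```python
import numpy as np

def roi_intensity_index(image: np.ndarray, roi_mask: np.ndarray,
                        reference_mask: np.ndarray) -> float:
    """Mean FLAIR intensity inside a deep grey nucleus (e.g. caudate,
    putamen) expressed as a z-score with respect to a reference region."""
    ref = image[reference_mask > 0]
    mu, sigma = ref.mean(), ref.std()
    return float((image[roi_mask > 0].mean() - mu) / sigma)

# Hypothetical usage with atlas-derived masks registered to the patient:
# caudate_idx = roi_intensity_index(flair, caudate_mask, thalamus_mask)
# putamen_idx = roi_intensity_index(flair, putamen_mask, thalamus_mask)
# Elevated indices in caudate/putamen relative to thalamus would flag a
# sporadic-CJD-like pattern.
```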