Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates
The study of cerebral anatomy in developing neonates is of great importance for
the understanding of brain development during the early period of life. This
dissertation therefore focuses on three challenges in the modelling of cerebral
anatomy in neonates during brain development. The methods that have been
developed all use Magnetic Resonance Images (MRI) as source data.
To facilitate study of vascular development in the neonatal period, a set of image
analysis algorithms is developed to automatically extract and model cerebral
vessel trees. The whole process consists of cerebral vessel tracking from
automatically placed seed points, vessel tree generation, and vasculature
registration and matching. These algorithms have been tested on clinical Time-of-
Flight (TOF) MR angiographic datasets.
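The seed-based tracking step can be illustrated with a deliberately simplified sketch. The thesis's tracker operates on 3D TOF angiograms with automatic seed placement; the toy version below only shows the core idea of growing a vessel mask outward from a seed point using 2D intensity-threshold region growing (the image, seed, threshold rule, and function name are illustrative assumptions, not the published algorithm):

```python
from collections import deque

import numpy as np

def track_from_seed(img, seed, thresh):
    """Grow a mask from a seed over 4-connected pixels whose
    intensity is at or above a threshold (toy vessel tracking)."""
    mask = np.zeros(img.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if (0 <= y < img.shape[0] and 0 <= x < img.shape[1]
                and not mask[y, x] and img[y, x] >= thresh):
            mask[y, x] = True  # accept pixel and expand to its neighbours
            queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return mask

# A bright cross-shaped "vessel" on a dark background
img = np.zeros((8, 8))
img[3, 1:7] = 1.0   # horizontal branch
img[1:6, 4] = 1.0   # vertical branch
mask = track_from_seed(img, (3, 2), thresh=0.5)
```

Real TOF tracking additionally models vessel direction and radius along centrelines; this sketch only captures the connected-growth idea.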
To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation
and reconstruction pipeline has been developed. Segmentation of the neonatal
cortex is not effectively done by existing algorithms designed for the adult brain
because the contrast between grey and white matter is reversed. This causes pixels
containing tissue mixtures to be incorrectly labelled by conventional methods. The
neonatal cortical segmentation method that has been developed is based on a novel
expectation-maximization (EM) method with explicit correction for mislabelled
partial volume voxels. Based on the resulting cortical segmentation, an implicit
surface evolution technique is adopted for the reconstruction of the cortex in
neonates. The performance of the method is investigated by performing a detailed
landmark study.
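The EM component of such tissue classification can be illustrated with a minimal one-dimensional sketch. The toy version below fits a two-class Gaussian mixture to intensity samples; the thesis's method additionally corrects mislabelled partial volume voxels, which is not reproduced here (the synthetic data, initialisation, and iteration count are illustrative assumptions):

```python
import numpy as np

def em_two_class(x, iters=50):
    """Minimal 1D two-class Gaussian EM: alternate posterior
    responsibilities (E-step) and parameter updates (M-step)."""
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each class for each sample
        lik = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        n = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
        pi = n / n.sum()
    return mu, sigma, resp.argmax(axis=1)

# Synthetic "grey" and "white" intensity samples
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(40, 5, 500), rng.normal(90, 5, 500)])
mu, sigma, labels = em_two_class(x)
```

A voxel whose intensity lies between the two class means is exactly the partial volume case the thesis handles explicitly; plain EM, as here, simply assigns it to the nearer class.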
To facilitate study of cortical development, a registration algorithm for aligning
cortical surfaces is developed. The method first inflates extracted
cortical surfaces and then performs a non-rigid surface registration using free-form
deformations (FFDs) to remove residual misalignment. Validation experiments using
data labelled by an expert observer demonstrate that the method can capture local
changes and follow the growth of specific sulci.
Investigating microstructural variation in the human hippocampus using non-negative matrix factorization
In this work we use non-negative matrix factorization to identify patterns of microstructural variance in the human hippocampus. We utilize high-resolution structural and diffusion magnetic resonance imaging data from the Human Connectome Project to query hippocampus microstructure on a multivariate, voxelwise basis. Application of non-negative matrix factorization identifies spatial components (clusters of voxels sharing similar covariance patterns), as well as subject weightings (individual variance across hippocampus microstructure). By assessing the stability of spatial components as well as the accuracy of factorization, we identified 4 distinct microstructural components. Furthermore, we quantified the benefit of using multiple microstructural metrics by demonstrating that using three microstructural metrics (T1-weighted/T2-weighted signal, mean diffusivity and fractional anisotropy) produced more stable spatial components than when assessing metrics individually. Finally, we related individual subject weightings to demographic and behavioural measures using a partial least squares analysis. Through this approach we identified interpretable relationships between hippocampus microstructure and demographic and behavioural measures. Taken together, our work suggests non-negative matrix factorization as a spatially specific analytical approach for neuroimaging studies and advocates for the use of multiple metrics for data-driven component analyses.
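The factorization at the heart of this approach can be sketched in a few lines. The toy example below decomposes a non-negative subjects × voxels matrix into subject weightings W and spatial components H using the classic multiplicative-update rules (the data, rank, and iteration count are illustrative assumptions; the study's stability analysis and partial least squares step are not reproduced):

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Non-negative matrix factorization via multiplicative updates:
    X (subjects x voxels) ~ W (subject weightings) @ H (spatial components)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)  # update spatial components
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)  # update subject weightings
    return W, H

# Toy data: two non-overlapping "spatial components" mixed across 30 subjects
rng = np.random.default_rng(1)
parts = np.zeros((2, 20))
parts[0, :10] = 1.0
parts[1, 10:] = 1.0
weights = rng.random((30, 2))
X = weights @ parts
W, H = nmf(X, k=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The multiplicative updates keep W and H non-negative by construction, which is what makes the recovered components interpretable as additive spatial patterns.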
Dimensionality reduction and unsupervised learning techniques applied to clinical psychiatric and neuroimaging phenotypes
Unsupervised learning and other multivariate analysis techniques are increasingly recognized in neuropsychiatric research. Here, finite mixture models and random forests were applied to clinical observations of patients with major depression to detect and validate treatment response subgroups. Further, independent component analysis and agglomerative hierarchical clustering were combined to build a brain parcellation solely on structural covariance information of magnetic resonance brain images.
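The covariance-based parcellation idea can be sketched as follows: correlate regional measures across subjects, convert the correlations to distances, and cluster agglomeratively (the toy data and the choice of average linkage are illustrative assumptions; the combination with independent component analysis is omitted):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Toy "regions x subjects" matrix: two blocks of regions whose
# measurements co-vary strongly within each block
rng = np.random.default_rng(3)
base = rng.normal(size=(2, 100))
data = np.vstack([
    base[0] + 0.1 * rng.normal(size=(5, 100)),  # regions 0-4 follow pattern A
    base[1] + 0.1 * rng.normal(size=(5, 100)),  # regions 5-9 follow pattern B
])
corr = np.corrcoef(data)                   # structural covariance (correlation)
dist = squareform(1.0 - corr, checks=False)  # condensed distance matrix
Z = linkage(dist, method="average")          # agglomerative hierarchy
parcels = fcluster(Z, t=2, criterion="maxclust")  # cut into 2 parcels
```

Cutting the dendrogram at a chosen number of clusters yields the parcellation; in practice the cut level would be selected by a stability or reproducibility criterion.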
On brain atlas choice and automatic segmentation methods: a comparison of MAPER & FreeSurfer using three atlas databases.
Several automatic image segmentation methods and a few atlas databases exist for analysing structural T1-weighted magnetic resonance brain images. The impact of choosing a combination has not hitherto been described but may bias comparisons across studies. We evaluated two segmentation methods (MAPER and FreeSurfer), using three publicly available atlas databases (Hammers_mith, Desikan-Killiany-Tourville, and MICCAI 2012 Grand Challenge). For each combination of atlas and method, we conducted a leave-one-out cross-comparison to estimate the segmentation accuracy of FreeSurfer and MAPER. We also used each possible combination to segment two datasets of patients with known structural abnormalities (Alzheimer's disease (AD) and mesial temporal lobe epilepsy with hippocampal sclerosis (HS)) and their matched healthy controls. MAPER was better than FreeSurfer at modelling manual segmentations in the healthy control leave-one-out analyses in two of the three atlas databases, and the Hammers_mith atlas database transferred to new datasets best regardless of segmentation method. Both segmentation methods reliably identified known abnormalities in each patient group. Better separation was seen for FreeSurfer in the AD and left-HS datasets, and for MAPER in the right-HS dataset. We provide detailed quantitative comparisons for multiple anatomical regions, thus enabling researchers to make evidence-based decisions on their choice of atlas and segmentation method.
Adaptive cortical parcellations for source reconstructed EEG/MEG connectomes.
There is growing interest in the rich temporal and spectral properties of the functional connectome of the brain that are provided by Electro- and Magnetoencephalography (EEG/MEG). However, the problem of leakage between brain sources that arises when reconstructing brain activity from EEG/MEG recordings outside the head makes it difficult to distinguish true connections from spurious connections, even when connections are based on measures that ignore zero-lag dependencies. In particular, standard anatomical parcellations for potential cortical sources tend to over- or under-sample the real spatial resolution of EEG/MEG. By using information from cross-talk functions (CTFs) that objectively describe leakage for a given sensor configuration and distributed source reconstruction method, we introduce methods for optimising the number of parcels while simultaneously minimising the leakage between them. More specifically, we compare two image segmentation algorithms: 1) a split-and-merge (SaM) algorithm based on standard anatomical parcellations and 2) a region growing (RG) algorithm based on all the brain vertices with no prior parcellation. Interestingly, when applied to minimum-norm reconstructions for EEG/MEG configurations from real data, both algorithms yielded approximately 70 parcels despite their different starting points, suggesting that this reflects the resolution limit of this particular sensor configuration and reconstruction method. Importantly, when compared against standard anatomical parcellations, resolution matrices of adaptive parcellations showed notably higher sensitivity and distinguishability of parcels. Furthermore, extensive simulations of realistic networks revealed significant improvements in network reconstruction accuracies, particularly in reducing false leakage-induced connections. Adaptive parcellations therefore allow a more accurate reconstruction of functional EEG/MEG connectomes.
Comparing fully automated state-of-the-art cerebellum parcellation from magnetic resonance images
The human cerebellum plays an essential role in motor control, is involved in cognitive function (i.e., attention, working memory, and language), and helps to regulate emotional responses. Quantitative in-vivo assessment of the cerebellum is important in the study of several neurological diseases including cerebellar ataxia, autism, and schizophrenia. Different structural subdivisions of the cerebellum have been shown to correlate with differing pathologies. To further understand these pathologies, it is helpful to automatically parcellate the cerebellum at the highest fidelity possible. In this paper, we coordinated with colleagues around the world to evaluate automated cerebellum parcellation algorithms on two clinical cohorts, showing that the cerebellum can be parcellated to a high accuracy by newer methods. We characterize these various methods at four hierarchical levels: coarse (i.e., whole cerebellum and gross structures), lobe, subdivisions of the vermis, and the lobules. Due to the number of labels, the hierarchy of labels, the number of algorithms, and the two cohorts, we have restricted our analyses to the Dice measure of overlap. Under these conditions, machine learning based methods provide a collection of strategies that are efficient and deliver parcellations of a high standard across both cohorts, surpassing previous work in the area. In conjunction with the rank-sum computation, we identified an overall winning method.
The data collection and labeling of the cerebellum was supported in part by the NIH/NINDS grant R01 NS056307 (PI: J.L. Prince) and NIH/NIMH grants R01 MH078160 & R01 MH085328 (PI: S.H. Mostofsky). PMT is supported in part by the NIH/NIBIB grant U54 EB020403. CERES2 development was supported by grant UPV2016-0099 from the Universitat Politecnica de Valencia (PI: J.V. Manjon); the French National Research Agency through the Investments for the Future Program IdEx Bordeaux (ANR-10-IDEX-03-02, HL-MRI Project; PI: P. Coupe) and the Cluster of Excellence CPU and TRAIL (HR-DTI ANR-10-LABX-57; PI: P. Coupe). Support for the development of LiviaNET was provided by the Natural Sciences and Engineering Research Council of Canada (NSERC), discovery grant program, and by the ETS Research Chair on Artificial Intelligence in Medical Imaging. The authors wish to acknowledge the invaluable contributions offered by Dr. George Fein (Dept. of Medicine and Psychology, University of Hawaii) in preparing this manuscript.
Carass, A.; Cuzzocreo, J.L.; Han, S.; Hernandez-Castillo, C.R.; Rasser, P.E.; Ganz, M.; Beliveau, V.; et al. (2018). Comparing fully automated state-of-the-art cerebellum parcellation from magnetic resonance images. NeuroImage, 183:150-172. https://doi.org/10.1016/j.neuroimage.2018.08.003
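The Dice measure to which the analyses above are restricted is straightforward to compute; a minimal sketch on two hypothetical binary masks (the masks are illustrative, not from the study's cohorts):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two 6x6 square labels, offset by one voxel in each direction
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True      # automated label: 36 voxels
manual = np.zeros((10, 10), dtype=bool)
manual[3:9, 3:9] = True    # manual label: 36 voxels
# intersection is the 5x5 block rows 3-7, cols 3-7 -> 25 voxels
print(round(dice(auto, manual), 3))  # -> 0.694 (i.e. 2*25/72)
```

Dice ranges from 0 (no overlap) to 1 (identical masks), which makes it comparable across structures of very different sizes, one reason it is the default overlap score in segmentation challenges.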
Deep learning for medical image processing
Medical image segmentation represents a fundamental aspect of medical image computing. It enables measurements of anatomical structures, such as organ volume and tissue thickness, that serve as critical inputs to many classification algorithms and can be instrumental for clinical diagnosis. Consequently, enhancing the efficiency and accuracy of segmentation algorithms could lead to considerable improvements in patient care and diagnostic precision.
In recent years, deep learning has become the state-of-the-art approach in various domains of medical image computing, including medical image segmentation.
The key advantages of deep learning methods are their speed and efficiency, which have the potential to transform clinical practice significantly. Traditional algorithms might require hours to perform complex computations, but with deep learning, such computational tasks can be executed much faster, often within seconds.
This thesis focuses on two distinct segmentation strategies: voxel-based and surface-based.
Voxel-based segmentation assigns a class label to each individual voxel of an image. On the other hand, surface-based segmentation techniques involve reconstructing a 3D surface from the input images, then segmenting that surface into different regions.
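The voxel-based strategy can be made concrete with a tiny sketch: given per-class probability maps (e.g. the softmax output of a hypothetical segmentation network), each voxel receives the label of its most probable class (the shapes and random data here are illustrative assumptions):

```python
import numpy as np

# Hypothetical network outputs: raw scores for 3 classes over a 4x4x4 volume
rng = np.random.default_rng(2)
logits = rng.normal(size=(3, 4, 4, 4))

# Softmax across the class axis turns scores into per-voxel probabilities
probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

# Voxel-based segmentation: one class label per voxel via argmax
labels = probs.argmax(axis=0)  # shape (4, 4, 4), values in {0, 1, 2}
```

Surface-based methods, by contrast, operate on a mesh extracted from the volume and assign labels to mesh vertices rather than to voxels.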
This thesis presents multiple methods for voxel-based image segmentation. Here, the focus is on segmenting brain structures, white matter hyperintensities, and abdominal organs. Our approaches confront challenges such as domain adaptation, learning with limited data, and optimizing network architectures to handle 3D images. Additionally, the thesis discusses ways to handle the failure cases of standard deep learning approaches, such as rare cases in which patients have undergone organ resection surgery.
Finally, the thesis turns its attention to cortical surface reconstruction and parcellation. Here, deep learning is used to extract cortical surfaces from MRI scans as triangular meshes and parcellate these surfaces on a vertex level. The challenges posed by this approach include handling irregular and topologically complex structures.
This thesis presents novel deep learning strategies for voxel-based and surface-based medical image segmentation. By addressing specific challenges in each approach, it aims to contribute to the ongoing advancement of medical image computing.