Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates
The study of cerebral anatomy in developing neonates is of great importance for
the understanding of brain development during the early period of life. This
dissertation therefore focuses on three challenges in the modelling of cerebral
anatomy in neonates during brain development. The methods that have been
developed all use Magnetic Resonance Images (MRI) as source data.
To facilitate study of vascular development in the neonatal period, a set of image
analysis algorithms is developed to automatically extract and model cerebral
vessel trees. The whole process consists of cerebral vessel tracking from
automatically placed seed points, vessel tree generation, and vasculature
registration and matching. These algorithms have been tested on clinical Time-of-
Flight (TOF) MR angiographic datasets.
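The tracking stage can be illustrated, in greatly simplified form, by intensity-based region growing from the seed points. This is only a sketch (the `threshold` parameter and 6-connectivity are assumptions made here); the thesis algorithms perform proper vessel tracking, tree generation, and matching.

```python
from collections import deque

import numpy as np

def track_from_seeds(volume, seeds, threshold):
    """Grow a vessel mask from seed points by 6-connected region growing on
    voxel intensity. A much-simplified stand-in for vessel tracking on TOF
    angiograms; `threshold` is a hypothetical parameter."""
    vol = np.asarray(volume, dtype=float)
    grown = np.zeros(vol.shape, dtype=bool)
    queue = deque()
    for s in seeds:
        s = tuple(s)
        if vol[s] >= threshold:          # accept seed only if bright enough
            grown[s] = True
            queue.append(s)
    # 6-connected neighbourhood offsets
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        p = queue.popleft()
        for off in offsets:
            n = tuple(p[i] + off[i] for i in range(3))
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                    and not grown[n] and vol[n] >= threshold:
                grown[n] = True
                queue.append(n)
    return grown
```

A centreline-based tracker would additionally estimate vessel direction and radius at each step; the region-growing form above only recovers the bright connected component around each seed.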
To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation
and reconstruction pipeline has been developed. Segmentation of the neonatal
cortex is not effectively done by existing algorithms designed for the adult brain
because the contrast between grey and white matter is reversed. This causes pixels
containing tissue mixtures to be incorrectly labelled by conventional methods. The
neonatal cortical segmentation method that has been developed is based on a novel
expectation-maximization (EM) method with explicit correction for mislabelled
partial volume voxels. Based on the resulting cortical segmentation, an implicit
surface evolution technique is adopted for the reconstruction of the cortex in
neonates. The performance of the method is investigated by performing a detailed
landmark study.
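The EM core of such a segmentation can be sketched on 1-D voxel intensities as a two-class Gaussian mixture. Note that this minimal version omits the thesis's key contribution, the explicit correction of mislabelled partial-volume voxels.

```python
import numpy as np

def em_two_class(intensities, n_iter=50):
    """Minimal 1-D EM for a two-class Gaussian mixture (e.g. grey vs white
    matter intensities). A sketch only -- no partial-volume correction."""
    x = np.asarray(intensities, dtype=float)
    # crude initialisation from the data range
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each voxel
        pdf = (pi / (sigma * np.sqrt(2 * np.pi))) * \
              np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means, and standard deviations
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        pi = nk / len(x)
    return mu, sigma, resp.argmax(axis=1)
```

In the neonatal setting, voxels whose intensities fall between the two class means are exactly the partial-volume voxels that a plain mixture model mislabels, which is what the thesis method corrects for explicitly.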
To facilitate study of cortical development, a cortical surface registration algorithm
for aligning the cortical surface is developed. The method first inflates extracted
cortical surfaces and then performs a non-rigid surface registration using free-form
deformations (FFDs) to remove residual misalignment. Validation experiments using
data labelled by an expert observer demonstrate that the method can capture local
changes and follow the growth of specific sulci.
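The FFD model used in the non-rigid registration step can be sketched as cubic B-spline interpolation of control-point displacements. The snippet below is a 2-D illustration with an assumed regular control grid, not the thesis implementation.

```python
import numpy as np

def bspline_basis(u):
    """The four cubic B-spline basis weights for fractional offset u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6,
        u ** 3 / 6,
    ])

def ffd_displacement(point, grid, spacing):
    """Evaluate a 2-D cubic B-spline free-form deformation at `point`.
    `grid` holds control-point displacement vectors, shape (ny, nx, 2);
    `spacing` is the control-point spacing. A sketch of the FFD model,
    shown in 2-D for brevity (the surface registration works in 3-D)."""
    p = np.asarray(point, dtype=float) / spacing
    i = np.floor(p).astype(int)       # cell index
    u = p - i                          # fractional position inside the cell
    bx, by = bspline_basis(u[0]), bspline_basis(u[1])
    disp = np.zeros(2)
    for a in range(4):
        for b in range(4):
            # 4x4 neighbourhood of control points influences each point
            disp += by[a] * bx[b] * grid[i[1] + a - 1, i[0] + b - 1]
    return disp
```

Registration then amounts to optimising the control-point displacements so that corresponding features on the inflated surfaces line up; the B-spline support guarantees a smooth, locally controlled deformation.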
Neuroimaging of structural pathology and connectomics in traumatic brain injury: Toward personalized outcome prediction.
Recent contributions to the body of knowledge on traumatic brain injury (TBI) favor the view that multimodal neuroimaging using structural and functional magnetic resonance imaging (MRI and fMRI, respectively) as well as diffusion tensor imaging (DTI) has excellent potential to identify novel biomarkers and predictors of TBI outcome. This is particularly the case when such methods are appropriately combined with volumetric/morphometric analysis of brain structures and with the exploration of TBI-related changes in brain network properties at the level of the connectome. In this context, our present review summarizes recent developments on the roles of these two techniques in the search for novel structural neuroimaging biomarkers that have TBI outcome prognostication value. The themes being explored cover notable trends in this area of research, including (1) the role of advanced MRI processing methods in the analysis of structural pathology, (2) the use of brain connectomics and network analysis to identify outcome biomarkers, and (3) the application of multivariate statistics to predict outcome using neuroimaging metrics. The goal of the review is to draw the community's attention to these recent advances on TBI outcome prediction methods and to encourage the development of new methodologies whereby structural neuroimaging can be used to identify biomarkers of TBI outcome.
Bayesian multi-modal model comparison: a case study on the generators of the spike and the wave in generalized spike–wave complexes
We present a novel approach to assess the networks involved in the generation of spontaneous pathological brain activity based on multi-modal imaging data. We propose to use probabilistic fMRI-constrained EEG source reconstruction as a complement to EEG-correlated fMRI analysis to disambiguate between networks that co-occur at the fMRI time resolution. The method is based on Bayesian model comparison, where the different models correspond to different combinations of fMRI-activated (or deactivated) cortical clusters. By computing the model evidence (or marginal likelihood) of each and every candidate source space partition, we can infer the most probable set of fMRI regions that has generated a given EEG scalp data window. We illustrate the method using EEG-correlated fMRI data acquired in a patient with ictal generalized spike–wave (GSW) discharges, to examine whether different networks are involved in the generation of the spike and the wave components, respectively. To this effect, we compared a family of 128 EEG source models, based on the combinations of seven regions haemodynamically involved (deactivated) during a prolonged ictal GSW discharge, namely: bilateral precuneus, bilateral medial frontal gyrus, bilateral middle temporal gyrus, and right cuneus. Bayesian model comparison revealed that the most likely model associated with the spike component consists of a prefrontal region and bilateral temporal–parietal regions, and that the most likely model associated with the wave component comprises the same temporal–parietal regions only. The result supports the hypothesis of different neurophysiological mechanisms underlying the generation of the spike versus wave components of GSW discharges.
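The exhaustive model-space search can be illustrated with a toy stand-in in which each model is a subset of candidate region regressors scored by a BIC-style approximation to the log evidence. The study itself uses Bayesian EEG source reconstruction with variational model evidence; all names and data below are illustrative.

```python
import itertools

import numpy as np

def bic_log_evidence(y, X):
    """BIC-style approximation to the log model evidence of a linear model
    y = X b + noise -- an illustrative stand-in for the variational evidence
    computed in the actual source reconstruction."""
    n, k = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ b) ** 2)
    # log-likelihood term minus complexity penalty (k parameters)
    return -0.5 * n * np.log(rss / n) - 0.5 * k * np.log(n)

def best_region_subset(y, regions):
    """Score every non-empty combination of candidate regions (2^m - 1
    models; with 7 regions plus the empty model this is the 128-model
    family) and return the highest-evidence subset."""
    names = list(regions)
    best = None
    for r in range(1, len(names) + 1):
        for combo in itertools.combinations(names, r):
            X = np.column_stack([regions[c] for c in combo])
            score = bic_log_evidence(y, X)
            if best is None or score > best[0]:
                best = (score, combo)
    return best
```

With seven candidate regions the search is cheap (128 evaluations); the evidence balances fit against the number of active regions, which is what lets the comparison prefer the smaller wave-component network over supersets of it.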
The white matter connectome as an individualized biomarker of language impairment in temporal lobe epilepsy.
Objective: The distributed white matter network underlying language leads to difficulties in extracting clinically meaningful summaries of the neural alterations leading to language impairment. Here we determine the predictive ability of the structural connectome (SC), compared with global measures of white matter tract microstructure and clinical data, to discriminate language-impaired patients with temporal lobe epilepsy (TLE) from TLE patients without language impairment.
Methods: T1- and diffusion-MRI, clinical variables (CVs), and neuropsychological measures of naming and verbal fluency were available for 82 TLE patients. Prediction of language impairment was performed using a robust tree-based classifier (XGBoost) for three models: (1) a CV-model, which included demographic and epilepsy-related clinical features; (2) an atlas-based tract-model, including four frontotemporal white matter association tracts implicated in language (i.e., the bilateral arcuate fasciculus, inferior frontal occipital fasciculus, inferior longitudinal fasciculus, and uncinate fasciculus); and (3) an SC-model based on diffusion MRI. For the association tracts, mean fractional anisotropy was calculated as a measure of white matter microstructure for each tract using a diffusion tensor atlas (i.e., AtlasTrack). The SC-model used measurements of cortico-cortical connections arising from a temporal lobe subnetwork derived using probabilistic tractography. Dimensionality reduction of the SC was performed with principal components analysis (PCA). Each model was trained on 49 patients from one epilepsy center and tested on 33 patients from a different center (i.e., an independent dataset). Randomization was performed to test the stability of the results.
Results: The SC-model yielded a greater area under the curve (AUC: .73) and accuracy (79%) compared to both the tract-model (AUC: .54, p < .001; accuracy: 70%, p < .001) and the CV-model (AUC: .59, p < .001; accuracy: 64%, p < .001).
Within the SC-model, lateral temporal connections had the highest importance to model performance, including connections similar to language association tracts, such as links between the superior temporal gyrus and the pars opercularis. However, many additional connections that were widely distributed, bilateral, and interhemispheric in nature were also identified as contributing to SC-model performance.
Conclusion: The SC revealed a white matter network contributing to language impairment that is widely distributed, bilateral, and lateral temporal in nature. This distributed network underlying language may explain why the SC-model has an advantage in identifying the sub-components of complex fiber networks most relevant to aspects of language performance.
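The SC-model pipeline (PCA on connectome edges, then a supervised classifier, trained at one centre and tested at another) can be sketched in plain numpy. Logistic regression stands in here for the XGBoost classifier used in the study, and all data below are synthetic.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project a subjects-by-edges matrix onto its top principal components,
    mirroring the SC-model's dimensionality-reduction step."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    comps = Vt[:n_components]
    return (X - mean) @ comps.T, mean, comps

def fit_logistic(Z, y, lr=0.1, n_iter=2000):
    """Logistic regression by gradient descent -- a simple stand-in for the
    tree-based classifier (XGBoost) used in the study."""
    Zb = np.column_stack([np.ones(len(Z)), Z])
    w = np.zeros(Zb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-np.clip(Zb @ w, -30, 30)))
        w -= lr * Zb.T @ (p - y) / len(y)
    return w

def predict(w, Z):
    return (np.column_stack([np.ones(len(Z)), Z]) @ w > 0).astype(int)

# Synthetic two-centre illustration: fit PCA + classifier on "centre 1",
# evaluate on held-out "centre 2" (random data, not the study's patients).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))
X[:, 0] *= 4.0                       # one high-variance, label-driving edge
y = (X[:, 0] > 0).astype(int)
Z_tr, mean, comps = pca_reduce(X[:80], 10)
w = fit_logistic(Z_tr, y[:80])
Z_te = (X[80:] - mean) @ comps.T     # project test centre with training PCA
acc = (predict(w, Z_te) == y[80:]).mean()
```

Fitting the PCA on the training centre only and projecting the test centre through the same components mirrors the independent-dataset evaluation described above.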
PialNN: A fast deep learning framework for cortical pial surface reconstruction
Traditional cortical surface reconstruction is time-consuming and limited by the resolution of brain Magnetic Resonance Imaging (MRI). In this work, we introduce Pial Neural Network (PialNN), a 3D deep learning framework for pial surface reconstruction. PialNN is trained end-to-end to deform an initial white matter surface to a target pial surface by a sequence of learned deformation blocks. A local convolutional operation is incorporated in each block to capture the multi-scale MRI information of each vertex and its neighborhood. This is fast and memory-efficient, allowing a pial surface mesh with 150k vertices to be reconstructed in one second. The performance is evaluated on the Human Connectome Project (HCP) dataset, including T1-weighted MRI scans of 300 subjects. The experimental results demonstrate that PialNN reduces the geometric error of the predicted pial surface by 30% compared to state-of-the-art deep learning approaches. The code is publicly available at https://github.com/m-qiang/PialNN
Deep learning for medical image processing
Medical image segmentation is a fundamental task in medical image computing. It enables measurements of anatomical structures, such as organ volume and tissue thickness, that are critical inputs to many classification algorithms and can be instrumental for clinical diagnosis. Consequently, enhancing the efficiency and accuracy of segmentation algorithms could lead to considerable improvements in patient care and diagnostic precision.
In recent years, deep learning has become the state-of-the-art approach in various domains of medical image computing, including medical image segmentation.
The key advantages of deep learning methods are their speed and efficiency, which have the potential to transform clinical practice significantly. Traditional algorithms might require hours to perform complex computations, but with deep learning, such computational tasks can be executed much faster, often within seconds.
This thesis focuses on two distinct segmentation strategies: voxel-based and surface-based.
Voxel-based segmentation assigns a class label to each individual voxel of an image. On the other hand, surface-based segmentation techniques involve reconstructing a 3D surface from the input images, then segmenting that surface into different regions.
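At inference time, the voxel-based strategy reduces to a per-voxel argmax over the class probability maps produced by a network; a minimal sketch:

```python
import numpy as np

def voxel_segmentation(prob_maps):
    """Assign each voxel the label of its most probable class.
    `prob_maps` has shape (n_classes, D, H, W), e.g. softmax outputs
    of a segmentation network."""
    return np.argmax(np.asarray(prob_maps), axis=0)
```

The surface-based strategy, by contrast, produces a mesh first and assigns labels per vertex, which is why its challenges (irregular, topologically complex structures) are different from the voxel-wise case.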
This thesis presents multiple methods for voxel-based image segmentation, focusing on brain structures, white matter hyperintensities, and abdominal organs. Our approaches confront challenges such as domain adaptation, learning with limited data, and optimizing network architectures to handle 3D images. Additionally, the thesis discusses ways to handle failure cases of standard deep learning approaches, such as rare cases in which patients have undergone organ resection surgery.
Finally, the thesis turns its attention to cortical surface reconstruction and parcellation. Here, deep learning is used to extract cortical surfaces from MRI scans as triangular meshes and parcellate these surfaces on a vertex level. The challenges posed by this approach include handling irregular and topologically complex structures.
This thesis presents novel deep learning strategies for voxel-based and surface-based medical image segmentation. By addressing specific challenges in each approach, it aims to contribute to the ongoing advancement of medical image computing.
Neural deformation fields for template-based reconstruction of cortical surfaces from MRI
The reconstruction of cortical surfaces is a prerequisite for quantitative
analyses of the cerebral cortex in magnetic resonance imaging (MRI). Existing
segmentation-based methods separate the surface registration from the surface
extraction, which is computationally inefficient and prone to distortions. We
introduce Vox2Cortex-Flow (V2C-Flow), a deep mesh-deformation technique that
learns a deformation field from a brain template to the cortical surfaces of an
MRI scan. To this end, we present a geometric neural network that models the
deformation-describing ordinary differential equation in a continuous manner.
The network architecture comprises convolutional and graph-convolutional
layers, which allows it to work with images and meshes at the same time.
V2C-Flow is not only very fast, requiring less than two seconds to infer all
four cortical surfaces, but also establishes vertex-wise correspondences to the
template during reconstruction. In addition, V2C-Flow is the first approach for
cortex reconstruction that models white matter and pial surfaces jointly,
therefore avoiding intersections between them. Our comprehensive experiments on
internal and external test data demonstrate that V2C-Flow results in cortical
surfaces that are state-of-the-art in terms of accuracy. Moreover, we show that
the established correspondences are more consistent than in FreeSurfer and that
they can directly be utilized for cortex parcellation and group analyses of
cortical thickness.
Comment: To appear in Medical Image Analysis.
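The template-to-surface deformation ODE dx/dt = f(x) can be sketched with fixed-step forward-Euler integration. The `field` callable below stands in for the learned convolutional/graph network, and vertex indices are preserved by construction, which is what yields vertex-wise template correspondences.

```python
import numpy as np

def integrate_deformation(vertices, field, n_steps=20, h=0.05):
    """Move template vertices along the stationary deformation ODE
    dx/dt = field(x) with forward-Euler steps. In V2C-Flow the field is a
    learned network and the numerical scheme differs; this is a sketch."""
    x = np.asarray(vertices, dtype=float).copy()
    for _ in range(n_steps):
        x = x + h * field(x)   # one step per "deformation block"
    return x
```

Because every template vertex is advected by the same smooth field and keeps its index throughout, correspondence to the template survives the deformation, so parcellation labels defined on the template transfer directly to the reconstructed surface.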
In vivo morphometric and mechanical characterization of trabecular bone from high resolution magnetic resonance imaging
Osteoporosis is a bone disease characterized by reduced bone density and deterioration of the architecture of trabecular (spongy) bone. Both factors increase bone fragility and the risk of fracture, especially in women, among whom prevalence is high. Current diagnosis of osteoporosis is based on quantifying bone mineral density (BMD) by dual-energy X-ray absorptiometry (DXA). However, BMD alone cannot be used to assess fracture risk or therapeutic effects. Other factors, such as the microstructural arrangement of the trabeculae and their characteristics, must be taken into account to determine bone quality and to assess fracture risk more directly.
Technical advances in medical imaging modalities such as multidetector computed tomography (MDCT), high-resolution peripheral quantitative computed tomography (HR-pQCT), and magnetic resonance (MR) imaging have enabled in vivo acquisition at high spatial resolutions, allowing the trabecular bone structure to be observed in good detail. In particular, 3 Tesla (T) MR scanners permit acquisitions at very high spatial resolution. Moreover, the good bone-marrow contrast of MR images, together with the absence of ionizing radiation, makes MR a very suitable technique for the in vivo characterization of trabecular bone in osteoporosis.
This thesis proposes new methodological developments for the three-dimensional (3D) morphometric and mechanical characterization of trabecular bone and applies them to high-resolution 3T MR acquisitions. The morphometric analysis comprises algorithms designed to quantify the morphology, complexity, topology, and anisotropy of the trabecular tissue. For the mechanical characterization, new methods were developed for the automated simulation of the trabecular bone structure under compression and the calculation of the elastic modulus.
The developed methodology has been applied to a population of healthy subjects in order to obtain normative values for trabecular bone. The algorithms have also been applied to a population of patients with osteoporosis in order to quantify parameter variations in the disease and to evaluate the differences from the results obtained in an age-matched group of healthy subjects.
The proposed methodological developments and clinical applications yield satisfactory results: the parameters show high sensitivity to variations in the trabecular structure, influenced mainly by sex and disease status. Furthermore, the methods show high reproducibility and precision in the quantification of the morphometric and mechanical values. These results support the use of the presented parameters as candidate imaging biomarkers in osteoporosis.
Alberich Bayarri, Á. (2010). In vivo morphometric and mechanical characterization of trabecular bone from high resolution magnetic resonance imaging [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8981
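Two of the most basic 3-D morphometric quantities can be computed directly on a binary (bone vs. marrow) volume, as sketched below. The thesis pipeline goes much further (topology, complexity, anisotropy, and mechanical simulation), none of which is reproduced here.

```python
import numpy as np

def bone_volume_fraction(binary_volume):
    """BV/TV: fraction of voxels in the analysed region classified as bone."""
    v = np.asarray(binary_volume, dtype=bool)
    return v.sum() / v.size

def bone_surface_faces(binary_volume):
    """Count exposed voxel faces between bone and marrow (plus bone faces on
    the volume boundary) -- a crude voxel-based proxy for bone surface area."""
    v = np.asarray(binary_volume, dtype=bool)
    faces = 0
    for axis in range(v.ndim):
        vs = np.swapaxes(v, 0, axis)
        # interior faces where a bone voxel meets a marrow voxel
        faces += np.sum(vs[:-1] & ~vs[1:]) + np.sum(~vs[:-1] & vs[1:])
        # bone faces lying on the boundary of the analysed region
        faces += np.sum(vs[0]) + np.sum(vs[-1])
    return int(faces)
```

Scaling the face count by the voxel face area (from the acquisition's voxel size) gives a surface estimate in physical units; smoother surface estimators (e.g. marching cubes) reduce the voxelization bias of this simple count.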
Enhanced cortical thickness measurements for rodent brains via Lagrangian-based RK4 streamline computation
The cortical thickness of the mammalian brain is an important morphological characteristic that can be used to investigate and observe developmental changes in the brain that might be caused by biologically toxic substances such as ethanol or cocaine. Although various cortical thickness analysis methods applicable to the human brain have been proposed and have developed into well-validated open-source software packages, cortical thickness analysis methods for rodent brains have not yet become as robust and accurate as those designed for human brains. Based on a previously proposed cortical thickness measurement pipeline for rodent brain analysis [1], we present an enhanced cortical thickness pipeline in terms of accuracy and anatomical consistency. First, we propose a Lagrangian-based computational approach in the thickness measurement step in order to minimize local truncation error using the fourth-order Runge-Kutta method. Second, by constructing a line object for each streamline of the thickness measurement, we can visualize the way the thickness is measured and achieve sub-voxel accuracy by performing geometric post-processing. Last, with emphasis on the importance of an anatomically consistent partial differential equation (PDE) boundary map, we propose an automatic PDE boundary map generation algorithm that is specific to rodent brain anatomy and does not require manual labeling. The results show that the proposed cortical thickness pipeline can produce statistically significant regions that are not observed with the previous cortical thickness analysis pipeline.
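The Lagrangian thickness measurement amounts to tracing streamlines with classic fourth-order Runge-Kutta steps through a vector field (in such pipelines, typically the normalised gradient of a Laplace solution between the cortical boundaries) and accumulating arc length. A sketch over an arbitrary analytic field:

```python
import numpy as np

def rk4_streamline(x0, field, h=0.01, n_steps=200):
    """Trace a streamline through a vector field with classic RK4 steps and
    accumulate its arc length. A sketch of the Lagrangian thickness measure;
    the actual pipeline adds boundary stopping criteria and sub-voxel
    geometric post-processing of the resulting line objects."""
    x = np.array(x0, dtype=float)
    length = 0.0
    for _ in range(n_steps):
        k1 = field(x)
        k2 = field(x + 0.5 * h * k1)
        k3 = field(x + 0.5 * h * k2)
        k4 = field(x + h * k3)
        step = (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        length += np.linalg.norm(step)
        x = x + step
    return x, length
```

Compared with forward-Euler tracing, the RK4 step keeps the local truncation error at fourth order in the step size, which is the accuracy improvement the pipeline targets.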