Volumetric relief map for intracranial cerebrospinal fluid distribution analysis
Cerebrospinal fluid imaging plays a significant role in the clinical diagnosis of brain disorders, such as hydrocephalus and Alzheimer's disease. While three-dimensional images of cerebrospinal fluid are very detailed, the complex structures they contain can be time-consuming and laborious to interpret. This paper presents a simple technique that represents the intracranial cerebrospinal fluid distribution as a two-dimensional image in such a way that the total fluid volume is preserved. We call this a volumetric relief map, and show its effectiveness in the characterization and analysis of fluid distributions and networks in hydrocephalus patients and healthy adults.
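The paper's projection is not spelled out in this abstract, but the volume-preserving idea can be illustrated with a minimal sketch: each fluid voxel is binned by its spherical angles around the intracranial centroid and the voxel volumes are accumulated, so the 2D map sums exactly to the total fluid volume. The angular binning scheme and function names below are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of a volume-preserving 2D projection of a 3D fluid mask.
# Hypothetical simplification: each CSF voxel is binned by its spherical
# angles (theta, phi) around the intracranial centroid, and voxel volumes
# are accumulated, so the map sums to the total fluid volume.
import numpy as np

def relief_map(mask, voxel_volume_mm3, bins=(180, 360)):
    """mask: 3D boolean array marking CSF voxels."""
    z, y, x = np.nonzero(mask)
    coords = np.stack([x, y, z], axis=1).astype(float)
    centered = coords - coords.mean(axis=0)                # centre on centroid
    r = np.linalg.norm(centered, axis=1) + 1e-9
    theta = np.arccos(np.clip(centered[:, 2] / r, -1, 1))  # polar angle
    phi = np.arctan2(centered[:, 1], centered[:, 0])       # azimuth
    hist, _, _ = np.histogram2d(theta, phi, bins=bins,
                                range=[[0, np.pi], [-np.pi, np.pi]])
    return hist * voxel_volume_mm3

# Volume preservation: relief_map(...).sum() == mask.sum() * voxel_volume_mm3.
```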
Surface-Based Tools for Characterizing the Human Brain Cortical Morphology
Thesis by compendium of publications. The cortex of the human brain is highly convoluted. These characteristic convolutions
present advantages over lissencephalic brains. For instance, gyrification allows an expansion
of cortical surface area without significantly increasing the cranial volume, thus
facilitating the passage of the head through the birth canal. Studying the human brain’s
cortical morphology and the processes leading to the cortical folds has been critical for an
increased understanding of the pathological processes driving psychiatric disorders such
as schizophrenia, bipolar disorders, autism, or major depression. Furthermore, charting
the normal developmental changes in cortical morphology during adolescence or aging
can be of great importance for detecting deviances that may be precursors for pathology.
However, the exact mechanisms that drive cortical folding remain largely unknown.
The accurate characterization of neurodevelopmental processes is challenging. Multiple
mechanisms co-occur at the molecular or cellular level and can only be studied through
the analysis of ex-vivo samples, usually from animal models. Magnetic Resonance Imaging
can partially fill this gap, allowing the portrayal of the macroscopic manifestations of
these processes in-vivo.
Different metrics have been defined to measure cortical structure to describe the brain’s
morphological changes and infer the associated microstructural events. Metrics such as
cortical thickness, surface area, or cortical volume help establish a relation between the
measured voxels on a magnetic resonance image and the underlying biological processes.
However, the existing methods present limitations or room for improvement.
Methods extracting the lines representing the gyral and sulcal morphology tend to
over- or underestimate the total length. These lines can provide important information
about how sulcal and gyral regions function differently due to their distinctive ontogenesis.
Nevertheless, some methods label every small fold on the cortical surface as a sulcal
fundus, thus losing sight of the lines that travel through the deeper zones of a sulcal
basin. On the other hand, some methods are too restrictive, labeling sulcal fundi only for
a handful of primary folds.
To overcome this issue, we have proposed a Laplacian-collapse-based algorithm that
can delineate the lines traversing the top regions of the gyri and the fundi of the sulci
while avoiding anastomotic sulci. For this, the cortex, represented as a 3D surface, is segmented
into gyral and sulcal surfaces according to the curvature and depth at every point
of the mesh. Each resulting surface is spatially filtered, smoothing the boundaries. Then,
a Laplacian-collapse-based algorithm is applied to obtain a thinned representation of the
morphology of each structure. These thin curves are processed to detect where the extremities
or endpoints lie. Finally, sulcal fundi and gyral crown lines are obtained by
eroding the surfaces while preserving the structure topology and connectivity between
the endpoints. The assessment of the presented algorithm showed that the labeled sulcal lines were close to the proposed ground truth length values while crossing through the
deeper (and more curved) regions. The tool also obtained reproducibility scores better than or
similar to those of previous algorithms.
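As a rough illustration of the first two stages described above (curvature- and depth-based labeling, followed by a Laplacian contraction that shrinks each patch toward a thin curve), here is a sketch assuming per-vertex curvature and depth arrays and a vertex adjacency list are already available, e.g. from a FreeSurfer surface. The endpoint detection and topology-preserving erosion steps are omitted, and the function names and the curvature sign convention are hypothetical, not the thesis implementation.

```python
# Sketch of gyral/sulcal labeling plus a Laplacian contraction step.
import numpy as np

def label_sulcal(curvature, depth, depth_thresh):
    # Sulcal vertices: concave (positive curvature in this convention)
    # and deep; the remaining vertices form the gyral surface.
    return (curvature > 0) & (depth > depth_thresh)

def laplacian_contract(points, neighbors, labels, iters=50, step=0.5):
    """One ingredient of a Laplacian collapse: repeatedly move each
    labeled vertex toward the mean of its labeled neighbours, shrinking
    the patch toward a thin, curve-like skeleton.
    points: (n, 3) vertex coordinates; neighbors: list of index lists."""
    pts = points.copy()
    for _ in range(iters):
        new = pts.copy()
        for v in np.nonzero(labels)[0]:
            nbrs = [n for n in neighbors[v] if labels[n]]
            if nbrs:
                new[v] = (1 - step) * pts[v] + step * pts[nbrs].mean(axis=0)
        pts = new
    return pts
```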
A second limitation of the existing metrics concerns the measurement of sulcal width.
This metric, understood as the physical distance between the points on opposite sulcal
banks, is useful for detecting cortical flattening and for complementing the information
provided by cortical thickness, the gyrification index, and similar features. Nevertheless,
existing methods only provide averaged measurements over predefined sulcal
regions, greatly restricting the potential of sulcal width and ignoring the intra-region
variability.
Regarding this, we developed a method that estimates the distance from each sulcal
point in the cortex to its corresponding opposite, thus providing a per-vertex map of the
physical sulcal distances. For this, the cortical surface is sampled at different depth levels,
detecting the points where the sulcal banks change. The points corresponding to each sulcal
wall are matched with the closest point on the opposite wall. The distance between those
points is the sulcal width. The algorithm was validated against a simulated sulcus that
resembles a simple fold. Then the tool was used on a real dataset and compared against
two widely-used sulcal width estimation methods, averaging the proposed algorithm’s
values over the same region definitions used by those reference tools. The resulting values were
similar for the proposed and the reference methods, thus demonstrating the algorithm’s
accuracy.
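The matching step lends itself to a short sketch. Assuming the vertices on two opposite sulcal walls have already been identified at a given depth level, each point is paired with the nearest point on the other wall and the pair distance is taken as the local sulcal width; this is a simplification of the method described above, not its exact implementation.

```python
# Nearest-neighbour matching between two sulcal walls.
import numpy as np
from scipy.spatial import cKDTree

def sulcal_width(wall_a, wall_b):
    """wall_a: (n, 3) and wall_b: (m, 3) vertex coordinates of the
    two opposite walls; returns per-vertex widths for each wall."""
    dist_ab, _ = cKDTree(wall_b).query(wall_a)  # width at each wall_a vertex
    dist_ba, _ = cKDTree(wall_a).query(wall_b)  # width at each wall_b vertex
    return dist_ab, dist_ba
```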
Finally, both algorithms were tested on a real aging population dataset to prove the
methods’ potential in a use-case scenario. The main idea was to elucidate fine-grained
morphological changes in the human cortex with aging by conducting three analyses: a
comparison of the age-dependencies of cortical thickness in gyral and sulcal lines, an
analysis of how the sulcal and gyral length changes with age, and a vertex-wise study of
sulcal width and cortical thickness.
These analyses showed a general flattening of the cortex with aging, with interesting
findings such as a differential age-dependency of cortical thinning in the sulcal and
gyral regions. By demonstrating that our method can detect this difference, our results
can pave the way for future in vivo studies focusing on macro- and microscopic changes
specific to gyri or sulci. Our method can generate new brain-based biomarkers specific
to sulci and gyri, and these can be used on large samples to establish normative models
to which patients can be compared. In parallel, the vertex-wise analyses show that sulcal
width is very sensitive to changes during aging, independent of cortical thickness. This
corroborates the concept of sulcal width as a metric that explains, at the very least, unique
variance of morphology not fully captured by existing metrics. Our method allows for
sulcal width vertex-wise analyses that were not possible previously, potentially changing
our understanding of how changes in sulcal width shape cortical morphology.
In conclusion, this thesis presents two new tools, open source and publicly available, for estimating cortical surface-based morphometrics. The methods have been validated
and assessed against existing algorithms. They have also been tested on a real dataset,
providing new, exciting insights into cortical morphology and showing their potential for
defining innovative biomarkers. Doctoral Program in Biomedical Science and Technology, Universidad Carlos III de Madrid. President: Juan Domingo Gispert López. Secretary: Norberto Malpica González de Vega. Committee member: Gemma Cristina Monté Rubí.
Learning image features with convolutional networks under a supervised-data constraint
Advisor: Alexandre Xavier Falcão. Master's thesis, Universidade Estadual de Campinas, Instituto de Computação. Abstract: Image analysis has been widely employed in many areas of the Sciences and Engineering to extract and interpret high-level information from images, with applications ranging from simple bar code analysis to the diagnosis of diseases.
However, state-of-the-art solutions based on deep learning often require a training set with a high number of annotated (labeled) examples. This may imply significant human effort in sample identification, isolation, and labeling from large image databases; when image annotation requires specialists in the application domain, as in Medicine and Agriculture, this requirement becomes a crucial drawback. In this context, Convolution Networks (ConvNets) are among the most successful approaches for image feature extraction, such that their combination with a Multi-Layer Perceptron (MLP) network or a Support Vector Machine (SVM) enables effective sample classification. Another problem with these techniques is the resulting high-dimensional feature space, which hampers the analysis of the sample distribution by the commonly used distance-based data clustering and visualization methods. In this work, we analyze both problems by assessing the main strategies for ConvNet design, namely Architecture Learning (AL), Filter Learning (FL), and Transfer Learning (TL), according to their capability of learning from a limited number of labeled examples, and by evaluating the impact of feature-space reduction techniques on distance-based data classification and visualization. To confirm the effectiveness of feature learning, we analyze the progress of the classifier as the number of supervised samples increases during active learning. Data augmentation has also been evaluated as a potential strategy to cope with the absence of labeled examples. Finally, we demonstrate the main results of the work on a real application, the diagnosis of intestinal parasites, in comparison with state-of-the-art image descriptors. In conclusion, TL has shown to be the best strategy under supervised data constraints whenever a previously learned network that suits the problem is available. When this is not the case, AL comes as the second-best alternative. We have also observed the effectiveness of Linear Discriminant Analysis (LDA) in considerably reducing the feature space created by ConvNets, allowing a better understanding of the feature learning and active learning processes by the expert through multidimensional data visualization. This important result suggests, as future work, an interplay between feature learning and active learning with the intervention of experts to improve both processes. Master's degree in Computer Science (Mestre em Ciência da Computação). Funding: CNPq, CAPES.
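The LDA reduction mentioned in the conclusion is straightforward to sketch. Assuming ConvNet features have already been extracted into a matrix X with integer class labels y (the random data below is only a stand-in), LDA projects onto at most n_classes - 1 axes, which makes the sample distribution inspectable in 2D:

```python
# LDA projection of high-dimensional ConvNet features for visualization.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4096))      # stand-in for extracted deep features
y = rng.integers(0, 3, size=300)      # three hypothetical classes

# With 3 classes, LDA yields at most 2 discriminant axes.
lda = LinearDiscriminantAnalysis(n_components=2)
X2d = lda.fit_transform(X, y)         # (300, 2) embedding, ready to plot
```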
Supervised pattern classification using optimum path forest
Advisor: Alexandre Xavier Falcão. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. Abstract: Patterns are usually represented by feature vectors obtained from samples of a dataset, which can be fully, partially, or not labeled. Depending on the amount of information available in these datasets, three kinds of pattern identification techniques can be applied: supervised, semi-supervised, or unsupervised. In this work, we addressed the supervised ones, which are characterized by full knowledge of the labels of the dataset samples, and we also proposed a novel method for supervised pattern recognition based on the Optimum-Path Forest (OPF), which models the pattern recognition problem as a graph, where the nodes are the samples and the arcs are defined by some adjacency relation. The most relevant samples (prototypes) are identified and a competition process between them is started, in which they try to offer optimum-path costs to the remaining dataset samples.
We presented two approaches, which differ from each other in the adjacency relation, the path-cost function, and the prototype identification procedure. The first uses the complete graph as the adjacency relation and identifies the prototypes at the boundaries between classes; these prototypes offer optimum-path costs computed as the maximum arc weight along the path between a prototype and each remaining dataset sample, where the arc weight between two samples is given by the distance between their feature vectors. In this case, the OPF algorithm tries to minimize these costs for every sample of the dataset. The other approach uses a k-nn graph as the adjacency relation and identifies the prototypes as the maxima of a probability density function computed from the arc weights. The path-cost value is given by the lowest density value along the path, and the OPF algorithm now tries to maximize these costs. We also presented a generic learning algorithm, which teaches a classifier through its errors on a validation set, replacing misclassified samples with others selected under certain constraints; this process is repeated until an error criterion is satisfied. Comparisons with SVM, ANN-MLP, k-NN, and BC classifiers were performed, with OPF proving similar to SVM in accuracy, but much faster, and superior to the remaining classifiers. Doctorate in Computing Methodology and Techniques (Doutor em Ciência da Computação).
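The complete-graph variant described above is compact enough to sketch end to end: prototypes are taken from minimum-spanning-tree edges that connect different classes, training samples then compete by offering paths whose cost is the maximum arc weight (here, Euclidean distance) along the path, and a test sample receives the label of the training sample offering it the cheapest path. This is a didactic reimplementation from the description, not the authors' code.

```python
# Didactic sketch of supervised OPF with the complete graph and f_max cost.
import heapq
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def opf_fit(X, y):
    """X: (n, d) features; y: (n,) integer label array."""
    d = cdist(X, X)
    mst = minimum_spanning_tree(d).tocoo()
    proto = set()
    for i, j in zip(mst.row, mst.col):
        if y[i] != y[j]:                 # boundary samples become prototypes
            proto.update((i, j))
    cost = np.full(len(X), np.inf)
    label = y.copy()
    heap = []
    for p in proto:
        cost[p] = 0.0
        heapq.heappush(heap, (0.0, p))
    done = np.zeros(len(X), bool)
    while heap:                          # Dijkstra-like competition
        c, s = heapq.heappop(heap)
        if done[s]:
            continue
        done[s] = True
        for t in range(len(X)):
            if not done[t]:
                ct = max(c, d[s, t])     # f_max: maximum arc weight on path
                if ct < cost[t]:
                    cost[t], label[t] = ct, label[s]
                    heapq.heappush(heap, (ct, t))
    return cost, label

def opf_predict(X_train, cost, label, X_test):
    d = cdist(X_test, X_train)
    paths = np.maximum(d, cost[None, :])  # cost of reaching each test sample
    return label[np.argmin(paths, axis=1)]
```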
2D and 3D digital shape modelling strategies
Image segmentation of organs in medical images using model-based approaches requires a priori information, which is often given by manually tagging landmarks on a training set of shapes. This is a tedious, time-consuming, and error-prone task. To overcome some of these drawbacks, several automatic methods have been devised. Identification of the same homologous set of points in a training set of object shapes is the most crucial step in Active Shape Modelling, and it has encountered several challenges. The most crucial among these are: (C1) defining and characterizing landmarks; (C2) obtaining landmarks at the desired level of detail; (C3) ensuring homology; (C4) generalizing to n>2 dimensions; (C5) achieving practical computations. This thesis proposes several novel modelling techniques attempting to meet C1-C5. In this process, this thesis makes the following key contributions: the concept of local scale for shapes; the idea of allowing a level of detail for selecting landmarks; the concept of equalization of shape variance for selecting landmarks; the idea of recursively subdividing shapes and letting the sub-shapes guide landmark selection, which is a very general n-dimensional strategy; the idea of virtual landmarks, which may be situated anywhere relative to, not necessarily on, the shape boundary; and a new compactness measure that considers both the number of landmarks and the number of modes selected as independent variables.
The first of three methods uses the c-scale shape descriptor, based on the new concept of curvature-scale, to automatically locate mathematical landmarks on the mean of the training shapes. The landmarks are propagated to the training shapes to establish correspondence among shapes. Since all shapes of the same family do not necessarily present exactly the same shape features, another novel method was devised that takes into account the real shape variability existing in the training set and is guided by the strategy of equalizing the variance observed in the training set when selecting landmarks. By incorporating the above basic concepts into modelling, a third family of methods with numerous possibilities was developed, taking into account shape features and the variability among shapes, while being easily generalized to the 3D space. Its output is multi-resolutional, allowing landmark selection at any lower resolution trivially as a subset of those found at a higher resolution. The best strategy to use within the family will have to be determined according to the clinical application at hand.
All methods were evaluated in terms of compactness on two data sets: 40 CT images of the liver and 40 MR images of the talus bone of the foot. Further, numerous artificial shapes with known salient points were also used for testing the accuracy of the proposed methods. The results show that, for the same number of landmarks, the proposed methods are more compact than manual and equally spaced annotations. Moreover, the accuracy (in terms of false positives and negatives and the location of landmarks) of the proposed shape descriptor on artificial shapes is considerably superior to a state-of-the-art scale-space approach to finding salient points on shapes.
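The variance-equalization strategy can be illustrated with a toy 2D sketch. Assuming a training set of contours already resampled to point-wise correspondence (the correspondence step is the hard part and is skipped here), landmarks are placed so that each inter-landmark arc carries roughly the same share of the point-wise variance observed across the training set:

```python
# Toy sketch of variance-equalized landmark selection on corresponded contours.
import numpy as np

def equalized_landmarks(shapes, n_landmarks):
    """shapes: (n_shapes, n_points, 2) contours in point correspondence."""
    var = shapes.var(axis=0).sum(axis=1)    # variance at each contour point
    cum = np.cumsum(var) / var.sum()        # cumulative variance profile
    targets = np.linspace(0, 1, n_landmarks, endpoint=False)
    return np.searchsorted(cum, targets)    # indices of selected points
```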
Development of an MRI Template and Analysis Pipeline for the Spinal Cord and Application in Patients with Spinal Cord Injury
The spinal cord plays a fundamental role in the human body, as part of the central nervous system
and being the vector between the brain and the peripheral nervous system. Damaging the spinal
cord, through traumatic injuries or neurodegenerative diseases, can significantly affect the quality
of life of patients. Indeed, spinal cord injuries and diseases can affect the integrity of neurons, and
induce neurological impairments and/or functional disabilities. While various treatment procedures
exist, assessing the extent of damages and understanding the underlying mechanisms of diseases
would improve treatment efficiency and clinical decisions. Over the last decades, magnetic
resonance imaging (MRI) has demonstrated a high potential for the diagnosis and prognosis of
spinal cord injury and neurodegenerative diseases. Particularly, template-based analysis of brain
MRI data has been very helpful for the understanding of neurological diseases, using automated
analysis of large groups of patients. However, extracting MRI information within specific regions
of the spinal cord with minimum bias and using automated tools is still a challenge. Indeed, only a
limited number of MRI templates of the spinal cord exist, and none covers the full spinal cord,
thereby preventing large multi-centric template-based analysis of the spinal cord. Moreover, no
template integrates both the spinal cord and the brain region, thereby preventing simultaneous
cerebrospinal studies.
The objective of this project was to propose a new MRI template of the full spinal cord, which
allows simultaneous brain and spinal cord studies, that integrates atlases of the spinal cord internal
structures (e.g., white and gray matter, white matter pathways) and that comes with tools for
extracting information within these subregions. More particularly, the general research question of
the project was “How to create generic MRI templates of the spinal cord that would enable
unbiased and reproducible template-based analysis of spinal cord MRI data?”. Several original
contributions have been made to answer this question and to enable template-based analysis of
spinal cord MRI data.
The first contribution was the development of the Spinal Cord Toolbox (SCT), a comprehensive
and open-source software for processing multi-parametric MRI data of the spinal cord (De Leener,
LĂ©vy, et al., 2016). SCT includes tools for the automatic segmentation of the spinal cord and its
internal structure (white and gray matter), vertebral labeling, registration of multimodal MRI data
(structural and non-structural) to a spinal cord MRI template (initially the MNI-Poly-AMU
template, later the PAM50 template), co-registration of spinal cord MRI images, as well as the
robust extraction of MRI metric within specific regions of the spinal cord (i.e., white and gray
matter, white matter tracts, gray matter subregions) and specific vertebral levels using a spinal cord
atlas (LĂ©vy et al., 2015). Additional tools include robust motion correction and image processing
along the spinal cord. Each tool included in SCT has been validated on a multimodal dataset.
The second contribution of this project was the development of a novel registration method
dedicated to spinal cord images, with an interest in the straightening of the spinal cord, while
preserving its topology (De Leener, Mangeat, et al., 2017). This method is based on a global
approximation of the spinal cord centerline and the analytical computation of deformation fields
perpendicular to the centerline. Validation included calculation of distance measurements after
straightening on a population of healthy subjects and patients with spinal cord compression.
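A much-simplified sketch of the straightening idea: given centerline points extracted from a segmentation, the centerline is smoothed with a low-order polynomial fit and each axial slice is translated so that the fitted centerline becomes a straight vertical line. The published method additionally computes analytical deformation fields that preserve distances along and perpendicular to the cord; the translation-only version below is only an illustration, not SCT's algorithm.

```python
# Simplified, translation-only illustration of spinal cord straightening.
import numpy as np
from scipy.ndimage import shift as nd_shift

def straighten(volume, centerline_xy, order=3):
    """volume: 3D array (z, y, x); centerline_xy: (n_slices, 2) points
    giving the (y, x) centerline position on each axial slice."""
    z = np.arange(len(centerline_xy))
    fit_y = np.polyval(np.polyfit(z, centerline_xy[:, 0], order), z)
    fit_x = np.polyval(np.polyfit(z, centerline_xy[:, 1], order), z)
    out = np.empty_like(volume)
    for k in range(volume.shape[0]):   # move each slice onto the image axis
        dy = volume.shape[1] / 2 - fit_y[k]
        dx = volume.shape[2] / 2 - fit_x[k]
        out[k] = nd_shift(volume[k], (dy, dx), order=1)
    return out
```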
The major contribution of this project was the development of a framework for generating MRI
templates of the spinal cord, and the PAM50 template, an unbiased and symmetrical MRI template
of the brainstem and full spinal cord. Based on 50 healthy subjects, the PAM50 template was
generated using an iterative nonlinear registration process, after applying normalization and
straightening of all images. Pre-processing included segmentation of the spinal cord, manual
delineation of the brainstem anterior edge, detection and identification of intervertebral disks, and
normalization of intensity along the spinal cord. Next, the average centerline and vertebral
distribution were computed to create an initial straight template space. Then, all images were
registered to the initial template space and an iterative nonlinear registration framework was
applied to create the final symmetrical template. The PAM50 covers the brainstem and the full
spinal cord, from C1 to L2, is available for T1-, T2- and T2*-weighted contrasts, and includes
probabilistic maps of the white and the gray matter and atlases of the white matter pathways and
gray matter subregions. Additionally, the PAM50 template has been merged with the ICBM152
brain template, thereby allowing for simultaneous cerebrospinal template-based analysis.
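The iterative template-building loop reduces to a short schematic, with the registration step left abstract (SCT uses nonlinear registration; any diffeomorphic tool could stand in for the hypothetical `register` callable below): each round aligns every subject image to the current average and replaces the template with the mean of the aligned images.

```python
# Schematic of iterative template generation; `register` is an abstract
# callable register(image, target) -> aligned_image supplied by the user.
import numpy as np

def build_template(images, register, n_iter=5):
    template = np.mean(images, axis=0)          # initial average
    for _ in range(n_iter):
        aligned = [register(img, template) for img in images]
        template = np.mean(aligned, axis=0)     # refine the average
    return template
```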
Finally, several complementary results, focused on clinical validation and applications, are
presented. First, a reproducibility and repeatability study of cross-sectional area measurements
using SCT (De Leener, Granberg, Fink, Stikov, & Cohen-Adad, 2017) was performed on a
Multiple Sclerosis population (n=9). The results demonstrated the high reproducibility and
repeatability of SCT and its ability to detect very subtle atrophy of the spinal cord. Second, a novel
biomarker of spinal cord injury has been proposed. Based on the T2*-weighted intensity ratio
between the white and the gray matter, this new biomarker is computed by registering MRI images
with the PAM50 template and extracting metrics using probabilistic atlases. Additionally, the
feasibility of extracting multiparametric MRI metrics from subregions of the spinal cord has been
demonstrated and the diagnostic potential of this approach has been assessed on a degenerative
cervical myelopathy (DCM) population. Finally, a method for extracting shape morphometrics
along the spinal cord has been proposed, including spinal cord flattening, indentation and torsion.
These metrics demonstrated high capabilities for the diagnostic of asymptomatic spinal cord
compression (AUC=99.8% for flattening, 99.3% for indentation, and 98.4% for torsion).
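The intensity-ratio biomarker mentioned above is simple to express. Assuming a T2*-weighted image and probabilistic white- and gray-matter maps already registered to it (e.g., via the PAM50 atlas), the ratio of probability-weighted mean intensities can be computed as in this sketch:

```python
# Probability-weighted WM/GM intensity ratio from a T2*-weighted image.
import numpy as np

def wm_gm_ratio(t2star, wm_prob, gm_prob):
    """All inputs: co-registered 3D arrays of identical shape."""
    wm_mean = (t2star * wm_prob).sum() / wm_prob.sum()
    gm_mean = (t2star * gm_prob).sum() / gm_prob.sum()
    return wm_mean / gm_mean
```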
The development of the PAM50 template enables unbiased template-based analysis of the spinal
cord. However, the PAM50 template has several limitations. Indeed, the proposed template has
been generated with multimodal MRI images from 50 healthy and young individuals (age = 27 +/- 6.5 y.o.). Therefore, the template is specific to this particular population and may not be directly
usable for age- or disease-specific populations. One solution is to open-source the template-generation
code so that research groups can generate and use their own spinal cord MRI template.
The code is available at https://github.com/neuropoly/template. While this project introduced a
generic referential coordinate system, based on vertebral levels and the pontomedullary junction
as origin, one limitation is the choice of this coordinate system. A coordinate system based on
spinal segments would be more suitable for functional analysis. However, the acquisition of MRI
images with high enough resolution to delineate the spinal roots is still challenging. Finally, several
challenges in the automation of spinal cord MRI processing remain, including the robust detection
and identification of vertebral levels, particularly in the case of small fields of view.
This project introduced key developments for the analysis of spinal cord MRI data. Many more
developments are still required to bring them into clinics and to improve our understanding of
diseases affecting the spinal cord. Indeed, clinical applications require the improvement of the
robustness and the automation of the proposed processing and analysis tools. Particularly, the
detection and segmentation of spinal cord structures, including vertebral labeling and white/gray
matter segmentation, is still challenging, given the lower quality and resolution of standard clinical
MRI acquisitions. The tools developed and validated here have the potential to improve our understanding and characterization of diseases affecting the spinal cord and will have a significant impact on the neuroimaging community.
User-centered design and evaluation of interactive segmentation methods for medical images
Segmentation of medical images is a challenging task that aims to identify a particular structure present in the image. Among the existing methods involving the user at different levels, from a fully-manual to a fully-automated task, interactive segmentation methods provide assistance to the user during the task to reduce the variability in the results and allow occasional corrections of segmentation failures. Therefore, they offer a compromise between segmentation efficiency and the accuracy of the results. It is the user who judges whether the results are satisfactory and how to correct them during the segmentation, making the process subject to human factors. Despite the strong influence of the user on the outcomes of a segmentation task, the impact of such factors has received little attention, with the literature focusing its assessment of segmentation processes on computational performance. Yet, involving user performance in the analysis is more representative of a realistic scenario. Our goal is to explore user behaviour in order to improve the efficiency of interactive image segmentation processes. This is achieved through three contributions. First, we developed a method based on a new user interaction mechanism that provides hints as to where to concentrate the computations. This significantly improves computational efficiency without sacrificing the quality of the segmentation. The benefits of using such hints are twofold: (i) because our contribution is based on user interaction, it generalizes to a wide range of segmentation methods, and (ii) it gives comprehensive indications about where to focus the segmentation search. The latter advantage is used to achieve the second contribution. We developed an automated method based on a multi-scale strategy to (i) reduce the user’s workload and (ii) improve the computation time by up to tenfold, allowing real-time segmentation feedback. Third, we investigated the effects of such improvements in computation on the user’s performance. We report an experiment that manipulates the delay induced by the computation time while performing an interactive segmentation task. Results reveal that the influence of this delay can be significantly reduced with an appropriate interaction mechanism design. In conclusion, this project provides an effective image segmentation solution developed in compliance with user performance requirements. We validated our approach through multiple user studies that provided a step forward in understanding user behaviour during interactive image segmentation.
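The hint mechanism is not detailed in this abstract, but one plausible reading can be sketched: user strokes (foreground seeds) define a padded bounding box, and the segmentation routine only runs inside that region, cutting the computation roughly to the box volume. The function below is an illustrative assumption, not the thesis's actual interaction design.

```python
# Toy illustration: restrict segmentation to a region hinted by user strokes.
import numpy as np

def focus_region(seed_mask, pad=20):
    """seed_mask: boolean image of user strokes; returns a slice tuple
    covering the padded bounding box of the strokes."""
    idx = np.nonzero(seed_mask)
    return tuple(slice(max(int(i.min()) - pad, 0), int(i.max()) + pad + 1)
                 for i in idx)

# usage: run segment(image[focus_region(seeds)]) instead of segment(image)
```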
THE ROLE OF JAVANESE CULTURE IN CHARACTER BUILDING AT ELEMENTARY SCHOOL
Nowadays, character education has become a major concern in Indonesia. Character development has been
pursued through various strategies, but the results are yet to be seen. Character development should begin
in elementary school so that children's character can be formed early and developed as they mature.
One effort toward character building is integrating local wisdom into learning; one example is
Javanese culture. Javanese culture has a set of etiquette rules called "unggah-ungguh" that provide good
models for the community, especially the Javanese. Over time, the Javanese culture that
upholds ethics has begun to degrade and be replaced by foreign cultures that arrived later. The parents’ roles in
instilling Javanese culture in their children have also gradually decreased. This paper examines the role of Javanese
culture in the character building of elementary school students. A descriptive method, supported by
an in-depth review of the literature and previous studies, is used. Based on the results
of these reviews, we obtain information about the types and mechanisms of Javanese culture in the character
building of students, especially elementary school students.