14 research outputs found
Deep Multimodality Image-Guided System for Assisting Neurosurgery
Intracranial brain tumors are among the ten most common malignant cancers and account for substantial morbidity and mortality. Gliomas, the largest histological category of primary brain tumors, are highly heterogeneous in appearance and difficult to distinguish radiologically from other brain lesions. Neurosurgery is usually the standard treatment for newly diagnosed glioma patients and may be followed by radiotherapy and adjuvant temozolomide chemotherapy.
Brain tumor surgery, however, faces major challenges in achieving maximal tumor resection while avoiding postoperative neurological deficits. Two of these neurosurgical challenges are outlined below. First, manual delineation of the glioma, including its subregions, is difficult because of its infiltrative nature and its heterogeneous contrast enhancement. Second, the brain deforms in shape, a phenomenon known as "brain shift", in response to surgical manipulation, swelling caused by osmotic drugs, and anesthesia, which limits the utility of preoperative image data for guiding the procedure.
Image-guided systems provide physicians with invaluable insight into anatomical or pathological targets based on modern imaging modalities such as magnetic resonance imaging (MRI) and ultrasound (US). Image-guided instruments are chiefly computer-assisted systems that use computer vision methods to facilitate perioperative surgical procedures. However, surgeons must still mentally fuse the surgical plan derived from preoperative images with real-time information while manipulating surgical instruments inside the body and monitoring progress toward the target. The need for image guidance during neurosurgical procedures has therefore always been a major concern of physicians.
The aim of this research is to develop a novel system for perioperative image-guided neurosurgery (IGN), called DeepIGN, with which the expected outcomes of brain tumor surgery can be achieved, thereby maximizing overall survival and minimizing postoperative neurological morbidity. This work first proposes novel methods for the core components of the DeepIGN system, namely brain tumor segmentation in MRI and multimodal registration of preoperative MRI to intraoperative ultrasound (iUS), using recent advances in deep learning. The outcome predictions of the deep learning networks employed are then further interpreted and examined by generating human-understandable, explainable maps. Finally, open-source packages were developed and integrated into widely recognized software responsible for integrating information from tracking systems, image visualization and fusion, and displaying real-time updates of the instruments relative to the patient space.
The components of DeepIGN were validated in the laboratory and evaluated in a simulated operating room. For the segmentation module, DeepSeg, a generic decoupled deep learning framework for automatic glioma delineation in brain MRI, achieved an accuracy of 0.84 in terms of the Dice coefficient for the gross tumor volume. Performance gains were observed when applying advanced deep learning approaches such as 3D convolutions across all layers, region-based training, on-the-fly data augmentation techniques, and ensemble methods.
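The Dice coefficient cited above measures the volumetric overlap between the predicted and reference tumor masks (1.0 is perfect agreement). A minimal sketch of the metric, illustrative only and not DeepSeg code:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)

# Two toy 3D masks with partial overlap
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True   # 8 voxels
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 1:4] = True   # 12 voxels
print(round(dice_coefficient(a, b), 3))  # 2*8 / (8+12) = 0.8
```

In practice the metric is computed per tumor subregion (e.g. whole tumor, tumor core, enhancing tumor) and averaged over the test set.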
To compensate for brain shift, an automated, fast, and accurate deformable approach, iRegNet, is proposed for registering preoperative MRI to iUS volumes as part of the multimodal registration module. Extensive experiments were conducted on two multi-location databases: BITE and RESECT. Two experienced neurosurgeons performed an additional qualitative validation of this study by overlaying MRI-iUS pairs before and after deformable registration. The experimental results show that the proposed iRegNet is fast and achieves the best accuracies. Moreover, iRegNet delivers competitive results even on images outside its training distribution, demonstrating its generality, and may therefore be useful for intraoperative neurosurgical guidance.
For the explainability module, the NeuroXAI framework is proposed to increase the trust of medical experts in applying AI techniques and deep neural networks. NeuroXAI comprises seven explanation methods that provide visualization maps to make deep learning models transparent. The experimental results show that the proposed XAI framework performs well in extracting local and global contexts and in generating explainable saliency maps for understanding the deep network's prediction. In addition, visualization maps are generated to reveal the flow of information through the internal layers of the encoder-decoder network and to understand the contribution of the MRI modalities to the final prediction. The explanation process could provide medical professionals with additional information about the tumor segmentation results and thus help them understand how the deep learning model processes MRI data successfully.
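Saliency maps of the kind mentioned above attribute a network's prediction back to the input voxels, typically via backpropagated gradients. As an illustration of the underlying idea only (a framework-free finite-difference stand-in with a toy model, not NeuroXAI code):

```python
import numpy as np

def saliency_map(f, x: np.ndarray, h: float = 1e-5) -> np.ndarray:
    """Finite-difference approximation of |d f(x) / d x_i| for a scalar
    model f. Real XAI toolkits obtain the same gradients by backpropagation."""
    grad = np.zeros_like(x, dtype=float)
    for i in np.ndindex(x.shape):
        xp = x.copy(); xp[i] += h
        xm = x.copy(); xm[i] -= h
        grad[i] = (f(xp) - f(xm)) / (2 * h)
    return np.abs(grad)

# Toy "model": responds only to the centre pixel of a 3x3 patch
model = lambda img: float(img[1, 1] ** 2)
patch = np.arange(9, dtype=float).reshape(3, 3)
sal = saliency_map(model, patch)
print(np.unravel_index(sal.argmax(), sal.shape))  # (1, 1): the centre dominates
```

The resulting map is usually overlaid on the input MRI slice so the reader can see which regions drove the segmentation.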
In addition, an interactive neurosurgical display for procedure guidance was developed that supports available commercial hardware such as iUS navigation devices and instrument tracking systems. The clinical environment and technical requirements of the integrated multimodal DeepIGN system were established with the ability to integrate (1) preoperative MRI data and associated 3D volume reconstructions, (2) real-time iUS data, and (3) positional instrument tracking. The accuracy of this system was tested using a custom agar phantom model, and its use in a preclinical operating room was simulated. The results of the clinical simulation confirmed that system assembly is straightforward, can be completed within a clinically acceptable time of 15 minutes, and achieves clinically acceptable accuracy.
In this work, a multimodal IGN system was developed that leverages recent advances in deep learning to guide neurosurgeons precisely and to incorporate pre- and intraoperative patient image data as well as interventional devices into the surgical procedure. DeepIGN was developed as open-source research software to accelerate research in this field, facilitate sharing among multiple research groups, and enable continuous development by the community. The experimental results are highly promising for the application of deep learning models to support interventional procedures, a crucial step toward improving the surgical treatment of brain tumors and the corresponding long-term postoperative outcomes.
Optimization of a registration program for magnetic resonance and computed tomography images of the head
In this thesis work, the aim was to find a robust, optimal rigid registration process to accurately and automatically align computed tomography (CT) and magnetic resonance (MR) images of the brain. For patients undergoing, for example, stereoelectroencephalography (epilepsy patients) or implantation of stimulating electrodes in the brain (Parkinson’s patients), it is crucial to be able to combine information from low-dose CT and MR with great precision.
Registration was performed with the SimpleITK interface to the image registration framework of the United States National Library of Medicine Insight Segmentation and Registration Toolkit (ITK). In the optimization process, an existing SimpleITK example was used as the basis for the registration algorithm, which was then optimized one block at a time, beginning with the initial alignment. Registration accuracy was determined by comparing the automatic transform of our registration algorithm to the transform of a semiautomatic registration performed with ipcWorkstation, an ITK-based semiautomatic software package used and developed at the HUS Medical Imaging Center. As a result, a robust rigid registration algorithm was developed. The maximum registration errors with the final algorithm were less than 2 mm for 7 out of 15 patients and less than 4 mm for 12 out of 15. The algorithm performs registration for initial rotations of up to 45 degrees.
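The accuracy criterion above compares the automatic transform against a semiautomatic reference transform. The sketch below is a hedged illustration of how a maximum registration error could be evaluated, assuming 4x4 homogeneous rigid transform matrices and an arbitrary head-sized bounding box whose corners are where rotational error is largest; it is not the thesis implementation:

```python
import numpy as np
from itertools import product

def max_registration_error(t_auto: np.ndarray, t_ref: np.ndarray,
                           volume_mm=(200.0, 200.0, 200.0)) -> float:
    """Largest displacement (mm) between two rigid transforms, evaluated
    at the eight corners of a bounding box."""
    corners = np.array([(x, y, z, 1.0) for x, y, z in
                        product(*[(0.0, s) for s in volume_mm])])  # (8, 4)
    diff = (t_auto @ corners.T) - (t_ref @ corners.T)              # (4, 8)
    return float(np.linalg.norm(diff[:3], axis=0).max())

# Reference transform vs. the same transform shifted 1 mm along x
t_ref = np.eye(4)
t_auto = np.eye(4); t_auto[0, 3] = 1.0
print(max_registration_error(t_auto, t_ref))  # 1.0
```

A per-patient error like this, taken over the final algorithm's output, yields exactly the "maximum registration error less than 2 mm / 4 mm" style of summary reported above.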
The fast development of the automated registration algorithm presented in this thesis also appears promising for other applications. This kind of block-wise optimization pattern could be used to optimize registration for images of other parts of the body or for other imaging modalities, such as positron emission tomography (PET) and MR.
Multimodal neuroimaging computing: the workflows, methods, and platforms
The last two decades have witnessed explosive growth in the development and use of noninvasive neuroimaging technologies that advance research on the human brain under normal and pathological conditions. Multimodal neuroimaging has become a major driver of current neuroimaging research due to the recognition of the clinical benefits of multimodal data and better access to hybrid devices. Multimodal neuroimaging computing is very challenging and requires sophisticated computing to address the variations in spatiotemporal resolution and to merge the biophysical/biochemical information. We review the current workflows and methods for multimodal neuroimaging computing and also demonstrate how to conduct research using the established neuroimaging computing packages and platforms.
Quantification of longitudinal tumor changes using PET imaging in 3D Slicer
Quantitative assessment of Positron Emission Tomography (PET) imaging can be used for diagnosis and staging of tumors and monitoring of response in cancer treatment. In clinical practice, PET analysis is based on normalized indices such as those based on the Standardized Uptake Value (SUV). Although widely evaluated, these indices are considered quite unstable, mainly because of the simplicity of their experimental protocol. Development and validation of more sophisticated methods for the purposes of clinical research require a common open platform that can be used both for prototyping and sharing of the analysis methods, and for their evaluation by clinical users. This work was motivated by the lack of such a platform for longitudinal quantitative PET analysis. By following a prototype-driven software development approach, an open-source tool for quantitative analysis of tumor changes based on multi-study PET image data has been implemented. As the platform for this work, 3D Slicer 4, a free, open-source software application for medical image computing, was chosen. For the analysis and quantification of PET data, the implemented software tool guides the user through a series of workflow steps. In addition to the implementation of a guided workflow, the software was made extensible by integration of interfaces for the enhancement of segmentation and PET quantification algorithms. By offering extensibility, the PET analysis software tool was transformed into a platform suitable for prototyping and development of PET-specific segmentation and quantification methods. The accuracy, efficiency, and usability of the platform were evaluated in reproducibility and usability studies. The results achieved in these studies demonstrate that the implemented longitudinal PET analysis software tool fulfills all requirements for the basic quantification of tumors in PET imaging and at the same time provides an efficient and easy-to-use workflow.
Furthermore, it can function as a platform for prototyping PET-specific segmentation and quantification methods, which can in the future be incorporated into the workflow.
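The SUV indices mentioned above follow a simple definition. As a hedged illustration of the common body-weight variant (the function name and unit choices are assumptions for the sketch, not code from the 3D Slicer tool):

```python
def suv_bw(voxel_activity_bq_ml: float, injected_dose_bq: float,
           body_weight_g: float) -> float:
    """Body-weight Standardized Uptake Value: tissue activity concentration
    divided by injected dose per gram of body weight. Assumes the activity
    is decay-corrected to injection time and tissue density is ~1 g/ml."""
    return voxel_activity_bq_ml / (injected_dose_bq / body_weight_g)

# 5 kBq/ml uptake, 400 MBq injected, 70 kg patient
print(round(suv_bw(5_000.0, 400_000_000.0, 70_000.0), 3))  # 0.875
```

The instability noted in the abstract comes largely from the inputs to this ratio (dose calibration, uptake time, weight), which is one motivation for the more sophisticated longitudinal indices the platform is meant to host.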
Integrative multimodal image analysis using physical models for characterization of brain tumors in radiotherapy
Therapy failure with subsequent tumor progression is a common problem in radiotherapy of high-grade glioma. Definition of treatment volumes with CT and MRI is limited due to uncertainties concerning tumor outlines. The goal of the presented work was to enable assessment of tumor physiology and prediction of progression patterns using multi-modal image analysis and thus improve target delineation. Physiological imaging modalities, such as 18F-FET PET, diffusion, and perfusion MRI, were used to predict recurrence patterns. The Medical Imaging Interaction Toolkit, together with our own software implementations, enabled side-by-side evaluation of all image modalities. These included tools for PET analysis and a module for voxel-wise fitting of dynamic data with pharmacokinetic models. Robustness and accuracy of parameter estimates were studied on synthetic perfusion data. Parameter feasibility for progression prediction was investigated on DCE MRI and 18F-FET PET data. Using the developed software tools, a pipeline for prediction of tumor progression patterns, based on multi-modal image classification with a random forest machine learning algorithm, was established. An exemplary prediction analysis was applied to a small patient set to illustrate workflow functionality and classification results.
Interactive Training System for Medical Ultrasound
Ultrasound is an effective imaging modality because it is safe, unobtrusive, and portable. However, it is also very operator-dependent, and significant skill is required to capture quality images and properly detect abnormalities. Training is an important part of ultrasound, but the limited availability of training courses significantly hinders the use of ultrasound in additional settings. The goal of this work was to design and implement an interactive training system to help train and evaluate sonographers. The Interactive Training System for Medical Ultrasound is an inexpensive, software-based training system in which the trainee scans a lifelike manikin with a sham transducer containing a 6-degree-of-freedom tracking sensor. The observed ultrasound image is generated from a pre-stored 3D image volume and is controlled interactively by the sham transducer's position and orientation. Based on the selected 3D volume, the manikin may represent normal anatomy, exhibit a specific trauma, or present a given physical condition. The training system provides a realistic scanning experience through an interactive real-time display with adjustable image parameters such as scan depth, gain, and time gain compensation. A representative hardware interface has been developed, including a lifelike manikin and convincing sham transducers, along with a touch-screen user interface. Methods of capturing 3D ultrasound image volumes and stitching together multiple volumes have been evaluated. System performance was analyzed and an initial clinical evaluation was performed. This thesis presents a complete prototype training system with advanced simulation and learning assessment features. The ultrasound training system can provide cost-effective and convenient training of physicians and sonographers. This system is an innovative approach to training and is a powerful tool for training sonographers to recognize a wide variety of medical conditions.
Functional analysis of the left ventricle in coronary CT angiography
PhD in Informatics Engineering. Coronary CT angiography (CTA) is widely used in clinical practice for the assessment of coronary artery disease. Several studies have shown that the same exam can also be used to assess left ventricle (LV) function. LV function is usually evaluated using only the data from the end-systolic and end-diastolic phases, even though CTA provides data for multiple cardiac phases along the cardiac cycle. This unused wealth of data, mostly due to its complexity and the lack of proper tools, has yet to be explored to assess whether further insight is possible regarding regional LV functional analysis. Furthermore, different parameters can be computed to characterize LV function, and while some are well known to clinicians, others still need to be evaluated for their value in clinical scenarios.
The work presented in this thesis covers two steps towards extended use of CTA data: LV segmentation and functional analysis.
A new semi-automatic segmentation method is presented to obtain LV data for all cardiac phases available in a CTA exam, and a 3D editing tool was designed to allow users to fine-tune the segmentations. Regarding segmentation evaluation, a methodology is proposed to help choose the similarity metrics used to compare segmentations. This methodology allows the detection of redundant measures, which can then be discarded. The evaluation was performed with the help of three experienced radiographers, yielding low intra- and inter-observer variability.
To allow exploration of the segmented data, several parameters characterizing global and regional LV function are computed for the available cardiac phases. The data thus obtained are shown using a set of visualizations allowing synchronized visual exploration. The main purpose is to provide means for clinicians to explore the data and gather insight into their meaning, as well as their correlation with each other and with diagnosis outcomes.
Finally, an interactive method is proposed to help clinicians assess myocardial perfusion by automatically assigning lesions, detected by clinicians, to a myocardial segment. This new approach has received positive feedback from clinicians and is not only an improvement over their current assessment method but also an important first step towards systematic validation of automatic myocardial perfusion assessment measures.
MITK-IGT for computer-assisted soft tissue puncture
New minimally invasive procedures are becoming increasingly important in cancer diagnosis and therapy. Examples include needle punctures, in which a tissue sample is taken for diagnosis (biopsy) or a cancer is treated by destroying the tissue around the needle tip (ablation). A central challenge here is the accurate placement of the needle. At the German Cancer Research Center (DKFZ), a computer-assisted navigation system for needle insertions was developed that proved highly accurate in in-vivo trials. Despite these promising results, the system has not yet been used on patients. Among other reasons, this is due to the difficulty of integrating the system into the clinical workflow and its increased invasiveness. Against this background, the goal of this work was, on the one hand, to develop flexible, extensible software for navigated soft tissue puncture and, on the other, to advance the navigation system by integrating a new field generator for the NDI Aurora electromagnetic tracking system. The software was implemented on top of the MITK library and its MITK-IGT module. A component-wise design was realized, which allows individual components to be easily exchanged or extended. Furthermore, the new field generator was evaluated with respect to accuracy and precision in its target environment, and the navigation system was tested under clinical conditions. In conclusion, the demonstrated flexibility and extensibility of the developed software open up numerous possibilities for further development. As for the field generator, the device showed promising potential for the advancement of medical navigation systems.
Algorithm Selection in Multimodal Medical Image Registration
Medical image acquisition technology has improved significantly throughout the last several decades, and clinicians now rely on medical images to diagnose illnesses, determine treatment protocols, and plan surgery. Researchers divide medical images into two types: functional and anatomical. Anatomical imaging, such as magnetic resonance imaging (MRI), computed tomography (CT), ultrasound, and other systems, enables medical personnel to examine the body internally with great accuracy, thereby avoiding the risks associated with exploratory surgery. Functional (or physiological) imaging systems include single-photon emission computed tomography (SPECT), positron emission tomography (PET), and other methods, which discover or evaluate variations in absorption, blood flow, metabolism, and regional chemical composition. Notably, one of these medical imaging modalities alone cannot usually supply doctors with adequate information. Additionally, data obtained from several images of the same subject generally provide complementary information via a process called medical image registration. Image registration may be defined as the process of geometrically mapping one image's coordinate system to the coordinate system of another image acquired from a different perspective and with a different sensor. Registration plays a crucial role in medical image assessment because it helps clinicians observe the developing trend of a disease and take proper measures accordingly. Medical image registration (MIR) has several applications: radiation therapy, tumour diagnosis and recognition, template atlas application, and surgical guidance systems. There are two types of registration: manual registration and computer-based registration.
Manual registration is when the radiologist or physician completes all registration tasks interactively with visual feedback provided by the computer system, which can result in serious problems. For instance, investigations conducted by two experts are not identical, and registration correctness is determined by the user's assessment of the relationship between anatomical features. Furthermore, it may take a long time for the user to achieve proper alignment, and the outcomes vary according to the user. As a result, the outcomes of manual alignment are doubtful and unreliable. The second approach is computer-based multimodal medical image registration, which targets various medical images and an array of application types. Automatic registration in medical images matches standard recognized characteristics or voxels in pre- and intra-operative imaging without user input. Registration of multimodal images is the initial step in integrating data from several images. Automatic image processing has emerged to improve the reliability, robustness, accuracy, and processing time of manual image registration. While such registration algorithms offer advantages when applied to some medical images, their use with others is accompanied by disadvantages. No registration technique can outperform all others on all input datasets, due to the variability of medical imaging and the diverse demands of applications. Given the many available algorithms, choosing the one that adapts best to the task at hand is vital. The Algorithm Selection Problem has emerged in numerous research disciplines, including medical diagnosis, machine learning, optimization, and computation; choosing the most powerful strategy for a particular problem seeks to minimize these issues.
This study delivers a universal and practical framework for choosing a multimodal registration algorithm. The primary goal of this study is to introduce a generic structure for constructing a medical image registration system capable of selecting the best registration process from a range of registration algorithms for the various datasets used. Three strategies were constructed to examine the proposed framework. The first strategy transforms the problem of algorithm selection into a classification problem. The second strategy investigates the effect of various parameters, such as optimization control points, on the optimal selection. The third strategy establishes a framework for choosing the optimal registration algorithm for a given dataset based on two primary criteria: registration algorithm applicability and performance measures. This approach relies on machine learning methods and artificial neural networks to determine which candidate is most promising. Several experiments and scenarios were conducted, and the results reveal that the novel framework strategy achieves the best performance, including high accuracy, reliability, robustness, and efficiency with low processing time.
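The first strategy casts algorithm selection as classification over dataset features. A minimal sketch of that idea, with entirely hypothetical features and algorithm names, and a nearest-neighbour rule standing in for the trained classifier:

```python
import numpy as np

# Hypothetical meta-learning table: per-dataset feature vectors
# (e.g. noise level, initial misalignment) and the registration
# algorithm that performed best on each training dataset.
train_features = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
train_best_algo = ["mutual_information", "normalized_cross_correlation",
                   "mutual_information", "normalized_cross_correlation"]

def select_algorithm(features: np.ndarray) -> str:
    """Nearest-neighbour stand-in for the classifier that maps a new
    dataset's features to the most promising registration algorithm."""
    dists = np.linalg.norm(train_features - features, axis=1)
    return train_best_algo[int(dists.argmin())]

print(select_algorithm(np.array([0.15, 0.15])))  # mutual_information
```

The framework described in the abstract replaces both parts of this sketch: richer dataset descriptors on the feature side, and a trained neural network on the prediction side.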
Development of an MRI Template and Analysis Pipeline for the Spinal Cord and Application in Patients with Spinal Cord Injury
La moelle épinière est un organe fondamental du corps humain. Étant le lien entre le cerveau et le
système nerveux périphérique, endommager la moelle épinière, que ce soit suite à un trauma ou
une maladie neurodégénérative, a des conséquences graves sur la qualité de vie des patients. En
effet, les maladies et traumatismes touchant la moelle épinière peuvent affecter l’intégrité des
neurones et provoquer des troubles neurologiques et/ou des handicaps fonctionnels. Bien que de
nombreuses voies thérapeutiques pour traiter les lésions de la moelle épinière existent, la
connaissance de l’étendue des dégâts causés par ces lésions est primordiale pour améliorer
l’efficacité de leur traitement et les décisions cliniques associées. L’imagerie par résonance
magnétique (IRM) a démontré un grand potentiel pour le diagnostic et pronostic des maladies
neurodégénératives et traumas de la moelle épinière. Plus particulièrement, l’analyse par template
de données IRM du cerveau, couplée à des outils de traitement d’images automatisés, a permis une
meilleure compréhension des mécanismes sous-jacents de maladies comme l’Alzheimer et la
Sclérose en Plaques. Extraire automatiquement des informations pertinentes d’images IRM au sein
de régions spécifiques de la moelle épinière présente toutefois de plus grands défis que dans le
cerveau. Il n’existe en effet qu’un nombre limité de template de la moelle épinière dans la
littérature, et aucun ne couvre toute la moelle épinière ou n’est lié à un template existant du cerveau.
Ce manque de template et d’outils automatisés rend difficile la tenue de larges études d’analyse de
la moelle épinière sur des populations variées.
The objective of this project is therefore to propose a new MRI template covering the entire spinal
cord, registered to an existing brain template and integrating atlases of the internal structure of the
spinal cord (e.g., white and gray matter, white matter tracts). This template must come with a set
of automated tools for extracting MRI information within specific regions of the spinal cord. The
general research question of this project is thus: "How can a generic template of the spinal cord be
created that would enable unbiased and reproducible analysis of spinal cord MRI data?" Several
original contributions were proposed to answer this question and are described in the following
paragraphs.
The first contribution of this project is the development of the Spinal Cord Toolbox (SCT).
SCT is an open-source software package for processing multi-parametric MRI images of the spinal
cord (De Leener, Lévy, et al., 2016). It notably includes tools for the automatic detection and
segmentation of the spinal cord and its internal structure (i.e., white and gray matter), the
identification and labeling of vertebral levels, and the registration of multimodal MRI images
to a generic spinal cord template (previously the MNI-Poly-AMU template, now the PAM50
template proposed here). Based on a spinal cord atlas, SCT also includes tools for extracting MRI
data from specific regions of the spinal cord, such as the white and gray matter and the white
matter tracts, as well as at specific vertebral levels. Additional tools were also proposed, such as
motion correction and basic image-processing operations applied along the spinal cord. Each tool
included in SCT was validated on a multimodal dataset.
The second contribution of this project is the development of a new registration method for spinal
cord MRI images (De Leener, Mangeat, et al., 2017). This method was developed for a specific
purpose, the straightening of spinal cord MRI images, but can also be used to register several
spinal cord images to one another while accounting for the vertebral distribution of each subject.
The proposed method is based on a global approximation of the spinal cord curvature in space and
on the analytical computation of the deformation fields between the two images. This new method
was validated on a population of healthy subjects and patients with spinal cord compression.
The major contribution of this project is the development of a framework for generating spinal
cord MRI templates and the proposal of the PAM50 template as a reference template for
template-based analysis of spinal cord MRI data. The PAM50 template was created from MRI
images of 50 healthy subjects and was generated using the image straightening presented above
and an iterative nonlinear image-registration method, after several image pre-processing steps.
These pre-processing steps include the automatic segmentation of the spinal cord, the manual
delineation of the anterior edge of the brainstem, the detection and identification of intervertebral
disks, and intensity normalization along the spinal cord. After pre-processing, the average spinal
cord centerline and the vertebral distribution were computed over the entire population of subjects,
and an initial template image was generated. After registering all images to this initial template,
the PAM50 template was created using an iterative image-registration process of the kind used to
generate brain templates. The PAM50 covers the brainstem and the entire spinal cord, is available
for T1-, T2-, and T2*-weighted MRI contrasts, and integrates probabilistic maps and atlases of the
internal structure of the spinal cord. Furthermore, the PAM50 was registered to the ICBM152
brain template, thereby enabling simultaneous template-based analysis in the brain and the spinal
cord.
Finally, several complementary results were presented in this dissertation. First, a validation study
of the repeatability and reproducibility of spinal cord cross-sectional area measurements was
conducted on a population of patients with multiple sclerosis. The results demonstrate high
measurement reliability as well as the ability to detect very subtle changes in the cross-sectional
area of the cord, which is important for measuring early spinal cord atrophy due to
neurodegenerative diseases such as multiple sclerosis. Second, a new MRI biomarker of spinal cord
injury was proposed, in collaboration with Allan Martin of the University of Toronto. This
biomarker, computed from the intensity ratio between the white and gray matter on T2*-weighted
MRI images, directly uses the developments proposed in this project, notably the registration to
the spinal cord template and the cord segmentation methods. The feasibility of extracting
multi-parametric MRI measurements within specific regions of the spinal cord was also
demonstrated, improving the diagnosis and prognosis of spinal cord injury and compression.
Finally, a new method for extracting spinal cord morphometrics was proposed and applied to a
population of patients with asymptomatic spinal cord compression, demonstrating high diagnostic
performance (> 99%).
The development of the PAM50 template fills the lack of spinal cord templates in the literature but
nevertheless has several limitations. The proposed template is based on a population of 50 young,
healthy subjects (mean age = 27 ± 6.5 years) and is therefore biased toward this particular
population. Adapting template-based analyses to another type of population (different age,
ethnicity, or disease) can be done either in the analysis methods themselves or in the template
itself. All the code for generating the template has been made available online
(https://github.com/neuropoly/template) to allow any research group to build its own template.
Another limitation of this project is the choice of a coordinate system based on the position of the
vertebrae. Indeed, the vertebrae do not fully represent the functional organization of the spinal
cord, because of the mismatch between vertebral and spinal levels. Developing a spinal coordinate
system, although difficult to characterize in MRI images, would be more appropriate for functional
analysis of the spinal cord. Finally, many challenges remain in fully automating the tools developed
in this project and making them robust for the majority of contrasts and fields of view used in
conventional and clinical MRI.
This project presented several important developments for the analysis of spinal cord MRI data.
Many improvements to the work presented are nevertheless required to bring these tools into a
clinical context and to improve our understanding of diseases affecting the spinal cord. Clinical
applications notably require improving the robustness and automation of the proposed
image-analysis methods. Characterizing the internal structure of the spinal cord, including the
white and gray matter, indeed presents great challenges, given the quality and resolution of
standard clinical MRI images. The tools developed and validated during this project have great
potential for understanding and characterizing diseases affecting the spinal cord and will have a
significant impact on the neuroimaging community.

ABSTRACT
The spinal cord plays a fundamental role in the human body, as part of the central nervous system
and being the vector between the brain and the peripheral nervous system. Damaging the spinal
cord, through traumatic injuries or neurodegenerative diseases, can significantly affect the quality
of life of patients. Indeed, spinal cord injuries and diseases can affect the integrity of neurons, and
induce neurological impairments and/or functional disabilities. While various treatment procedures
exist, assessing the extent of damages and understanding the underlying mechanisms of diseases
would improve treatment efficiency and clinical decisions. Over the last decades, magnetic
resonance imaging (MRI) has demonstrated a high potential for the diagnosis and prognosis of
spinal cord injury and neurodegenerative diseases. Particularly, template-based analysis of brain
MRI data has been very helpful for the understanding of neurological diseases, using automated
analysis of large groups of patients. However, extracting MRI information within specific regions
of the spinal cord with minimum bias and using automated tools is still a challenge. Indeed, only a
limited number of MRI templates of the spinal cord exist, and none covers the full spinal cord,
thereby preventing large multi-centric template-based analysis of the spinal cord. Moreover, no
template integrates both the spinal cord and the brain region, thereby preventing simultaneous
cerebrospinal studies.
The objective of this project was to propose a new MRI template of the full spinal cord, which
allows simultaneous brain and spinal cord studies, that integrates atlases of the spinal cord internal
structures (e.g., white and gray matter, white matter pathways) and that comes with tools for
extracting information within these subregions. More particularly, the general research question of
the project was “How to create generic MRI templates of the spinal cord that would enable
unbiased and reproducible template-based analysis of spinal cord MRI data?”. Several original
contributions have been made to answer this question and to enable template-based analysis of
spinal cord MRI data.
The first contribution was the development of the Spinal Cord Toolbox (SCT), a comprehensive
and open-source software for processing multi-parametric MRI data of the spinal cord (De Leener,
Lévy, et al., 2016). SCT includes tools for the automatic segmentation of the spinal cord and its
internal structure (white and gray matter), vertebral labeling, registration of multimodal MRI data
(structural and non-structural) on a spinal cord MRI template (initially the MNI-Poly-AMU
template, later the PAM50 template), co-registration of spinal cord MRI images, as well as the
robust extraction of MRI metric within specific regions of the spinal cord (i.e., white and gray
matter, white matter tracts, gray matter subregions) and specific vertebral levels using a spinal cord
atlas (Lévy et al., 2015). Additional tools include robust motion correction and image processing
along the spinal cord. Each tool included in SCT has been validated on a multimodal dataset.
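The atlas-based metric extraction can be illustrated with a minimal sketch: given a metric map and a probabilistic atlas of a region, a weighted average estimates the metric within that region. This is a simplified illustration of the weighted-average principle, not SCT's actual implementation; the array values and function name are assumptions.

```python
import numpy as np

def weighted_metric(metric_map: np.ndarray, prob_atlas: np.ndarray) -> float:
    """Estimate the mean of a metric within a region defined by a
    probabilistic atlas, weighting each voxel by its probability of
    belonging to the region (a simplified 'weighted average' method)."""
    weights = prob_atlas.astype(float)
    total = weights.sum()
    if total == 0:
        raise ValueError("Atlas region is empty")
    return float((metric_map * weights).sum() / total)

# Toy 3x3 axial slice: metric values and a soft (probabilistic) region mask.
metric = np.array([[10.0, 20.0, 30.0],
                   [40.0, 50.0, 60.0],
                   [70.0, 80.0, 90.0]])
atlas = np.array([[0.0, 0.5, 0.0],
                  [0.5, 1.0, 0.5],
                  [0.0, 0.5, 0.0]])
print(weighted_metric(metric, atlas))  # 50.0
```

The probabilistic weighting reduces partial-volume bias at region boundaries compared to a hard binary mask.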
The second contribution of this project was the development of a novel registration method
dedicated to spinal cord images, with an interest in the straightening of the spinal cord, while
preserving its topology (De Leener, Mangeat, et al., 2017). This method is based on a global
approximation of the spinal cord centerline and the analytical computation of deformation fields
perpendicular to the centerline. Validation included calculation of distance measurements after
straightening on a population of healthy subjects and patients with spinal cord compression.
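The core idea of straightening can be sketched in 2D: a point is mapped to a frame defined by its arc length along the centerline and its signed perpendicular offset from it. This toy sketch only illustrates the coordinate change; the published method computes full 3D deformation fields analytically while preserving topology, and all names here are assumptions.

```python
import numpy as np

def straighten_point(centerline: np.ndarray, point: np.ndarray):
    """Map a 2D point to a 'straightened' frame defined by a discretized
    centerline (N x 2 array of [x, z] samples): the new z-coordinate is the
    arc length to the closest centerline sample, and the new x-coordinate is
    the signed distance perpendicular to the local tangent."""
    # Cumulative arc length along the centerline.
    seg = np.diff(centerline, axis=0)
    arc = np.concatenate([[0.0], np.cumsum(np.linalg.norm(seg, axis=1))])
    # Closest centerline sample to the point.
    i = int(np.argmin(np.linalg.norm(centerline - point, axis=1)))
    # Local unit tangent (forward difference, clipped at the last sample).
    j = min(i, len(centerline) - 2)
    tangent = seg[j] / np.linalg.norm(seg[j])
    normal = np.array([-tangent[1], tangent[0]])  # perpendicular direction
    offset = float(np.dot(point - centerline[i], normal))
    return offset, float(arc[i])

# A straight vertical centerline: straightening then reduces to the identity
# (up to the arc-length origin).
cl = np.stack([np.zeros(5), np.arange(5.0)], axis=1)  # x = 0, z = 0..4
print(straighten_point(cl, np.array([1.0, 2.0])))  # (-1.0, 2.0)
```

Because distances perpendicular to the centerline are preserved, the cross-sectional geometry of the cord is kept intact while its curvature is removed.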
The major contribution of this project was the development of a framework for generating MRI
templates of the spinal cord, and of the PAM50 template, an unbiased and symmetrical MRI template
of the brainstem and full spinal cord. Based on 50 healthy subjects, the PAM50 template was
generated using an iterative nonlinear registration process, after applying normalization and
straightening of all images. Pre-processing included segmentation of the spinal cord, manual
delineation of the brainstem anterior edge, detection and identification of intervertebral disks, and
normalization of intensity along the spinal cord. Next, the average centerline and vertebral
distribution were computed to create an initial straight template space. Then, all images were
registered to the initial template space and an iterative nonlinear registration framework was
applied to create the final symmetrical template. The PAM50 covers the brainstem and the full
spinal cord, from C1 to L2, is available for T1-, T2- and T2*-weighted contrasts, and includes
probabilistic maps of the white and the gray matter and atlases of the white matter pathways and
gray matter subregions. Additionally, the PAM50 template has been merged with the ICBM152
brain template, thereby allowing for simultaneous cerebrospinal template-based analysis.
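The iterative registration loop used for template construction can be sketched with a drastically simplified 1D analogue: register every image to the current average, re-average, and repeat, so that anatomy realigns instead of blurring. Real pipelines such as the one behind the PAM50 use nonlinear 3D registration; the translation-only "registration" below is a stand-in for illustration.

```python
import numpy as np

def register_shift(img: np.ndarray, ref: np.ndarray) -> int:
    """Find the integer shift that best aligns img to ref
    (translation-only 'registration' via exhaustive search)."""
    shifts = range(-len(img) // 2, len(img) // 2 + 1)
    return min(shifts, key=lambda s: np.sum((np.roll(img, s) - ref) ** 2))

def build_template(images, n_iter=3):
    """Minimal iterative template construction: register every image to the
    current average, re-average, and repeat."""
    template = np.mean(images, axis=0)
    for _ in range(n_iter):
        aligned = [np.roll(img, register_shift(img, template)) for img in images]
        template = np.mean(aligned, axis=0)
    return template

# Three copies of the same 1D profile at different offsets.
base = np.array([0.0, 0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0])
images = [np.roll(base, s) for s in (-1, 0, 1)]
template = build_template(images)
print(template.max())  # 3.0 — the peaks realign instead of averaging out
```

A naive average of the misaligned inputs would smear the peak; the iterative loop recovers a sharp, unbiased average shape, which is the motivation for this scheme in brain and spinal cord template building.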
Finally, several complementary results, focused on clinical validation and applications, are
presented. First, a reproducibility and repeatability study of cross-sectional area measurements
using SCT (De Leener, Granberg, Fink, Stikov, & Cohen-Adad, 2017) was performed on a
Multiple Sclerosis population (n=9). The results demonstrated the high reproducibility and
repeatability of SCT and its ability to detect very subtle atrophy of the spinal cord. Second, a novel
biomarker of spinal cord injury has been proposed. Based on the T2*-weighted intensity ratio
between the white and the gray matter, this new biomarker is computed by registering MRI images
with the PAM50 template and extracting metrics using probabilistic atlases. Additionally, the
feasibility of extracting multiparametric MRI metrics from subregions of the spinal cord has been
demonstrated and the diagnostic potential of this approach has been assessed on a degenerative
cervical myelopathy (DCM) population. Finally, a method for extracting shape morphometrics
along the spinal cord has been proposed, including spinal cord flattening, indentation and torsion.
These metrics demonstrated high diagnostic performance for asymptomatic spinal cord
compression (AUC=99.8% for flattening, 99.3% for indentation, and 98.4% for torsion).
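The intensity-ratio biomarker reduces to a simple computation once the white and gray matter masks are available: the mean T2*-weighted intensity in white matter divided by the mean in gray matter. This is a minimal sketch assuming hard binary masks; in the actual pipeline the masks come from template registration and probabilistic atlases, and the toy values are assumptions.

```python
import numpy as np

def wm_gm_ratio(t2star: np.ndarray, wm_mask: np.ndarray, gm_mask: np.ndarray) -> float:
    """White-matter / gray-matter intensity ratio on a T2*-weighted image:
    mean intensity within the WM mask divided by mean within the GM mask."""
    wm_mean = t2star[wm_mask > 0].mean()
    gm_mean = t2star[gm_mask > 0].mean()
    return float(wm_mean / gm_mean)

# Toy axial slice: a bright WM ring around a darker GM core.
t2star = np.array([[120.0, 118.0, 122.0],
                   [119.0,  80.0, 121.0],
                   [118.0, 120.0, 122.0]])
wm = np.ones((3, 3)); wm[1, 1] = 0   # everything except the center voxel
gm = np.zeros((3, 3)); gm[1, 1] = 1  # the center voxel
print(wm_gm_ratio(t2star, wm, gm))  # 1.5
```

Using a ratio rather than raw intensities makes the biomarker robust to scanner-dependent intensity scaling, since both tissues are imaged under the same conditions.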
The development of the PAM50 template enables unbiased template-based analysis of the spinal
cord. However, the PAM50 template has several limitations. Indeed, the proposed template has
been generated with multimodal MRI images from 50 healthy and young individuals (age = 27 ±
6.5 years). Therefore, the template is specific to this particular population and may not be directly
usable for age- or disease-specific populations. One solution is to open-source the template-generation
code so that research groups can generate and use their own spinal cord MRI template.
The code is available at https://github.com/neuropoly/template. While this project introduced a
generic reference coordinate system, based on vertebral levels and the pontomedullary junction
as origin, one limitation is the choice of this coordinate system. A coordinate system based on
spinal segments would be more suitable for functional analysis. However, the acquisition of MRI
images with high enough resolution to delineate the spinal roots is still challenging. Finally, several
challenges in the automation of spinal cord MRI processing remain, including the robust detection
and identification of vertebral levels, particularly in the case of small fields-of-view.
This project introduced key developments for the analysis of spinal cord MRI data. Many more
developments are still required to bring them into clinics and to improve our understanding of
diseases affecting the spinal cord. Indeed, clinical applications require the improvement of the
robustness and the automation of the proposed processing and analysis tools. Particularly, the
detection and segmentation of spinal cord structures, including vertebral labeling and white/gray
matter segmentation, is still challenging, given the lower quality and resolution of standard clinical
MRI acquisitions. The tools developed and validated here have the potential to improve our understanding and the characterization of diseases affecting the spinal cord and will have a significant impact on the neuroimaging community.