8 research outputs found

    Robust Cardiac Motion Estimation using Ultrafast Ultrasound Data: A Low-Rank-Topology-Preserving Approach

    Cardiac motion estimation is an important diagnostic tool for detecting heart disease, and it has been explored with modalities such as MRI and conventional ultrasound (US) sequences. US cardiac motion estimation still presents challenges because of complex motion patterns and the presence of noise. In this work, we propose a novel approach to estimating cardiac motion from ultrafast ultrasound data. Our solution is based on a variational formulation of the L2-regularized class. The displacement is represented by a lattice of B-splines, and robustness is ensured by a maximum-likelihood-type estimator. While this is an important part of our solution, the main highlight of this paper is the combination of a low-rank data representation with topology preservation. The low-rank representation (achieved by keeping the k dominant singular values of a Casorati matrix assembled from the data sequence) speeds up the global solution and reduces noise. Topology preservation (achieved by monitoring the Jacobian determinant) rules out distortions while carefully controlling the size of allowed expansions and contractions. Our variational approach is evaluated on a realistic dataset as well as a simulated one. We demonstrate through careful numerical experiments how the proposed solution handles complex deformations. While maintaining the accuracy of the solution, the low-rank preprocessing is shown to speed up the convergence of the variational problem. Beyond cardiac motion estimation, our approach is promising for the analysis of other organs that experience motion. Comment: 15 pages, 10 figures, Physics in Medicine and Biology, 201
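    The abstract names two concrete ingredients that can be sketched compactly. Below is a minimal illustrative Python sketch, not the authors' implementation: the frame layout, function names and the plain truncated SVD are assumptions, and the paper's actual estimator and regularization are not reproduced.

```python
import numpy as np

def low_rank_preprocess(frames, k):
    """Rank-k approximation of an image sequence via its Casorati matrix.

    frames: (T, H, W) array; k: number of dominant singular values kept.
    Truncating the SVD denoises the sequence before motion estimation.
    """
    T, H, W = frames.shape
    C = frames.reshape(T, H * W).T              # Casorati matrix: one column per frame
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    C_k = (U[:, :k] * s[:k]) @ Vt[:k, :]        # rank-k reconstruction
    return C_k.T.reshape(T, H, W)

def jacobian_determinant(u, v):
    """det(J) of the map (x, y) -> (x + u, y + v) on a pixel grid.

    Monitoring this map enforces topology preservation: det(J) <= 0 flags
    a fold, and values far from 1 flag excessive expansion or contraction.
    """
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    return (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx
```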

    Constrained parameterization with applications to graphics and image processing.

    Surface parameterization establishes a transformation that maps the points on a surface to a specified parametric domain, and it has been widely applied in computer graphics and image processing. The challenge is that the usual positional constraints often cause triangle flipping in the parameterization (so-called foldovers). Additionally, distortion is inevitable, so a rigidity constraint is often imposed as well. In general, the constraints are application-dependent. This thesis therefore focuses on various application-dependent constraints and investigates foldover-free constrained parameterization approaches for each. Such constraints include simple positional constraints, a tradeoff between positional constraints and the rigidity constraint, and the rigidity constraint alone. From the perspective of applications, we aim at foldover-free parameterization methods with positional constraints, as-rigid-as-possible parameterization with positional constraints, and a well-shaped, well-spaced pre-processing procedure for low-distortion parameterizations. The first contribution of this thesis is an RBF-based re-parameterization algorithm for foldover-free constrained texture mapping. The basic idea is to split the usual parameterization procedure into two steps: 2D parameterization with convex-boundary constraints, followed by 2D re-parameterization with the interior positional constraints. We further extend the 2D re-parameterization approach with interior positional constraints to higher-dimensional data such as volume data and polyhedra. The second contribution is a vector-field-based deformation algorithm for 2D mesh deformation and image warping. Many existing deformation approaches employ basis functions (including our RBF-based re-parameterization algorithm). The main problem is that such algorithms have infinite support; that is, any local deformation leads to small changes over the whole domain. Our vector-field-based algorithm carries out local deformation effectively while reducing distortion as much as possible. The third contribution is a pre-processing procedure for surface parameterization. Except for developable surfaces, current parameterization approaches inevitably incur large distortion. To reduce distortion, we propose a pre-processing procedure consisting of mesh partition and mesh smoothing. The resulting meshes are partitioned into small patches with rectangle-like boundaries, and they are well-shaped and well-spaced. This pre-processing evidently improves the quality of meshes for low-distortion parameterization.
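    As a rough illustration of re-parameterization with interior positional constraints, the sketch below interpolates a displacement field from point constraints with Gaussian RBFs. This is a generic scattered-data warp assumed for illustration only; the thesis's actual kernel choice and its foldover-free guarantees are not captured here.

```python
import numpy as np

def rbf_warp(src_pts, dst_pts, query, reg=1e-8):
    """Warp query points so that each src point lands on its dst point.

    src_pts, dst_pts: (N, 2) constraint pairs; query: (M, 2) points to map.
    Gaussian RBFs interpolate the constraint displacements smoothly; note
    that nothing here prevents foldovers under extreme constraints.
    """
    d = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    sigma = d[d > 0].mean()                         # heuristic kernel width
    K = np.exp(-(d / sigma) ** 2)
    # Solve for RBF weights that reproduce the constraint displacements.
    w = np.linalg.solve(K + reg * np.eye(len(src_pts)), dst_pts - src_pts)
    dq = np.linalg.norm(query[:, None, :] - src_pts[None, :, :], axis=-1)
    return query + np.exp(-(dq / sigma) ** 2) @ w
```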

    Development of an acquisition system and registration algorithms for modeling displacement and deformation of contours in digital images

    The main aim of this thesis is the application of an image acquisition system for assessing and modeling the deformation and displacement of objects acquired in digital images. The core technique is image registration: a set of algorithms and methods that estimate the transformation mapping the space of one image onto that of another. Given images of the same object in different positions or configurations, knowing this transformation makes it possible to determine the displacement and deformation of any desired point. The thesis describes the existing algorithms along with their most important properties. On this basis, a novel image registration algorithm is developed that solves the Laplace equation for an electrostatic field. This approach is possible because the deformation gradient corresponds to the field lines of the electrostatic field obtained by solving the Laplace equation, and such a field satisfies all the essential properties a registration transformation should have: smoothness of the deformation field, the existence of an inverse function, and the prohibition of crossing field lines. The equation is solved and the required transformation determined using the finite element method with a minimum-energy formulation. One motivation for this work was the problem of evaluating the mechanical properties of aortic tissue affected by an aneurysm. The thesis describes the implementation and operation of a system used to characterize the mechanical properties of the aorta, which outputs the displacements of a set of tissue points together with the values of the fluid pressure causing those displacements. Strains at given moments in time were estimated using image segmentation and edge extraction, followed by image registration. Using the strain values in a mechanical model of the tissue, with a genetic algorithm as the optimization technique, the Young's modulus was estimated. Gait analysis based on image data is another challenge in non-invasive diagnosis and monitoring of both diagnosed patients and healthy subjects. This thesis presents a method for determining the mechanical stress of cartilage using camera images and the values of the normal ground reaction force generated during walking. To assess cartilage deformation, image registration was applied between the camera images and images obtained by computed tomography. The mechanical parameters of the cartilage (Young's modulus and Poisson's ratio) were estimated through optimization.
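    The thesis solves the Laplace equation with finite elements and a minimum-energy formulation; as a simpler stand-in, the sketch below relaxes the same equation with finite differences under Dirichlet boundary data. It only illustrates why a harmonic field is attractive for registration (smoothness, maximum principle, no crossing field lines); the grid setup and names are assumptions.

```python
import numpy as np

def solve_laplace(boundary, fixed, n_iter=10000, tol=1e-6):
    """Jacobi relaxation for Laplace's equation on a 2D grid.

    boundary: array holding Dirichlet values where `fixed` is True (the
    whole image border is assumed fixed); the remaining cells iterate
    toward the harmonic solution, whose smoothness and maximum principle
    are the properties the registration transformation needs.
    """
    phi = boundary.astype(float).copy()
    interior = ~fixed
    for _ in range(n_iter):
        # Average of the four neighbors (discrete Laplace stencil).
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                      + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        change = np.max(np.abs(avg[interior] - phi[interior]))
        phi[interior] = avg[interior]
        if change < tol:
            break
    return phi
```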

    Estimating and understanding motion: from diagnostic to robotic surgery

    Estimating and understanding motion from an image sequence is a central topic in computer vision. Interest in this topic is high because many events in our environment are dynamic, which makes motion estimation and understanding a natural component and a key factor in a wide range of applications, including object recognition, 3D shape reconstruction, autonomous navigation and medical diagnosis. We focus on the medical domain, in which understanding the human body for clinical purposes requires retrieving the organs' complex motion patterns; this is in general a hard problem when using only image data. In this thesis we address the problem by posing the question: how can we achieve realistic motion estimation that offers better clinical understanding? We answer it by using a variational formulation as a basis for understanding one of the most complex motions in the human body, the motion of the heart, through three applications: (i) cardiac motion estimation for diagnosis, (ii) force estimation and (iii) motion prediction, the latter two for robotic surgery. Firstly, we focus on a central topic in cardiac imaging, the estimation of cardiac motion. The aim is to offer physicians objective and understandable measures that help in diagnosing cardiovascular disease. We employ ultrafast ultrasound data and motion-imaging tools drawn from diverse areas, such as low-rank analysis and variational deformation, to perform realistic cardiac motion estimation. The significance is that by taking low-rank data with carefully chosen penalization, synergies can be created in this complex variational problem. We demonstrate how our proposed solution deals with complex deformations through careful numerical experiments using realistic and simulated data. We then move from diagnosis to robotic surgery, where surgeons perform delicate procedures remotely through robotic manipulators without directly interacting with the patients. As a result, they lack force feedback, an important primary sense for increasing surgeon-patient transparency and avoiding injuries and high mental workload. To solve this problem, we follow the conservation principles of continuum mechanics, in which the change in shape of an elastic object is directly proportional to the force applied. We therefore create a variational framework to acquire the deformation that the tissues undergo due to an applied force, and this information is used in a learning system to find the nonlinear relationship between the given data and the applied force. We carried out experiments with in-vivo and ex-vivo data and combined statistical, graphical and perceptual analyses to demonstrate the strength of our solution. Finally, we explore robotic cardiac surgery, which allows complex procedures such as Off-Pump Coronary Artery Bypass Grafting (OPCABG). This procedure avoids the complications associated with Cardiopulmonary Bypass (CPB), since the heart is not arrested and the surgery is performed on a beating heart. Surgeons therefore have to deal with a dynamic target that compromises their dexterity and the surgery's precision. To compensate for the heart motion, we propose a solution composed of three elements: an energy function to estimate the 3D heart motion, a specular highlight detection strategy and a prediction approach that increases the robustness of the solution. We evaluate our solution using phantom and realistic datasets. We conclude the thesis by reporting our findings on these three applications and highlighting the dependency between motion estimation and motion understanding in any dynamic event, particularly in clinical scenarios.
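    The force-estimation step described above pairs an estimated tissue deformation with a learning system. As a hedged sketch of that second stage, the kernel ridge regressor below maps deformation descriptors to a scalar force; the actual features, learning model and data of the thesis are not specified here, so everything in the block is an assumption.

```python
import numpy as np

def fit_force_regressor(X, y, sigma=1.0, lam=1e-3):
    """Kernel ridge regression from deformation descriptors X (N, D) to
    measured forces y (N,); returns a predictor for new descriptors.

    A generic nonlinear regressor stands in for the thesis's learning
    system linking tissue deformation to applied force.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))            # Gaussian kernel matrix
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

    def predict(Xq):
        d2q = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2q / (2.0 * sigma ** 2)) @ alpha

    return predict
```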

    Non-rigid registration and automatic segmentation of liver perfusion images

    Medical context -- Literature review -- Registration with an incompressibility constraint -- Segmentation based on large-deformation registration -- A unified and efficient framework -- General discussion

    Automation and optimization of magnetic resonance data analysis for developing rodent brains

    White matter injuries observed in preterm infants may have heavy consequences for the cognitive, behavioral and social development of the child, and it is imperative to act early to avoid irreversible repercussions. Unfortunately, for now, the efficacy of neuroprotective treatments can only be assessed at an advanced developmental stage, when it is already too late to try an alternative treatment. Finding biomarkers that correlate with the neurodevelopmental outcome and allow the efficacy of a treatment to be assessed at an early stage would be greatly beneficial. Magnetic resonance imaging (MRI) is a prominent technology with potential for establishing quantitative biomarkers, and its non-invasive nature allows this sensitive population to be studied without additional risk. This thesis assesses the use of MRI technologies for the study of diffuse white matter injury in an animal model. The model reproduces the lesions observed in human preterms by inducing an inflammatory reaction in the neonate rat brain through an injection of lipopolysaccharides (LPS) at postnatal day 3 (P3). The main goal of this project is to develop user-independent, optimal tools for the analysis of magnetic resonance data of the rat pup brain, specifically for diffusion tensor imaging (DTI) and magnetic resonance spectroscopy (MRS) data. The secondary goal is to apply these tools to the study of the animal model of white matter injury. The data comprise two sets. The first consists of DTI data acquired ex vivo at P24 on two groups: one sham and one that underwent an LPS injection. The second comprises both DTI and MRS data acquired in vivo at the acute phase of injury (P4) on three groups: sham, LPS, and a third group injected with LPS that received a neuroprotective treatment by administration of the recombinant antagonist of IL-1ÎČ (IL1-Ra), a pro-inflammatory cytokine. There are several ways to analyze DTI data: by region of interest, by histograms, by tractography, or by voxelwise comparisons. In this thesis, two voxelwise methods were studied: "Voxel-Based Analysis" (VBA) and "Tract-Based Spatial Statistics" (TBSS). VBA compares populations by computing a statistic at every voxel of the image. TBSS, developed to alleviate some limitations of VBA, runs the statistical tests on a subset of white matter voxels, the white matter skeleton. Both methods rely strongly on one image processing step, spatial normalization, which ensures, to a certain extent, that corresponding voxels belong to the same anatomical region across subjects.
    Three normalization approaches, each using a different registration algorithm, were implemented: Symmetric Group-Wise Normalization using the Symmetric Normalization (SyN) algorithm of the Advanced Normalization Tools (ANTs) toolbox; the unbiased normalization of the Diffusion Tensor Toolkit (DTI-TK); and a normalization based on the most representative subject of the population using the FMRIB Non-linear Image Registration Tool (FNIRT) of the FMRIB Software Library (FSL). Automatic diffusion analysis can therefore be performed with any combination of a spatial normalization (ANTs, DTI-TK, FSL) and a voxelwise analysis (VBA, TBSS). Determining the best combination is not straightforward, and there is no principled way to choose one over another. This was studied in the submitted paper "Near equivalence of three automated diffusion tensor analysis pipelines in a neonate rat model of periventricular leukomalacia", in which each implemented normalization method was tested in combination with VBA and TBSS. The results demonstrate great coherence among the tested pipelines but also underline limitations of both VBA and TBSS. The study also suggests that, for the rat pup data of this animal model, combining DTI-TK normalization with TBSS may yield a more robust analysis. Other analysis modules implemented for the DTI data include analysis by histogram and automatic parcellation of sub-cortical white matter. MRS data analysis does not depend as strongly on the processing pipeline as diffusion data, so a single user-independent pipeline was implemented, incorporating data preprocessing and automatic metabolite quantification with the Linear Combination Model (LCModel). These pipelines were applied to the study of the animal model, and the results demonstrated that magnetic resonance technologies are sensitive to these injuries. The ex vivo diffusion data exhibited a persistent, diffuse injury of the sub-cortical white matter on the ipsilateral side. At the acute phase, the in vivo diffusion data showed a strong decrease of axial and radial diffusivities. The spectroscopy data also revealed metabolic perturbations, essentially a decrease of N-acetylaspartate, glutamate and phosphorylethanolamine and an increase of lipids and macromolecules. The IL1-Ra neuroprotective treatment appeared effective and moderated the amplitude of these changes in both DTI and MRS. In conclusion, various state-of-the-art analysis tools for DTI and MRS data were developed and successfully applied to the study of an animal model of diffuse white matter injury of the preterm baby. The results indicate that DTI and MRS are promising tools for characterizing and monitoring this pathology, being sensitive to the injury at both the acute and chronic stages. However, to further strengthen the interpretation of these results, the observations should be supported by other imaging technologies such as immuno-histology, electron microscopy or optical coherence tomography.
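    As a small illustration of the voxelwise comparison that VBA performs after spatial normalization, the sketch below runs an uncorrected two-sample t-test over masked voxels. The shapes and names are assumptions, and a real pipeline (VBA or TBSS) adds skeletonization and multiple-comparison control.

```python
import numpy as np
from scipy import stats

def voxelwise_ttest(group_a, group_b, mask):
    """VBA-style group comparison of spatially normalized scalar maps (e.g. FA).

    group_a, group_b: (N_subjects, X, Y, Z) arrays already in a common
    space; mask: boolean (X, Y, Z) brain mask. Returns an uncorrected
    p-value map (1.0 outside the mask).
    """
    t, p = stats.ttest_ind(group_a[:, mask], group_b[:, mask], axis=0)
    p_map = np.ones(mask.shape)
    p_map[mask] = p
    return p_map
```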