104 research outputs found

    Parallel optimization of fiber bundle segmentation for massive tractography datasets

    Full text link
    We present an optimized algorithm that performs automatic classification of white matter fibers based on a multi-subject bundle atlas. We implemented a parallel algorithm that improves upon its previous version in both execution time and memory usage. The new version uses the local memory of each processor, which reduces execution time and therefore allows the analysis of larger subject and/or atlas datasets. As a result, the segmentation of a subject with 4,145,000 fibers is reduced from about 14 minutes in the previous version to about 6 minutes, yielding a speedup of 2.34. In addition, the new algorithm reduces the memory consumption of the previous version by a factor of 0.79. Comment: This research has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Actions H2020-MSCA-RISE-2015 BIRDS GA No. 690941, CONICYT PFCHA/DOCTORADO NACIONAL/2016-21160342, CONICYT FONDECYT 1161427, CONICYT PIA/Anillo de Investigación en Ciencia y Tecnología ACT172121, CONICYT BASAL FB0008 and from CONICYT Basal FB000
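The classification step the abstract describes (assigning each fiber to its nearest atlas bundle, with each processor keeping the atlas in local memory and handling one chunk of fibers) can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the names (`classify_chunk`, `N_POINTS`) and the mean point-wise distance (such atlases often use a maximum point-wise distance instead) are assumptions.

```python
import numpy as np

N_POINTS = 21  # points per resampled fiber (illustrative choice)

def fiber_bundle_distance(fiber, centroid):
    """Mean point-wise distance between two resampled polylines.
    Fibers have no canonical orientation, so the flipped match is also tried."""
    direct = np.linalg.norm(fiber - centroid, axis=1).mean()
    flipped = np.linalg.norm(fiber[::-1] - centroid, axis=1).mean()
    return min(direct, flipped)

def classify_chunk(fibers, centroids, threshold):
    """Label each fiber with its closest atlas bundle, or -1 if none is close.
    In the parallel version, each processor would receive one such chunk and
    keep the atlas centroids in its local memory."""
    labels = []
    for f in fibers:
        d = [fiber_bundle_distance(f, c) for c in centroids]
        best = int(np.argmin(d))
        labels.append(best if d[best] <= threshold else -1)
    return labels

rng = np.random.default_rng(0)
centroids = [rng.normal(size=(N_POINTS, 3)) for _ in range(3)]
fibers = [centroids[1] + rng.normal(scale=0.01, size=(N_POINTS, 3))]
print(classify_chunk(fibers, centroids, threshold=1.0))  # → [1]
```

Distributing the fiber chunks across processes (rather than sharing one global atlas copy) is what yields the reported time and memory gains.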

    Improving the Tractography Pipeline: on Evaluation, Segmentation, and Visualization

    Get PDF
    Recent advances in tractography allow connectomes to be constructed in vivo, with applications in, for example, brain tumor surgery and the understanding of brain development and disease. The large size of the data produced by these methods leads to a variety of problems, including how to evaluate tractography outputs, how to develop faster processing and clustering algorithms, and how to develop advanced visualization methods for verification and exploration. This thesis presents several advances in these fields. First, an evaluation of the robustness to noise of multiple commonly used tractography algorithms is presented. It employs a Monte Carlo simulation of measurement noise on a constructed ground-truth dataset. As a result of this evaluation, evidence for the robustness of global tractography is found, and algorithmic sources of uncertainty are identified. The second contribution is a fast clustering algorithm for tractography data based on k-means, with vector fields representing the flow of each cluster. It is demonstrated that this algorithm can handle large tractography datasets due to its linear time and memory complexity, and that it can effectively integrate interrupted fibers that would be rejected as outliers by other algorithms. Furthermore, a visualization for the exploration of structural connectomes is presented. It uses illustrative rendering techniques for efficient presentation of connecting fiber bundles in context in anatomical space, and visual hints are employed to improve the perception of spatial relations. Finally, a visualization method with application to the exploration and verification of probabilistic tractography is presented, which improves on the previously presented Fiber Stippling technique. It is demonstrated that the method is able to show multiple overlapping tracts in context and to correctly present crossing fiber configurations.
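The k-means-based clustering idea can be illustrated with a toy sketch: resample every streamline to a fixed number of points, flatten the coordinates into a feature vector, and run plain Lloyd's k-means. All names here are illustrative, and the thesis's per-cluster vector-field representation is omitted.

```python
import numpy as np

def resample(streamline, n=12):
    """Linearly resample a polyline of shape (m, 3) to n equally spaced points."""
    seg = np.linalg.norm(np.diff(streamline, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]
    ti = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(ti, t, streamline[:, d]) for d in range(3)], axis=1)

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means; farthest-point init keeps this toy deterministic."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.stack(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two synthetic "bundles": noisy copies of two well-separated straight lines.
rng = np.random.default_rng(1)
base_a = np.linspace([0, 0, 0], [0, 0, 10], 30)
base_b = np.linspace([5, 5, 0], [5, 5, 10], 30)
lines = [base_a + rng.normal(scale=0.1, size=(30, 3)) for _ in range(10)] + \
        [base_b + rng.normal(scale=0.1, size=(30, 3)) for _ in range(10)]
X = np.stack([resample(s).ravel() for s in lines])
labels = kmeans(X, k=2)
print(len(set(labels[:10])), len(set(labels[10:])))  # → 1 1 (each bundle is one cluster)
```

Because each streamline is reduced to one fixed-length vector, both time and memory grow linearly in the number of streamlines, which is the property the abstract emphasizes.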

    Diffusion MRI tractography for oncological neurosurgery planning: Clinical research prototype

    Get PDF


    Unsupervised deep learning of human brain diffusion magnetic resonance imaging tractography data

    Get PDF
    Diffusion magnetic resonance imaging is a non-invasive technique providing insight into the organizational microstructure of biological tissues. The computational methods that exploit the orientational preference of diffusion in restricted structures to reveal the brain's white matter axonal pathways are called tractography. In recent years, a variety of tractography methods have been used successfully to uncover the brain's white matter architecture. Yet these reconstruction techniques suffer from a number of shortcomings derived from fundamental ambiguities inherent to the orientation information. This has dramatic consequences, since current tractography-based white matter connectivity maps are dominated by false-positive connections. The large proportion of invalid pathways recovered thus remains one of the main challenges tractography must solve to obtain a reliable anatomical description of the white matter, and innovative methodological approaches are required to help resolve these questions. Recent advances in computational power and data availability have made it possible to apply modern machine learning approaches successfully to a variety of problems, including computer vision and image analysis tasks. These methods model and learn the underlying patterns in the data, allow accurate predictions on new data, and can yield compact representations of the intrinsic features of the data of interest. Modern data-driven approaches, grouped under the family of deep learning methods, are being adopted to solve medical imaging data analysis tasks, including tractography. In this context, the proposed methods are less dependent on the constraints imposed by current tractography approaches. Deep learning-inspired methods are therefore suited to the required paradigm shift and may open new modeling possibilities, thus improving the state of the art in tractography. In this thesis, a new paradigm based on representation learning techniques is proposed to generate and analyze tractography data. By harnessing autoencoder architectures, this work explores their ability to find an optimal code to represent the features of white matter fiber pathways. The contributions exploit such representations for a variety of tractography-related tasks, including efficient (i) filtering and (ii) clustering of results generated by other methods, and (iii) the white matter pathway reconstruction itself using a generative method. The methods issued from this thesis have been named (i) FINTA (Filtering in Tractography using Autoencoders), (ii) CINTA (Clustering in Tractography using Autoencoders), and (iii) GESTA (Generative Sampling in Bundle Tractography using Autoencoders), respectively. The proposed methods' performance is assessed against current state-of-the-art methods on synthetic data and on in vivo healthy adult human brain data. Results show that (i) the introduced filtering method has superior sensitivity and specificity over other state-of-the-art methods; (ii) the clustering method groups streamlines into anatomically coherent bundles with a high degree of consistency; and (iii) the generative streamline sampling technique successfully improves the white matter coverage in hard-to-track bundles. In summary, this thesis unlocks the potential of deep autoencoder-based models for white matter data analysis and paves the way towards delivering more reliable tractography data.
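The filtering idea (flag streamlines that the learned low-dimensional code represents poorly) can be sketched with PCA standing in as a *linear* autoencoder. The real FINTA method uses a trained deep autoencoder and latent-space criteria; every name and threshold below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Plausible" streamlines: noisy copies of a template bundle, flattened to vectors.
template = np.linspace([0, 0, 0], [0, 10, 0], 16)
train = np.stack([(template + rng.normal(scale=0.05, size=(16, 3))).ravel()
                  for _ in range(200)])

# Fit the linear "encoder": top principal components of the training set.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
components = Vt[:5]                       # 5-dimensional latent code

def reconstruction_error(x):
    code = (x - mean) @ components.T      # encode
    recon = code @ components + mean      # decode
    return np.linalg.norm(x - recon)      # high error => implausible streamline

good = (template + rng.normal(scale=0.05, size=(16, 3))).ravel()
bad = rng.normal(scale=3.0, size=48)      # an implausible "streamline"
print(reconstruction_error(good) < reconstruction_error(bad))  # → True
```

A streamline far from the manifold spanned by anatomically plausible training data reconstructs poorly, so thresholding the error acts as a filter.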

    Physical and digital phantoms for validating tractography and assessing artifacts

    Get PDF
    Fiber tractography is widely used to non-invasively map white-matter bundles in vivo using diffusion-weighted magnetic resonance imaging (dMRI). As is the case for all scientific methods, proper validation is a key prerequisite for the successful application of fiber tractography, be it in the area of basic neuroscience or in a clinical setting. It is well known that the indirect estimation of fiber tracts from the local diffusion signal is highly ambiguous and extremely challenging. Furthermore, the validation of fiber tractography methods is hampered by the lack of a real ground truth. This is caused by the extremely complex brain microstructure, which is not directly observable non-invasively and which underlies the huge network of long-range fiber connections that are the actual target of fiber tractography methods. As a substitute for in vivo data with a real ground truth that could be used for validation, a widely and successfully employed approach is the use of synthetic phantoms. In this work, we provide an overview of the state of the art in physical and digital phantoms, answering the following guiding questions: “What are dMRI phantoms and what are they good for?”, “What would the ideal phantom for validating fiber tractography look like?” and “What phantoms, phantom datasets and tools used for their creation are available to the research community?”. We further discuss the limitations and opportunities that come with the use of dMRI phantoms, and what future directions this field of research might take.
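A minimal *digital* phantom, in the sense the overview describes, is a known ground-truth geometry from which noisy streamlines are generated, so that tractography output can be scored against it. The sketch below uses an arc as the ground-truth bundle; the function names are illustrative assumptions, not from any particular phantom toolkit.

```python
import numpy as np

def ground_truth_bundle(n_streamlines=50, n_points=40, noise=0.2, seed=0):
    """A known generating curve (an arc) plus noisy streamlines sampled from it."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0.0, np.pi, n_points)
    curve = np.stack([10 * np.cos(theta), 10 * np.sin(theta),
                      np.zeros(n_points)], axis=1)
    return curve, [curve + rng.normal(scale=noise, size=curve.shape)
                   for _ in range(n_streamlines)]

def mean_error_to_truth(streamline, curve):
    """Average distance from each streamline point to its nearest ground-truth point."""
    d = np.linalg.norm(streamline[:, None] - curve[None], axis=2)
    return d.min(axis=1).mean()

curve, bundle = ground_truth_bundle()
errors = [mean_error_to_truth(s, curve) for s in bundle]
print(round(float(np.mean(errors)), 2))
```

Because the generating curve is known exactly, such scores are unambiguous, which is precisely what in vivo data cannot offer.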

    Density-based algorithms for active and anytime clustering

    Get PDF
    Data-intensive fields like biology, medicine, and neuroscience require effective and efficient data mining technologies, and advanced data acquisition methods produce data of constantly increasing volume and complexity. As a consequence, the need for new data mining technologies to deal with complex data has emerged during the last decades. In this thesis, we focus on the data mining task of clustering, in which objects are separated into groups (clusters) such that objects inside a cluster are more similar to each other than to objects in different clusters. In particular, we consider density-based clustering algorithms and their applications in biomedicine. The core idea of the density-based clustering algorithm DBSCAN is that each object within a cluster must have a certain number of other objects inside its neighborhood. Compared with other clustering algorithms, DBSCAN has many attractive benefits: for example, it can detect clusters of arbitrary shape and is robust to outliers. Thus, DBSCAN has attracted a lot of research interest during the last decades, with many extensions and applications. In the first part of this thesis, we aim at developing new algorithms based on the DBSCAN paradigm to deal with the new challenges of complex data, particularly expensive distance measures and incomplete availability of the distance matrix. Like many other clustering algorithms, DBSCAN suffers from poor performance when facing expensive distance measures for complex data. To tackle this problem, we propose a new algorithm based on the DBSCAN paradigm, called Anytime Density-based Clustering (A-DBSCAN), that works in an anytime scheme: in contrast to the original batch scheme of DBSCAN, A-DBSCAN first produces a quick approximation of the clustering result and then continuously refines the result during its further run.
Experts can interrupt the algorithm, examine the results, and choose between (1) stopping the algorithm at any time, whenever they are satisfied with the result, to save runtime, and (2) continuing the algorithm to achieve better results. Such an anytime scheme has proven in the literature to be a very useful technique when dealing with time-consuming problems. We also introduce an extended version of A-DBSCAN, called A-DBSCAN-XS, which is more efficient and effective than A-DBSCAN when dealing with expensive distance measures. Since DBSCAN relies on the cardinality of the neighborhood of objects, it requires the full distance matrix. For complex data, these distances are usually expensive, time-consuming, or even impossible to acquire due to high cost, high time complexity, or noisy and missing data. Motivated by these difficulties of acquiring the distances among objects, we propose another approach for DBSCAN, called Active Density-based Clustering (Act-DBSCAN). Given a budget limitation B, Act-DBSCAN is only allowed to use up to B pairwise distances, and should ideally produce the same result as if it had the entire distance matrix at hand. The general idea of Act-DBSCAN is that it actively selects the most promising pairs of objects for which to calculate distances, and tries to approximate the desired clustering result as closely as possible with each distance calculation. This scheme provides an efficient way to reduce the total cost needed to perform the clustering, and thus limits the potential weakness of DBSCAN when dealing with the distance-sparseness problem of complex data. As a fundamental data clustering algorithm, density-based clustering has many applications in diverse fields. In the second part of this thesis, we focus on an application of density-based clustering in neuroscience: the segmentation of white matter fiber tracts in the human brain acquired with Diffusion Tensor Imaging (DTI).
We propose a model to evaluate the similarity between two fibers as a combination of structural similarity and connectivity-related similarity of fiber tracts. Various distance measure techniques from fields such as time-sequence mining are adapted to calculate the structural similarity of fibers, and density-based clustering is used as the segmentation algorithm. We show how A-DBSCAN and A-DBSCAN-XS serve as novel solutions for the segmentation of massive fiber datasets, and provide unique features to assist experts during the fiber segmentation process.
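The core DBSCAN idea the abstract builds on (a point is a "core" point if at least `min_pts` neighbors lie within `eps`, and clusters grow by expansion from core points, with unreachable points labeled noise) can be sketched as textbook DBSCAN. This is the baseline algorithm only, not the A-DBSCAN/Act-DBSCAN variants themselves; the 1-D distance function is an illustrative default.

```python
def dbscan(points, eps, min_pts, dist=lambda a, b: abs(a - b)):
    """Textbook DBSCAN. Returns one cluster id per point; -1 means noise."""
    labels = {}  # point index -> cluster id

    def neighbors(i):
        # eps-neighborhood, including the point itself
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(len(points)):
        if i in labels:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1           # provisionally noise; may be claimed later
            continue
        labels[i] = cluster
        queue = list(seeds)
        while queue:                 # expand the cluster from each core point
            j = queue.pop()
            if labels.get(j) == -1:
                labels[j] = cluster  # former noise becomes a border point
            if j in labels:
                continue
            labels[j] = cluster
            nj = neighbors(j)
            if len(nj) >= min_pts:   # only core points keep expanding
                queue.extend(nj)
        cluster += 1
    return [labels[i] for i in range(len(points))]

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 100.0]
print(dbscan(data, eps=0.5, min_pts=2))  # → [0, 0, 0, 1, 1, 1, -1]
```

The neighborhood queries are exactly where the thesis intervenes: A-DBSCAN refines them incrementally over time, and Act-DBSCAN spends a distance budget B only on the most informative pairs.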

    Adaptive microstructure-informed tractography for accurate brain connectivity analyses

    Get PDF
    The human brain has been a subject of deep interest for centuries, given its central role in controlling and directing the actions and functions of the body in response to external stimuli. Neural tissue is primarily constituted of neurons; together with the dendrites and nerve synapses, these constitute the gray matter (GM), which plays a major role in cognitive functions. The information processed in the GM travels from one region of the brain to another along nerve cell projections called axons. Together these constitute the white matter (WM), whose wiring organization remains challenging to uncover. The relationship between the structural organization of the brain and its function has been deeply investigated in humans and animals, based on the assumption that the anatomical architecture determines the network dynamics. In response, many different imaging techniques have arisen, among which diffusion-weighted magnetic resonance imaging (DW-MRI) has triggered tremendous hopes and expectations. Diffusion-weighted imaging measures both restricted and unrestricted diffusion, i.e. the degree of movement freedom of water molecules, allowing the tissue fiber architecture to be mapped in vivo and non-invasively. Based on DW-MRI data, tractography exploits information on the local fiber orientation to recover global fiber pathways, called streamlines, that represent groups of axons. This, in turn, allows the WM structural connectivity to be inferred, and has become widely used in many clinical applications such as diagnosis, virtual dissection, and surgical planning. However, despite this unique and compelling ability, data acquisition still suffers from technical limitations, and recent studies have highlighted the poor anatomical accuracy of the reconstructions obtained with this technique and challenged its effectiveness for studying brain connectivity. The focus of this Ph.D. project is to specifically address these limitations and to improve the anatomical accuracy of structural connectivity estimates. To this aim, we developed a global optimization algorithm that exploits micro- and macro-structure information, introducing an iterative procedure that uses the underlying tissue properties to drive the reconstruction with a semi-global approach. Then, we investigated the possibility of dynamically adapting the position of a set of candidate streamlines while embedding an anatomical prior of trajectory smoothness and adapting the configuration based on the observed data. Finally, we introduced the concept of bundle-o-graphy by implementing a method to model groups of streamlines, based on the concept that axons are organized into fascicles, adapting their shape and extent based on the underlying microstructure.
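The "trajectory smoothness" anatomical prior mentioned above can be illustrated with one simple bending score: the mean turning angle between consecutive segments of a candidate streamline, which an optimizer could penalize when adapting streamline positions. This is a hedged sketch only; the actual energy used in the thesis is not reproduced here, and the function name is illustrative.

```python
import numpy as np

def mean_turning_angle(streamline):
    """Mean angle (radians) between consecutive segments of an (n, 3) polyline.
    Smooth, anatomically plausible trajectories score low; jagged ones score high."""
    seg = np.diff(streamline, axis=0)
    seg = seg / np.linalg.norm(seg, axis=1, keepdims=True)  # unit tangents
    cosang = np.clip((seg[:-1] * seg[1:]).sum(axis=1), -1.0, 1.0)
    return float(np.arccos(cosang).mean())

t = np.linspace(0, 1, 20)
straight = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
rng = np.random.default_rng(0)
jagged = straight + rng.normal(scale=0.05, size=straight.shape)
print(mean_turning_angle(straight) < mean_turning_angle(jagged))  # → True
```

Combining such a smoothness term with a data-fidelity term from the microstructure signal is the general shape of the semi-global optimization the abstract describes.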
