25 research outputs found

    Watershed-based Segmentation of the Midsagittal Section of the Corpus Callosum in Diffusion MRI

    Get PDF
    Abstract - The corpus callosum (CC) is one of the most important white matter structures of the brain, interconnecting the two cerebral hemispheres. The CC is related to several diseases, including dyslexia, autism, multiple sclerosis and lupus, which makes its study even more important. We propose here a new approach for fully automatic segmentation of the midsagittal section of the CC in magnetic resonance diffusion tensor images, including the automatic determination of the midsagittal slice of the brain. It uses the watershed transform and is performed on the fractional anisotropy map weighted by the projection of the principal eigenvector in the left-right direction. Experiments with real diffusion MRI data showed that the proposed method is able to quickly segment the CC and to determine the midsagittal slice without any user intervention. Since it is simple, fast and does not require parameter settings, the proposed method is well suited for clinical applications.
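
    A rough sketch of the core idea, weighting the FA map by the left-right component of the principal eigenvector and then applying a watershed, is given below. It assumes a precomputed tensor eigendecomposition and uses the scikit-image watershed; the seed selection and thresholds are placeholders, not the authors' settings.

    # Sketch: watershed on an FA map weighted by the left-right (|x|) component
    # of the principal eigenvector, for a known midsagittal slice.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.segmentation import watershed

    def segment_cc_slice(fa, v1):
        """fa: 2D fractional anisotropy map; v1: (H, W, 3) principal eigenvectors."""
        weighted = fa * np.abs(v1[..., 0])                      # emphasize left-right oriented fibers
        seeds, _ = ndi.label(weighted > 0.9 * weighted.max())   # crude marker choice (placeholder)
        return watershed(-weighted, seeds, mask=weighted > 0.2)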

    Interactive Segmentation and Visualization of DTI Data Using a Hierarchical Watershed Representation

    Get PDF
    Magnetic resonance diffusion tensor imaging (DTI) measures diffusion of water molecules and is used to characterize the orientation of white matter fibers and the connectivity of neurological structures. Segmentation and visualization of DT images are challenging because of low data quality and the complexity of anatomical structures. In this paper, we propose an interactive segmentation approach, based on a hierarchical representation of the input DT image through a tree structure. The tree is obtained by successively merging watershed regions, based on the morphological waterfall approach, hence the name watershed tree. Region merging is done according to a combined similarity and homogeneity criterion. We introduce filters that work on the proposed tree representation, and that enable region-based attribute filtering of DTI data. Linked views between the visualizations of the simplified DT image and the tree enable a user to visually explore both data and tree at interactive rates. The coupling of filtering, semiautomatic segmentation by labeling nodes in the tree, and various interaction mechanisms support the segmentation task. Our method is robust against noise, which we demonstrate on synthetic and real DTI data.
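
    As a rough illustration of the hierarchical construction, the sketch below greedily merges adjacent watershed regions in order of increasing dissimilarity and records each merge as an internal tree node. The adjacency set, per-region features and dissimilarity function are placeholders; this is not the paper's waterfall criterion, only the general tree-building pattern.

    # Sketch: greedy merging of watershed regions into a binary tree, ordered by a
    # dissimilarity between per-region features (e.g., mean tensors).
    import numpy as np

    def build_merge_tree(adjacency, features, dissimilarity):
        """adjacency: set of (label_a, label_b) pairs of neighboring regions;
        features: dict mapping region label -> feature vector (np.ndarray);
        dissimilarity: callable taking two feature vectors."""
        parent = {r: r for r in features}
        def find(r):
            while parent[r] != r:
                r = parent[r]
            return r
        merges, next_id = [], max(features) + 1
        edges = set(adjacency)
        while edges:
            a, b = min(edges, key=lambda e: dissimilarity(features[find(e[0])], features[find(e[1])]))
            edges.discard((a, b))
            ra, rb = find(a), find(b)
            if ra == rb:
                continue
            features[next_id] = (features[ra] + features[rb]) / 2.0  # naive feature update
            parent[ra] = parent[rb] = next_id                        # new internal tree node
            parent[next_id] = next_id
            merges.append((ra, rb, next_id))
            next_id += 1
        return merges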

    A graph-based mathematical morphology reader

    Full text link
    This survey paper aims to provide a "literary" anthology of mathematical morphology on graphs. It describes in the English language many ideas stemming from a large number of different papers, hence providing a unified view of an active and diverse field of research.

    Homogeneity based segmentation and enhancement of Diffusion Tensor Images : a white matter processing framework

    Get PDF
    In diffusion magnetic resonance imaging (DMRI), the Brownian motion of water molecules within biological tissue is measured through a series of images. In diffusion tensor imaging (DTI), this diffusion is represented using tensors. DTI describes, in a non-invasive way, the local anisotropy pattern, enabling the reconstruction of nervous fibers, a procedure dubbed tractography. DMRI constitutes a powerful tool not only to analyse the structure of the white matter within a voxel, but also to investigate the anatomy of the brain and its connectivity. DMRI has proved useful to characterize brain disorders and to analyse differences in white matter and their consequences for brain function. These procedures usually involve the virtual dissection of white matter tracts of interest. The manual isolation of these bundles requires a great deal of neuroanatomical knowledge and can take up to several hours of work. This thesis focuses on the development of techniques able to automatically identify white matter structures. To segment such structures in a tensor field, the similarity of diffusion tensors must be assessed in order to partition the data into regions that are homogeneous in terms of tensor characteristics. This concept of tensor homogeneity is explored in order to achieve new methods for segmenting, filtering and enhancing diffusion images. First, this thesis presents a novel approach to semi-automatically define the similarity measures that best suit the data. Next, a multi-resolution watershed framework is presented, in which the tensor field's homogeneity is used to automatically build a hierarchical representation of white matter structures in the brain, allowing the simultaneous segmentation of different structures of different sizes. The stochastic process of water diffusion within tissues can be modeled, inferring the homogeneity characteristics of the diffusion field. This thesis presents an accelerated convolution method for diffusion images, in which these models enable contextual processing of diffusion images for noise reduction, regularization and enhancement of structures. These new methods are analysed and compared on the basis of their accuracy, robustness, speed and usability - key points for their application in a clinical setting. The described methods enrich the visualization and exploration of white matter structures, fostering the understanding of the human brain.
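
    Assessing whether two diffusion tensors are similar is the basic operation behind such homogeneity-driven segmentation. A common choice, shown here purely as an illustration and not necessarily the measure adopted in the thesis, is the log-Euclidean distance between tensors:

    # Sketch: log-Euclidean distance between two 3x3 diffusion tensors,
    # a standard similarity measure for tensor-field segmentation.
    import numpy as np
    from scipy.linalg import logm

    def log_euclidean_distance(D1, D2):
        """D1, D2: symmetric positive-definite 3x3 diffusion tensors."""
        return np.linalg.norm(logm(D1) - logm(D2), ord="fro")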

    Gap Filling of 3-D Microvascular Networks by Tensor Voting

    Get PDF
    We present a new algorithm that bridges discontinuities in 3-D images of tubular structures presenting undesirable gaps. The proposed method is mainly intended for large 3-D images of microvascular networks. In order to recover the real network topology, we need to fill the gaps between the closest discontinuous vessels. The algorithm presented in this paper aims at achieving this goal. It is based on the skeletonization of the segmented network followed by a tensor voting method, and it makes it possible to merge the most common kinds of discontinuities found in microvascular networks. It is robust, easy to use, and relatively fast. The microvascular network images were obtained using synchrotron tomography imaging at the European Synchrotron Radiation Facility. These images exhibit samples of intracortical networks. Representative results are illustrated.
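
    The tensor voting step can be pictured as each skeleton endpoint casting orientation "votes" into its neighborhood, so that two nearly collinear endpoints reinforce each other across a gap. The sketch below shows a deliberately simplified vote accumulation (isotropic Gaussian decay, no curvature term); the endpoint positions, tangent directions and sigma are assumed inputs, not values from the paper.

    # Sketch: simplified second-order stick tensor voting between skeleton endpoints.
    # Each endpoint has a position p and a unit tangent direction t; votes accumulate
    # as rank-1 tensors weighted by a Gaussian decay of the endpoint distance.
    import numpy as np

    def accumulate_votes(points, tangents, sigma=10.0):
        """points: (N, 3) endpoint coordinates; tangents: (N, 3) unit directions."""
        votes = np.zeros((len(points), 3, 3))
        for i, p in enumerate(points):
            for j, q in enumerate(points):
                if i == j:
                    continue
                w = np.exp(-np.sum((p - q) ** 2) / sigma ** 2)        # distance decay
                votes[i] += w * np.outer(tangents[j], tangents[j])    # stick vote
        # the dominant eigenvector of votes[i] suggests the direction in which to bridge a gap
        return votes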

    Robust perceptual organization techniques for analysis of color images

    Get PDF
    This thesis focuses on the development of new robust image analysis techniques more closely related to the way the human visual system behaves. One of the pillars of the thesis is the so-called tensor voting technique, a robust perceptual organization technique that propagates and aggregates information encoded by means of tensors through a convolution-like process. Its robustness and adaptability have been key reasons for using tensor voting in this thesis. These two properties are verified by applying tensor voting to three applications where it had not been applied so far: image structure estimation, edge detection, and segmentation of images acquired through stereo vision. The most important drawback of tensor voting is that its usual implementations are highly time consuming. In this line, the thesis proposes two new efficient implementations of tensor voting, both derived from an in-depth analysis of this technique. Despite its adaptability, the thesis shows that the original formulation of tensor voting (hereafter, classical tensor voting) is not adequate for some applications, since the hypotheses on which it is based do not hold for all of them. This is particularly true for color image denoising. Thus, the thesis shows that, more than a method, tensor voting can be thought of as a methodology in which the encoding and the voting process can be tailored to every specific application, while maintaining the tensor voting spirit. Following this reasoning, the thesis proposes a unified framework for both image denoising and robust edge detection. This framework is an extension of classical tensor voting in which both color and edginess (the likelihood of finding an edge at every pixel of the image) are encoded through tensors, and where the voting process takes into account a set of plausible perceptual criteria related to the way the human visual system processes visual information. Recent advances in the perception of color have been essential for designing such a voting process. This new approach has proved effective, yielding excellent results for both applications. In particular, the new method applied to image denoising performs better than state-of-the-art methods for real noise, which makes it more adequate for real applications, in which an image denoiser is indeed required. In addition, the method applied to edge detection yields more robust results than state-of-the-art techniques and has a competitive performance in recall, discriminability, precision, and false alarm rejection. Moreover, the thesis shows how the results of this new framework can be combined with other techniques to tackle the problem of robust color image segmentation. The tensors obtained by applying the new framework are used to classify pixels as likely homogeneous or likely inhomogeneous; both kinds of pixels are then segmented through a variation of an efficient graph-based image segmentation algorithm. Experiments show that the proposed segmentation algorithm yields better scores in three of the five applied evaluation metrics when compared to state-of-the-art techniques, with a competitive computational cost. The thesis also proposes new evaluation techniques in the scope of image processing. First, two new metrics are proposed in the field of image denoising: one to measure how well an algorithm preserves edges, and another to measure how well it avoids introducing undesirable artifacts. Second, a new methodology for assessing edge detectors that avoids possible bias introduced by post-processing is proposed; it consists of five new metrics for assessing recall, discriminability, precision, false alarm rejection and robustness. Finally, two new non-parametric metrics are proposed for estimating the degree of over- and undersegmentation yielded by image segmentation algorithms.
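
    The segmentation stage builds on an efficient graph-based algorithm. As a rough illustration of that family of methods, and not of the thesis' modified variant, the standard Felzenszwalb-Huttenlocher segmentation available in scikit-image can be invoked as follows (the parameter values are arbitrary):

    # Sketch: efficient graph-based segmentation (Felzenszwalb-Huttenlocher)
    # applied to a color image; the thesis uses a modified variant of this idea.
    from skimage import data, segmentation

    image = data.astronaut()                                           # any RGB test image
    labels = segmentation.felzenszwalb(image, scale=100, sigma=0.8, min_size=50)
    print(labels.max() + 1, "regions")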

    Proceedings of the Third International Workshop on Mathematical Foundations of Computational Anatomy - Geometrical and Statistical Methods for Modelling Biological Shape Variability

    Get PDF
    Computational anatomy is an emerging discipline at the interface of geometry, statistics and image analysis which aims at modeling and analyzing the biological shape of tissues and organs. The goal is to estimate representative organ anatomies across diseases, populations, species or ages, to model organ development across time (growth or aging), to establish their variability, and to correlate this variability information with other functional, genetic or structural information. The Mathematical Foundations of Computational Anatomy (MFCA) workshop aims at fostering the interactions between the mathematical community around shapes and the MICCAI community in view of computational anatomy applications. It targets more particularly researchers investigating the combination of statistical and geometrical aspects in the modeling of the variability of biological shapes. The workshop is a forum for the exchange of theoretical ideas and aims at being a source of inspiration for new methodological developments in computational anatomy. A special emphasis is put on theoretical developments, with applications and results being welcomed as illustrations. Following the successful first edition of this workshop in 2006 and the second edition in New York in 2008, the third edition was held in Toronto on September 22, 2011. Contributions were solicited in Riemannian and group-theoretical methods, geometric measurements of the anatomy, advanced statistics on deformations and shapes, metrics for computational anatomy, statistics of surfaces, and modeling of growth and longitudinal shape changes. 22 submissions were reviewed by three members of the program committee. To guarantee a high-level program, only 11 papers were selected for oral presentation in 4 sessions. Two of these sessions cover classical themes of the workshop: statistics on manifolds and diffeomorphisms for surface or longitudinal registration. One session gathers papers exploring new mathematical structures beyond Riemannian geometry, while the last oral session deals with the emerging theme of statistics on graphs and trees. Finally, a poster session of 5 papers addresses more application-oriented work on computational anatomy.

    2D Segmentation evaluation method of the corpus callosum in diffusion MRI using corpus callosum signature

    Get PDF
    Advisor: Leticia Rittner. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Abstract: The corpus callosum (CC) is the largest white matter structure in the brain. It is located beneath the cerebral cortex and connects the two hemispheres, serving as the communication bridge between them. It is therefore a structure of great interest in clinical and research settings. Its shape and size are associated with characteristics of the subject, and alterations in its structure correlate with several diseases and medical conditions. In diffusion MRI, segmentation of this structure is important because the information contained in this type of image allows the microstructure of neuronal fibers and tissues to be studied using the water diffusion model. In the literature, there are few CC segmentation methods based on diffusion MRI, and there are no studies on the quantitative evaluation of segmentations in this space. Segmentation evaluation in diffusion is commonly performed using a gold standard obtained manually on T1-weighted images and registered to the diffusion space, or a standard drawn directly in this modality. Both approaches have drawbacks: registration is computationally costly and introduces errors into the final standard, while drawing directly on diffusion images has its own limitations. Quantitative evaluation is then performed using an overlap metric. To improve on this usual overlap-based evaluation scheme, this work proposes an evaluation method that allows the gold standard obtained in T1 to be used directly, without registration to the diffusion space. The method is based on the curvature profile (CC signature), a descriptor that allows segmentations to be compared by shape, without requiring overlap or image registration. The proposed method was used to evaluate diffusion-space segmentations obtained with three different methods in 145 subjects. The root mean square error (RMSE), computed from the comparison between curvature profiles, proved to be a metric complementary to the Dice coefficient and was able to discriminate segmentations. In future work, the curvature profile could be used for automatic identification of incorrect segmentations in large databases, for population and longitudinal studies, and for the characterization of other shapes and structures.
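
    As an illustration of the two kinds of metrics discussed, the sketch below computes the Dice overlap between two binary masks and the RMSE between two curvature profiles. The profile extraction itself (sampling curvature along the CC contour) is assumed to have been done elsewhere; the function names are placeholders.

    # Sketch: overlap-based vs. shape-based comparison of two CC segmentations.
    import numpy as np

    def dice(mask_a, mask_b):
        """Dice coefficient between two boolean masks (requires spatial overlap/registration)."""
        inter = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * inter / (mask_a.sum() + mask_b.sum())

    def curvature_rmse(profile_a, profile_b):
        """RMSE between two curvature profiles sampled at the same arc-length positions;
        no spatial overlap or registration is required."""
        a, b = np.asarray(profile_a), np.asarray(profile_b)
        return np.sqrt(np.mean((a - b) ** 2))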

    Towards Individualized Transcranial Electric Stimulation Therapy through Computer Simulation

    Get PDF
    Transcranial electric stimulation (tES) denotes a group of brain stimulation techniques that apply a weak electric current over two or more non-invasive, head-mounted electrodes. When a direct current is employed, the method is called transcranial direct current stimulation (tDCS). The general aim of all tES techniques is the modulation of brain function by up- or down-regulating brain activity. Among these techniques, transcranial direct current stimulation is investigated as an adjuvant tool to promote the microscopic reorganization of the brain that accompanies learning and, more specifically, rehabilitation therapy after a stroke. Current challenges of this research are a high variability in the achieved stimulation effects across subjects and an incomplete understanding of the interplay between the underlying mechanisms. The electric field set up between the electrodes in the subject's head is considered a key component for understanding the stimulation mechanism. A basic concept assumes that brain areas exposed to a higher electric field strength likewise experience a stronger stimulation effect, which gives the positioning of the electrodes a decisive role. However, the electric field does not distribute uniformly in the brain, because of the heterogeneous electrical conductivity profile of the human head, and its distribution pattern varies between subjects due to their individual anatomy. A trivial estimate of the electric field distribution based solely on the position of the stimulating electrodes is therefore not precise enough for a well-targeted stimulation. Computer-based biophysical simulations of transcranial electric stimulation enable an individual approximation of the electric field distribution in each subject based on their medical imaging data. They are thus increasingly employed for planning and verifying tDCS applications and constitute an essential tool on the way to individualized stroke rehabilitation therapy. Software pipelines that facilitate the underlying individualized processing for a wide range of researchers have been developed over the past years for healthy adults, but the simulation of patients with abnormal brain tissue and structure-disrupting lesions remains a non-trivial task. The presented project was therefore dedicated to establishing and practically applying a tES simulation workflow, with the processing of medical imaging data of neurological patients with abnormal brain tissue as a central requirement. The basic simulation workflow was first designed and validated for healthy adults. This comprised compiling medical image processing algorithms into a comprehensive pipeline that identifies and extracts the electrically relevant structures of the head and upper torso from magnetic resonance images, converts them into computational models, and solves the underlying physical problem of electric volume conduction in biological tissue by numerical simulation. Over the course of normal aging, the brain is subject to structural alterations, the most relevant being a loss of brain volume and the development of microscopic alterations of its fiber structure. In a second step, the workflow was therefore extended to incorporate these phenomena of normal aging. The main challenge in this subproject was the biophysical modeling of the altered brain microstructure, since the resulting changes in the conductivity profile of the brain had not yet been quantified in the literature. The extension of the workflow most notably included the modeling of uncertain electrical conductivities, which made it possible to assess the influence of the imprecisely known conductivity of the biological structures of the human head on the electric field. In a simulation study including imaging data of 88 subjects, the influence of the altered brain fiber structure on the electric field was then systematically investigated; these tissue alterations were found to have a highly localized and generally small impact. Finally, in a third step, tDCS simulations of stroke patients were conducted. Their large, structure-disrupting lesions were modeled in more detail than in previous stroke simulation studies and, again, with uncertain electrical conductivities, resulting in uncertain electric field estimates. Individually simulated electric fields were related to the brain activation of 18 patients, taking the inherent uncertainty of the electric field estimates into account. The goal was to clarify whether the stimulation exerts a positive influence on brain function in the context of rehabilitation therapy, supporting brain reorganization after a stroke. While a weak correlation could be established, further investigation will be necessary to answer that research question.
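
    The physical core of such a workflow is a quasistatic volume conduction problem. As a hedged illustration (the standard textbook formulation, not a formula quoted from the thesis), the electric potential \varphi in the head domain \Omega with (possibly anisotropic) conductivity tensor \sigma satisfies

    % Quasistatic volume conduction with current injection through the electrodes
    \nabla \cdot \left( \sigma(\mathbf{x}) \, \nabla \varphi(\mathbf{x}) \right) = 0 \quad \text{in } \Omega,
    \qquad
    \sigma \nabla \varphi \cdot \mathbf{n} =
    \begin{cases}
      \pm j & \text{on the electrode contact surfaces,} \\
      0     & \text{on the remaining scalp,}
    \end{cases}
    \qquad
    \mathbf{E} = -\nabla \varphi,

    where j is the injected current density and \mathbf{E} is the electric field whose strength is related to the stimulation effect.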

    Morphologie, Géométrie et Statistiques en imagerie non-standard

    Get PDF
    Digital image processing has followed the evolution of electronics and computer science. It is now common to deal with images valued not in {0,1} or in gray levels, but in manifolds or probability distributions; this is for instance the case for color images or diffusion tensor imaging (DTI). Each kind of image has its own algebraic, topological and geometric properties, so existing image processing techniques have to be adapted when applied to new imaging modalities. When dealing with new value spaces, former operators can rarely be used as they are: even if the underlying notion still has a meaning, work must be carried out to express it in the new context. The thesis is composed of two independent parts. The first one, "Mathematical morphology on non-standard images", concerns the extension of mathematical morphology to cases where the value space of the image does not have a canonical order structure. Chapter 2 formalizes and demonstrates the irregularity issue of total orders in metric spaces; the main result states that, for any total order in a multidimensional vector space, there are images for which the morphological dilations and erosions are irregular and inconsistent. Chapter 3 is an attempt to generalize morphology to images valued in a set of unordered labels. The second part, "Probability density estimation on Riemannian spaces", concerns the adaptation of standard density estimation techniques to specific Riemannian manifolds. Chapter 5 is a work on color image histograms under perceptual metrics; its main idea consists in computing histograms using local Euclidean approximations of the perceptual metric, rather than a single global Euclidean approximation as in standard perceptual color spaces. Chapter 6 addresses the problem of non-parametric density estimation when data lie in spaces of Gaussian laws; different techniques are studied, and an expression of kernels is provided for the Wasserstein metric.
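
    To make the setting of the first part concrete, the sketch below performs a color dilation under a lexicographic total order on RGB values, the kind of ad hoc total order whose irregularity the thesis analyzes. It is an illustration of the setting only, with a simple square structuring element assumed; it is not a construction from the thesis.

    # Sketch: color dilation under a lexicographic total order on (R, G, B).
    # Such total orders make dilation well defined but, as the thesis shows,
    # can behave irregularly with respect to the underlying color metric.
    import numpy as np

    def lex_dilate(image, radius=1):
        """image: (H, W, 3) uint8; returns the lexicographic maximum in each window."""
        H, W, _ = image.shape
        padded = np.pad(image, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
        out = np.empty_like(image)
        for y in range(H):
            for x in range(W):
                window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1].reshape(-1, 3)
                # lexicographic maximum: primary key R, then G, then B
                idx = np.lexsort((window[:, 2], window[:, 1], window[:, 0]))[-1]
                out[y, x] = window[idx]
        return out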