
    Gap Filling of 3-D Microvascular Networks by Tensor Voting

    We present a new algorithm that merges discontinuities in 3-D images of tubular structures exhibiting undesirable gaps. The proposed method is mainly intended for large 3-D images of microvascular networks. To recover the true network topology, the gaps between the closest discontinuous vessels must be filled; the algorithm presented in this paper achieves this goal. It is based on skeletonization of the segmented network followed by a tensor voting method. It can merge the most common kinds of discontinuities found in microvascular networks, and it is robust, easy to use, and relatively fast. The microvascular network images, which show samples of intracortical networks, were obtained by synchrotron tomography at the European Synchrotron Radiation Facility. Representative results are illustrated.
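The core idea, skeleton voxels casting tensor votes whose agreement is high along plausible vessel continuations, can be illustrated with a minimal 2-D ball-voting sketch (our own simplification of generic tensor voting, not the paper's 3-D implementation; the function names and decay parameter are ours):

```python
import numpy as np

def ball_vote(tokens, sites, sigma=2.0):
    """Accumulate second-order tensors at 'sites' from unoriented 'tokens'.

    Each token casts a vote encoding the curve-normal convention: a tensor
    orthogonal to the token->site direction, attenuated by a Gaussian of
    the distance. (Hypothetical minimal 2-D variant.)
    """
    T = np.zeros((len(sites), 2, 2))
    for i, p in enumerate(sites):
        for q in tokens:
            v = p - q
            d = np.linalg.norm(v)
            if d < 1e-9:
                continue
            u = v / d
            T[i] += np.exp(-(d / sigma) ** 2) * (np.eye(2) - np.outer(u, u))
    return T

def stick_saliency(T):
    """lambda1 - lambda2: high where the votes agree on one orientation."""
    w = np.linalg.eigvalsh(T)          # ascending eigenvalues per site
    return w[..., 1] - w[..., 0]

# two collinear skeleton fragments with a gap around x = 0
tokens = np.array([[-3.0, 0.0], [-2.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
sites  = np.array([[0.0, 0.0],   # inside the gap, on the line
                   [0.0, 2.0]])  # off the line
s = stick_saliency(ball_vote(tokens, sites))
print(s)   # saliency is much higher on the line: a candidate gap-filling point
```

Gap filling would then connect fragment endpoints through sites whose stick saliency exceeds a threshold.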

    Gap Filling in Road Extraction Using Radon Transformation


    Vessel tractography using an intensity-based tensor model with branch detection

    In this paper, we present a tubular structure segmentation method that utilizes a second-order tensor constructed from directional intensity measurements, inspired by diffusion tensor imaging (DTI) modeling. The constructed anisotropic tensor, fit inside a vessel, drives the segmentation analogously to a tractography approach in DTI. Our model is initialized at a single seed point and can capture whole vessel trees via an automatic branch detection algorithm developed in the same framework. The centerline of the vessel as well as its thickness is extracted. Performance results within the Rotterdam Coronary Artery Algorithm Evaluation framework are provided for comparison with existing techniques: 96.4% average overlap with ground truth delineated by experts is obtained, in addition to other measures reported in the paper. Moreover, we demonstrate quantitative results on synthetic vascular datasets, quantitative experiments for branch detection on patient Computed Tomography Angiography (CTA) volumes, and qualitative evaluations on the same CTA datasets based on visual scores from an expert cardiologist.
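As a rough illustration of building such a second-order tensor from directional intensity measurements, the 2-D sketch below samples intensity along a fan of rays and accumulates weighted outer products; the principal eigenvector then aligns with the bright tubular structure (a hypothetical stand-in, not the authors' exact 3-D fitting procedure):

```python
import numpy as np

def local_vessel_tensor(intensity, center, radius=4.0, n_dirs=32):
    """Fit a second-order orientation tensor at 'center' of a 2-D image.

    The mean intensity sampled along rays in direction u_k weights the
    outer product u_k u_k^T, so a bright elongated structure dominates
    the tensor's principal axis. (Illustrative 2-D stand-in only.)
    """
    T = np.zeros((2, 2))
    h, w = intensity.shape
    for k in range(n_dirs):
        a = np.pi * k / n_dirs          # orientations, half circle suffices
        u = np.array([np.cos(a), np.sin(a)])
        ts = np.linspace(-radius, radius, 9)
        pts = center + ts[:, None] * u
        ij = np.clip(np.round(pts).astype(int), 0, [h - 1, w - 1])
        m = intensity[ij[:, 0], ij[:, 1]].mean()
        T += m * np.outer(u, u)
    w_, v = np.linalg.eigh(T)
    return v[:, -1], w_[-1] / (w_[0] + 1e-12)   # principal axis, anisotropy

# synthetic image: a bright horizontal "vessel" one pixel wide
img = np.zeros((21, 21))
img[10, :] = 1.0
axis, anis = local_vessel_tensor(img, np.array([10.0, 10.0]))
print(axis, anis)   # principal axis ~ (0, +-1) in (row, col): along the vessel
```

The tensor is anisotropic inside the vessel (large eigenvalue ratio) and nearly isotropic in background, which is what lets it drive a tractography-style segmentation.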

    Vessel tractography using an intensity-based tensor model

    In the last decade, coronary artery disease (CAD) has been the leading cause of death worldwide [1]. Extraction of arteries is a crucial step for accurate visualization, quantification, and tracking of pathologies. However, coronary artery segmentation is one of the most challenging problems in medical image analysis, since arteries are complex tubular structures with bifurcations and possible pathologies. Moreover, the appearance of blood vessels and their geometry can be perturbed by stents, calcifications, and pathologies such as stenosis, while noise, contrast, and resolution artifacts make the problem more challenging still. In this thesis, we present a novel tubular structure segmentation method based on an intensity-based tensor fit to the vessel, inspired by diffusion tensor imaging (DTI) modeling. The anisotropic tensor inside the vessel drives the segmentation analogously to a tractography approach in DTI. Our model is initialized with a single seed point and can capture the whole vessel tree via an automatic branch detection algorithm. The centerline of the vessel as well as its thickness is extracted. We demonstrate the performance of our algorithm on 3 complex synthetic tubular datasets and on 8 Computed Tomography Angiography (CTA) datasets (from the Rotterdam Coronary Artery Algorithm Evaluation Framework) for quantitative validation. Additionally, arteries extracted from 10 CTA volumes are qualitatively evaluated through an expert cardiologist's visual scores.
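The tractography-style propagation both works describe, stepping from a seed along the locally estimated vessel direction, can be sketched generically as follows (simple Euler integration on a synthetic tangent field; the thesis derives the direction from the fitted intensity tensor instead):

```python
import numpy as np

def track_centerline(seed, direction_at, step=0.5, n_steps=40):
    """March a centerline from a seed by repeatedly stepping along the
    local principal direction, keeping the orientation consistent with
    the previous step (tractography-style integration; a generic sketch,
    not the thesis's exact scheme)."""
    pts = [np.asarray(seed, float)]
    prev = None
    for _ in range(n_steps):
        d = direction_at(pts[-1])
        d = d / np.linalg.norm(d)
        if prev is not None and np.dot(d, prev) < 0:
            d = -d                      # eigenvectors are sign-ambiguous
        pts.append(pts[-1] + step * d)
        prev = d
    return np.array(pts)

# synthetic "vessel" along a circle of radius 10: its tangent field
def tangent(p):
    return np.array([-p[1], p[0]])      # perpendicular to the radius

path = track_centerline([10.0, 0.0], tangent)
radii = np.linalg.norm(path, axis=1)
print(radii.min(), radii.max())         # the track stays close to radius 10
```

Branch detection would spawn additional seeds of this loop wherever the local tensor indicates more than one strong direction.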

    Automatic segmentation of DNA fiber images for the quantification of replicative stress

    La rĂ©plication de l’ADN est un processus complexe gĂ©rĂ© par une multitude d’interactions molĂ©culaires permettant une transmission prĂ©cise de l’information gĂ©nĂ©tique de la cellule mĂšre vers les cellules filles. Parmi les facteurs pouvant porter atteinte Ă  la fidĂ©litĂ© de ce processus, on trouve le Stress RĂ©plicatif. Il s’agit de l’ensemble des phĂ©nomĂšnes entraĂźnant le ralentissement voire l’arrĂȘt anormal des fourches de rĂ©plication. S’il n’est pas maĂźtrisĂ©, le stress rĂ©plicatif peut causer des ruptures du double brin d’ADN ce qui peut avoir des consĂ©quences graves sur la stabilitĂ© du gĂ©nome, la survie de la cellule et conduire au dĂ©veloppement de cancers, de maladies neurodĂ©gĂ©nĂ©ratives ou d’autres troubles du dĂ©veloppement. Il existe plusieurs techniques d’imagerie de l’ADN par fluorescence permettant l’évaluation de la progression des fourches de rĂ©plication au niveau molĂ©culaire. Ces techniques reposent sur l’incorporation d’analogues de nuclĂ©otides tels que chloro- (CldU), iodo- (IdU), ou bromo-deoxyuridine (BrdU) dans le double brin en cours de rĂ©plication. L’expĂ©rience la plus classique repose sur l’incorporation successive de deux types d’analogues de nuclĂ©otides (IdU et CldU) dans le milieu cellulaire. Une fois ces nuclĂ©otides exogĂšnes intĂ©grĂ©s dans le double brin de l’ADN rĂ©pliquĂ©, on lyse les cellules et on rĂ©partit l’ADN sur une lame de microscope. Les brins contenant les nuclĂ©otides exogĂšnes peuvent ĂȘtre imagĂ©s par immunofluorescence. L’image obtenue est constituĂ©e de deux couleurs correspondant Ă  chacun des deux types d’analogues de nuclĂ©otides. La mesure des longueurs de chaque section fluorescente permet la quantification de la vitesse de progression des fourches de rĂ©plication et donc l’évaluation des effets du stress rĂ©plicatif. La mesure de la longueur des fibres fluorescentes d’ADN est gĂ©nĂ©ralement rĂ©alisĂ©e manuellement. 
Cette opĂ©ration, en plus d’ĂȘtre longue et fastidieuse, peut ĂȘtre sujette Ă  des variations inter- et intra- opĂ©rateurs provenant principalement de dĂ©fĂ©rences dans le choix des fibres. La dĂ©tection des fibres d’ADN est difficile car ces derniĂšres sont souvent fragmentĂ©es en plusieurs morceaux espacĂ©s et peuvent s’enchevĂȘtrer en agrĂ©gats. De plus, les fibres sont parfois difficile Ă  distinguer du bruit en arriĂšre-plan causĂ© par les liaisons non-spĂ©cifiques des anticorps fluorescents. MalgrĂ© la profusion des algorithmes de segmentation de structures curvilignes (vaisseaux sanguins, rĂ©seaux neuronaux, routes, fissures sur bĂ©ton...), trĂšs peu de travaux sont dĂ©diĂ©s au traitement des images de fibres d’ADN. Nous avons mis au point un algorithme intitulĂ© ADFA (Automated DNA Fiber Analysis) permettant la segmentation automatique des fibres d’ADN ainsi que la mesure de leur longueur respective. Cet algorithme se divise en trois parties : (i) Une extraction des objets de l’image par analyse des contours. Notre mĂ©thode de segmentation des contours se basera sur des techniques classiques d’analyse du gradient de l’image (Marr-Hildreth et de Canny). (ii) Un prolongement des objets adjacents afin de fusionner les fibres fragmentĂ©es. Nous avons dĂ©veloppĂ© une mĂ©thode de suivi (tracking) basĂ©e sur l’orientation et la continuitĂ© des objets adjacents. (iii) Une dĂ©termination du type d’analogue de nuclĂ©otide par comparaison des couleurs. Pour ce faire, nous analyserons les deux canaux (vert et rouge) de l’image le long de chaque fibre. Notre algorithme a Ă©tĂ© testĂ© sur un grand nombre d’images de qualitĂ© variable et acquises Ă  partir de diffĂ©rents contextes de stress rĂ©plicatif. La comparaison entre ADFA et plusieurs opĂ©rateurs humains montre une forte adĂ©quation entre les deux approches Ă  la fois Ă  l’échelle de chaque fibre et Ă  l’échelle plus globale de l’image. 
La comparaison d’échantillons soumis ou non soumis Ă  un stress rĂ©plicatif a aussi permis de valider les performances de notre algorithme. Enfin, nous avons Ă©tudiĂ© l’impact du temps d’incubation du second analogue de nuclĂ©otide sur les rĂ©sultats de l’algorithme. Notre algorithme est particuliĂšrement efficace sur des images contenant des fibres d’ADN relativement courtes et peu fractionnĂ©es. En revanche, notre mĂ©thode de suivi montre des limites lorsqu’il s’agit de fusionner correctement de longues fibres fortement fragmentĂ©es et superposĂ©es Ă  d’autres brins. Afin d’optimiser les performances d’ADFA, nous recommandons des temps d’incubation courts (20 Ă  30 minutes) pour chaque analogue de nuclĂ©otide dans le but d’obtenir des fibres courtes. Nous recommandons aussi de favoriser la dilution des brins sur la lame de microscope afin d’éviter la formation d’agrĂ©gats de fibres difficiles Ă  distinguer. ADFA est disponible en libre accĂšs et a pour vocation de servir de rĂ©fĂ©rence pour la mesure des brins d’ADN afin de pallier les problĂšmes de variabilitĂ©s inter-opĂ©rateurs.----------ABSTRACTDNA replication is tightly regulated by a great number of molecular interactions that ensure accurate transmission of genetic information to daughter cells. Replicative Stress refers to all the processes undermining the fidelity of DNA replication by slowing down or stalling DNA replication forks. Indeed, stalled replication forks may “collapse” into highly-genotoxic double strand breaks (DSB) which engender chromosomal rearrangements and genomic instability. Thus, replicative stress can constitute a critical determinant in both cancer development and treatment. Replicative stress is also implicated in the molecular pathogenesis of aging and neurodegenerative disease, as well as developmental disorders. Several fluorescence imaging techniques enable the evaluation of replication forks progression at the level of individual DNA molecules. 
Those techniques rely on the incorporation of exogene nucleotide analogs in nascent DNA at replication forks in living cells. In a typical experiment, sequential incorporation of two nucleotide analogs, e.g., IdU and CldU, is performed. Following cell lysis and spreading of DNA on microscopy slides, DNA molecules are then imaged by immunofluorescence. The obtained image is made up of two colors corresponding to each one of the two nucleotide analogs. Measurement of the respective lengths of these labeled stretches of DNA permits quantification of replication fork progression. Evaluation of DNA fiber length is generally performed manually. This procedure is laborious and subject to inter- and intra-user variability stemming in part from unintended bias in the choice of fibers to be measured. DNA fiber extraction is difficult because strands are often fragmented in lots of subparts and can be tangled in clusters. Moreover, the extraction of fibers can be difficult when the background is noised by non specific staining. Despite the large number of segmentation algorithms dedicated to curvilinear structures (blood vessels, neural networks, roads, concrete tracks...), few studies address the treatment of DNA fiber images. We developed an algorithm called ADFA (Automated DNA Fiber Analysis) which automatically segments DNA fibers and measures their respective length. Our approach can be divided into three parts: 1. Object extraction by a robust contour detection. Our contour segmentation method relies on two classical gradient analyses (Marr and Hildreth, 1980; Canny, 1986) 2. Fusion of adjacent fragmented fibers by analysing their continuity. We developped a tracking approach based on the orientation and the continuity of adjacent fibers. 3. Detection of the nucleotide analog label (IdU or CldU). To do so, we analyse the color profile on both channels (green and red) along each fiber. 
ADFA was tested on a database of different images of varying quality, signal to noise ratio, or fiber length which were acquired from two different microscopes. The comparison between ADFA and manual segmentations shows a high correlation both at the scale of the fiber and at the scale of the image. Moreover, we validate our algorithm by comparing samples submitted to replicative stress and controls. Finally, we studied the impact of the incubation time of the second nucleotide analog pulse. The performances of our algorithm are optimised for images containing relatively short and not fragmented DNA fibers. Our tracking methods may be limited when connecting highly split fibers superimposed to other strands. Therefore, we recommend to reduce the incubation time of each nucleotide analog to about 20-30 minutes in order to obtain short fibers. We also recommend to foster the dilution of fibers on the slide to reduce clustering of fluorescent DNA molecules. ADFA is freely available as an open-source software. It might be used as a reference tool to solve inter-intra user variability
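Step (iii) of ADFA, assigning each centerline pixel to an analog by comparing the two channels, might look like the following sketch (our own reconstruction; the function names, the ambiguity margin, and the red/green-to-analog mapping are illustrative assumptions):

```python
import numpy as np

def label_analog_per_pixel(red, green, path, margin=0.1):
    """Walk a fiber centerline and tag each pixel with the dominant
    channel (here red ~ IdU, green ~ CldU; the actual mapping depends on
    the antibodies used), or 'ambiguous' when the channels are within
    'margin' of each other. Hypothetical reconstruction of ADFA step (iii)."""
    labels = []
    for r, c in path:
        dr, dg = red[r, c], green[r, c]
        if abs(dr - dg) <= margin:
            labels.append("ambiguous")
        else:
            labels.append("IdU" if dr > dg else "CldU")
    return labels

def segment_lengths(labels, spacing=1.0):
    """Collapse per-pixel labels into consecutive (label, length) runs,
    i.e. the lengths used to quantify fork progression."""
    runs = []
    for lab in labels:
        if runs and runs[-1][0] == lab:
            runs[-1][1] += spacing
        else:
            runs.append([lab, spacing])
    return [(l, n) for l, n in runs]

# toy two-color fiber: 5 red pixels then 3 green pixels along one row
red = np.zeros((3, 8)); green = np.zeros((3, 8))
red[1, :5] = 1.0
green[1, 5:] = 1.0
path = [(1, c) for c in range(8)]
labels = label_analog_per_pixel(red, green, path)
print(segment_lengths(labels))  # [('IdU', 5.0), ('CldU', 3.0)]
```

The ratio of the two run lengths is what changes under replicative stress, since a stalled fork shortens the second-pulse segment.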

    Development of vectorization tools for angiograms acquired by two-photon microscopy in the context of brain aging

    Are pathologies affecting the small vessels of the neurovasculature the cause of the cognitive impairments that appear with aging? To answer this question, we first need tools to extract the neocortical microvasculature from angiograms acquired by two-photon fluorescence microscopy. A better understanding of how the brain microvasculature evolves with age, and of the effect of those changes on the function of cortical areas, is an essential step towards new biomarkers of brain aging. Realistic models of the brain vasculature can serve as a basis for blood flow modeling and for simulating the MRI signal originating from these vessels. Using vectorized vasculatures, new research avenues can be explored, including the effect of different types of small vessel disease on the blood-oxygen-level-dependent (BOLD) MRI signal. The main objective of this research project is therefore the development of a blood vessel segmentation method and of an interactive tool to correct and modify the extracted vascular networks.

    These tools are used to compare the neocortical microvasculature of rats from two age cohorts of Long-Evans rats: 12 young (age 11-15 weeks) and 12 old (age 23-25 months). The tools were developed mainly with the MATLAB programming platform, the pipeline processing module PSOM, and the image processing tool FIJI. The method is semi-automatic, requiring manual correction of the graphs extracted from the angiograms after segmentation. The modular approach allows the addition of new features and tools, which can improve the robustness and automation of the blood vessel extraction. By analyzing the vasculature masks obtained in preprocessing, we found that the density of capillaries in the sensorimotor neocortex of Long-Evans rats decreases with age, from ρ = 6.8 ± 0.3 % in young rats to ρ = 5.4 ± 0.3 % in aged rats, a statistically significant decrease of 20%. An analysis using the cleaned graphs points in the same direction, showing that the linear density of vessels decreases with aging. This density is linked to the perfusion capacity of the vasculature, and may indicate that the efficiency of nutrient and oxygen delivery in the rat sensorimotor neocortex decreases with aging.
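The capillary density figures quoted above (ρ in percent) correspond to the vessel-voxel fraction of a binary angiogram mask, which can be computed directly; the sketch below uses a toy volume, not the study's data:

```python
import numpy as np

def vascular_density(mask):
    """Vascular volume density: fraction of voxels labeled as vessel,
    expressed in percent (the form of the rho [%] figures above)."""
    return 100.0 * np.count_nonzero(mask) / mask.size

# toy 3-D angiogram mask: one straight "capillary" in a 20^3 volume
vol = np.zeros((20, 20, 20), dtype=bool)
vol[10, 10, :] = True          # 20 vessel voxels
print(vascular_density(vol))   # 20 / 8000 * 100 = 0.25
```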

    Robust perceptual organization techniques for analysis of color images

    This thesis focuses on the development of new robust image analysis techniques more closely related to the way the human visual system behaves. One of the pillars of the thesis is tensor voting, a robust perceptual organization technique that propagates and aggregates information encoded in tensors through a convolution-like process. Its robustness and adaptability were key reasons for its use in this thesis; both properties are verified by applying tensor voting to three applications where it had not been applied before: image structure estimation, edge detection, and segmentation of images acquired through stereo vision. The most important drawback of tensor voting is that its usual implementations are highly time-consuming. This thesis therefore proposes two new efficient implementations of tensor voting, both derived from an in-depth analysis of the technique.

    Despite its adaptability, this thesis shows that the original formulation of tensor voting (hereafter, classical tensor voting) is not adequate for some applications, since the hypotheses on which it is based do not suit them all. This is particularly true for color image denoising. Thus, the thesis shows that, more than a method, tensor voting is a methodology in which the encoding and the voting process can be tailored to each specific application while maintaining the spirit of tensor voting. Following this reasoning, the thesis proposes a unified framework for both image denoising and robust edge detection. This framework is an extension of classical tensor voting in which both color and edginess (the likelihood of finding an edge at each pixel) are encoded through tensors, and in which the voting process takes into account a set of plausible perceptual criteria related to the way the human visual system processes visual information. Recent advances in the perception of color were essential for designing this voting process.

    This new approach has proved effective, yielding excellent results for both applications. In particular, the new method applied to image denoising outperforms state-of-the-art methods on real noise, which makes it more adequate for real applications, where a denoiser is indeed required. In addition, the method applied to edge detection yields more robust results than state-of-the-art techniques and has competitive performance in recall, discriminability, precision, and false alarm rejection. Moreover, this thesis shows how the results of this new framework can be combined with other techniques to tackle robust color image segmentation: the tensors obtained with the new framework are used to classify pixels as likely homogeneous or likely inhomogeneous, and those pixels are then segmented through a variation of an efficient graph-based image segmentation algorithm. Experiments show that the proposed segmentation algorithm yields better scores in three of the five applied evaluation metrics compared to state-of-the-art techniques, with a competitive computational cost.

    This thesis also proposes new evaluation techniques in the scope of image processing. First, two new metrics are proposed for image denoising: one to measure how well an algorithm preserves edges, and one to measure how well it avoids introducing undesirable artifacts. Second, a new methodology for assessing edge detectors that avoids possible bias introduced by post-processing is proposed; it consists of five metrics assessing recall, discriminability, precision, false alarm rejection, and robustness. Finally, two new non-parametric metrics are proposed for estimating the degree of over- and under-segmentation produced by image segmentation algorithms.
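The classification of pixels into likely homogeneous and likely inhomogeneous can be illustrated with a generic decomposition of a per-pixel tensor into stick (oriented, edge-like) and ball (orientation-free) saliencies. Note this sketch uses a plain smoothed structure tensor, not the thesis's perceptually driven voting:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def stick_ball_saliency(gray, win=3):
    """Per-pixel 2x2 structure tensor, smoothed over a small window, split
    into stick (l1 - l2, oriented edge evidence) and ball (l2,
    orientation-free evidence) saliencies. Generic stand-in for the
    thesis's tensor encoding."""
    gy, gx = np.gradient(gray)
    jxx = uniform_filter(gx * gx, win)
    jxy = uniform_filter(gx * gy, win)
    jyy = uniform_filter(gy * gy, win)
    tr = jxx + jyy
    det = jxx * jyy - jxy * jxy
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    return l1 - l2, l2                   # stick, ball saliency maps

# toy image: flat left half, flat right half -> one vertical edge
img = np.zeros((16, 16)); img[:, 8:] = 1.0
stick, ball = stick_ball_saliency(img)
print(stick[8, 8], stick[8, 2])          # edge column vs homogeneous region
```

Thresholding the stick saliency separates likely edge pixels from likely homogeneous ones before handing both groups to a graph-based segmenter.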

    Ultrasound-based navigation for minimally invasive oncological kidney and liver surgery

    In minimally invasive oncological kidney and liver surgery, which offers many advantages for the patient, the surgeon is often confronted with orientation problems. The main causes are the indirect view of the patient's anatomy, the limited field of view, and the intraoperative deformation of the organs. Navigation systems, frequently based on intraoperative ultrasound, can help: real-time imaging makes it possible to determine the deformation of the organ. Since many tumors are not visible in the ultrasound image, a robust automatic and deformable registration with the preoperative CT is needed, and permanent visualization is necessary even during manipulation of the organ. For the kidney, the suitability of ultrasound elastography images for image-based registration using mutual information was evaluated; due to poor image quality and the limited extent of the image data, however, this had only moderate success. For the liver, the branching points of the blood vessels are used as natural landmarks for registration. To this end, vessel segmentation algorithms were developed for the two most common types of ultrasound imaging, B-mode and power Doppler. The proposed combination of both modalities increased the number of vessel bifurcations by 35% on average. For the rigid registration of the vessels from ultrasound and CT, an average of 9 bijective point correspondences are defined using an existing graph matching method [OLD11b], with a mean registration accuracy of 3.45 mm. This number of point correspondences is not sufficient for deformable registration.

    The landmark refinement method developed here inserts additional landmarks along the vessel centerlines between matched points and searches for further corresponding vessel segments, increasing the number of point correspondences to 70 on average. This allows the organ deformation to be determined from the differing vessel courses: from these point correspondences, a deformation field for the entire organ is computed using thin-plate splines, improving the registration accuracy by 44% on average. The most important prerequisite for successful deformable registration is a segmentation of the vessels from the ultrasound that is as complete as possible. In this work, the concept of regmentation was extended for the first time to vessel segmentation and vessel-based registration. This combination of both procedures increased the extracted vessel length by 32% on average, raising the number of corresponding landmarks to 98, so the deformation of the organ, and thus the change in position of the tumor, can be determined more accurately and with greater confidence. With knowledge of the tumor's position in the organ and using a marker wire, the tumor's movement during surgical manipulation can be monitored with an electromagnetic tracking system. This tumor tracking enables permanent visualization via video overlay in the laparoscopic video image. The most important contributions of this work to vessel-based registration are vessel segmentation from ultrasound image data, the landmark refinement yielding a high number of bijective point correspondences, and the introduction of regmentation to improve vessel segmentation and deformable registration. The tumor tracking for navigation enables permanent visualization of the tumor during the entire intervention.
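The thin-plate-spline step, turning sparse vessel-landmark correspondences into a dense deformation field, can be sketched with SciPy's RBF interpolator (a 2-D toy with invented coordinates; the thesis works in 3-D with 70-98 correspondences):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Toy corresponding vessel-bifurcation landmarks (CT -> intraoperative US):
# a rigid shift of +1 in x, plus a local deformation at the center point.
ct_pts = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 5.]])
us_pts = ct_pts + np.array([[1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 2.]])

# Thin-plate-spline interpolant, one scalar field per output coordinate;
# with smoothing=0 it passes through the landmarks.
tps = RBFInterpolator(ct_pts, us_pts, kernel='thin_plate_spline')

tumor_ct = np.array([[5.0, 4.0]])   # tumor position in preoperative CT
print(tps(tumor_ct))                # estimated position in ultrasound space
print(tps(ct_pts))                  # recovers the landmark positions
```

Evaluating the interpolant on a voxel grid instead of a single point yields the dense deformation field for the whole organ.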