
    A Deep Learning Approach to Evaluating Disease Risk in Coronary Bifurcations

    Cardiovascular disease represents a large burden on modern healthcare systems, requiring significant resources for patient monitoring and clinical interventions. It has been shown that the blood flow through the coronary arteries, shaped by the artery geometry unique to each patient, plays a critical role in the development and progression of heart disease. However, popular and well-tested cardiovascular disease risk models such as Framingham and QRISK3 are unable to take these patient-specific differences into account when predicting disease risk. Over the last decade, medical imaging and image processing have advanced to the point that non-invasive, high-resolution 3D imaging is routinely performed for any patient suspected of coronary artery disease. This allows for the construction of virtual 3D models of the coronary anatomy and in-silico analysis of blood flow within the coronaries. However, several challenges still preclude the large-scale patient-specific simulations necessary for incorporating haemodynamic risk metrics into disease risk prediction. In particular, despite the large amount of available coronary medical imaging, extraction of the structures of interest from medical images remains a manual and laborious task. There is also significant variation in how geometric features of the coronary arteries are measured, which makes comparisons between studies difficult. Modelling blood flow conditions in the coronary arteries likewise requires manual preparation of the simulations and significant computational cost. This thesis aims to solve these challenges.
The "Automated Segmentation of Coronary Arteries (ASOCA)" challenge establishes a benchmark dataset of coronary arteries and their associated 3D reconstructions. It is currently the largest openly available dataset of coronary artery models and supports a wide range of applications, such as computational modelling, 3D printing for experiments, developing and testing medical devices such as stents, and virtual reality applications for education and training. An automated computational modelling workflow is developed to set up, run and post-process simulations on the left main bifurcation and calculate relevant shape metrics. Finally, a convolutional neural network model is developed to replace the computational fluid dynamics process; it can predict haemodynamic metrics such as wall shear stress in minutes rather than the several hours needed by traditional computational modelling, reducing the computational and labour cost involved in performing such simulations.
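The surrogate idea in the last step can be sketched as follows: once trained on a database of CFD solutions, a network maps geometric descriptors of a bifurcation directly to a haemodynamic estimate, so each new query costs one forward pass instead of a full simulation. The feature choice, architecture and (random, untrained) weights below are illustrative stand-ins, not the thesis' actual model.

```python
import numpy as np

# Hypothetical sketch of a CFD surrogate: a small multilayer perceptron
# maps simple bifurcation shape descriptors (diameters, angle) to a
# wall-shear-stress estimate. Weights are random stand-ins; a real
# surrogate would be trained on a database of CFD solutions.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class WSSSurrogate:
    def __init__(self, n_features=3, hidden=16):
        self.W1 = rng.normal(size=(n_features, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(size=(hidden, 1))
        self.b2 = np.zeros(1)

    def predict(self, shape_features):
        # one cheap forward pass per query geometry
        h = relu(shape_features @ self.W1 + self.b1)
        return h @ self.W2 + self.b2

# One query: [parent diameter (mm), daughter diameter (mm), angle (deg)]
features = np.array([[4.0, 2.8, 72.0]])
model = WSSSurrogate()
wss = model.predict(features)
print(wss.shape)  # one estimate per input geometry
```

The point of the sketch is the cost asymmetry: the expensive CFD runs happen once, offline, to build the training set; inference is then near-instant.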

    CT Scanning

    Since its introduction in 1972, X-ray computed tomography (CT) has evolved into an essential diagnostic imaging tool for a continually increasing variety of clinical applications. The goal of this book was not simply to summarize currently available CT imaging techniques but also to provide clinical perspectives, advances in hybrid technologies, new applications other than medicine and an outlook on future developments. Major experts in this growing field contributed to this book, which is geared to radiologists, orthopedic surgeons, engineers, and clinical and basic researchers. We believe that CT scanning is an effective and essential tool in treatment planning, in building a basic understanding of physiology, and in tackling the ever-increasing challenge of diagnosis in our society.

    Forensic Medicine

    Forensic medicine is a continuously evolving science that is constantly being updated and improved, not only as a result of technological and scientific advances (which bring almost immediate repercussions) but also because of developments in the social and legal spheres. This book contains innovative perspectives and approaches to classic topics and problems in forensic medicine, offering reflections on the potential and limits of emerging areas in forensic expert research; it transmits the experience of some countries in the domain of cutting-edge expert intervention, and shows how research in other fields of knowledge may have very relevant implications for this practice.

    Anatomical Modeling of Cerebral Microvascular Structures: Application to Identify Biomarkers of Microstrokes

    Cortical microvascular networks are responsible for carrying the necessary oxygen and energy substrates to our neurons. These networks react to the dynamic energy demands during neuronal activation through the process of neurovascular coupling. Computational modeling is a key element in elucidating the role of the microvascular component in this coupling process. However, the lack of fully-automated computational frameworks to model and characterize these microvascular networks remains one of the main obstacles. Developing a fully-automated solution is thus essential for further exploration, especially to quantify the impact of the cerebrovascular malformations associated with many cerebrovascular diseases.
A common pathogenic outcome in a set of neurovascular disorders is the formation of microstrokes, i.e., micro-occlusions in penetrating arterioles descending from the pial surface. Recent experiments have demonstrated the impact of these microscopic events on brain function. Hence, it is of vital importance to develop a non-invasive and translatable approach to identify their presence in a clinical setting. In this thesis, a fully automatic processing pipeline to address the problem of microvascular anatomical modeling is proposed. The modeling scheme consists of a fully-convolutional neural network to segment microvessels, a 3D surface model generator and a geometry contraction algorithm to produce vascular graphical models with a single connected component. An improvement on this pipeline is developed later to alleviate the requirement of water-tight surface meshes as inputs to the graphing phase. The novel graphing scheme works with relaxed input requirements and intrinsically captures vessel radii information, based on deforming geometric graphs constructed within vascular boundaries instead of surface meshes. A mechanism to decimate the initial graph structure at each run is formulated with a convergence criterion to stop the process. A refinement phase is introduced to obtain the final vascular models. The developed computational modeling is then applied to simulate potential MRI signatures of microstrokes, combining arterial spin labeling (ASL) and multi-directional diffusion-weighted imaging (DWI). The hypothesis is based on recent observations demonstrating a radial reorientation of microvasculature around the micro-infarction locus during recovery in mice. Synthetic capillary beds, randomly and radially oriented, and optical coherence tomography (OCT) angiograms, acquired in the barrel cortex of mice (n=5) before and after inducing targeted photothrombosis, are analyzed.
The computational vascular graphs are exploited within a 3D Monte-Carlo simulator to characterize the magnetic resonance (MR) response, encompassing the effects of magnetic field perturbations caused by deoxyhemoglobin, and the advection and diffusion of the nuclear spins. The proposed graphing pipeline is validated on both synthetic and real angiograms acquired with different imaging modalities. Compared to other efficient and state-of-the-art graphing schemes, the experiments indicate that the proposed scheme produces the lowest geometric and topological error rates on various angiograms. The evaluation also confirms the efficiency of the proposed scheme in providing representative models that capture all anatomical aspects of vascular structures. Next, searching for MRI-based signatures of microstrokes, the proposed vascular modeling is exploited to quantify the minimal intravoxel signal loss ratio when applying multiple gradient directions, at varying sequence parameters with and without ASL. With ASL, the results demonstrate a significant difference (p<0.05) between the signal ratios computed at baseline and 3 weeks after photothrombosis. The statistical power further increased (p<0.005) using angiograms captured at week 4. Without ASL, no reliable signal change is found. Higher ratios with improved significance are achieved at low magnetic field strengths (e.g., at 3 Tesla) and shorter readout TE (<16 ms). This study suggests that microstrokes might be characterized through ASL-DWI sequences, and provides the necessary insights for posterior experimental validations and, ultimately, future translational trials.
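The Monte-Carlo MR simulation described above rests on a simple core loop: spins random-walk through a voxel, accumulate phase from local field offsets, and the voxel signal is the magnitude of the summed transverse magnetization. The sketch below illustrates only that core idea; the toy sinusoidal field map and all parameter values are assumptions, not the thesis' simulator.

```python
import numpy as np

# Minimal Monte-Carlo sketch of MR signal formation: diffusing spins
# dephase in an inhomogeneous field, attenuating the voxel signal.
rng = np.random.default_rng(42)

def simulate_signal(n_spins=5000, n_steps=200, dt=1e-4,
                    diffusivity=1e-9, omega_amp=100.0):
    voxel = 100e-6                      # 100 um voxel, periodic boundaries
    pos = rng.uniform(0, voxel, size=(n_spins, 3))
    phase = np.zeros(n_spins)
    step_sigma = np.sqrt(2 * diffusivity * dt)  # Brownian step size
    for _ in range(n_steps):
        pos = (pos + rng.normal(0, step_sigma, pos.shape)) % voxel
        # toy off-resonance map (rad/s): a sinusoid standing in for
        # deoxyhemoglobin-induced field perturbations around vessels
        omega = omega_amp * np.sin(2 * np.pi * pos[:, 0] / voxel)
        phase += omega * dt
    # magnitude of the summed transverse magnetization, in (0, 1]
    return np.abs(np.exp(1j * phase).mean())

signal = simulate_signal()
print(signal)
```

A real simulator would replace the sinusoid with a field map computed from the vascular graph geometry and add flow (advection) along vessel centerlines.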

    Inferring Geodesic Cerebrovascular Graphs: Image Processing, Topological Alignment and Biomarkers Extraction

    A vectorial representation of the vascular network that embodies quantitative features - location, direction, scale, and bifurcations - has many potential neurovascular applications. Patient-specific models support computer-assisted surgical procedures in neurovascular interventions, while analyses on multiple subjects are essential for group-level studies on which clinical prediction and therapeutic inference ultimately depend. This has motivated the development of a variety of methods to segment the cerebrovascular system. Nonetheless, a number of limitations, including data-driven inhomogeneities, anatomical intra- and inter-subject variability, the lack of exhaustive ground-truth, the need for operator-dependent processing pipelines, and the highly non-linear vascular domain, still make the automatic inference of the cerebrovascular topology an open problem. In this thesis, brain vessels' topology is inferred by focusing on their connectedness. With a novel framework, the brain vasculature is recovered from 3D angiographies by solving a connectivity-optimised anisotropic level-set over a voxel-wise tensor field representing the orientation of the underlying vasculature. Assuming vessels join by minimal paths, a connectivity paradigm is formulated to automatically determine the vascular topology as an over-connected geodesic graph. Ultimately, deep-brain vascular structures are extracted with geodesic minimum spanning trees. The inferred topologies are then aligned with similar ones for labelling and propagating information over a non-linear vectorial domain, where the branching pattern of a set of vessels transcends a subject-specific quantized grid. Using a multi-source embedding of a vascular graph, the pairwise registration of topologies is performed with the state-of-the-art graph matching techniques employed in computer vision. Functional biomarkers are determined over the neurovascular graphs with two complementary approaches.
Efficient approximations of blood flow and pressure drop account for autoregulation and compensation mechanisms across the whole network in the presence of perturbations, using lumped-parameter analogue equivalents derived from clinical angiographies. Also, a localised NURBS-based parametrisation of bifurcations is introduced to model fluid-solid interactions by means of hemodynamic simulations within an isogeometric analysis framework, where both the geometry and the solution profile at the interface share the same homogeneous domain. Experimental results on synthetic and clinical angiographies validated the proposed formulations. Perspectives and future work are discussed for the group-wise alignment of cerebrovascular topologies over a population, towards defining cerebrovascular atlases, and for further topological optimisation strategies and risk prediction models for therapeutic inference. Most of the algorithms presented in this work are available as part of the open-source package VTrails.
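The tree-extraction step mentioned above can be illustrated on a toy over-connected graph: if edge weights approximate geodesic path costs between detected vessel points, a minimum spanning tree keeps the cheapest connected backbone. The 5-node cost matrix below is a made-up stand-in for a real cerebrovascular graph, not data from the thesis.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# Symmetric candidate-connection costs (0 = no candidate edge).
# Think of each nonzero entry as a geodesic path cost between two
# vessel points in an over-connected graph.
cost = np.array([
    [0, 2, 9, 0, 0],
    [2, 0, 4, 6, 0],
    [9, 4, 0, 3, 7],
    [0, 6, 3, 0, 5],
    [0, 0, 7, 5, 0],
], dtype=float)

# The MST drops redundant connections: 5 nodes -> exactly 4 edges kept.
mst = minimum_spanning_tree(csr_matrix(cost))
print(mst.nnz, mst.sum())  # 4 edges, total cost 2+4+3+5 = 14
```

In the actual pipeline the graph is far larger and the weights come from geodesic distances over the angiographic volume, but the pruning principle is the same.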

    Learning Approach to Delineation of Curvilinear Structures in 2D and 3D Images

    Detection of curvilinear structures has long been of interest due to its wide range of applications. Large amounts of imaging data could be readily used in many fields, but it is practically impossible to analyze them manually; hence the need for automated delineation approaches. In recent years, Computer Vision has witnessed a paradigm shift from mathematical modelling to data-driven methods based on Machine Learning. This has led to improvements in the performance and robustness of detection algorithms. Nonetheless, most Machine Learning methods are general-purpose and do not exploit the specificity of the delineation problem. In this thesis, we present learning methods suited for this task and apply them to various kinds of microscopic and natural images, demonstrating the general applicability of the presented solutions. First, we introduce a topology loss - a new training loss term that captures higher-level features of curvilinear networks such as smoothness, connectivity and continuity. This is in contrast to most Deep Learning segmentation methods, which do not take into account the geometry of the resulting prediction. To compute the new loss term, we extract topology features of the prediction and the ground truth using a pre-trained network whose filters are activated by structures at different scales and orientations. We show that this approach yields better results in terms of conventional segmentation metrics and the overall topology of the resulting delineation. Although segmentation of curvilinear structures provides useful information, it is not always sufficient. In many cases, such as neuroscience and cartography, it is crucial to estimate the network connectivity. In order to find the graph representation of the structure depicted in the image, we propose an approach for joint segmentation and connection classification.
Apart from pixel probabilities, this approach also returns the likelihood of a proposed path being part of the reconstructed network. We show that segmentation and path classification are closely related tasks and can benefit from the synergy. The aforementioned methods rely on Machine Learning, which requires significant amounts of annotated ground-truth data to train models. The labelling process often requires expertise and is costly and tiresome. To alleviate this problem, we introduce an Active Learning method that significantly decreases the time spent on annotating images. It queries the annotator only about the most informative examples, in this case the hypothetical paths belonging to the structure of interest. Contrary to conventional Active Learning methods, our approach exploits the local consistency of linear paths to pick the ones that stand out from their neighborhood. Our final contribution is a method suited for both Active Learning and proofreading the result, which often requires more time than the automated delineation itself. It investigates the edges of the delineation graph and determines the ones that are especially significant for the global reconstruction by perturbing their weights. Our Active Learning and proofreading strategies are combined with a new efficient formulation of an optimal subgraph computation and reduce the annotation effort by up to 80%.
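The topology-loss idea can be caricatured with a fixed filter bank: rather than comparing prediction and ground truth pixel by pixel, compare their responses to oriented filters, so breaks in thin structures show up in the loss. The two hand-made 3x3 filters below stand in for the pre-trained network's learned filters and are purely illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

# Tiny oriented filter bank standing in for a pre-trained network's
# feature extractor (illustrative only).
filters = [
    np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], float),  # horizontal edges
    np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], float),  # vertical edges
]

def topology_loss(pred, gt):
    # mean squared difference between filter responses, summed over the bank
    return float(np.mean([
        np.mean((convolve(pred, f) - convolve(gt, f)) ** 2) for f in filters
    ]))

gt = np.zeros((16, 16)); gt[8, 2:14] = 1.0   # a continuous thin line
broken = gt.copy(); broken[8, 7:9] = 0.0     # the same line with a 2-px gap

print(topology_loss(gt, gt), topology_loss(broken, gt))
```

A perfect prediction scores zero, while the gap produces a nonzero penalty concentrated where connectivity is lost; in the thesis this term is added to a conventional per-pixel segmentation loss during training.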

    Three-dimensional reconstruction and NURBS-based structured meshing of coronary arteries from the conventional X-ray angiography projection images

    Despite its two-dimensional nature, X-ray angiography (XRA) has served as the gold-standard imaging technique in interventional cardiology for over five decades. Accordingly, demand for tools that could increase the efficiency of the XRA procedure for the quantitative analysis of coronary arteries (CA) is constantly increasing. The aim of this study was to propose a novel procedure for three-dimensional modeling of CA from uncalibrated XRA projections. A comprehensive mathematical model of the image formation was developed and used with a robust genetic algorithm optimizer to determine the calibration parameters across XRA views. The frame correspondences between XRA acquisitions were found using a partial-matching approach. Using the same matching method, an efficient procedure for vessel centerline reconstruction was developed. Finally, the problem of meshing complex CA trees was simplified to independent reconstruction and meshing of connected branches using the proposed nonuniform rational B-spline (NURBS)-based method. Because it enables structured quadrilateral and hexahedral meshing, our method is suitable for the subsequent computational modelling of CA physiology (i.e. coronary blood flow, fractional flow reserve, virtual stenting and plaque progression). Extensive validations using digital, physical, and clinical datasets showed competitive performance and potential for further application on a wider scale.
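As a small aside on why rational (NURBS) bases suit vessel cross-sections and meshing: with an appropriate weight, a rational quadratic Bezier segment represents a circular arc exactly, which non-rational polynomial splines cannot do. The snippet below checks the textbook construction of a 90-degree unit arc; it is a self-contained illustration, not code from the study.

```python
import numpy as np

# Textbook rational quadratic Bezier for a quarter circle of radius 1:
# control points at (1,0), (1,1), (0,1) with middle weight cos(45 deg).
P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # control points
w = np.array([1.0, np.sqrt(2) / 2, 1.0])            # rational weights

def rational_bezier(t):
    t = np.asarray(t)[:, None]
    B = np.hstack([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])  # Bernstein basis
    num = (B * w) @ P
    den = (B * w).sum(axis=1, keepdims=True)
    return num / den

pts = rational_bezier(np.linspace(0, 1, 50))
radii = np.linalg.norm(pts, axis=1)
print(np.allclose(radii, 1.0))  # every sampled point lies on the unit circle
```

This exactness is what makes NURBS patches attractive for structured quadrilateral and hexahedral meshes of roughly circular vessel lumens.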

    Automatic segmentation of DNA fibre images for the quantification of replicative stress

    DNA replication is tightly regulated by a great number of molecular interactions that ensure accurate transmission of genetic information to daughter cells. Replicative stress refers to all the processes undermining the fidelity of DNA replication by slowing down or stalling DNA replication forks. Indeed, stalled replication forks may “collapse” into highly genotoxic double strand breaks (DSB), which engender chromosomal rearrangements and genomic instability. Thus, replicative stress can constitute a critical determinant in both cancer development and treatment. Replicative stress is also implicated in the molecular pathogenesis of aging and neurodegenerative disease, as well as developmental disorders. Several fluorescence imaging techniques enable the evaluation of replication fork progression at the level of individual DNA molecules.
Those techniques rely on the incorporation of exogenous nucleotide analogs in nascent DNA at replication forks in living cells. In a typical experiment, sequential incorporation of two nucleotide analogs, e.g., IdU and CldU, is performed. Following cell lysis and spreading of the DNA on microscopy slides, DNA molecules are imaged by immunofluorescence. The obtained image is made up of two colors corresponding to each of the two nucleotide analogs. Measurement of the respective lengths of these labeled stretches of DNA permits quantification of replication fork progression. Evaluation of DNA fiber length is generally performed manually. This procedure is laborious and subject to inter- and intra-user variability stemming in part from unintended bias in the choice of fibers to be measured. DNA fiber extraction is difficult because strands are often fragmented into many separate pieces and can be tangled in clusters. Moreover, the extraction of fibers can be difficult when the background is noisy due to non-specific binding of the fluorescent antibodies. Despite the large number of segmentation algorithms dedicated to curvilinear structures (blood vessels, neural networks, roads, cracks in concrete...), few studies address the processing of DNA fiber images. We developed an algorithm called ADFA (Automated DNA Fiber Analysis) which automatically segments DNA fibers and measures their respective lengths. Our approach can be divided into three parts: 1. Object extraction by robust contour detection; our contour segmentation method relies on two classical gradient analyses (Marr and Hildreth, 1980; Canny, 1986). 2. Fusion of adjacent fragmented fibers by analysing their continuity; we developed a tracking approach based on the orientation and continuity of adjacent fibers. 3. Detection of the nucleotide analog label (IdU or CldU); to do so, we analyse the color profile of both channels (green and red) along each fiber.
ADFA was tested on a database of images of varying quality, signal-to-noise ratio and fiber length, acquired from two different microscopes. The comparison between ADFA and manual segmentations shows a high correlation, both at the scale of individual fibers and at the scale of the whole image. Moreover, we validated our algorithm by comparing samples submitted to replicative stress with controls. Finally, we studied the impact of the incubation time of the second nucleotide analog pulse. The performance of our algorithm is best on images containing relatively short, unfragmented DNA fibers. Our tracking method may be limited when connecting highly fragmented fibers superimposed on other strands. Therefore, we recommend reducing the incubation time of each nucleotide analog to about 20-30 minutes in order to obtain short fibers. We also recommend diluting the DNA strands on the slide to reduce clustering of fluorescent DNA molecules. ADFA is freely available as open-source software. It might be used as a reference tool to address inter- and intra-user variability.
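The measurement step of a pipeline like ADFA can be sketched on a synthetic two-channel image: label the connected fluorescent segments in each color channel and report each segment's pixel length. The image layout, channel convention and threshold below are invented for illustration; real fiber images additionally need the contour detection and gap-bridging tracking described above.

```python
import numpy as np
from scipy.ndimage import label

# Synthetic two-channel fiber image: channel 0 ~ IdU (green),
# channel 1 ~ CldU (red). Values and layout are made up.
img = np.zeros((20, 40, 2))
img[5, 2:20, 0] = 1.0    # green segment, 18 px long
img[5, 20:35, 1] = 1.0   # red continuation of the same fiber, 15 px
img[12, 4:10, 1] = 1.0   # a second, shorter red fiber, 6 px

def segment_lengths(channel, threshold=0.5):
    # binarize, label connected components, measure each in pixels
    mask = channel > threshold
    labels, n = label(mask)
    return [int((labels == i).sum()) for i in range(1, n + 1)]

green = segment_lengths(img[:, :, 0])
red = segment_lengths(img[:, :, 1])
print(green, red)  # [18] [15, 6]
```

The green/red length ratio of a dual-labeled fiber is what ultimately quantifies fork progression; clean synthetic fibers need no gap bridging, which is exactly why fragmented real fibers require the tracking stage.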