12 research outputs found

    Persistent homology based characterization of the breast cancer immune microenvironment: a feasibility study

    Persistent homology is a powerful tool in topological data analysis. Its main output, persistence diagrams, encodes the geometry and topology of a given dataset. We present a novel application of persistent homology to characterize the biological environment surrounding breast cancers, known as the tumor microenvironment. Specifically, we characterize the spatial arrangement of immune and malignant epithelial (tumor) cells within the breast cancer immune microenvironment. Quantitative and robust characterizations are built by computing persistence diagrams from quantitative multiplex immunofluorescence, a technology that provides spatial coordinates and protein intensities for individual cells. The resulting persistence diagrams are evaluated as characteristic biomarkers predictive of cancer subtype and prognostic of overall survival. For a cohort of approximately 700 breast cancer patients with a median 8.5-year clinical follow-up, we show that these persistence diagrams outperform and complement the usual descriptors, which capture spatial relationships with nearest-neighbor analysis. Our results thus suggest new methods for building topology-based biomarkers that are characteristic and predictive of cancer subtype and response to therapy, as well as prognostic of overall survival.
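To make the described pipeline concrete, here is a minimal sketch of computing a persistence diagram from 2D cell coordinates. It is not the paper's code: the gudhi library, the synthetic point cloud, and the 30-unit edge-length cutoff are assumptions chosen for illustration.

```python
# Illustrative only: persistence diagram from simulated 2D cell coordinates.
# The gudhi library, the random point cloud, and the cutoff are assumptions.
import numpy as np
import gudhi

rng = np.random.default_rng(0)
cells = rng.uniform(0, 100, size=(200, 2))      # stand-in for (x, y) cell centroids

# Vietoris-Rips filtration on the point cloud, up to dimension-1 homology (loops).
rips = gudhi.RipsComplex(points=cells, max_edge_length=30.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)
diagram = simplex_tree.persistence()            # list of (dimension, (birth, death))

h0 = [bd for dim, bd in diagram if dim == 0]    # connected components (cell clusters)
h1 = [bd for dim, bd in diagram if dim == 1]    # loops in the spatial arrangement
print(f"{len(h0)} H0 features, {len(h1)} H1 features")
```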

    Combining Geometric and Topological Information for Boundary Estimation

    A fundamental problem in computer vision is boundary estimation, where the goal is to delineate the boundary of objects in an image. In this paper, we propose a method which jointly incorporates geometric and topological information within an image to simultaneously estimate boundaries for objects with more complex topologies. We use a topological clustering-based method to assist initialization of the Bayesian active contour model. This combines pixel clustering, boundary smoothness, and potential prior shape information to produce an estimated object boundary. Active contour methods are known to be extremely sensitive to algorithm initialization, relying on the user to provide a reasonable starting curve. In images featuring objects with complex topological structures, such as objects with holes or multiple objects, the user must initialize separate curves for each boundary of interest. Our proposed topologically guided method provides an interpretable, smart initialization in these settings, freeing the user from the pitfalls associated with objects of complex topological structure. We provide a detailed simulation study comparing our initialization to boundary estimates obtained from standard segmentation algorithms. The method is demonstrated on artificial image datasets from computer vision, as well as on real-world applications to skin lesion and neural cellular images, in which multiple topological features can be identified.
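As a rough illustration of clustering-driven initialization for an active contour, the sketch below is not the authors' implementation: the paper's topological clustering step is replaced by plain k-means on pixel intensities, and the scikit-image example image, iteration count, and smoothing weight are assumptions.

```python
# Sketch only: a clustering-derived initialization for a morphological active contour.
# The paper's topological clustering is replaced by k-means on pixel intensities;
# the example image, iteration count, and smoothing are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from skimage import data, img_as_float
from skimage.segmentation import morphological_chan_vese

image = img_as_float(data.coins())              # example grayscale image

# Cluster intensities into two groups and keep the brighter one as foreground.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    image.reshape(-1, 1)).reshape(image.shape)
fg = int(image[labels == 1].mean() > image[labels == 0].mean())
init_level_set = (labels == fg).astype(np.int8)

# The mask seeds the contour evolution, so each object (or hole) in the image
# starts with its own boundary instead of a single user-drawn curve.
segmentation = morphological_chan_vese(image, 100, init_level_set=init_level_set,
                                       smoothing=2)
```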

    Segmenting the Papillary Muscles and the Trabeculae from High Resolution Cardiac CT through Restoration of Topological Handles

    We introduce a novel algorithm for segmenting high-resolution CT images of the left ventricle (LV), particularly the papillary muscles and the trabeculae. High-quality segmentations of these structures are necessary in order to better understand the anatomical function and geometrical properties of the LV. These fine structures, however, are extremely challenging to capture due to their delicate and complex nature in both geometry and topology. Our algorithm computes the potential missing topological structures of a given initial segmentation. Using techniques from computational topology, in particular persistent homology, the algorithm finds topological handles which are likely to be true signal. To further increase accuracy, these proposals are scored by saliency and confidence measures from a trained classifier. Handles with high scores are restored in the final segmentation, leading to high-quality segmentations of these complex structures.
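A hedged sketch of the core idea follows: cubical persistent homology on a synthetic likelihood map is used to propose candidate handles. The library (gudhi), the array size, and the persistence threshold are illustrative assumptions, and the classifier-based scoring step from the paper is omitted.

```python
# Illustrative sketch (not the paper's implementation): cubical persistent homology
# on a synthetic likelihood map to propose candidate handles; the array size and
# the persistence threshold are assumptions, and classifier scoring is omitted.
import numpy as np
import gudhi

rng = np.random.default_rng(1)
likelihood = rng.random((32, 32, 32))           # stand-in for per-voxel LV likelihood

# Sub-level filtration of (1 - likelihood): 1-cycles that persist over a wide range
# of thresholds correspond to handle-like structures made of high-likelihood voxels.
cubical = gudhi.CubicalComplex(top_dimensional_cells=1.0 - likelihood)
diagram = cubical.persistence()

candidates = [(b, d) for dim, (b, d) in diagram if dim == 1 and d - b > 0.5]
print(f"{len(candidates)} candidate handles to pass to the trained classifier")
```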

    Atlas Construction for Measuring the Variability of Complex Anatomical Structures

    Research on human anatomy, in particular on the heart and the brain, is a primary concern for society, since the associated diseases are among the leading causes of death worldwide and carry substantial and growing costs. Fortunately, recent advances in medical imaging offer new possibilities for diagnosis and treatment. On the other hand, the volume of data produced by these relatively new technologies necessitates the development of efficient processing tools. The focus of this thesis is to provide a set of tools for normalizing measurements across individuals in order to study the characteristics of complex anatomical structures. The normalization of measurements consists of bringing a collection of images into a common reference, also known as atlas construction, in order to combine measurements made on different individuals. Constructing an atlas involves segmentation, which finds regions of interest in the data (e.g., an organ or a structure), and registration, which finds correspondences between regions of interest. Current frameworks may require tedious and hardly reproducible user interactions, and are additionally limited by their computational schemes, which rely on slow iterative deformations of images and are prone to local minima. Image registration is therefore not optimal under large deformations. These limitations indicate the need for new approaches to atlas construction. The research questions consequently address the automation of current frameworks and the capture of global and complex deformations between anatomical structures, in particular between human hearts and brains.
The methodology adopted in the thesis led to three specific research objectives. The first develops a new automated framework for atlas construction in order to build the first human atlas of the cardiac fiber architecture. The second explores a new approach based on spectral correspondence, named FOCUSR, in order to precisely capture large shape variability. The third leads to a fundamentally new approach for image registration with large deformations, named the Spectral Demons algorithm.
The first objective aims more specifically at constructing a statistical atlas of the cardiac fiber architecture from a unique human dataset of 10 ex vivo hearts. The developed framework made two technical contributions and one medical contribution: the improvement of the segmentation of cardiac structures, the automation of the shape-averaging process, and, more importantly, the first human study of the variability of the cardiac fiber architecture.
To summarize the main findings, the fiber orientations in human hearts vary by about ±12 degrees; the helix angle ranges from -41 degrees (±26 degrees) on the epicardium to +66 degrees (±15 degrees) on the endocardium, while the transverse angle ranges from +9 degrees (±12 degrees) to +34 degrees (±29 degrees) across the myocardial wall. These findings are significant in cardiology since the fiber architecture plays a key role in cardiac mechanical function and in electrophysiology.
The second objective is to capture large shape variability between complex anatomical structures, in particular between cerebral cortices, given their highly convoluted surfaces and their high anatomical and functional variability across individuals. The new method for surface correspondence, named FOCUSR, exploits spectral representations, since matching is easier in the spectral domain than in the conventional Euclidean space. In its simplest form, FOCUSR improves current spectral approaches by refining spectral representations with a nonrigid alignment; its full power, however, is demonstrated when additional features are used during matching. For instance, the results show that sulcal depth and cortical curvature significantly improve the accuracy of cortical surface matching.
Finally, the third objective is to improve image registration for organs with high inter-subject variability or undergoing very large deformations, such as the heart. The approach brought by the spectral matching technique improves conventional image registration methods. Indeed, spectral representations, which capture global geometric similarities and large deformations between different shapes, can be used to overcome a major limitation of current registration methods, which are guided by local forces and restricted to small deformations. The new algorithm, named Spectral Demons, can capture very large and complex deformations between images, and can additionally be adapted to other settings, such as a groupwise configuration. This results in a complete framework for atlas construction, named Groupwise Spectral Demons, where the average shape is computed during the registration process rather than in sequential steps.
The achievement of these three objectives advances the state of the art in spectral matching and atlas construction, enabling the registration of organs with significant shape variability. Overall, the investigation of these strategies provides new contributions on how to find and exploit global descriptions of images and surfaces. From a global perspective, the objectives establish a link between: a) the first set of tools, which highlights the challenges of registering images with very large deformations; b) the second set of tools, which captures very large deformations between surfaces but is not directly applicable to images; and c) the third set of tools, which returns to image processing and allows a natural construction of atlases from images with very large deformations. Several general limitations remain: for example, partial data (truncated or occluded) are not yet supported by the new tools, and the strategy for computing and using spectral representations still leaves room for improvement.
This thesis offers new perspectives in cardiac imaging and neuroimaging, yet the new tools remain general enough for virtually any application that uses surface or image registration. It is recommended to research additional links with graph-based segmentation methods, which may lead to a complete atlas-construction framework in which segmentation, registration, and shape averaging are all interlinked. It is also recommended to pursue research on building better cardiac electromechanical models from the findings of this thesis. Overall, the new tools provide new grounds for research on and application of shape normalization, which may ultimately impact diagnosis as well as the planning and performance of medical interventions.
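For readers unfamiliar with spectral correspondence, the following toy sketch matches two point sets through low-frequency graph-Laplacian eigenmodes, in the spirit of FOCUSR. It is not code from the thesis: the neighborhood size, the number of modes, and the synthetic shapes are assumptions, and the mode reordering and sign correction that a real method needs are only noted in a comment.

```python
# Toy sketch of spectral correspondence (assumed simplification of the FOCUSR idea):
# embed two shapes by low-frequency Laplacian eigenmodes, then match points by
# nearest neighbours in that spectral space. Shapes and parameters are synthetic.
import numpy as np
from scipy.sparse import csgraph
from sklearn.neighbors import kneighbors_graph, NearestNeighbors

def spectral_embedding(points, k_neighbors=8, n_modes=4):
    # Symmetric kNN adjacency -> normalized graph Laplacian -> smallest nontrivial modes.
    adj = kneighbors_graph(points, k_neighbors, mode='connectivity')
    adj = 0.5 * (adj + adj.T)
    lap = csgraph.laplacian(adj, normed=True).toarray()
    _, vecs = np.linalg.eigh(lap)               # eigenvalues in ascending order
    return vecs[:, 1:n_modes + 1]               # drop the constant (zero-frequency) mode

rng = np.random.default_rng(2)
shape_a = rng.normal(size=(500, 3))
shape_b = shape_a + 0.05 * rng.normal(size=(500, 3))   # a slightly deformed copy

emb_a, emb_b = spectral_embedding(shape_a), spectral_embedding(shape_b)
# Eigenvector signs and ordering are arbitrary; a full method (as in the thesis)
# must also reorder and flip modes before matching.
matches = NearestNeighbors(n_neighbors=1).fit(emb_b).kneighbors(emb_a,
                                                                return_distance=False)
```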

    Doctor of Philosophy

    Kernel smoothing provides a simple way of finding structure in data sets without imposing a parametric model, for example through nonparametric regression and density estimates. In many data-intensive applications, however, the data set can be large, so evaluating a kernel density estimate or kernel regression directly over the data set can be prohibitively expensive. This dissertation studies how to efficiently find a smaller data set that approximates the original data set with a theoretical guarantee in the kernel smoothing setting, and how to extend this to more general smooth range spaces. For kernel density estimates, we propose randomized and deterministic algorithms with quality guarantees that are orders of magnitude more efficient than previous algorithms; they do not require knowledge of the kernel or its bandwidth parameter and are easily parallelizable. Our algorithms are applicable to any large-scale data processing framework. We then investigate how to measure the error between two kernel density estimates, which is usually done in L1 or L2 error. In this dissertation, we investigate the challenges of using a stronger error, the L∞ (worst-case) error. We present efficient solutions for estimating the L∞ error and for choosing the bandwidth parameter of a kernel density estimate built on a subsample of a large data set. We next extend smoothed versions of geometric range spaces from kernel range spaces to more general types of ranges, so that an element of the ground set can be contained in a range with a non-binary value in [0,1]. We investigate the approximation of these range spaces through ϵ-nets and ϵ-samples. Finally, we study coreset algorithms for kernel regression. The size of the coresets is independent of the size of the data set; rather, it depends only on the error guarantee, and in some cases on the size of the domain and the amount of smoothing. We evaluate our methods on very large time series and spatial data, and demonstrate that the coresets can be constructed extremely efficiently and allow for great computational gains.
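To illustrate the problem setting, the sketch below approximates a kernel density estimate with a naive random subsample (the baseline such coreset methods improve on, not the dissertation's algorithm) and estimates the L∞ error on an evaluation grid. Sample sizes, bandwidth, and grid resolution are arbitrary illustrative choices.

```python
# Toy sketch (not the dissertation's algorithm): approximate a kernel density estimate
# with a naive random subsample and gauge the L-infinity error on a grid. Sample sizes,
# bandwidth, and grid are arbitrary illustrative choices.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2, 1.0, 20_000), rng.normal(3, 0.5, 20_000)])

full_kde = gaussian_kde(data, bw_method=0.1)
subsample = rng.choice(data, size=2_000, replace=False)   # naive random "coreset"
small_kde = gaussian_kde(subsample, bw_method=0.1)

grid = np.linspace(data.min(), data.max(), 1_000)
linf_error = np.max(np.abs(full_kde(grid) - small_kde(grid)))
print(f"estimated L-infinity error: {linf_error:.4f}")
```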

    The radiological investigation of musculoskeletal tumours: chairperson's introduction


    Infective/inflammatory disorders
