
    Interactive Segmentation, Uncertainty and Learning

    Interactive segmentation is an important paradigm in image processing. To minimize the number of user interactions (“seeds”) required until the result is correct, the computer should actively query the human for input at the most critical locations, in analogy to active learning. These locations are found by means of suitable uncertainty measures. In Chapter 2, I propose various such measures for the watershed cut algorithm, along with a theoretical analysis of some of their properties. Furthermore, real-world images often admit many different segmentations of nearly the same quality according to the underlying energy function, and the diversity of these solutions may be a powerful uncertainty indicator. Chapter 3 provides the crucial prerequisite for exploiting this diversity in seeded segmentation with minimum spanning trees (i.e. edge-weighted watersheds): it shows how to efficiently enumerate the k smallest spanning trees that result in different segmentations. Furthermore, I propose a scheme that partitions an image into a previously unknown number of segments, using only minimal supervision in the form of a few must-link and cannot-link annotations. The algorithm presented in Chapter 4 makes no use of regional data terms, learning instead what constitutes a likely boundary between segments. Since boundaries are only implicitly specified through cannot-link constraints, this is a hard, nonconvex latent variable problem. It is addressed in a greedy fashion using a randomized decision tree on features associated with interpixel edges. I propose a structured purity criterion for tree construction and show how a backtracking strategy prevents the greedy search from ending up in poor local optima. Chapter 5 also considers the problem of learning a boundary classifier from sparse user annotations; here the problem is mapped to a multiple instance learning task in which positive bags consist of paths on a graph that cross a segmentation boundary and negative bags consist of paths inside a user scribble. Multiple instance learning is also the topic of Chapter 6, where I propose a multiple instance learning algorithm based on randomized decision trees. Experiments on the typical benchmark data sets show that this model's prediction performance is clearly better than that of earlier tree-based methods and only slightly below that of more expensive methods. Finally, Chapter 7 discusses a flow-graph-based computation library, which serves as the backend of an interactive learning and segmentation toolkit and supports a rich set of notification mechanisms for interaction with a graphical user interface.
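    To make the seeded watershed setting concrete, here is a minimal sketch of segmentation by a minimum spanning forest on a 4-connected pixel grid, the structure that the enumeration in Chapter 3 builds on. It is an illustrative reconstruction, not the thesis code; the function name, the grid graph, and the tie-breaking by sort order are all assumptions of this sketch.

```python
import numpy as np

def seeded_msf_segmentation(weights_h, weights_v, seeds):
    """Edge-weighted watershed as a minimum spanning forest: process edges
    by increasing weight and merge components, except when that would join
    two differently seeded regions (those edges form the watershed cut).
    weights_h: (h, w-1) horizontal edge weights; weights_v: (h-1, w)
    vertical edge weights; seeds: (h, w) int array, 0 = unlabeled."""
    h, w = seeds.shape
    n = h * w
    parent = np.arange(n)                  # union-find forest
    label = seeds.ravel().astype(int).copy()

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    edges = []
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w:
                edges.append((weights_h[y, x], i, i + 1))
            if y + 1 < h:
                edges.append((weights_v[y, x], i, i + w))
    edges.sort()                           # Kruskal order; ties broken arbitrarily

    for wgt, a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        if label[ra] and label[rb] and label[ra] != label[rb]:
            continue                       # would merge two seeds: keep as cut edge
        parent[rb] = ra
        label[ra] = label[ra] or label[rb]

    return np.array([label[find(i)] for i in range(n)]).reshape(h, w)
```

    Enumerating the k smallest spanning trees that yield different segmentations, as in Chapter 3, additionally requires branching over the inclusion and exclusion of cut edges; the sketch above produces only the single minimum spanning forest.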

    Knee cartilage segmentation using a multipurpose interactive approach

    Interactive models incorporate expert interpretation into automated segmentation. However, cartilage has a complicated structure, the indistinct tissue contrast in magnetic resonance images of the knee complicates image review, and existing interactive methods are sensitive to various technical problems such as the bi-label segmentation problem, the shortcut problem, and image noise. Moreover, the redundancy caused by non-cartilage labelling has never been tackled. Therefore, Bi-Bezier Curve Contrast Enhancement is developed to improve the visual quality of magnetic resonance images while accounting for brightness preservation and contrast enhancement control. A Multipurpose Interactive Tool is then developed to handle user interaction through a Label Insertion Point approach, and an Approximate Non-Cartilage Labelling system is developed to generate computerized non-cartilage labels while preserving cartilage for expert labelling. Both computerized and interactive labels initialize a Random Walks based segmentation model. Contrast enhancement is evaluated with the Measure of Enhancement (EME), the Absolute Mean Brightness Error (AMBE) and the Feature Similarity Index (FSIM). The results suggest that Bi-Bezier Curve Contrast Enhancement outperforms existing methods in terms of contrast enhancement control (EME = 41.44±1.06), brightness distortion (AMBE = 14.02±1.29) and image quality (FSIM = 0.92±0.02). Moreover, the Approximate Non-Cartilage Labelling model yields significant efficiency improvements in segmenting normal cartilage (61s±8s, P = 3.52 × 10^-5) and diseased cartilage (56s±16s, P = 1.4 × 10^-4). Finally, the proposed labelling model achieves high Dice values (normal: 0.94±0.022, P = 1.03 × 10^-9; abnormal: 0.92±0.051, P = 4.94 × 10^-6) and is found to be beneficial to the interactive model (+0.12).
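    As a rough illustration of two of the evaluation metrics above, the sketch below computes AMBE (brightness preservation) and a blockwise EME (contrast enhancement) following their commonly used definitions; the block size, log base and epsilon guard are choices of this sketch, not values taken from the thesis.

```python
import numpy as np

def ambe(original, enhanced):
    """Absolute Mean Brightness Error: difference of global mean
    intensities; lower values mean better brightness preservation."""
    return abs(float(original.mean()) - float(enhanced.mean()))

def eme(image, block=8, eps=1e-6):
    """Measure of Enhancement: average blockwise contrast
    20*log10(max/min); higher means more contrast (Agaian-style definition)."""
    h, w = image.shape
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blk = image[y:y + block, x:x + block].astype(float)
            scores.append(20.0 * np.log10((blk.max() + eps) / (blk.min() + eps)))
    return float(np.mean(scores))
```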

    Variational methods and their applications to computer vision

    Many computer vision applications, such as image segmentation, can be formulated in a "variational" way as energy minimization problems. Unfortunately, minimizing these energies is usually difficult: it generally involves non-convex functions over spaces with thousands of dimensions, and the associated combinatorial problems are often NP-hard. Furthermore, these are ill-posed inverse problems and therefore extremely sensitive to perturbations (e.g. noise). For this reason, computing a physically reliable approximation from given noisy data requires incorporating appropriate regularizations into the mathematical model, which in turn demand complex computations. The main aim of this work is to describe variational segmentation methods that are particularly effective for curvilinear structures. Due to the complex geometry of such structures, classical regularization techniques cannot be adopted, because they lead to the loss of most low-contrast details. In contrast, the proposed method not only better preserves curvilinear structures but also reconnects parts that may have been disconnected by noise. Moreover, it extends easily to graphs and has been successfully applied to different types of data such as medical imagery (vessels, heart coronaries, etc.), material samples (concrete) and satellite signals (streets, rivers, etc.). In particular, we show results and performance figures for an implementation targeting a new generation of High Performance Computing (HPC) architectures in which different types of coprocessors cooperate. The dataset consists of approximately 200 images of cracks, captured in three different tunnels by a robotic machine designed for the European ROBO-SPECT project.
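    For readers unfamiliar with the variational formulation, the sketch below minimizes a classical member of this family, a smoothed total-variation (ROF-style) denoising energy, by gradient descent. It illustrates the data-term-plus-regularizer structure only; it is not the curvilinear-structure regularizer proposed in this work, and all parameter values are arbitrary.

```python
import numpy as np

def tv_denoise(f, lam=0.15, step=0.2, iters=300, eps=1e-3):
    """Gradient descent on E(u) = 0.5*||u - f||^2 + lam * TV_eps(u),
    where TV_eps(u) = sum over pixels of sqrt(|grad u|^2 + eps^2)
    is a smoothed total variation (periodic boundaries for brevity)."""
    u = f.astype(float).copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u          # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag              # normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)        # dE/du = (u - f) - lam * div(p)
    return u
```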

    Superpixel lattices

    Superpixels are small image segments that are used in popular approaches to object detection and recognition problems. The superpixel approach is motivated by the observation that pixels within small image segments can usually be attributed the same label. This allows a superpixel representation to produce discriminative features based on data-dependent regions of support. The reduced set of image primitives produced by superpixels can also be exploited to improve the efficiency of subsequent processing steps. However, it is common for the superpixel representation to have a different graph structure from the original pixel representation of the image. The first part of the thesis argues that a number of desirable properties of the pixel representation should be maintained by superpixels and that this is not possible with existing methods. We propose a new representation, the superpixel lattice, and demonstrate its advantages. The second part of the thesis investigates incorporating a priori information into superpixel segmentations. We learn a probabilistic model that describes the spatial density of object boundaries in the image. We demonstrate our approach on road scene data and show that our algorithm successfully exploits the spatial distribution of object boundaries to improve the superpixel segmentation. The third part of the thesis presents a globally optimal solution to our superpixel lattice problem in either the horizontal or the vertical direction. The solution uses a Markov Random Field formulation in which the label field is guaranteed to be a set of ordered layers. We introduce an iterative algorithm that uses this framework to learn colour distributions across an image in an unsupervised manner. We conclude that our approach achieves performance comparable to or better than that of competing methods, and that it confers several additional advantages.
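    The ordered-layer construction can be pictured as finding globally optimal left-to-right (or top-to-bottom) boundary paths in a boundary-cost image. The sketch below recovers a single such path by dynamic programming; it is a simplified stand-in for the thesis's MRF formulation, with a hypothetical one-row-per-column smoothness constraint.

```python
import numpy as np

def optimal_horizontal_path(cost):
    """Globally optimal left-to-right path through a boundary-cost image:
    each column selects one row, and the path moves at most one row
    between neighbouring columns. Returns path[x] = boundary row at column x."""
    h, w = cost.shape
    acc = cost.astype(float).copy()        # accumulated-cost table
    back = np.zeros((h, w), dtype=int)     # backpointers for path recovery
    for x in range(1, w):
        for y in range(h):
            lo, hi = max(0, y - 1), min(h, y + 2)
            j = lo + int(np.argmin(acc[lo:hi, x - 1]))
            back[y, x] = j
            acc[y, x] += acc[j, x - 1]
    path = np.empty(w, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))  # cheapest endpoint in the last column
    for x in range(w - 1, 0, -1):
        path[x - 1] = back[path[x], x]
    return path
```

    Stacking several non-crossing paths of this kind, alternately horizontal and vertical, gives the intuition behind a lattice of superpixels whose graph structure matches the regular pixel grid.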

    Atlas Construction for Measuring the Variability of Complex Anatomical Structures

    Research on human anatomy, in particular on the heart and the brain, is a primary concern for society, since the related diseases are among the leading causes of death across the globe and carry rapidly growing costs. Fortunately, recent advances in medical imaging offer new possibilities for diagnosis and treatment. On the other hand, the growth in data produced by these relatively new technologies necessitates the development of efficient processing tools. The focus of this thesis is to provide a set of tools for normalizing measurements across individuals in order to study complex anatomical characteristics. The normalization of measurements consists of bringing a collection of images into a common reference, also known as atlas construction, so that measurements made on different individuals can be combined. Constructing an atlas involves segmentation, which finds regions of interest in the data (e.g., an organ or a structure), and registration, which finds correspondences between regions of interest. Current frameworks may require tedious and hardly reproducible user interactions, and are additionally limited by their computational schemes, which rely on slow iterative deformations of images that are prone to local minima. Image registration is therefore not optimal under large deformations. These limitations indicate the need for new approaches to atlas construction. The research questions consequently address the problems of automating current frameworks and capturing global and complex deformations between anatomical structures, in particular between human hearts and brains. More precisely, the methodology adopted in the thesis led to three specific research objectives. Briefly, the first develops a new automated framework for atlas construction in order to build the first human atlas of the cardiac fiber architecture. The second explores a new approach based on spectral correspondence, named FOCUSR, in order to precisely capture large shape variability. The third leads, finally, to a fundamentally new approach for image registration with large deformations, named the Spectral Demons algorithm. The first objective aims more specifically at constructing a statistical atlas of the cardiac fiber architecture from a unique human dataset of 10 ex vivo hearts. The developed framework made two technical contributions and one medical contribution: the improvement of the segmentation of cardiac structures, the automation of the shape averaging process, and, more importantly, the first human study on the variability of the cardiac fiber architecture.
    To summarize the main findings, the fiber orientations in human hearts were found to vary by about ±12 degrees; the helix angle spans from -41 degrees (±26 degrees) on the epicardium to +66 degrees (±15 degrees) on the endocardium, while the transverse angle spans from +9 degrees (±12 degrees) to +34 degrees (±29 degrees) across the myocardial wall. These findings are significant in cardiology, since the fiber architecture plays a key role in cardiac mechanical function and in electrophysiology. The second objective intends to capture large shape variability between complex anatomical structures, in particular between cerebral cortices, owing to their highly convoluted surfaces and their high anatomical and functional variability across individuals. The new method for surface correspondence, named FOCUSR, exploits spectral representations, since matching is easier in the spectral domain than in the conventional Euclidean space. In its simplest form, FOCUSR improves current spectral approaches by refining spectral representations with a nonrigid alignment; however, its full power is demonstrated when additional features are used during matching. For instance, the results showed that sulcal depth and cortical curvature significantly improve the accuracy of cortical surface matching. Finally, the third objective is to improve image registration for organs with high inter-subject variability or undergoing very large deformations, such as the heart. The new approach brought by the spectral matching technique improves conventional image registration methods. Indeed, spectral representations, which capture global geometric similarities and large deformations between different shapes, can be used to overcome a major limitation of current registration methods, which are guided by local forces and restricted to small deformations. The new algorithm, named Spectral Demons, can capture very large and complex deformations between images, and can additionally be adapted to other settings, such as a groupwise configuration. This results in a complete framework for atlas construction, named Groupwise Spectral Demons, where the average shape is computed during the registration process rather than in sequential steps. The achievement of these three specific objectives advanced the state of the art in spectral matching and atlas construction, enabling the registration of organs with significant shape variability. Overall, the investigation of these different strategies provides new contributions on how to find and exploit global descriptions of images and surfaces. From a global perspective, the objectives establish a link between: a) the first set of tools, which highlights the challenges in registering images with very large deformations; b) the second set of tools, which captures very large deformations between surfaces but is not applicable to images; and c) the third set of tools, which returns to images and allows a natural construction of atlases from images with very large deformations. There are, however, several remaining limitations: for instance, partial data (truncated or occluded) is currently not supported by the new tools, and the strategy for computing and using spectral representations still leaves room for improvement.
    This thesis gives new perspectives in cardiac imaging and neuroimaging, yet the new tools remain general enough for virtually any application that uses surface or image registration. It is recommended to research additional links with graph-based segmentation methods, which may lead to a complete atlas construction framework where segmentation, registration and shape averaging are all interlinked. It is also recommended to pursue research on building better cardiac electromechanical models from the findings of this thesis. In sum, the new tools provide new grounds for the research and application of shape normalization, which may potentially impact diagnosis as well as the planning and performance of medical interventions.
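    The common ingredient of FOCUSR and Spectral Demons is the spectral representation: the low-frequency eigenvectors of a graph Laplacian built on the shape, which serve as deformation-robust coordinates for matching. The sketch below computes such an embedding for a mesh given as an edge list. The uniform edge weights and the dense eigensolver are simplifications of this sketch; the actual methods use geometry- and feature-dependent weights and sparse solvers, and must also handle the sign and ordering ambiguity of eigenvectors.

```python
import numpy as np

def spectral_embedding(edges, n_vertices, k=5):
    """Embed a mesh (given as an undirected edge list) into the space
    spanned by the k lowest nonconstant eigenvectors of its graph
    Laplacian; similar shapes get similar embeddings, which is what
    spectral matching methods align."""
    A = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0            # uniform weights for simplicity
    L = np.diag(A.sum(axis=1)) - A         # combinatorial graph Laplacian
    vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    return vecs[:, 1:k + 1]                # drop the constant eigenvector
```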

    Geographic Information Science (GIScience) and Geospatial Approaches for the Analysis of Historical Visual Sources and Cartographic Material

    This book focuses on the use of GIScience in conjunction with historical visual sources to reconstruct past scenarios. The themes, knowledge gained and methodologies might be of interest to a variety of scholars from the social sciences and humanities.