40 research outputs found

    Contributions à la fusion de segmentations et à l’interprétation sémantique d’images

    This thesis is dedicated to the study of two complementary problems: the fusion of image segmentations and the semantic interpretation of images. First, we propose a set of algorithmic tools to improve the final result of the fusion operation. Image segmentation is a common preprocessing step that aims to simplify the representation of an image into a set of significant and spatially coherent regions (also known as segments or superpixels) with similar attributes (such as coherent parts of objects or of the background). To this end, we propose a new segmentation-fusion method based on the Global Consistency Error (GCE) criterion, a perceptually meaningful metric that accounts for the multiscale nature of any image segmentation by measuring the extent to which one segmentation map can be viewed as a refinement of another. Second, we present two new approaches for fusing segmentations under several criteria, building on an important concept of combinatorial optimization: multi-objective optimization. This resolution strategy, which seeks to optimize several objectives concurrently, has met with great success in many other fields. Third, to better and automatically understand the various classes of a segmented image, we propose an original and reliable approach based on an energy-based model that infers the most likely classes using a set of similar segmentations (in the sense of a certain criterion) drawn from a training database (with pre-interpreted classes) and a set of semantic likelihood (energy) terms.
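    To make the GCE criterion above concrete, the following minimal Python sketch (our own illustration, not code from the thesis) computes the Global Consistency Error between two label maps from their joint label histogram; a value of 0 means one segmentation is a perfect refinement of the other.

```python
import numpy as np

def gce(seg_a, seg_b):
    """Global Consistency Error between two label maps of identical shape."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size

    # Joint histogram: joint[i, j] = number of pixels labelled i in A and j in B.
    _, ai = np.unique(a, return_inverse=True)
    _, bi = np.unique(b, return_inverse=True)
    joint = np.zeros((ai.max() + 1, bi.max() + 1))
    np.add.at(joint, (ai, bi), 1)

    size_a = joint.sum(axis=1, keepdims=True)   # region sizes in A
    size_b = joint.sum(axis=0, keepdims=True)   # region sizes in B

    # Local refinement error summed over all pixels, in each direction:
    # for a pixel whose regions are R_A (in A) and R_B (in B), E = |R_A \ R_B| / |R_A|.
    e_ab = np.sum(joint * (size_a - joint) / size_a)
    e_ba = np.sum(joint * (size_b - joint) / size_b)

    # GCE keeps the more forgiving direction, so a pure refinement costs 0.
    return min(e_ab, e_ba) / n

# Example: a two-region map is a refinement of a single-region map, so GCE = 0.
fine   = np.array([[0, 0, 1], [0, 1, 1]])
coarse = np.zeros((2, 3), dtype=int)
print(gce(fine, coarse))  # 0.0
```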

    3D Shape Modeling Using High Level Descriptors


    Consensus ou fusion de segmentation pour quelques applications de détection ou de classification en imagerie

    Recently, true distance measures in the sense of a given criterion (and with good asymptotic properties) have been introduced between data partitions (clusterings), including spatially indexed data such as segmented images. Building on these metrics, the principle of the average (or consensus) segmentation was proposed in image processing as the solution of an optimization problem and as a simple and effective way to improve the final segmentation or classification result, obtained by averaging (or fusing) different segmentations of the same scene that are roughly estimated by several simple segmentation algorithms (or by the same algorithm with different internal parameters). This principle, which can be viewed as a denoising of high-level abstract data, has recently proved to be an effective and highly parallelizable alternative to methods relying on ever more complex and computationally expensive segmentation models. The principle of a distance between segmentations, and of averaging or fusing segmentations in the sense of a given criterion, can be exploited, directly or with little adaptation, by any algorithm or method in digital imaging whose data can be substituted with segmented images.
    This thesis aims to demonstrate this assertion and to present original applications in several domains: visualization and indexing in large image databases based on the segmented content of each image, rather than on the usual color and texture descriptors; image processing, to substantially and easily improve the performance of motion-detection methods in an image sequence; and, finally, the analysis and classification of medical images, with an application for the automatic detection and quantification of Alzheimer's disease from magnetic resonance images of the brain.
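    The distance-based view of consensus segmentation sketched above can be illustrated with a deliberately simple Python example (our own, using the Variation of Information as the distance between partitions): instead of solving the full optimization problem studied in the thesis, it merely returns the "medoid" of the input segmentations, i.e., the one minimizing the total distance to all the others.

```python
import numpy as np

def variation_of_information(seg_x, seg_y):
    """Variation of Information between two label maps of identical shape."""
    x = np.asarray(seg_x).ravel()
    y = np.asarray(seg_y).ravel()
    n = x.size
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1)
    p = joint / n                        # joint label distribution
    px = p.sum(axis=1, keepdims=True)    # marginals of the first map
    py = p.sum(axis=0, keepdims=True)    # marginals of the second map
    nz = p > 0
    # VI = H(X|Y) + H(Y|X) = -sum_ij p_ij * log(p_ij^2 / (p_i. * p_.j))
    return float(-np.sum(p[nz] * np.log(p[nz] ** 2 / (px * py)[nz])))

def medoid_consensus(segmentations):
    """Return the input segmentation with the smallest total VI to the others.

    This medoid is only a cheap stand-in for the true average partition that
    the thesis optimizes, but it illustrates the distance-based formulation.
    """
    k = len(segmentations)
    dist = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            d = variation_of_information(segmentations[i], segmentations[j])
            dist[i, j] = dist[j, i] = d
    return segmentations[int(np.argmin(dist.sum(axis=1)))]
```

    A full consensus method would then refine this medoid, for example by local label moves that further decrease the average distance to the input segmentations.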

    Visual Prototyping of Cloth

    Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture appearance models of cloth, especially when considering computer-aided design of cloth. Previous methods can be used to produce highly realistic images; however, possibilities for cloth editing are either restricted or require the measurement of large material databases to capture all variations of cloth samples. We propose a pipeline for designing the appearance of cloth directly based on those elements that can be changed within the production process. These are optical properties of fibers, geometrical properties of yarns, and compositional elements such as weave patterns. We introduce a geometric yarn model, integrating state-of-the-art textile research. We further present an approach to reverse engineer cloth and estimate parameters for a procedural cloth model from single images. This includes the automatic estimation of yarn paths, yarn widths, their variation, and a weave pattern. We demonstrate that we are able to match the appearance of original cloth samples in an input photograph for several examples. Parameters of our model are fully editable, enabling intuitive appearance design. Unfortunately, such explicit fiber-based models can only be used to render small cloth samples due to large storage requirements. Recently, bidirectional texture functions (BTFs) have become popular for efficient photo-realistic rendering of materials. We present a rendering approach combining the strength of a procedural model of micro-geometry with the efficiency of BTFs. We propose a method for the computation of synthetic BTFs using Monte Carlo path tracing of micro-geometry. We observe that BTFs usually consist of many similar apparent bidirectional reflectance distribution functions (ABRDFs). By exploiting structural self-similarity, we can reduce rendering times by one order of magnitude. This is done in a process we call non-local image reconstruction, which has been inspired by non-local means filtering. Our results indicate that synthesizing BTFs is highly practical and may currently only take a few minutes for small BTFs. We finally propose a novel and general approach to physically accurate rendering of large cloth samples. By using a statistical volumetric model that approximates the distribution of yarn fibers, a prohibitively costly explicit geometric representation is avoided. As a result, accurate rendering of even large pieces of fabric becomes practical without sacrificing much generality compared to fiber-based techniques.
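    The observation that a BTF consists of many similar per-texel ABRDFs can be illustrated with a toy Python sketch (our own, and deliberately much cruder than the paper's non-local reconstruction): cluster the ABRDF rows of a BTF matrix and let every texel reuse its cluster representative, so that only a small set of ABRDFs has to be rendered accurately.

```python
import numpy as np
from sklearn.cluster import KMeans

def compress_btf_by_abrdf_similarity(btf, n_clusters=64):
    """Toy exploitation of ABRDF self-similarity in a BTF.

    `btf` is assumed to have shape (n_texels, n_lightview_samples): each row is
    one texel's apparent BRDF sampled over all light/view directions. Similar
    rows are clustered, and every texel reuses its cluster's mean ABRDF.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(btf)
    representatives = np.stack([btf[labels == c].mean(axis=0)
                                for c in range(n_clusters)])
    reconstructed = representatives[labels]   # broadcast representatives back to all texels
    return reconstructed, labels
```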

    The computer network faults classification using a novel hybrid classifier


    Beyond Quantity: Research with Subsymbolic AI

    How do artificial neural networks and other forms of artificial intelligence interfere with methods and practices in the sciences? Which interdisciplinary epistemological challenges arise when we think about the use of AI beyond its dependency on big data? Not only the natural sciences, but also the social sciences and the humanities seem to be increasingly affected by current approaches to subsymbolic AI, which master problems of quality (fuzziness, uncertainty) in a hitherto unknown way. But what are the conditions, implications, and effects of these (potential) epistemic transformations, and how must research on AI be configured to address them adequately?

    Automated atlas-based segmentation of brain structures in MR images
