
    A Label Field Fusion Bayesian Model and Its Penalized Maximum Rand Estimator for Image Segmentation


    Estimating the granularity coefficient of a Potts-Markov random field within an MCMC algorithm

    This paper addresses the problem of estimating the Potts parameter β jointly with the unknown parameters of a Bayesian model within a Markov chain Monte Carlo (MCMC) algorithm. Standard MCMC methods cannot be applied to this problem because performing inference on β requires computing the intractable normalizing constant of the Potts model. In the proposed MCMC method, the estimation of β is conducted using a likelihood-free Metropolis-Hastings algorithm. Experimental results obtained for synthetic data show that estimating β jointly with the other unknown parameters leads to estimation results that are as good as those obtained with the actual value of β. On the other hand, assuming that the value of β is known can degrade estimation performance significantly if this value is incorrect. To illustrate the interest of this method, the proposed algorithm is successfully applied to real bidimensional SAR and tridimensional ultrasound images.
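    The likelihood-free step can be illustrated with a toy sketch: a small 2-state Potts field whose sufficient statistic (the number of agreeing neighbor pairs) is matched against auxiliary fields simulated at each proposed β. This is only illustrative; it uses a crude greedy ABC-style accept rule rather than the paper's exact Metropolis-Hastings kernel, and all function names and settings are assumptions.

```python
import numpy as np

def neighbor_agreement(z):
    """Sufficient statistic of the Potts model: number of equal
    horizontal/vertical neighbor pairs in the label field z."""
    return np.sum(z[:, 1:] == z[:, :-1]) + np.sum(z[1:, :] == z[:-1, :])

def gibbs_potts(shape, beta, n_states, sweeps, rng):
    """Simulate a Potts field at granularity parameter beta by Gibbs sampling."""
    z = rng.integers(n_states, size=shape)
    H, W = shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                # count same-label neighbors for each candidate state
                counts = np.zeros(n_states)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        counts[z[ni, nj]] += 1
                p = np.exp(beta * counts)
                z[i, j] = rng.choice(n_states, p=p / p.sum())
    return z

def abc_mh_beta(z_obs, n_states=2, n_iter=200, prop_sd=0.15,
                beta_max=2.0, sweeps=2, rng=None):
    """Greedy ABC-style stand-in for the paper's likelihood-free MH step:
    each proposed beta is scored by simulating an auxiliary field at that
    beta and comparing its sufficient statistic with that of z_obs."""
    rng = np.random.default_rng() if rng is None else rng
    s_obs = neighbor_agreement(z_obs)
    beta = beta_max / 2
    s_cur = neighbor_agreement(
        gibbs_potts(z_obs.shape, beta, n_states, sweeps, rng))
    chain = []
    for _ in range(n_iter):
        beta_new = beta + prop_sd * rng.standard_normal()
        if 0.0 < beta_new < beta_max:          # uniform prior support
            s_new = neighbor_agreement(
                gibbs_potts(z_obs.shape, beta_new, n_states, sweeps, rng))
            # keep the proposal if its auxiliary statistic matches the data better
            if abs(s_new - s_obs) <= abs(s_cur - s_obs):
                beta, s_cur = beta_new, s_new
        chain.append(beta)
    return np.array(chain)
```

    On a field simulated at a known β, the chain concentrates on values whose auxiliary statistics match the data; the greedy accept rule trades the correct stationary distribution for brevity.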

    AN OVERVIEW OF IMAGE SEGMENTATION ALGORITHMS

    Image segmentation remains a challenging problem even after four decades of research. Research on image segmentation is currently conducted at three levels: the development of segmentation methods, the evaluation of segmentation algorithms and their performance, and the study of these evaluation methods. Hundreds of techniques have been proposed for the segmentation of natural images, noisy images, medical images, etc. Currently, most researchers evaluate segmentation algorithms against the ground truth of the Berkeley Segmentation Dataset (BSD). In this paper, an overview of various segmentation algorithms is presented. The discussion is mainly based on the soft computing approaches used for segmenting images with and without noise, and on the parameters used for evaluating these algorithms. Some of the techniques covered are the Markov Random Field (MRF) model, neural networks, clustering, particle swarm optimization, the fuzzy logic approach, and various combinations of these soft techniques.
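    Among the soft-computing families listed above, clustering is the easiest to make concrete. Below is a minimal intensity-based k-means sketch in pure NumPy; it is illustrative only and does not correspond to any specific algorithm in the survey.

```python
import numpy as np

def kmeans_segment(image, k=3, n_iter=20, rng=None):
    """Minimal k-means clustering of pixel intensities: a basic instance
    of the clustering family of segmentation methods."""
    rng = np.random.default_rng() if rng is None else rng
    pixels = image.reshape(-1, 1).astype(float)
    # initialize centers from randomly chosen pixels
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each pixel to its nearest center
        labels = np.argmin(np.abs(pixels - centers.T), axis=1)
        # recompute each center as the mean of its assigned pixels
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels.reshape(image.shape)
```

    Real segmenters add spatial coherence (e.g. an MRF prior) on top of this purely spectral grouping, which is exactly the gap the surveyed hybrid methods address.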

    Layered And Feature Based Image Segmentation Using Vector Filtering

    A sensor is a device that reads an attribute and converts it into a signal that can be readily interpreted by an observer or instrument. Sensors are used in everyday objects such as touch-sensitive elevator buttons and road traffic monitoring systems, and each sensor offers distinctive capabilities. Many techniques have been proposed for tracking the objects observed by sensors; techniques that combine information from diverse sensors are generally termed data fusion. Previous work addressed object tracking using the Multi-Phase Joint Segmentation-Registration (MP JSR) technique for layered images. Its drawbacks are that MP JSR cannot be applied to natural objects and that its object segmentation is inefficient. To overcome these issues, we present an efficient joint motion segmentation and registration framework with integrated layer-based and feature-based motion estimation for precise data fusion in real image sequences and tracking of objects of interest. Interest points are segmented with vector filtering using random samples of motion frames to derive candidate regions. An experimental evaluation is conducted on real image sequences to assess the effectiveness of data fusion using integrated layer- and feature-based image segmentation and registration of motion frames, in terms of inter-frame prediction, image layers, and image clarity.

    Contributions to Segmentation Fusion and Semantic Image Interpretation

    This thesis studies two complementary problems: the fusion of image segmentations and the semantic interpretation of images. First, we propose a set of algorithmic tools to improve the final result of the fusion operation. Image segmentation is a common preprocessing step that aims to simplify the representation of an image into significant, spatially coherent regions (also known as segments or superpixels) with similar attributes (such as coherent parts of objects or of the background). To this end, we propose a new segmentation-fusion method based on the Global Consistency Error (GCE) criterion. The GCE is an interesting perceptual metric that accounts for the multiscale nature of any image segmentation by measuring the extent to which one segmentation map can be viewed as a refinement of another. Secondly, we present two new approaches for merging multiple segmentations under several criteria, based on a very important concept of combinatorial optimization: multi-objective optimization. This resolution method, which seeks to optimize several objectives concurrently, has met with great success in many other fields. Thirdly, to better and automatically understand the various classes of a segmented image, we propose an original and reliable approach based on an energy-based model that infers the most likely classes using a set of similar segmentations (in the sense of a certain criterion) drawn from a learning database (with pre-interpreted classes) and a series of semantic likelihood (energy) terms.
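    The GCE criterion at the heart of the fusion method has a compact closed form over the contingency table of two label maps. The sketch below follows the standard definition of the Global Consistency Error; the fusion algorithm itself is not reproduced here.

```python
import numpy as np

def gce(seg_a, seg_b):
    """Global Consistency Error between two label maps: 0 means one
    segmentation is a perfect refinement of the other."""
    a = seg_a.ravel()
    b = seg_b.ravel()
    n = a.size
    # contingency table: table[i, j] = pixels with label i in seg_a and j in seg_b
    _, a_inv = np.unique(a, return_inverse=True)
    _, b_inv = np.unique(b, return_inverse=True)
    table = np.zeros((a_inv.max() + 1, b_inv.max() + 1))
    np.add.at(table, (a_inv, b_inv), 1)
    size_a = table.sum(axis=1, keepdims=True)   # region sizes in seg_a
    size_b = table.sum(axis=0, keepdims=True)   # region sizes in seg_b
    # summed local refinement error, in each direction
    e_ab = np.sum(table * (size_a - table) / size_a)
    e_ba = np.sum(table * (size_b - table) / size_b)
    return min(e_ab, e_ba) / n
```

    A value of 0 whenever one map refines the other is what makes the GCE well suited to comparing segmentations at different granularities, as the abstract notes.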

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in the realms of image understanding/analysis, computer vision, pattern recognition, remote sensing and medical imaging in recent years has been significantly augmented due to accelerated scientific advances made in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for facilitating meaningful segregation of 2-D/3-D image data across multiple modalities (color, remote-sensing and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data. To this effect, by using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels that contain higher gradient densities are included by the dynamic generation of segments as the algorithm progresses to generate an initial region map. Subsequently, texture modeling is performed and the obtained gradient, texture and intensity information along with the aforementioned initial partition map are used to perform a multivariate refinement procedure, to fuse groups with similar characteristics yielding the final output segmentation. Experimental results obtained in comparison to published state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery demonstrate the advantages of the proposed method. Furthermore, for the purpose of achieving improved computational efficiency we propose an extension of the aforementioned methodology in a multi-resolution framework, demonstrated on color images.
Finally, this research also encompasses a 3-D extension of the aforementioned algorithm demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
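    The edge-driven initialization described above (labeling connected groups of edge-free pixels while deferring high-gradient pixels to later refinement) can be sketched as follows. The central-difference gradient and the fixed threshold are simplifications standing in for the vector gradient detection technique; function names are illustrative.

```python
import numpy as np
from collections import deque

def gradient_magnitude(img):
    """Per-pixel gradient magnitude via central differences: a stand-in
    for the vector gradient detector used in the framework."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def initial_region_map(img, grad_thresh=1.0):
    """Group connected low-gradient pixels into labeled regions; pixels on
    edges (high gradient) are left unlabeled (-1) for later refinement."""
    flat = gradient_magnitude(img) < grad_thresh   # edge-free mask
    labels = np.full(img.shape, -1, dtype=int)
    current = 0
    H, W = img.shape
    for si in range(H):
        for sj in range(W):
            if flat[si, sj] and labels[si, sj] == -1:
                # breadth-first flood fill of one edge-free region
                queue = deque([(si, sj)])
                labels[si, sj] = current
                while queue:
                    i, j = queue.popleft()
                    for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                        if (0 <= ni < H and 0 <= nj < W and
                                flat[ni, nj] and labels[ni, nj] == -1):
                            labels[ni, nj] = current
                            queue.append((ni, nj))
                current += 1
    return labels
```

    In the full framework, the unlabeled high-gradient pixels would then be absorbed by dynamically generated segments, and the multivariate refinement stage would fuse similar regions using texture and intensity cues.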

    ADVANCED STATISTICAL LEARNING METHODS FOR HETEROGENEOUS MEDICAL IMAGING DATA

    Most neuro-related diseases and disabling diseases display significant heterogeneity at the imaging and clinical scales. Characterizing such heterogeneity could transform our understanding of the etiology of these conditions and inspire new approaches to urgently needed preventions, diagnoses, and treatments. However, existing statistical methods face major challenges in delineating such heterogeneity at the subject, group and study levels. To address these challenges, this work proposes several statistical learning methods for heterogeneous imaging data with different structures. First, we propose a dynamic spatial random effects model for longitudinal imaging datasets, which aims at characterizing both the imaging intensity progression and the temporal-spatial heterogeneity of diseased regions across subjects and time. The key components of the proposed model are a spatial random effects model and a dynamic conditional random field model. The proposed model can effectively detect the dynamic diseased regions in each patient and present a dynamic statistical disease mapping within each subpopulation of interest. Second, to address group-level heterogeneity in non-Euclidean data, we develop a penalized model-based clustering framework to cluster high-dimensional manifold data in symmetric spaces. Specifically, a mixture of geodesic factor analyzers is proposed, with mixing proportions determined through a logistic model and a Riemannian normal distribution in each component for data in symmetric spaces. Penalized likelihood approaches are used to perform variable selection. We apply the proposed model to the ADNI hippocampal surface data, where it shows excellent clustering performance and reveals meaningful clusters in the mixed population of controls and subjects with AD.
Finally, to account for the potential heterogeneity caused by unobserved environmental, demographic and technical factors, we treat the imaging data as functional responses and set up a surrogate variable analysis framework in functional linear models. A functional latent factor regression model is proposed. The confounding factors, and the bias of local linear estimators caused by them, can be estimated and removed using singular value decomposition on residuals. We further develop a test for linear hypotheses on the primary coefficient functions. Both simulation studies and an analysis of the ADNI hippocampal surface data are conducted to show the performance of the proposed method.
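    The residual-SVD idea behind the surrogate variable analysis step can be sketched in simplified matrix form, treating the functional responses as plain matrices and omitting the local linear smoothing and the hypothesis test; the function name and interface are assumptions.

```python
import numpy as np

def remove_latent_factors(Y, X, n_factors=1):
    """Sketch of the residual-SVD idea: regress responses Y
    (n_subjects x n_points) on observed covariates X (n_subjects x p),
    estimate hidden confounders from the top singular vectors of the
    residual matrix, and return Y with those components removed."""
    # ordinary least-squares fit of Y on X
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    # top singular vectors of the residuals approximate the latent factors
    U, s, Vt = np.linalg.svd(resid, full_matrices=False)
    low_rank = (U[:, :n_factors] * s[:n_factors]) @ Vt[:n_factors]
    return Y - low_rank
```

    When a confounder is orthogonal to the observed design, its contribution survives in the residuals as a low-rank component, which is exactly what the SVD isolates and subtracts.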