
    Unsupervised segmentation of nonstationary images with evidential Markov fields

    Frequently used in statistical image processing, hidden Markov fields (HMF) are powerful tools that can produce remarkable results. This quality is mainly due to the model's ability to account for the spatial dependencies of the random variables, even when they are very numerous, possibly exceeding one million. In such a model the hidden field X is assumed to be Markovian and must be estimated from the observed field Y. Such processing is possible because X is Markovian conditionally on Y. This model was later generalised to pairwise Markov fields (PMF), in which the pair (X, Y) is directly assumed to be Markovian; these offer the same processing possibilities as HMF while modelling the noise more accurately, which in particular allows textures to be better taken into account. Pairwise Markov fields were subsequently generalised to triplet Markov fields (TMF), in which the distribution of the pair (X, Y) is a marginal distribution of a triplet Markov field T = (X, U, Y), where U is an auxiliary field. Furthermore, the theory of evidence can, in certain situations, improve the results obtained by Bayesian processing. The aim of this article is to address the problem of unsupervised segmentation of nonstationary images using evidential Markov fields (EMF), exploiting, in particular, an existing link between EMF and TMF.
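
    To make the baseline concrete, the following is a minimal sketch, under assumed Gaussian class parameters and a simple Potts prior rather than the authors' model, of ICM-style segmentation of an observed field Y into a hidden label field X, i.e. the classical hidden Markov field setting that the pairwise, triplet and evidential models generalise.

    import numpy as np

    def icm_segment(y, means, sigmas, beta=1.0, n_iter=10):
        """ICM segmentation of a 2-D image y with Gaussian classes and a Potts prior."""
        k, (h, w) = len(means), y.shape
        # Per-class Gaussian log-likelihood of each pixel.
        ll = np.stack([-0.5 * ((y - m) / s) ** 2 - np.log(s)
                       for m, s in zip(means, sigmas)], axis=-1)
        x = ll.argmax(axis=-1)          # initial labels: likelihood term only
        for _ in range(n_iter):
            for i in range(h):
                for j in range(w):
                    # Labels of the 4-connected neighbours, for the Potts term.
                    neigh = [x[i + di, j + dj]
                             for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                             if 0 <= i + di < h and 0 <= j + dj < w]
                    same = np.array([neigh.count(c) for c in range(k)])
                    # Local energy: data term minus Potts agreement bonus.
                    x[i, j] = np.argmin(-ll[i, j] - beta * same)
        return x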

    Making use of partial knowledge about hidden states in HMMs: an approach based on belief functions

    This paper addresses the problem of parameter estimation and state prediction in Hidden Markov Models (HMMs) based on observed outputs and partial knowledge of hidden states expressed in the belief function framework. The usual HMM model is recovered when the belief functions are vacuous. Parameters are learnt using the Evidential Expectation-Maximization algorithm, a recently introduced variant of the Expectation-Maximization algorithm for maximum likelihood estimation based on uncertain data. The inference problem, i.e., finding the most probable sequence of states based on observed outputs and partial knowledge of states, is also addressed. Experimental results demonstrate that partial information about hidden states, when available, may substantially improve the estimation and prediction performance.
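
    To make the mechanism concrete, here is a minimal sketch, based on an assumption made here about how partial state knowledge could be injected rather than on the paper's E2M derivation: a discrete-HMM forward pass in which partial knowledge of the hidden states enters as per-time plausibility weights, with vacuous (all-ones) weights recovering the usual HMM recursion, mirroring the recovery of the standard model mentioned above.

    import numpy as np

    def forward(pi, A, B, obs, pl=None):
        """pi: (K,) initial probs, A: (K, K) transitions, B: (K, M) emissions,
        obs: (T,) observed symbols, pl: (T, K) plausibility of each hidden state."""
        T, K = len(obs), len(pi)
        if pl is None:
            pl = np.ones((T, K))          # vacuous belief: no state knowledge
        alpha = np.zeros((T, K))
        alpha[0] = pi * B[:, obs[0]] * pl[0]
        alpha[0] /= alpha[0].sum()
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]] * pl[t]
            alpha[t] /= alpha[t].sum()    # normalise to avoid underflow
        return alpha                      # filtered state probabilities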

    Unsupervised SAR Image Segmentation Based on a Hierarchical TMF Model in the Discrete Wavelet Domain for Sea Area Detection

    Unsupervised synthetic aperture radar (SAR) image segmentation is a fundamental preliminary processing step required for sea area detection in military applications. The purpose of this step is to classify large image areas into different segments to assist with identification of the sea area and the ship target within the image. The recently proposed triplet Markov field (TMF) model has been successfully used for segmentation of nonstationary SAR images. This letter presents a hierarchical TMF model in the discrete wavelet domain for unsupervised SAR image segmentation for sea area detection, which we have named the wavelet hierarchical TMF (WHTMF) model. The WHTMF model can precisely capture the global and local image characteristics in the two-pass computation of the posterior distribution. The multiscale likelihood and the multiscale energy function are constructed to capture the intrascale and interscale dependencies in a random field (X,U). To model the SAR data related to radar backscattering sources, the Gaussian distribution is utilized. The effectiveness of the proposed model for SAR image segmentation is evaluated using synthesized and real SAR data.
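
    The multiscale likelihood can be illustrated with a minimal sketch; the pyramid below uses plain 2x2 block averaging rather than a true wavelet transform, and the Gaussian class parameters (means, sigmas) are assumed inputs, so this is only an analogue of the WHTMF construction, not the model itself.

    import numpy as np

    def pyramid(img, levels=3):
        """Dyadic multiscale representation via 2x2 block averaging."""
        pyr = [np.asarray(img, dtype=float)]
        for _ in range(levels - 1):
            h, w = pyr[-1].shape
            p = pyr[-1][:h - h % 2, :w - w % 2]       # crop to even size
            pyr.append(p.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
        return pyr

    def multiscale_loglik(img, means, sigmas, levels=3):
        """Per-scale, per-class Gaussian log-likelihood maps of a SAR image."""
        out = []
        for p in pyramid(img, levels):
            ll = np.stack([-0.5 * ((p - m) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
                           for m, s in zip(means, sigmas)], axis=-1)
            out.append(ll)
        return out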

    Variational methods and their applications to computer vision

    Many computer vision applications such as image segmentation can be formulated in a "variational" way as energy minimization problems. Unfortunately, the computational task of minimizing these energies is usually difficult, as it generally involves nonconvex functions in a space with thousands of dimensions, and the associated combinatorial problems are often NP-hard. Furthermore, they are ill-posed inverse problems and therefore extremely sensitive to perturbations (e.g. noise). For this reason, in order to compute a physically reliable approximation from given noisy data, appropriate regularizations, which require complex computations, must be incorporated into the mathematical model. The main aim of this work is to describe variational segmentation methods that are particularly effective for curvilinear structures. Due to their complex geometry, classical regularization techniques cannot be adopted because they lead to the loss of most low-contrast details. In contrast, the proposed method not only better preserves curvilinear structures, but also reconnects parts that may have been disconnected by noise. Moreover, it is easily extensible to graphs and can be successfully applied to different types of data such as medical imagery (e.g. vessels, heart coronaries), material samples (e.g. concrete) and satellite signals (e.g. streets, rivers). In particular, we will show results and performance for an implementation targeting a new generation of High Performance Computing (HPC) architectures in which different types of coprocessors cooperate. The dataset involved consists of approximately 200 images of cracks, captured in three different tunnels by a robotic machine designed for the European ROBO-SPECT project.
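
    As a reference point for the variational formulation described above, here is a minimal sketch of the classical baseline being contrasted: gradient descent on a smoothed total-variation denoising energy E(u) = ||u - f||^2 + lam * TV(u). The step size, smoothing constant eps and weight lam are assumed values, and this is the kind of generic regularization said to lose curvilinear detail, not the method proposed in the work.

    import numpy as np

    def tv_denoise(f, lam=0.1, step=0.1, eps=1e-3, n_iter=200):
        """Gradient descent on the smoothed TV energy ||u - f||^2 + lam * TV(u)."""
        f = np.asarray(f, dtype=float)
        u = f.copy()
        for _ in range(n_iter):
            # Forward differences for the image gradient.
            ux = np.diff(u, axis=1, append=u[:, -1:])
            uy = np.diff(u, axis=0, append=u[-1:, :])
            mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
            # Divergence of the normalised gradient (backward differences,
            # with wrap-around at the border for brevity).
            px, py = ux / mag, uy / mag
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            # Gradient step: data-fidelity term plus smoothed TV term.
            u -= step * (2 * (u - f) - lam * div)
        return u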

    Selective subtraction for handheld cameras

    Background subtraction techniques model the background of the scene using the stationarity property and classify the scene into two classes, namely foreground and background. In doing so, most moving objects become foreground indiscriminately, except in dynamic scenes (such as those with some waving tree leaves, water ripples, or a water fountain), which are typically 'learned' as part of the background using a large training set of video data. We introduce a novel concept of background as the objects other than the foreground, which may include moving objects in the scene that cannot be learned from a training set because they occur only irregularly and sporadically, e.g. a walking person. We propose a 'selective subtraction' method as an alternative to standard background subtraction, and show that a reference plane in a scene viewed by two cameras can be used as the decision boundary between foreground and background. In our definition, the foreground may actually occur behind a moving object. Furthermore, the reference plane can be selected in a very flexible manner, using for example the actual moving objects in the scene, if needed. We extend this idea to allow multiple reference planes, resulting in multiple foregrounds or backgrounds. We present a diverse set of examples to show that: 1) the technique performs better than standard background subtraction techniques without the need for training, camera calibration, disparity map estimation, or special camera configurations; 2) it is potentially more powerful than standard methods because of its flexibility: it makes it possible to select in real time what to filter out as background, regardless of whether the object is moving or not, or whether it is a rare event or a frequent one. Furthermore, we show that this technique is relatively immune to camera motion and performs well for handheld cameras.
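
    A minimal sketch of the plane-as-decision-boundary idea, under assumed geometry that the abstract does not spell out (a known reference-plane homography H from view 1 to view 2 and the epipole e in view 2): the residual, plane-plus-parallax displacement of a matched point, projected onto the direction towards the epipole, changes sign across the reference plane, so its sign can separate foreground from background. The sign convention depends on the camera configuration, so in practice it would be fixed using one point of known side; this is not the authors' full pipeline.

    import numpy as np

    def signed_parallax(H, e, p1, p2):
        """H: 3x3 reference-plane homography (view 1 -> view 2), e: epipole in
        view 2, p1, p2: matched pixel (x, y) in views 1 and 2."""
        q = H @ np.array([p1[0], p1[1], 1.0])
        q = q[:2] / q[2]                              # where p1 would land if on the plane
        parallax = np.asarray(p2, dtype=float) - q    # residual displacement
        towards_epipole = np.asarray(e, dtype=float) - q
        # Positive and negative values correspond to opposite sides of the plane.
        return float(parallax @ towards_epipole)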

    Generative Models for Preprocessing of Hospital Brain Scans

    I will in this thesis present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets, of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis will present a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I will demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I will then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn these using backpropagation. I show that this model is robust to sequence and scanner variability. Finally, I show examples of fitting a population-level, generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.
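
    As background for the widely used segmentation technique that the thesis extends, here is a minimal sketch of EM for a K-class Gaussian mixture over voxel intensities; the single-channel setting and random initialisation are assumptions made here, and the thesis's actual contributions (missing-modality handling and the convolutional-network MRF prior) are not represented.

    import numpy as np

    def gmm_em(y, k=3, n_iter=50):
        """EM for a k-class Gaussian mixture over a 1-D array of voxel intensities.
        Returns the (N, k) responsibilities, i.e. a soft tissue segmentation."""
        y = np.asarray(y, dtype=float)
        rng = np.random.default_rng(0)
        mu = rng.choice(y, k)                       # random initial class means
        var = np.full(k, y.var())
        w = np.full(k, 1.0 / k)
        for _ in range(n_iter):
            # E-step: posterior class responsibilities per voxel.
            lik = np.exp(-0.5 * (y[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            r = w * lik
            r /= r.sum(axis=1, keepdims=True)
            # M-step: update mixing weights, means and variances.
            nk = r.sum(axis=0)
            w = nk / len(y)
            mu = (r * y[:, None]).sum(axis=0) / nk
            var = (r * (y[:, None] - mu) ** 2).sum(axis=0) / nk
        return r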