Gaussian Mixture Model based Spatial Information Concept for Image Segmentation
Segmentation of images has found widespread application in image recognition systems. Over the last two decades, there has been growing research interest in model-based techniques. Among these, the standard Gaussian mixture model (GMM) is a well-known method for image segmentation. The model assumes a common prior distribution, which generates the pixel labels independently. Moreover, the spatial relationship between neighboring pixels is not taken into account by the standard GMM, so its segmentation results are sensitive to noise. To reduce this sensitivity, Markov Random Field (MRF) models provide a powerful way to account for spatial dependencies between image pixels. However, their main drawback is that they are computationally expensive to implement. Based on these considerations, in the first part of this thesis (Chapter 4) we propose an extension of the standard GMM for image segmentation that uses a novel approach to incorporate the spatial relationships between neighboring pixels into the standard GMM. The proposed model is easy to implement and, compared with existing MRF models, requires fewer parameters. We also propose a new gradient-based method to estimate the model parameters by minimizing an upper bound on the negative log-likelihood of the data. Experimental results obtained on noisy synthetic and real-world grayscale images demonstrate the robustness, accuracy and effectiveness of the proposed model for image segmentation. In the final part of this thesis (Chapter 5), another way to incorporate spatial information between neighboring pixels into the GMM, based on MRF, is proposed. In comparison to other mixture models that are complex and computationally expensive, the proposed method is robust and fast to implement.
In mixture models based on MRF, the M-step of the EM algorithm cannot be applied directly to the prior distribution to maximize the log-likelihood with respect to the corresponding parameters. Compared with these models, our proposed method applies the EM algorithm directly to optimize the parameters, which makes it much simpler. Finally, our approach is used to segment many images with excellent results.
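The baseline this thesis extends can be made concrete with a short sketch. The code below implements only the *standard* GMM described above (a common, pixel-independent prior and no spatial term), fitted with closed-form EM on grayscale intensities; it is not the spatially extended model the thesis proposes, and the function name and quantile initialization are my own choices for illustration.

```python
import numpy as np

def gmm_em_segment(pixels, k=2, iters=50):
    """Segment a grayscale image with a standard 1-D Gaussian mixture.

    The common prior generates pixel labels independently, so no
    spatial relationship between neighbors is modeled -- this is the
    noise-sensitive baseline, not the thesis's spatial extension.
    """
    x = pixels.ravel().astype(float)
    # Deterministic initialization: spread means over intensity quantiles.
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each pixel.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: closed-form updates of weights, means and variances.
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    labels = resp.argmax(axis=1).reshape(pixels.shape)
    return labels, mu
```

On a noisy two-intensity image this recovers the two regions; adding pixel-level noise flips isolated labels, which is exactly the sensitivity the spatial models above are designed to remove.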
Unsupervised amplitude and texture based classification of SAR images with multinomial latent model
We combine both amplitude and texture statistics of Synthetic Aperture Radar (SAR) images for classification purposes. We use the Nakagami density to model the class amplitudes and a non-Gaussian Markov Random Field (MRF) texture model with t-distributed regression error to model the textures of the classes. A non-stationary Multinomial Logistic (MnL) latent class label model is used as the mixture density to obtain spatially smooth class segments. The Classification Expectation-Maximization (CEM) algorithm is used to estimate the class parameters and to classify the pixels. We resort to the Integrated Classification Likelihood (ICL) criterion to determine the number of classes in the model. We obtain classification results for water, land and urban areas, in both supervised and unsupervised cases, on TerraSAR-X as well as COSMO-SkyMed data.
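The full method combines the Nakagami amplitude densities with the MRF texture model and the MnL label prior inside CEM; as a minimal sketch of just the amplitude component, the snippet below fits a Nakagami density per class with SciPy and assigns samples by maximum likelihood. The function names are illustrative, and the supervised per-class fit stands in for the CEM iteration the paper actually uses.

```python
import numpy as np
from scipy.stats import nakagami

def fit_amplitude_classes(samples_per_class):
    """Fit a Nakagami density to the amplitude samples of each class.

    Only the amplitude half of the model; the paper's full method also
    uses an MRF texture model and an MnL spatial label prior.
    """
    models = []
    for s in samples_per_class:
        # floc=0 fixes the location: SAR amplitudes are non-negative.
        nu, loc, scale = nakagami.fit(s, floc=0)
        models.append((nu, loc, scale))
    return models

def classify_amplitude(x, models):
    """Assign each amplitude to the class with the highest Nakagami
    log-likelihood (no spatial smoothing)."""
    ll = np.stack([nakagami.logpdf(x, nu, loc=loc, scale=scale)
                   for nu, loc, scale in models])
    return ll.argmax(axis=0)
```

With well-separated class amplitudes this pixelwise rule already classifies most samples correctly; the MnL prior in the paper then smooths the remaining speckle-induced errors spatially.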
Pattern Recognition
Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion and applications, among others. The signals processed are commonly one-, two- or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small degree of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented work. The authors of these 25 works present and advocate recent achievements of their research in the field of pattern recognition.
Unsupervised amplitude and texture classification of SAR images with multinomial latent model
We combine both amplitude and texture statistics of Synthetic Aperture Radar (SAR) images for model-based classification purposes. In a finite mixture model, we bring together Nakagami densities to model the class amplitudes and a 2D auto-regressive texture model with t-distributed regression error to model the textures of the classes. A non-stationary Multinomial Logistic (MnL) latent class label model is used as the mixture density to obtain spatially smooth class segments. The Classification Expectation-Maximization (CEM) algorithm is used to estimate the class parameters and to classify the pixels. We resort to the Integrated Classification Likelihood (ICL) criterion to determine the number of classes in the model. We present our results on the classification of land covers obtained in both supervised and unsupervised cases by processing TerraSAR-X as well as COSMO-SkyMed data.
Simulation of sea-state sequences
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The present PhD study, in its first part, uses artificial neural networks (ANNs), an optimization technique called simulated annealing, and statistics to simulate the significant wave height (Hs) and mean zero-up-crossing period of 3-hourly sea-states at a location in the North East Pacific, using a proposed distribution called the hepta-parameter spline distribution as the conditional distribution of Hs or the period given some inputs. Two different seven-network sets of ANNs for the simulation and prediction of Hs and the period were trained using 20 years of observed values. The preceding values of Hs and the period were the most important inputs given to the networks, but the starting day of the simulated period was also necessary; the code replaced the day with the corresponding time and season. The networks were trained by a simulated annealing algorithm, and the outputs of the two sets of networks were used to calculate the parameters of the probability density function (pdf) of the proposed hepta-parameter distribution. After the calculation of the seven parameters of the pdf from the network outputs, the Hs and period of the future sea-state are predicted by generating random numbers from the corresponding pdf.
In another part of the thesis, vertical piles have been studied with the goal of identifying the range of sea-states suitable for safe pile-driving operations. The pile configuration, including the non-linear foundation and the gap between the pile and the pile sleeve shims, was modeled using the finite element analysis facilities within ABAQUS. Dynamic analyses of the system were performed for a sea-state characterized by Hs and the mean period, modeled as a combination of several wave components. A table of safe and unsafe sea-states was generated by repeating the analysis for various sea-states. If the prediction for a particular sea-state is repeated N times, of which n times prove to be safe, then the sea-state can be said to be safe with a probability of 100(n/N)%.
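The 100(n/N) rule above is a plain Monte Carlo frequency estimate; a minimal sketch (the function name is my own) makes the arithmetic explicit:

```python
def safe_probability(outcomes):
    """Estimate the probability (in %) that a sea-state is safe for
    pile driving: `outcomes` holds one boolean per repeated dynamic
    analysis, and the estimate is 100 * n / N as stated above."""
    n = sum(1 for safe in outcomes if safe)
    return 100.0 * n / len(outcomes)
```

For example, three safe results out of four repeated analyses give an estimated 75% probability that the sea-state is safe.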
The last part of the thesis deals with Hs return values. The return value is a widely used measure of wave extremes and plays an important role in determining the design wave used in the design of maritime structures. In this part, the Hs return value was calculated, demonstrating another application of the above simulation of future 3-hourly Hs values. The maxima method for calculating return values was applied in a way that avoids the conventional need for unrealistic assumptions. The significant wave height return value has also been calculated using the convolution concept, from a model presented by Anderson et al. (2001).
Maximum likelihood estimation of robust constrained Gaussian mixture models
Ankara: The Department of Electrical and Electronics Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013. Thesis (Ph.D.) -- Bilkent University, 2013. Includes bibliographical references (leaves 155-170). Arı, Çağlar, Ph.D.
Density estimation using Gaussian mixture models presents a fundamental trade-off between the flexibility of the model and its sensitivity to unwanted/unmodeled data points in the data set. The expectation-maximization (EM) algorithm used to estimate the parameters of Gaussian mixture models is prone to local optima due to the nonconvexity of the problem and improper selection of the parameterization. We propose a novel modeling framework, three different parameterizations and novel algorithms for the constrained Gaussian mixture density estimation problem, based on the expectation-maximization algorithm, convex duality theory and stochastic search algorithms. We propose a new modeling framework called Constrained Gaussian Mixture Models (CGMM) that incorporates prior information into the density estimation problem in the form of convex constraints on the model parameters. In this context, we consider two different parameterizations, where the first set of parameters is referred to as the information parameters and the second set as the source parameters. To estimate the parameters, we use the EM algorithm, solving two optimization problems alternately in the E-step and the M-step. We show that the M-step corresponds to a convex optimization problem in the information parameters. We form a dual problem for the M-step and show that the dual problem corresponds to a convex optimization problem in the source parameters. We apply the CGMM framework to two different problems: robust density estimation and compound object detection. In the robust density estimation problem, we incorporate the inlier/outlier information available for a small number of data points as convex constraints on the parameters using the information parameters. In the compound object detection problem, we incorporate the relative size, spectral distribution structure and relative location relations of primitive objects as convex constraints on the parameters using the source parameters. Even with the proper selection of the parameterization, the density estimation problem for Gaussian mixture models is not jointly convex in both the E-step and the M-step variables. We propose a third parameterization based on the eigenvalue decomposition of covariance matrices, which is suitable for stochastic search algorithms in general and the particle swarm optimization (PSO) algorithm in particular. We develop a new algorithm in which the global search skills of the PSO algorithm are incorporated into the EM algorithm to perform global parameter estimation. In addition to the mathematical derivations, experimental results on synthetic and real-life data sets verifying the performance of the proposed algorithms are provided.
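The thesis solves the constrained M-step via convex duality; a deliberately simplified illustration of the same idea is a box constraint on each component mean. Because the complete-data log-likelihood is a separable concave quadratic in each mean, the constrained M-step maximizer is just the Euclidean projection (a clip) of the unconstrained update onto the feasible interval. This toy sketch (names and the box-constraint choice are mine, not the CGMM parameterization) shows the mechanics:

```python
import numpy as np

def constrained_mstep_means(x, resp, lo, hi):
    """M-step mean update for a 1-D GMM with a box (convex) constraint
    per component mean.

    `x` is the data (n,), `resp` the E-step responsibilities (n, k),
    and [lo[k], hi[k]] the feasible interval for mean k. For a concave
    quadratic objective, clipping the unconstrained maximizer is the
    exact constrained maximizer.
    """
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk  # unconstrained M-step
    return np.clip(mu, lo, hi)                 # projection onto the box
```

In the robust density estimation application described above, such constraints would encode the known inlier/outlier labels of a few points; the thesis's general framework handles richer convex constraint sets than this box example.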
Machine Learning
Machine learning can be defined in various ways as a scientific domain concerned with the design and development of theoretical and implementation tools that allow building systems with some human-like intelligent behavior. More specifically, machine learning addresses the ability of systems to improve automatically through experience.
A comparative evaluation for liver segmentation from SPIR images and a novel level set method using signed pressure force function
Thesis (Doctoral) -- Izmir Institute of Technology, Electronics and Communication Engineering, Izmir, 2013. Includes bibliographical references (leaves 118-135). Text in English; abstract in Turkish and English. xv, 145 leaves.
Developing a robust method for liver segmentation from magnetic resonance images is a challenging task due to similar intensity values between adjacent organs, the geometrically complex structure of the liver and the injection of contrast media, which causes all tissues to have different gray-level values. Several pulsation and motion artifacts, as well as partial volume effects, also increase the difficulty of automatic liver segmentation from magnetic resonance images. In this thesis, we present an overview of liver segmentation methods for magnetic resonance images and show comparative results of seven different liver segmentation approaches chosen from deterministic (K-means based), probabilistic (Gaussian model based), supervised neural network (multilayer perceptron based) and deformable model based (level set) segmentation methods. The results of qualitative and quantitative analysis using sensitivity, specificity and accuracy metrics show that the multilayer perceptron based approach and a level set based approach that uses a distance regularization term and a signed pressure force function are reasonable methods for liver segmentation from spectral pre-saturation inversion recovery images. However, the multilayer perceptron based segmentation method requires a higher computational cost, and the distance regularization term based automatic level set method is very sensitive to the chosen variance of the Gaussian function.
Our proposed level set based method, which uses a novel signed pressure force function that can control the direction and velocity of the evolving active contour, is faster and solves several problems of the other applied methods, such as sensitivity to the initial contour or to the variance parameter of the Gaussian kernel in edge-stopping functions, without using any regularization term.
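The thesis's SPF function itself is novel and not reproduced here, but the commonly used formulation it builds on (Zhang et al., 2010) shows how a signed pressure force controls both direction and speed: its sign decides whether the contour shrinks or expands at a pixel, and its magnitude (normalized to [-1, 1]) scales the evolution speed. A minimal sketch:

```python
import numpy as np

def spf(image, phi):
    """Classic signed pressure force for region-based level sets:
        spf = (I - (c1 + c2)/2) / max|I - (c1 + c2)/2|
    where c1 and c2 are the mean intensities inside and outside the
    zero level set of phi. Values lie in [-1, 1]; the sign sets the
    contour's direction of motion and the magnitude its velocity.
    This is the standard SPF, not the novel one proposed in the thesis.
    """
    inside = phi >= 0
    c1 = image[inside].mean() if inside.any() else 0.0
    c2 = image[~inside].mean() if (~inside).any() else 0.0
    d = image - (c1 + c2) / 2.0
    return d / (np.abs(d).max() + 1e-12)
```

Because this force is driven by region means rather than image gradients, the resulting contour evolution does not depend on a Gaussian edge-stopping function, which is one reason SPF-based methods avoid the variance-sensitivity problem noted above.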