24 research outputs found
Development of a tool for automatic segmentation of the cerebellum in MR images of children
The human cerebellar cortex is a highly foliated structure that supports both motor and complex cognitive functions. Magnetic Resonance Imaging (MRI) is commonly used to explore structural alterations in patients with psychiatric and neurological diseases. The ability to detect regional structural differences in cerebellar lobules may provide valuable insights into disease biology, progression, and response to treatment, but has been hampered by the lack of appropriate tools for automated structural cerebellar segmentation and morphometry. In this thesis, time-intensive manual tracings of 16 cerebellar regions by an expert neuroanatomist, performed on high-resolution T1-weighted MR images of 18 children aged 9-13 years, were used to generate the Cape Town Pediatric Cerebellar Atlas (CAPCA18) in the age-appropriate National Institutes of Health Pediatric Database (NIHPD) asymmetric template space. An automated pipeline was developed to process the MR images and generate lobule-wise segmentations, together with a measure of the uncertainty of the label assignments. Validation in an independent group of children, with ages similar to those of the children used in constructing the atlas, yielded spatial overlaps with manual segmentations greater than 70% in all lobules except lobules VIIb and X. The average spatial overlap for the whole cerebellar cortex was 86%, compared to 78% using the alternative Spatially Unbiased Infra-tentorial Template (SUIT), which was developed using adult images.
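Spatial overlap between an automated and a manual label map is typically reported as a Dice coefficient. As an illustrative sketch (the function name and the toy label arrays below are made up, not the thesis's actual pipeline), a per-lobule overlap can be computed as:

```python
import numpy as np

def dice_overlap(auto_labels, manual_labels, label):
    """Dice coefficient between the automated and manual masks of one label."""
    a = (auto_labels == label)
    m = (manual_labels == label)
    denom = a.sum() + m.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, m).sum() / denom

# toy 1-D "segmentations" with labels 0 (background) and 1 (a lobule)
auto = np.array([0, 1, 1, 1, 0, 0])
manual = np.array([0, 1, 1, 0, 0, 0])
print(dice_overlap(auto, manual, 1))  # 2*2/(3+2) = 0.8
```

In practice this would be evaluated once per lobule label over the full 3-D volumes, and the reported figures (e.g. 86% for the whole cortex) are averages of such per-structure scores.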
Combinatorial optimisation for arterial image segmentation.
Cardiovascular disease is one of the leading causes of mortality in the Western world. Many imaging modalities have been used to diagnose cardiovascular diseases; however, each has different forms of noise and artifacts that make medical image analysis an important and challenging field. This thesis is concerned with developing fully automatic segmentation methods for cross-sectional coronary arterial imaging, in particular intravascular ultrasound and optical coherence tomography, incorporating prior and tracking information without any user intervention to effectively overcome various image artifacts and occlusions. Combinatorial optimisation methods are proposed to solve the segmentation problem in polynomial time. A node-weighted directed graph is constructed so that vessel border delineation is cast as computing a minimum closed set. A set of complementary edge and texture features is extracted. Single- and double-interface segmentation methods are introduced. A novel optimisation of the boundary energy function is proposed based on a supervised classification method. A shape prior model is incorporated into the segmentation framework, based on global and local information, through the energy function design and graph construction. A combination of cross-sectional segmentation and longitudinal tracking is proposed using the Kalman filter and the hidden Markov model. The border is parameterised using radial basis functions. The Kalman filter adapts the inter-frame constraints between consecutive frames to obtain temporally coherent segmentation. An HMM-based border tracking method is also proposed, in which the emission probability is derived from both the classification-based cost function and the shape prior model. The optimal sequence of hidden states is computed using the Viterbi algorithm.
Both qualitative and quantitative results on thousands of images show the superior performance of the proposed methods compared to a number of state-of-the-art segmentation methods.
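The HMM-based tracker described above computes the optimal hidden-state sequence with the Viterbi algorithm. A minimal, generic log-space sketch (the toy transition and emission values below are illustrative, not the thesis's specific cost-function-derived probabilities):

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """Most likely hidden-state sequence; log_emit has shape (T, K)."""
    T, K = log_emit.shape
    delta = log_init + log_emit[0]          # best log-score ending in each state
    back = np.zeros((T, K), dtype=int)      # backpointers to the best predecessor
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):           # trace the backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# toy example: emissions favour state 0 twice, then state 1
log_init = np.log([0.5, 0.5])
log_trans = np.log([[0.5, 0.5], [0.5, 0.5]])
log_emit = np.log([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9]])
print(viterbi(log_init, log_trans, log_emit))  # [0, 0, 1]
```

Working in log space keeps the dynamic program numerically stable over long frame sequences, which matters for the longitudinal tracking setting the abstract describes.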
Advances in Detection and Classification of Underwater Targets using Synthetic Aperture Sonar Imagery
In this PhD thesis, the problem of underwater mine detection and classification using
synthetic aperture sonar (SAS) imagery is considered. The automatic detection and
automatic classification (ADAC) system is applied to images obtained by SAS systems.
The ADAC system contains four steps, namely mine-like object (MLO) detection, image
segmentation, feature extraction, and mine type classification. This thesis focuses
on the last three steps.
In the mine-like object detection step, a template-matching technique based on a priori
knowledge of mine shapes is applied to scan the sonar imagery for MLOs. Regions
containing MLOs are called regions of interest (ROIs); they are extracted and forwarded
to the subsequent steps, i.e. image segmentation and feature extraction.
In the image segmentation step, a modified expectation-maximization (EM) approach
is proposed. To acquire the shape information of the MLO in the ROI, the SAS images
are segmented into highlight, shadow, and background regions. A generalized mixture
model is adopted to approximate the statistics of the image data. In addition, a
Dempster-Shafer theory-based clustering technique is used to account for the spatial
correlation between pixels, so that clutter in background regions can be removed.
Optimal parameter settings for the proposed EM approach are found with the help of
quantitative numerical studies.
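As a simplified illustration of the mixture-model segmentation idea (the thesis uses a generalized mixture model with spatial clustering; this sketch uses plain 1-D Gaussians on pixel intensities and ignores spatial correlation):

```python
import numpy as np

def em_gmm_1d(x, k=3, iters=60):
    """EM for a 1-D Gaussian mixture on pixel intensities; the k=3 classes
    play the role of shadow, background, and highlight."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread-out initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E step: posterior responsibility of each component for each pixel
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M step: re-estimate weights, means, and variances
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-9
    return w, mu, var, r

# synthetic intensities: three well-separated clusters
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 0.3, 200),    # shadow
                    rng.normal(5, 0.3, 200),    # background
                    rng.normal(10, 0.3, 200)])  # highlight
w, mu, var, r = em_gmm_1d(x, k=3)
print(np.sort(mu))  # means recovered near 0, 5, 10
```

Labelling each pixel by its highest-responsibility component then yields the three-class segmentation; the thesis's Dempster-Shafer clustering additionally suppresses isolated background clutter, which this intensity-only sketch cannot do.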
In the feature extraction step, features are extracted for use as inputs to the mine
type classification step. Both geometrical and texture features are applied. However,
numerous features have been proposed in the literature to describe object shape and
texture.
Due to the curse of dimensionality, feature selection is indispensable in the design
of an ADAC system. A sophisticated filter method is developed to choose optimal
features for classification. This filter method utilizes a novel feature relevance
measure that combines mutual information, a modified Relief weight, and the Shannon
entropy. The selected features demonstrate higher generalizability: compared with
other filter methods, the features selected by our method lead to superior
classification accuracy, and their performance variation across different classifiers
is decreased.
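A plain mutual-information filter conveys the flavour of the relevance measure described above (the thesis combines MI with a modified Relief weight and Shannon entropy; this sketch ranks by MI alone, on made-up binary features):

```python
import numpy as np

def mutual_information(x_bins, y):
    """Mutual information (in nats) between a discretized feature and class labels."""
    mi = 0.0
    for xv in np.unique(x_bins):
        for yv in np.unique(y):
            pxy = np.mean((x_bins == xv) & (y == yv))
            px, py = np.mean(x_bins == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

# feature 0 predicts the class (with 5% label noise), feature 1 is pure noise
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
flips = (rng.random(500) < 0.05).astype(int)
f_informative = y ^ flips
f_noise = rng.integers(0, 2, 500)
scores = [mutual_information(f, y) for f in (f_informative, f_noise)]
print(scores[0] > scores[1])  # True: the informative feature ranks first
```

A filter method keeps the top-scoring features and discards the rest before any classifier is trained, which is what makes the selected subset classifier-agnostic.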
In the mine type classification step, the prediction of MLO types is considered. In
order to take advantage of the complementary information among different classifiers, a classifier combination scheme is developed in the framework of the Dempster-Shafer
theory. The outputs of individual classifiers are combined according to this classifier
combination scheme. The resulting classification accuracy is better than those of
individual classifiers.
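Dempster's rule of combination, which underlies the classifier fusion step, can be sketched as follows (the mass assignments are made-up toy values over a two-type frame of discernment):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions (dicts mapping frozenset
    hypotheses to masses) and renormalize by 1 - conflict."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to incompatible hypotheses
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# two classifiers' beliefs over mine types {A, B}; AB expresses ignorance
A, B, AB = frozenset('A'), frozenset('B'), frozenset('AB')
m1 = {A: 0.6, B: 0.1, AB: 0.3}
m2 = {A: 0.5, B: 0.2, AB: 0.3}
fused = dempster_combine(m1, m2)
print(round(fused[A], 3))  # 0.759: agreement on A reinforces its support
```

Because both classifiers lean towards type A, the fused mass on A exceeds either individual mass, which is exactly the complementarity effect the combination scheme exploits.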
All of the proposed methods are evaluated using SAS data. Finally, conclusions are
drawn, and some suggestions for future work are offered.
Application of a non-linear similarity metric in segmentation algorithms
Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Computer Science, 2015.
One of the main procedures used in digital image processing is segmentation, in which the image is split into its constituent parts or objects. In the literature, there are different well-known methods used for segmentation, such as clustering, thresholding, segmentation using neural networks, and segmentation using region growing. Aiming to improve the performance of segmentation algorithms, this work studies the effect of applying a non-linear metric in segmentation algorithms. Three segmentation algorithms (Mumford-Shah, Color Structure Code, and Felzenszwalb-Huttenlocher), all based on region growing, were selected, and their similarity analysis was replaced with a non-linear metric. The non-linear metric used, known as the Polynomial Mahalanobis, is a variation of the statistical Mahalanobis distance used to measure the distance between distributions. A qualitative evaluation and an empirical analysis were performed to compare the results obtained in terms of efficacy. The results of this comparison, presented in this study, indicate an improvement in the segmentation results obtained by the proposed approach. In terms of efficiency, the execution times of the algorithms with and without the proposed improvement were analyzed; this analysis showed an increase in execution time for the algorithms with the proposed approach.
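As a sketch of the underlying idea, the classical Mahalanobis distance can serve as the similarity test in region growing (the dissertation uses the Polynomial Mahalanobis variant; only the classical form is shown here, on made-up RGB values):

```python
import numpy as np

def mahalanobis(pixel, region_pixels, eps=1e-6):
    """Mahalanobis distance of a candidate pixel (e.g. an RGB triple)
    from the colour distribution of an already-grown region."""
    region = np.asarray(region_pixels, dtype=float)
    mu = region.mean(axis=0)
    # small ridge keeps the covariance invertible for tiny regions
    cov = np.cov(region, rowvar=False) + eps * np.eye(region.shape[1])
    diff = np.asarray(pixel, dtype=float) - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

region = [[100, 100, 100], [102, 98, 101], [99, 101, 100], [101, 100, 99]]
print(mahalanobis([101, 100, 100], region) < mahalanobis([200, 50, 10], region))
# the similar pixel is statistically closer, so region growing would accept it
```

Unlike Euclidean distance, this test adapts to the region's own colour spread, which is the property the dissertation's non-linear metric pushes further.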
Robust density modelling using the student's t-distribution for human action recognition
The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM that uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE.
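The robustness argument can be illustrated by fitting a Student's t-distribution to contaminated data with scipy (a generic sketch of the distributional point, not the paper's HMM):

```python
import numpy as np
from scipy.stats import t as student_t

# mostly clean samples around 0, plus a few gross feature-extraction outliers
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 500), [500.0, 600.0, -550.0]])

# maximum-likelihood fit of a t-distribution (df, location, scale)
df, loc, scale = student_t.fit(data)
print(loc, data.mean())
# the t location stays near the true centre 0, while the sample mean
# (the Gaussian ML estimate) is dragged far off by the outliers
```

The heavy tails let the t-distribution explain the outliers without shifting its location, which is why swapping it in as the HMM observation model improves recognition on noisy feature tracks.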
Characterising pattern asymmetry in pigmented skin lesions
In the clinical diagnosis of pigmented skin lesions, asymmetric pigmentation is often indicative of
melanoma. This paper describes a method and measures for characterizing lesion symmetry. An estimate of
mirror symmetry is first computed for a number of axes at different degrees of rotation with respect to the
lesion centre. The statistics of these estimates are then used to assess the overall symmetry. The method is
applied to three different lesion representations showing the overall pigmentation, the pigmentation pattern,
and the pattern of dermal melanin. The best measure is a 100% sensitive and 96% specific indicator of
melanoma on a test set of 33 lesions, with a separate training set consisting of 66 lesions.
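A minimal sketch of the rotate-and-mirror symmetry estimate (the axis angles, binary mask, and Jaccard scoring below are illustrative choices, not the paper's exact measures):

```python
import numpy as np
from scipy.ndimage import rotate

def mirror_symmetry_scores(mask, angles=range(0, 180, 20)):
    """For each candidate axis angle, rotate the lesion mask so the axis is
    vertical, mirror it left-right, and score the overlap (Jaccard index)."""
    scores = []
    for angle in angles:
        r = rotate(mask.astype(float), angle, reshape=False, order=1) > 0.5
        m = r[:, ::-1]  # mirror across the vertical axis through the centre
        inter = np.logical_and(r, m).sum()
        union = np.logical_or(r, m).sum()
        scores.append(inter / union if union else 1.0)
    return np.array(scores)

# a centred disc is mirror-symmetric about every axis through its centre
yy, xx = np.mgrid[:41, :41]
disc = (yy - 20) ** 2 + (xx - 20) ** 2 <= 15 ** 2
s = mirror_symmetry_scores(disc)
print(s.min())  # stays high for all angles on a symmetric lesion
```

Summary statistics of these per-axis scores (e.g. their maximum or spread) then give scalar asymmetry measures, and the same scoring can be run on each of the three lesion representations.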
Soft computing applied to optimization, computer vision and medicine
Artificial intelligence has permeated almost every area of life in modern society, and its significance continues to grow. As a result, in recent years, Soft Computing has emerged as a powerful set of methodologies that propose innovative and robust solutions to a variety of complex problems. Soft Computing methods, because of their broad range of application, have the potential to significantly improve human living conditions. The motivation for the present research emerged from this background and possibility. This research aims to accomplish two main objectives: On the one hand, it endeavors to bridge the gap between Soft Computing techniques and their application to intricate problems. On the other hand, it explores the hypothetical benefits of Soft Computing methodologies as novel effective tools for such problems. This thesis synthesizes the results of extensive research on Soft Computing methods and their applications to optimization, Computer Vision, and medicine. This work is composed of several individual projects, which employ classical and new optimization algorithms. The manuscript presented here intends to provide an overview of the different aspects of Soft Computing methods in order to enable the reader to reach a global understanding of the field. Therefore, this document is assembled as a monograph that summarizes the outcomes of these projects across 12 chapters. The chapters are structured so that they can be read independently. The key focus of this work is the application and design of Soft Computing approaches for solving problems in the following: Block Matching, Pattern Detection, Thresholding, Corner Detection, Template Matching, Circle Detection, Color Segmentation, Leukocyte Detection, and Breast Thermogram Analysis. One of the outcomes presented in this thesis involves the development of two evolutionary approaches for global optimization. 
These were tested over complex benchmark datasets and showed promising results, thus opening the debate for future applications. Moreover, the applications for Computer Vision and medicine presented in this work have highlighted the utility of different Soft Computing methodologies for solving problems in such subjects. A milestone in this area is the translation of Computer Vision and medical issues into optimization problems. Additionally, this work also strives to provide tools for combating public health issues by extending the concepts to automated detection and diagnosis aids for pathologies such as leukemia and breast cancer. The application of Soft Computing techniques in this field has attracted great interest worldwide due to the exponential growth of these diseases. Lastly, the use of Fuzzy Logic, Artificial Neural Networks, and Expert Systems in many everyday domestic appliances, such as washing machines, cookers, and refrigerators, is now a reality. Many other industrial and commercial applications of Soft Computing have also been integrated into everyday use, and this is expected to increase within the next decade. Therefore, the research conducted here contributes an important piece to expanding these developments. The applications presented in this work are intended to serve as technological tools that can be used in the development of new devices.
Unrestricted motion estimation using localized quadrature filters
Thesis digitized by the Direction des bibliothèques of the Université de Montréal.
The uncertainty of changepoints in time series
Analyses of time series exhibiting changepoints have predominantly
focused on detection and estimation. However, changepoint estimates, such as their
number and locations, are subject to uncertainty that is often not captured explicitly,
or whose quantification requires sampling long latent vectors in existing methods.
This thesis proposes efficient, flexible methodologies for quantifying the uncertainty
of changepoints.
The core proposed methodology of this thesis models time series and changepoints
under a Hidden Markov Model framework. This methodology combines existing
work on exact changepoint distributions conditional on model parameters with
Sequential Monte Carlo samplers to account for parameter uncertainty. The combination
of the two provides posterior distributions of changepoint characteristics in
light of parameter uncertainty.
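Conditional on model parameters, exact changepoint probabilities in an HMM follow from the forward-backward recursions. A minimal Gaussian-emission sketch (the parameters here are assumed known, whereas the thesis integrates over them with Sequential Monte Carlo):

```python
import numpy as np

def changepoint_posterior(y, mu, sigma, trans, init):
    """P(s_t != s_{t-1} | y) for a Gaussian-emission HMM, via scaled
    forward-backward recursions; trans[i, j] = P(s_t = j | s_{t-1} = i)."""
    T, K = len(y), len(mu)
    emit = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    alpha = np.zeros((T, K)); c = np.zeros(T)
    alpha[0] = init * emit[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):  # scaled forward pass
        alpha[t] = (alpha[t - 1] @ trans) * emit[t]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta = np.ones((T, K))
    for t in range(T - 2, -1, -1):  # scaled backward pass
        beta[t] = trans @ (emit[t + 1] * beta[t + 1]) / c[t + 1]
    cp = np.zeros(T)
    for t in range(1, T):
        # joint posterior of (s_{t-1}, s_t); off-diagonal mass = changepoint
        xi = alpha[t - 1][:, None] * trans * (emit[t] * beta[t]) / c[t]
        cp[t] = 1.0 - np.trace(xi)
    return cp

# a clear mean shift halfway through the series
y = np.array([0.1, -0.2, 0.0, 0.2, 5.1, 4.9, 5.2, 5.0])
cp = changepoint_posterior(y, mu=np.array([0.0, 5.0]), sigma=1.0,
                           trans=np.array([[0.95, 0.05], [0.05, 0.95]]),
                           init=np.array([0.5, 0.5]))
print(cp.argmax())  # posterior changepoint mass concentrates at t = 4
```

Averaging such exact conditional distributions over SMC parameter samples yields the posterior changepoint distributions in light of parameter uncertainty described above.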
This thesis also presents a methodology for approximating the posterior distribution
of the number of underlying states in a Hidden Markov Model, making model selection
for Hidden Markov Models possible. This methodology employs Sequential Monte Carlo
samplers in such a way that no additional computational costs are incurred beyond
the existing use of these samplers.
The final part of this thesis considers time series in the wavelet domain, as opposed
to the time domain. The motivation for this transformation is the occurrence
of autocovariance changepoints in time series. Time domain modelling approaches
are somewhat limited for such types of changes, with approximations often taking
place. The wavelet domain relaxes these modelling limitations, such that autocovariance
changepoints can be considered more readily. The proposed methodology
develops a joint density for multiple processes in the wavelet domain which can
then be embedded within a Hidden Markov Model framework. Quantifying the
uncertainty of autocovariance changepoints is thus possible.
These methodologies are motivated by datasets from econometrics, neuroimaging,
and oceanography.