Retinal Blood Vessel Segmentation Using Ensemble of Single Oriented Mask Filters
This paper describes a supervised method for segmenting blood vessels in retinal images. Blood vessel segmentation in retinal images can be used for analysis in automated diabetic retinopathy screening. Segmenting retinal blood vessels manually is an exhausting and very time-consuming job; moreover, the task requires training and skill. The strategy applies a Support Vector Machine to classify each pixel as vessel or non-vessel. Single mask filters consisting of intensity values of the normalized green channel are generated according to the direction of angles; these single oriented mask filters contain the vectors of the neighbourhood of each pixel. Five images randomly selected from the DRIVE database are used to train the classifier. Each single oriented mask filter is ranked according to its average accuracy on the training images, and weights are assigned based on this rank. Two ensemble approaches, Addition With Weight and Product With Weight, are used to combine all the single mask filters. The proposed approach is tested on two standard databases, DRIVE and STARE. The results of the proposed method clearly show an improvement over the individual single oriented mask filters.
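The rank-weighted ensemble step can be illustrated with a short sketch. This is not the authors' implementation: `combine_scores`, the toy probability maps, and the weights are all hypothetical; only the two combination rules named in the abstract (weighted addition and weighted product) are mirrored.

```python
import numpy as np

def combine_scores(scores, weights, mode="addition"):
    """Combine per-filter vessel probability maps with rank-based weights.

    scores  : list of 2-D arrays, one probability map per oriented filter
    weights : one weight per filter (e.g. derived from each filter's
              rank on the training images)
    mode    : "addition" -> weighted sum, "product" -> weighted product
    """
    scores = np.stack(scores)               # shape (n_filters, H, W)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                         # normalise the weights
    if mode == "addition":
        return np.tensordot(w, scores, axes=1)
    if mode == "product":
        # weighted geometric combination: prod(s_i ** w_i)
        return np.prod(scores ** w[:, None, None], axis=0)
    raise ValueError(mode)

# Hypothetical example: three filters voting on a 2x2 patch
maps = [np.array([[0.9, 0.1], [0.4, 0.8]]),
        np.array([[0.8, 0.2], [0.5, 0.7]]),
        np.array([[0.7, 0.3], [0.6, 0.9]])]
fused = combine_scores(maps, weights=[3, 2, 1], mode="addition")
```

In this toy run the highest-ranked filter contributes half of each fused score, matching the rank-proportional weighting described above.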
Document Image Binarization Process
Technology has made significant strides in recent years, which accounts for how pervasive it is in our daily lives. This work proposes using new technology to address a fundamental issue in historical document preservation, namely degradation. The method is built on artificial-intelligence components that can read the writing on a page and recognize the useful information, converting it into a digital version. Compared with photographing or scanning, binarizing a document is a considerably more effective method, both in terms of quality (the legibility of the writing) and quantity (the amount of memory needed to store the resulting image). According to common assessment measures, the proposed fully convolutional network delivers results comparable to those of other solutions of a similar nature.
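The paper's fully convolutional network is not reproduced here, but a classical baseline shows what binarization itself means. Otsu's global threshold is a standard stand-in, not the paper's method; `otsu_threshold`, `binarize`, and the toy page are illustrative names only.

```python
import numpy as np

def otsu_threshold(gray):
    """Classical Otsu's method: pick the grey level that maximises
    the between-class variance of the image histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(gray):
    """Boolean ink mask: True where the pixel is darker than the
    threshold (i.e. likely writing on a light page)."""
    return gray <= otsu_threshold(gray)

# Hypothetical example: dark strokes (20) on a light background (220)
page = np.full((4, 4), 220, dtype=np.uint8)
page[1, 1:3] = 20
mask = binarize(page)
```

A learned binarizer such as the proposed network is trained to outperform exactly this kind of global thresholding on degraded, unevenly lit pages.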
Arbitrary Keyword Spotting in Handwritten Documents
Despite the existence of electronic media in today's world, a considerable amount of written communication remains in paper form, such as books, bank cheques, contracts, etc. There is an increasing demand for the automation of information extraction, classification, search, and retrieval of documents. The goal of this research is to develop a complete methodology for the spotting of arbitrary keywords in handwritten document images.
We propose a top-down approach to the spotting of keywords in document images. Our approach is composed of two major steps: segmentation and decision. In the former, we generate the word hypotheses. In the latter, we decide whether a generated word hypothesis is a specific keyword or not. We carry out the decision step through a two-level classification where, first, we assign an input image to a keyword or non-keyword class, and then transcribe the image if it passes as a keyword. By reducing the problem from the image domain to the text domain, we address not only the search problem in handwritten documents but also classification and retrieval, without the need to transcribe the whole document image.
The main contribution of this thesis is the development of a generalized minimum edit distance for handwritten words, and to prove that this distance is equivalent to an Ergodic Hidden Markov Model (EHMM). To the best of our knowledge, this work is the first to present an exact 2D model for the temporal information in handwriting while satisfying practical constraints.
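The generalized minimum edit distance itself is not specified in this abstract; as a hedged illustration of the classical distance it generalizes, a standard Levenshtein distance computed by dynamic programming looks like this (the function name and example strings are hypothetical):

```python
def edit_distance(a, b):
    """Levenshtein distance: the minimum number of insertions,
    deletions and substitutions turning string a into string b."""
    m, n = len(a), len(b)
    # dp[i][j] = distance between the prefixes a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                            # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j                            # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]
```

A word-spotting decision can then compare a candidate transcription against each keyword, e.g. `edit_distance("keyword", "keywords")` is 1; the thesis generalizes this idea to operate directly on handwritten word images.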
Some other contributions of this research include: 1) removal of page margins based on corner detection in projection profiles; 2) removal of noise patterns in handwritten images using expectation maximization and fuzzy inference systems; 3) extraction of text lines based on fast Fourier-based steerable filtering; 4) segmentation of characters based on skeletal graphs; and 5) merging of broken characters based on graph partitioning.
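Contribution 1 (page-margin removal from projection profiles) can be sketched in miniature. The actual method detects corners in the profiles; the toy version below merely trims rows and columns whose profile is empty, and all names are hypothetical.

```python
import numpy as np

def trim_margins(ink, min_ink=1):
    """Crop a binary page image (True = ink) to the rows and columns
    whose projection profile contains at least `min_ink` ink pixels.
    A toy stand-in for margin removal via projection profiles."""
    rows = ink.sum(axis=1)   # horizontal projection profile
    cols = ink.sum(axis=0)   # vertical projection profile
    r = np.where(rows >= min_ink)[0]
    c = np.where(cols >= min_ink)[0]
    if r.size == 0 or c.size == 0:
        return ink           # blank page: nothing to crop
    return ink[r[0]:r[-1] + 1, c[0]:c[-1] + 1]

# Hypothetical example: a small text block surrounded by empty margins
page = np.zeros((6, 6), dtype=bool)
page[2:4, 1:5] = True
cropped = trim_margins(page)
```

The corner-detection refinement in the thesis would instead locate the sharp transitions in `rows` and `cols`, which is more robust to noise along the page borders.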
Our experiments with a benchmark database of handwritten English documents and a real-world collection of handwritten French documents indicate that, even without any word/document-level training, our results are comparable with two state-of-the-art word spotting systems for English and French documents.
Computational Analysis of Fundus Images: Rule-Based and Scale-Space Models
Fundus images are one of the most important imaging examinations in modern ophthalmology
because they are simple, inexpensive and, above all, noninvasive.
Nowadays, the acquisition and
storage of highresolution
fundus images is relatively easy and fast. Therefore, fundus imaging
has become a fundamental investigation in retinal lesion detection, ocular health monitoring and
screening programmes. Given the large volume and clinical complexity associated with these images,
their analysis and interpretation by trained clinicians becomes a timeconsuming
task and is
prone to human error. Therefore, there is a growing interest in developing automated approaches
that are affordable and have high sensitivity and specificity. These automated approaches need to
be robust if they are to be used in the general population to diagnose and track retinal diseases. To
be effective, the automated systems must be able to recognize normal structures and distinguish
them from pathological clinical manifestations.
The main objective of the research leading to this thesis was to develop automated systems capable of recognizing and segmenting retinal anatomical structures and the retinal pathological clinical manifestations associated with the most common retinal diseases. In particular, these automated algorithms were developed on the premise of robustness and efficiency to deal with the difficulties and complexity inherent in these images. Four objectives were considered in the analysis of fundus images: segmentation of exudates, localization of the optic disc, detection of the midline of blood vessels together with segmentation of the vascular network, and detection of microaneurysms.
In addition, we also evaluated the detection of diabetic retinopathy in fundus images using the microaneurysm detection method. An overview of the state of the art is presented to compare the performance of the developed approaches with the main methods described in the literature for each of the previously described objectives. To facilitate the comparison of methods, the state of the art has been divided into rule-based methods and machine-learning-based methods.
In the research reported in this thesis, rule-based methods built on image processing were preferred over machine-learning-based methods. In particular, scale-space methods proved to be effective in achieving the set goals.
Two different approaches to exudate segmentation were developed. The first approach is based on scale-space curvature in combination with the local maximum of a scale-space blob detector and dynamic thresholds. The second approach is based on the analysis of the distribution function of the maximum values of the noise map in combination with morphological operators and adaptive thresholds. Both approaches perform a correct segmentation of the exudates and cope well with the uneven illumination and contrast variations in fundus images.
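The scale-space blob detector mentioned above is not detailed in the abstract; one common realisation is a scale-normalised Laplacian-of-Gaussian filter bank, sketched here as an assumption rather than the thesis's exact method (`log_blob_response` and the chosen scales are illustrative).

```python
import numpy as np
from scipy import ndimage

def log_blob_response(image, sigmas=(1.0, 2.0, 4.0)):
    """Scale-normalised Laplacian-of-Gaussian response across scales.
    Bright blobs (e.g. exudate-like spots) give strongly negative LoG
    responses, so the sign is flipped to make maxima mark blobs."""
    stack = np.stack([
        -sigma ** 2 * ndimage.gaussian_laplace(image.astype(float), sigma)
        for sigma in sigmas
    ])
    # per-pixel best response over all scales
    return stack.max(axis=0)

# Hypothetical example: one bright spot on a dark background
img = np.zeros((21, 21))
img[10, 10] = 1.0
resp = log_blob_response(img)
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

Combining such multi-scale responses with dynamic thresholds, as the first approach above does, keeps the detector stable under the illumination and contrast variations typical of fundus images.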
Optic disc localization was achieved using a new technique called cumulative sum fields, which was
combined with a vascular enhancement method. The algorithm proved to be reliable and efficient,
especially for pathological images. The robustness of the method was tested on 8 datasets.
The detection of the midline of the blood vessels was achieved using a modified corner detector in combination with binary filters and dynamic thresholding. Segmentation of the vascular network was achieved using a new scale-space blood vessel enhancement method. The developed methods have proven effective in detecting the midline of blood vessels and segmenting vascular networks.
The microaneurysm detection method relies on a scale-space microaneurysm detection and labelling system. A new approach based on the neighbourhood of the microaneurysm candidates was used for labelling. Microaneurysm detection also enabled the assessment of diabetic retinopathy detection. The microaneurysm detection method proved to be competitive with other methods, especially on high-resolution images. Diabetic retinopathy detection with the developed microaneurysm detection method showed performance similar to other methods and to human experts.
The results of this work show that it is possible to develop reliable and robust scale-space methods that can detect various anatomical structures and pathological features of the retina. Furthermore, the results obtained in this work show that although recent research has focused on machine-learning methods, scale-space methods can achieve very competitive results and are typically more independent of the image acquisition setup. The methods developed in this work may also be relevant for the future definition of new descriptors and features that can significantly improve the results of automated methods.
Unsupervised machine learning clustering and data exploration of radio-astronomical images
In this thesis, I demonstrate a novel and efficient unsupervised clustering and data exploration method that combines a Self-Organising Map (SOM) with a Convolutional Autoencoder, applied to radio-astronomical images from the Radio Galaxy Zoo (RGZ) dataset. The rapidly increasing volume and complexity of radio-astronomical data have ushered in a new era of big-data astronomy and increased the demand for Machine Learning (ML) solutions. In this era, the sheer amount of image data produced by modern instruments has resulted in a significant data deluge. Furthermore, the morphologies of objects captured in these radio-astronomical images are highly complex and challenging to classify conclusively owing to their intricate and indistinct nature. Additionally, major radio-astronomical discoveries are unplanned and found in the unexpected, making unsupervised ML highly desirable because it operates with few assumptions and without labelled training data. In this thesis, I developed a novel unsupervised ML approach as a practical solution to these astronomy challenges. Using this system, I demonstrated the use of convolutional autoencoders and SOMs as a dimensionality reduction method to manage the complexity and volume of astronomical data. My optimised system shows that the coupling of these methods is a powerful means of data exploration and unsupervised clustering of radio-astronomical images. The results of this thesis show that this approach is capable of accurately separating features by complexity on a SOM manifold and unified distance matrix, with neighbourhood similarity and hierarchical clustering of the mapped astronomical features. This method provides an effective means to automatically explore the high-level topological relationships of image features and morphology in large datasets with minimal processing time and computational resources.
I achieved these capabilities with a new and innovative method of SOM training that uses the autoencoder's compressed latent feature vector representations of radio-astronomical data rather than raw images. Using this system, I successfully investigated SOM affine transformation invariance and analysed the true nature of rotational effects on this manifold using autoencoder random-rotation training augmentations. Throughout this thesis, I present my method as a powerful new data exploration technique and contribution to the field. The speed and effectiveness of this method indicate excellent scalability and hold implications for use in large future surveys, in large-scale instruments such as the Square Kilometre Array, and in other big-data and complexity analysis applications.
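The SOM-on-latent-vectors idea can be sketched with a minimal NumPy implementation. This is not the thesis's optimised system: `train_som`, the grid size, the decay schedules, and the toy two-cluster "latents" are all assumptions, and in the real pipeline a convolutional autoencoder would supply the latent vectors.

```python
import numpy as np

def train_som(latents, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Self-Organising Map trained on (autoencoder) latent vectors.

    latents : (n_samples, dim) array of compressed feature vectors
    grid    : SOM lattice shape; each node holds a dim-vector codebook
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = latents.shape[1]
    nodes = rng.normal(size=(h * w, dim))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    n_steps = epochs * len(latents)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(latents):
            t = step / n_steps
            lr = lr0 * (1 - t)                  # decaying learning rate
            sigma = sigma0 * (1 - t) + 1e-3     # shrinking neighbourhood
            bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            g = np.exp(-d2 / (2 * sigma ** 2))  # neighbourhood function
            nodes += lr * g[:, None] * (x - nodes)
            step += 1
    return nodes.reshape(h, w, dim)

# Hypothetical example: two well-separated clusters of 8-D "latents"
rng = np.random.default_rng(1)
latents = np.vstack([rng.normal(-3, 0.1, (50, 8)),
                     rng.normal(+3, 0.1, (50, 8))])
som = train_som(latents)
```

After training, nearby SOM nodes hold similar codebook vectors, so the two clusters map to separate regions of the lattice; the unified distance matrix mentioned above is then just the inter-node codebook distances visualised on this grid.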
Biometric Systems
Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.
Gaze-Based Human-Robot Interaction by the Brunswick Model
We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model addresses face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of nonverbal social signals in addition to spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source low-cost robotic head platform, where gaze is the social signal considered.