Computational Analysis of Fundus Images: Rule-Based and Scale-Space Models
Fundus images are among the most important imaging examinations in modern ophthalmology because they are simple, inexpensive and, above all, noninvasive. Nowadays, the acquisition and storage of high-resolution fundus images is relatively easy and fast. Fundus imaging has therefore become a fundamental investigation in retinal lesion detection, ocular health monitoring and screening programmes. Given the large volume and clinical complexity associated with these images, their analysis and interpretation by trained clinicians is a time-consuming task that is prone to human error. There is therefore growing interest in developing automated approaches that are affordable and have high sensitivity and specificity. These automated approaches need to be robust if they are to be used in the general population to diagnose and track retinal diseases. To be effective, the automated systems must be able to recognize normal structures and distinguish them from pathological clinical manifestations.
The main objective of the research leading to this thesis was to develop automated systems capable of recognizing and segmenting retinal anatomical structures and the retinal pathological clinical manifestations associated with the most common retinal diseases. In particular, these automated algorithms were developed on the premise of robustness and efficiency to deal with the difficulties and complexity inherent in these images. Four objectives were considered in the analysis of fundus images: segmentation of exudates; localization of the optic disc; detection of the midline of blood vessels and segmentation of the vascular network; and detection of microaneurysms. In addition, we also evaluated the detection of diabetic retinopathy on fundus images using the microaneurysm detection method. An overview of the state of the art is presented to compare the performance of the developed approaches with the main methods described in the literature for each of these objectives. To facilitate the comparison of methods, the state of the art has been divided into rule-based methods and machine learning-based methods.
In the research reported in this thesis, rule-based methods built on image processing techniques were preferred over machine learning-based methods. In particular, scale-space methods proved effective in achieving the set goals.
Two different approaches to exudate segmentation were developed. The first approach is based on scale-space curvature in combination with the local maxima of a scale-space blob detector and dynamic thresholds. The second approach is based on the analysis of the distribution function of the maximum values of the noise map in combination with morphological operators and adaptive thresholds. Both approaches segment the exudates correctly and cope well with the uneven illumination and contrast variations in fundus images.
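The thesis's exact curvature-plus-threshold pipeline is not reproduced here, but the scale-space blob-detector ingredient it mentions can be sketched with a scale-normalised Laplacian-of-Gaussian response combined with a dynamic (response-relative) threshold. The scale set and the threshold fraction below are illustrative assumptions, not the thesis's values:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def blob_response(image, sigmas=(1.0, 2.0, 4.0)):
    """Scale-normalised Laplacian-of-Gaussian response across scales.

    Bright blobs (such as exudates on the fundus) give strongly negative
    LoG values, so the response is negated to turn them into maxima.
    """
    stack = np.stack([-s**2 * gaussian_laplace(image, s) for s in sigmas])
    return stack.max(axis=0)  # strongest response over all scales

def segment_bright_blobs(image, frac=0.5):
    """Keep pixels whose blob response exceeds a dynamic threshold,
    set as a fraction of the image's maximum response."""
    r = blob_response(image)
    return r > frac * r.max()
```

On a synthetic image containing a single bright Gaussian spot, the mask fires at the spot and stays off in the flat background; real fundus images would additionally need the illumination correction the abstract alludes to.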
Optic disc localization was achieved using a new technique called cumulative sum fields, combined with a vascular enhancement method. The algorithm proved to be reliable and efficient, especially on pathological images. The robustness of the method was tested on eight datasets.
Detection of the midline of the blood vessels was achieved using a modified corner detector in combination with binary filters and dynamic thresholding. Segmentation of the vascular network was achieved using a new scale-space blood vessel enhancement method. The developed methods proved effective in detecting the midline of blood vessels and segmenting the vascular network.
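The thesis's vessel enhancement method is new and is not specified in this abstract; the standard scale-space approach it builds on is Hessian-based (Frangi-style) vesselness, sketched below for dark vessels on a bright background. The scale and the sensitivity constants `beta` and `c` are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness(image, sigma=2.0, beta=0.5, c=0.5):
    """Single-scale Hessian-based tubularity measure (Frangi-style)
    for dark, elongated structures such as retinal vessels."""
    # Scale-normalised second-order Gaussian derivatives (the Hessian).
    Hxx = sigma**2 * gaussian_filter(image, sigma, order=(0, 2))
    Hyy = sigma**2 * gaussian_filter(image, sigma, order=(2, 0))
    Hxy = sigma**2 * gaussian_filter(image, sigma, order=(1, 1))
    # Closed-form 2x2 eigenvalues, sorted so that |l1| <= |l2|.
    tmp = np.sqrt(((Hxx - Hyy) / 2) ** 2 + Hxy ** 2)
    mean = (Hxx + Hyy) / 2
    l1, l2 = mean - tmp, mean + tmp
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    Rb = np.abs(l1) / (np.abs(l2) + 1e-12)   # blob-vs-line ratio
    S = np.sqrt(l1**2 + l2**2)               # overall structure strength
    v = np.exp(-Rb**2 / (2 * beta**2)) * (1 - np.exp(-S**2 / (2 * c**2)))
    v[l2 < 0] = 0.0  # keep dark-on-bright ridges only
    return v
```

A full enhancement would take the maximum of this measure over several scales to cover vessels of different calibres.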
The microaneurysm detection method relies on a scale-space formalism for detecting and labelling microaneurysms. A new approach based on the neighbourhood of the microaneurysm candidates was used for labelling. Microaneurysm detection also enabled the assessment of diabetic retinopathy detection. The microaneurysm detection method proved competitive with other methods, especially on high-resolution images. Diabetic retinopathy detection with the developed microaneurysm detection method showed performance similar to other methods and to human experts.
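The thesis's neighbourhood-based labelling scheme is not detailed in this abstract; the sketch below covers only the candidate-detection and component-labelling stages, using a small-scale LoG response for dark blobs followed by connected-component labelling. The scale and threshold fraction are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, label

def microaneurysm_candidates(green, sigma=1.5, frac=0.5):
    """Detect small dark blobs as microaneurysm candidates and label them.

    green: green channel of a fundus image, where microaneurysms appear
    as small dark spots on a brighter background.
    Returns a label map (one integer id per candidate) and the count.
    """
    # Dark blobs give a strong positive LoG response at small scales.
    response = sigma**2 * gaussian_laplace(green.astype(float), sigma)
    mask = response > frac * response.max()
    labels, n = label(mask)  # connected-component labelling
    return labels, n
```

Each labelled component would then be accepted or rejected using features of its neighbourhood, which is where the thesis's contribution lies.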
The results of this work show that it is possible to develop reliable and robust scale-space methods that can detect various anatomical structures and pathological features of the retina. Furthermore, the results obtained in this work show that although recent research has focused on machine learning methods, scale-space methods can achieve very competitive results and are typically less dependent on the image acquisition process. The methods developed in this work may also be relevant for the future definition of new descriptors and features that can significantly improve the results
of automated methods.
Doctor of Philosophy
dissertation

China’s retail sector has undertaken tremendous transformation since its opening to foreign investment in 1992. Retail transnational corporations have expanded rapidly in this emerging market. Yet relatively little is known about how they have embedded in the Chinese market and expanded spatially and temporally. China has experienced unprecedented urbanization since the onset of economic reform in 1978. Dramatic land use and land cover (LULC) change and urban expansion have taken place in the past three decades. Detailed time-series analysis of LULC change and urban growth in Chinese cities is still scant. This dissertation focuses on the expansion of foreign hypermarket retailers in China and the urban growth in one Chinese city, Suzhou. This research analyzes the penetration strategy and local embeddedness of foreign hypermarket retailers, examines their spatial inequality and dynamics at different geographical levels, and identifies their location determinants through binary logistic regression models. This study applies random forest classification to multitemporal Landsat Thematic Mapper (TM) images of Suzhou for LULC change analysis, employs landscape metrics and Geographic Information System (GIS) analysis to investigate urban growth patterns, and develops global and local logistic regression models to identify determinants of urban growth. The results indicate that spatiotemporal expansion of foreign hypermarket retailers has been largely dictated by the gradual liberalization policy of the Chinese government. Their local embeddedness has been impacted by both home and host economies. Relative gaps in foreign hypermarkets among three macro regions are narrowing while absolute gaps are widening. Provincial foreign hypermarket distribution has shown significant clustering in the Yangtze River Delta since 2005. Their distribution in Shanghai has changed from dispersion to intensified clustering and shown a clear trend of suburbanization.
This study confirms that the random forest algorithm can effectively classify the heterogeneous landscape in Suzhou and that LULC change accelerated from 1986 to 2008. Three urban growth types are identified: edge-expansion, infilling, and leapfrog. Compared with the global model, the geographically weighted logistic regression model has better overall goodness-of-fit and provides more insight into spatial variations of the influence of underlying factors on urban growth.
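Both the hypermarket location analysis and the urban growth modelling above rely on binary logistic regression. The dissertation's actual variables and data are not given here; as a minimal sketch, such a model can be fitted by gradient descent on the log-loss, with all inputs below purely hypothetical:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Fit binary logistic regression by gradient descent.

    X: (n, d) matrix of explanatory factors (e.g. distance to roads,
       population density); an intercept column is added internally.
    y: (n,) binary outcomes, e.g. 1 = cell converted to urban land.
    Returns the (d+1,) coefficient vector (intercept first).
    """
    Xb = np.hstack([np.ones((len(X), 1)), X])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)      # gradient of the log-loss
    return w

def predict(X, w):
    """Predicted probability of the positive outcome for each row of X."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))
```

The geographically weighted variant in the study fits one such model per location with distance-decayed sample weights, letting the coefficients vary over space.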
Distributed localisation algorithm for wireless ad hoc networks of moving nodes
Existing ad hoc network localisation solutions rely either on external location references or on network-wide exchange of information and centralised processing and computation of location estimates. Without these, nodes are not able to estimate the relative locations of other nodes within their communication range. This thesis defines a new distributed localisation algorithm for ad hoc networks of moving nodes. The Relative Neighbour Localisation (RNL) algorithm works without any external localisation signal or system and does not assume centralised information processing. The idea behind the location estimates produced by the RNL algorithm is the relationship between the relative locations of two nodes, their mobility parameters and the signal strengths measured between them. The proposed algorithm makes use of the data available to each node to produce a location estimate. The signal strength each node is capable of measuring is used as one algorithm input. The other input is the velocity vector of the neighbouring node, composed of its speed and direction of movement, which each node is assumed to periodically broadcast. The relationship between the signal strength and the mobility parameters on one side, and the relative location on the other, can be analytically formulated in an ideal case. The limitations of a realistic scenario complicate this relationship, making it very difficult to formulate analytically. An empirical approach is thus used. The angle and distance estimates are computed individually, together forming a two-dimensional location estimate. The performance of the algorithm was analysed in detail using simulation, showing a median estimate error of under 10 m, and its application was tested through the design and evaluation of a distributed sensing coverage algorithm, showing that RNL location estimates can provide 90% of the coverage achievable when true locations are known.
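The RNL algorithm itself is empirical, but the ideal-case relationship between signal strength and distance that the abstract refers to is commonly expressed with the log-distance path-loss model. The reference power, reference distance and path-loss exponent below are illustrative assumptions that would be calibrated empirically in practice:

```python
def distance_from_rssi(rssi_dbm, rssi_ref_dbm=-40.0, d_ref=1.0, n=2.0):
    """Distance estimate from received signal strength under the
    log-distance path-loss model:

        RSSI(d) = RSSI(d_ref) - 10 * n * log10(d / d_ref)

    rssi_ref_dbm: measured RSSI at the reference distance d_ref (metres).
    n: path-loss exponent (2.0 corresponds to free space).
    """
    # Invert the model to solve for d.
    return d_ref * 10 ** ((rssi_ref_dbm - rssi_dbm) / (10.0 * n))
```

For example, with these assumed constants, a reading of -60 dBm maps to an estimated 10 m; combining such distance estimates with the broadcast velocity vectors is what allows RNL to also resolve the angle.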
Multiscale Centerline Extraction Based on Regression and Projection onto the Set of Elongated Structures
Automatically extracting linear structures from images is a fundamental low-level vision problem with numerous applications in different domains. Centerline detection and radial estimation are the first crucial steps in most Computer Vision pipelines aiming to reconstruct linear structures. Existing techniques rely either on hand-crafted filters, designed to respond to ideal profiles of the linear structure, or on classification-based approaches, which automatically learn to detect centerline points from data. Hand-crafted methods are the most accurate when the content of the image fulfills the ideal model they rely on. However, they lose accuracy in the presence of noise or when the linear structures are irregular and deviate from the ideal case. Machine learning techniques can alleviate this problem. However, they are mainly based on a classification framework. In this thesis, we show that classification is not the best formalism to solve the centerline detection problem. In fact, since the appearance of a centerline point is very similar to the points immediately next to it, the output of a classifier trained to detect centerlines presents low localization accuracy and double responses on the body of the linear structure. To solve this problem, we propose a regression-based formulation for centerline detection. We rely on the distance transform of the centerlines to automatically learn a function whose local maxima correspond to centerline points. The output of our method can be used to directly estimate the location of the centerline, by a simple Non-Maximum Suppression operation, or it can be used as input to a tracing pipeline to reconstruct the graph of the linear structure. In both cases, our method gives more accurate results than state-of-the-art techniques on challenging 2D and 3D datasets. Our method relies on features extracted by means of convolutional filters. 
In order to process large amounts of data efficiently, we introduce a general filter bank approximation scheme. In particular, we show that a generic filter bank can be approximated by a linear combination of a smaller set of separable filters. Thanks to this method, we can greatly reduce the computation time of the convolutions without loss of accuracy. Our approach is general, and we demonstrate its effectiveness by applying it to different Computer Vision problems, such as linear structure detection and image classification with Convolutional Neural Networks. We further improve our regression-based method for centerline detection by taking advantage of contextual image information. We adopt a multiscale iterative regression approach to efficiently include a large image context in our algorithm. Compared to previous approaches, we use context both in the spatial domain and in the radial one. In this way, our method is also able to return an accurate estimation of the radii of the linear structures. The idea of using regression can also be beneficial for solving other related Computer Vision problems. For example, we show an improvement compared to previous works when applying it to boundary and membrane detection. Finally, we focus on the particular geometric properties of linear structures. We observe that most methods for detecting them treat each pixel independently and do not model the strong relation that exists between neighboring pixels. As a consequence, their output is geometrically inconsistent. In this thesis, we address this problem by considering the projection of the score map returned by our regressor onto the set of all geometrically admissible ground truth images. We propose an efficient patch-wise approximation scheme to compute the projection. Moreover, we provide conditions under which the projection is exact. We demonstrate the advantage of our method by applying it to four different problems.
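The separable approximation idea in the abstract above can be sketched for a single 2-D kernel: its SVD expresses it as a sum of rank-1 (separable) terms, each of which can be applied as two 1-D convolutions. This is a generic illustration of the principle, not the thesis's full filter-bank scheme, which approximates many filters with one shared separable basis:

```python
import numpy as np
from scipy.signal import convolve2d

def separable_approx(kernel, rank):
    """Approximate a 2-D filter as a sum of `rank` separable (rank-1)
    terms via the SVD: K ~ sum_k s_k * u_k v_k^T."""
    U, s, Vt = np.linalg.svd(kernel)
    return [(s[k], U[:, k], Vt[k, :]) for k in range(rank)]

def conv_separable(image, terms):
    """Convolve with each separable term as two 1-D passes
    (a column pass, then a row pass) and sum the results."""
    out = np.zeros_like(image, dtype=float)
    for s, u, v in terms:
        tmp = convolve2d(image, u[:, None], mode='same')
        out += s * convolve2d(tmp, v[None, :], mode='same')
    return out
```

For an m-by-m kernel this replaces one O(m^2)-per-pixel convolution with 2*rank O(m)-per-pixel passes, which is where the speed-up comes from when rank is small.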
Deep learning in medical imaging and radiation therapy
Peer Reviewed
https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd