32 research outputs found

    Evaluation of fractal dimension effectiveness for damage detection in retinal background

    [EN] This work investigates the characterization of bright lesions in retinal fundus images using texture analysis techniques. Exudates and drusen are evidence of retinal damage in diabetic retinopathy (DR) and age-related macular degeneration (AMD), respectively. Automatic detection of pathological tissue could enable early detection of these diseases. In this work, fractal analysis is explored in order to discriminate between pathological and healthy retinal texture. After a thorough preprocessing step, in which spatial and colour normalization are performed, the fractal dimension is extracted locally by computing the Hurst exponent (H) along different directions. The greyscale image is described by the increments of the fractional Brownian motion model, and the H parameter is computed by linear regression in the frequency domain. The ability of the fractal dimension to detect pathological tissue is demonstrated with an in-house system based on fractal analysis and a Support Vector Machine, which achieves around 70% and 83% accuracy on the E-OPHTHA and DIARETDB1 public databases, respectively. In a second experiment, the fractal descriptor is combined with texture information extracted by Local Binary Patterns, improving bright lesion detection. Accuracy, sensitivity and specificity values higher than 89%, 80% and 90%, respectively, suggest that the method presented in this paper is a robust algorithm for describing retinal texture and can be useful in the automatic detection of DR and AMD.

    This paper was supported by the European Union's Horizon 2020 research and innovation programme under the Project GALAHAD [H2020-ICT-2016-2017, 732613]. In addition, this work was partially funded by the Ministerio de Economia y Competitividad of Spain, Project SICAP [DPI2016-77869-C2-1-R]. The work of Adrian Colomer has been supported by the Spanish Government under an FPI Grant [BES-2014-067889]. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.

    Colomer, A.; Naranjo Ornedo, V.; Janvier, T.; Mossi García, JM. (2018). Evaluation of fractal dimension effectiveness for damage detection in retinal background. Journal of Computational and Applied Mathematics. 337:341-353. https://doi.org/10.1016/j.cam.2018.01.005
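    As an illustration of the spectral approach to the Hurst exponent described above, the following is a minimal sketch, not the authors' exact pipeline: it assumes an isotropic fractional Brownian motion model whose power spectrum decays as f^-(2H+2), estimates that decay exponent by linear regression in log-log frequency space, and derives the local fractal dimension as D = 3 - H. The input patch here is synthetic.

```python
# Hedged sketch: local Hurst exponent from the radially averaged power spectrum
# of a greyscale patch, under an isotropic fBm assumption (not the paper's code).
import numpy as np

def hurst_exponent(patch: np.ndarray) -> float:
    """Estimate H for a square greyscale patch via log-log spectral regression."""
    patch = patch.astype(np.float64)
    patch -= patch.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2

    n = patch.shape[0]
    cy, cx = np.array(spectrum.shape) // 2
    y, x = np.indices(spectrum.shape)
    radius = np.sqrt((y - cy) ** 2 + (x - cx) ** 2).astype(int)

    # Radially average the power spectrum (skip the zero-frequency bin).
    power = np.bincount(radius.ravel(), weights=spectrum.ravel())
    counts = np.bincount(radius.ravel())
    power = power[1:n // 2] / counts[1:n // 2]
    freqs = np.arange(1, n // 2)

    # Linear regression in log-log space: log S(f) = -beta * log f + c.
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    beta = -slope
    return (beta - 2.0) / 2.0                  # H for a 2-D fBm surface

patch = np.random.rand(64, 64)                 # synthetic stand-in for a retinal patch
H = hurst_exponent(patch)
print("H =", H, "fractal dimension D =", 3.0 - H)
```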

    Fundus image analysis for automatic screening of ophthalmic pathologies

    In recent years, the number of blindness cases has been significantly reduced. Despite this promising news, the World Health Organisation estimates that 80% of visual impairment (285 million cases in 2010) could be avoided if diagnosed and treated early. To accomplish this purpose, eye care services need to be established in primary health care, and screening campaigns should become routine in centres serving people at risk. However, these solutions entail a high workload for experts trained in the analysis of the anomalous patterns of each eye disease. Therefore, the development of algorithms for automatic screening systems plays a vital role in this field. This thesis focuses on the automatic identification of the retinal damage provoked by two of the most common pathologies in current society: diabetic retinopathy (DR) and age-related macular degeneration (AMD). Specifically, the final goal of this work is to develop novel methods, based on fundus image description and classification, to characterise healthy and abnormal tissue in the retina background. In addition, pre-processing algorithms are proposed with the aim of normalising the high variability of fundus images and removing the contribution of certain retinal structures that could hinder retinal damage detection. In contrast to most state-of-the-art work on damage detection in fundus images, the methods proposed throughout this manuscript avoid the need for lesion segmentation or candidate map generation before the classification stage. Local binary patterns, granulometric profiles and the fractal dimension are computed locally to extract texture, morphological and roughness information from retinal images. Different combinations of this information feed advanced classification algorithms formulated to optimally discriminate exudates, microaneurysms, haemorrhages and healthy tissue. Through several experiments, the ability of the proposed system to identify DR and AMD signs is validated using different public databases with a large degree of variability and without image exclusion. Moreover, this thesis covers the basics of the deep learning paradigm. In particular, a novel approach based on convolutional neural networks (CNNs) is explored. The transfer learning technique is applied to fine-tune the most important state-of-the-art CNN architectures. Exudate detection and localisation tasks using neural networks are carried out in the last two experiments of this thesis, and an objective comparison is established between the hand-crafted feature extraction and classification process and the prediction models based on CNNs. The promising results of this PhD thesis and the affordable cost and portability of retinal cameras could facilitate the incorporation of the developed algorithms into a computer-aided diagnosis (CAD) system that helps specialists detect the anomalous patterns characteristic of the two diseases under study: DR and AMD.

    Colomer Granero, A. (2018). Fundus image analysis for automatic screening of ophthalmic pathologies [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/99745
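    For a concrete sense of the hand-crafted branch described above, here is a minimal sketch, under assumed data shapes and with synthetic patches, of describing a local fundus region with a uniform LBP histogram and feeding it to an SVM. It is not the thesis code; a real pipeline would also append granulometric and fractal features and would crop labelled patches from annotated images.

```python
# Hedged sketch: uniform-LBP histogram per patch + SVM, with synthetic data.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # LBP neighbourhood: 8 samples on a radius-1 circle

def lbp_histogram(patch: np.ndarray) -> np.ndarray:
    """Normalised histogram of uniform LBP codes for an 8-bit greyscale patch."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist  # P + 2 bins for the 'uniform' mapping

# Hypothetical training data: greyscale patches and 0/1 labels (healthy vs lesion).
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, size=(32, 32), dtype=np.uint8) for _ in range(20)]
labels = np.array([0] * 10 + [1] * 10)

X = np.stack([lbp_histogram(p) for p in patches])
clf = SVC(kernel="rbf", probability=True).fit(X, labels)
print(clf.predict_proba(X[:3]))
```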

    Computer analysis for registration and change detection of retinal images

    The current system of retinal screening is manual: it requires repetitive examination of a large number of retinal images by professional optometrists who try to identify the presence of abnormalities. As a result of the manual and repetitive nature of such examination, there is a possibility of diagnostic error, in particular when the progression of disease is slight. As sight is an extremely important sense, any tool which can improve the probability of detecting disease can be considered beneficial. Moreover, the early detection of ophthalmic anomalies can prevent the impairment or loss of vision. The study reported in this thesis investigates computer vision and image processing techniques to analyse retinal images automatically, in particular for diabetic retinopathy, a disease which causes blindness. This analysis aims to automate registration in order to detect differences between a pair of images taken at different times. These differences could be the result of disease progression or, occasionally, simply the presence of artefacts. The methods resulting from this study will therefore be used to build a software tool to aid the diagnosis process undertaken by ophthalmologists. The research also presents a number of algorithms for the enhancement and visualisation of information present within the retinal images which would normally be invisible to the viewer, for instance when disease progression is slight or when contrast levels are similar between images, making it difficult for the human eye to see or distinguish any variation. This study also presents a number of methods developed for the computer analysis of retinal images, including a colour distance measurement algorithm, detection of bifurcations and their crossing points in the retina, image registration, and change detection. The overall analysis in this study can be divided into four stages: image enhancement, landmark detection, registration, and change detection. The study has shown that the developed methods achieve an automatic, efficient, accurate, and robust implementation.
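    As a rough illustration of the register-then-compare idea described above, the sketch below aligns two retinal photographs with generic OpenCV ORB features and a RANSAC homography and then takes an absolute difference. The thesis itself uses bifurcations and their crossing points as landmarks; the feature detector and file names here are placeholders.

```python
# Hedged sketch: generic feature-based registration followed by image differencing.
import cv2
import numpy as np

baseline = cv2.imread("visit1.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
follow_up = cv2.imread("visit2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
k1, d1 = orb.detectAndCompute(baseline, None)
k2, d2 = orb.detectAndCompute(follow_up, None)

# Match descriptors and keep the best correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the follow-up image onto the baseline and highlight differences.
aligned = cv2.warpPerspective(follow_up, H, baseline.shape[::-1])
change_map = cv2.absdiff(baseline, aligned)
cv2.imwrite("change_map.png", change_map)
```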

    Deep learning analysis of eye fundus images to support medical diagnosis

    Machine learning techniques have been successfully applied to support medical decision making in cancer, heart disease and degenerative diseases of the brain. In particular, deep learning methods have been used for the early detection of abnormalities in the eye that could improve the diagnosis of different ocular diseases, especially in developing countries, where there are major limitations on access to specialized medical treatment. However, the early detection of clinical signs such as blood vessel and optic disc alterations, exudates, hemorrhages, drusen, and microaneurysms presents three main challenges: the ocular images can be affected by noise artifacts, the features of the clinical signs depend specifically on the acquisition source, and combining local signs with the grading disease label is not an easy task. This research approaches the problem of combining local signs and global labels from different acquisition sources of medical information as a valuable tool to support medical decision making in ocular diseases. Different models were developed for different eye diseases. Four models were developed using eye fundus images. For DME, a two-stage model was designed that uses a shallow model to predict an exudate binary mask; the binary mask is then stacked with the raw fundus image into a 4-channel array used as the input of a deep convolutional neural network for diabetic macular edema diagnosis. For glaucoma, three deep learning models were developed. The first is based on three stages: an initial stage automatically segments two binary masks containing the optic disc and physiological cup, an intermediate stage automatically extracts morphometric features from these segmentations, and a final classification stage supports the glaucoma diagnosis with intermediate medical information. The other two are late-data-fusion methods that fuse morphometric features from Cartesian and polar segmentations of the optic disc and physiological cup with features extracted from raw eye fundus images. On the other hand, two models were defined using optical coherence tomography (OCT). The first is a customized convolutional neural network, termed OCT-NET, that extracts features from OCT volumes to classify DME, DR-DME and AMD conditions; in addition, this model generates images highlighting local information about the clinical signs and estimates the number of slices inside a volume with local abnormalities. The second is a 3D deep learning model that uses OCT volumes as input to estimate the retinal thickness map, which is useful for grading AMD. The methods were systematically evaluated using ten free public datasets, compared and validated against other state-of-the-art algorithms, and the results were also qualitatively evaluated by ophthalmology experts from Fundación Oftalmológica Nacional. In addition, the proposed methods were tested as a diagnosis support tool for diabetic macular edema, glaucoma, diabetic retinopathy and age-related macular degeneration using two different ocular imaging representations. Thus, we consider that this research could potentially be a big step towards building telemedicine tools that support medical personnel in detecting ocular diseases using eye fundus images and optical coherence tomography.
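    A minimal sketch of the 4-channel input used by the DME model described above: an exudate mask is stacked onto the RGB fundus image along the channel dimension and fed to a small convolutional network. Both the tensors and the architecture here are placeholders, not the model from the thesis.

```python
# Hedged sketch: stacking a binary exudate mask with an RGB fundus image as a
# 4-channel tensor and passing it through a stand-in CNN classifier.
import torch
import torch.nn as nn

rgb_fundus = torch.rand(1, 3, 224, 224)                      # placeholder RGB image
exudate_mask = (torch.rand(1, 1, 224, 224) > 0.9).float()    # placeholder binary mask

x = torch.cat([rgb_fundus, exudate_mask], dim=1)             # -> (1, 4, 224, 224)

dme_classifier = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                                        # DME vs non-DME logits
)
logits = dme_classifier(x)
print(logits.shape)                                          # torch.Size([1, 2])
```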

    Image Color Correction, Enhancement, and Editing

    This thesis presents methods and approaches to image color correction, color enhancement, and color editing. To begin, we study the color correction problem from the standpoint of the camera's image signal processor (ISP). A camera's ISP is hardware that applies a series of in-camera image processing and color manipulation steps, many of which are nonlinear in nature, to render the initial sensor image to its final photo-finished representation saved in the 8-bit standard RGB (sRGB) color space. As white balance (WB) is one of the major procedures applied by the ISP for color correction, this thesis presents two different methods for ISP white balancing. Afterwards, we discuss another scenario of correcting and editing image colors, in which we present a set of methods to correct and edit WB settings for images that have been improperly white-balanced by the ISP. Then, we explore another factor that has a significant impact on the quality of camera-rendered colors, for which we outline two different methods to correct exposure errors in camera-rendered images. Lastly, we discuss post-capture auto color editing and manipulation. In particular, we propose auto image recoloring methods to generate different realistic versions of the same camera-rendered image with new colors. Through extensive evaluations, we demonstrate that our methods provide superior solutions compared to existing alternatives targeting color correction, color enhancement, and color editing.
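    For context, the sketch below shows the classical grey-world heuristic, a textbook baseline for the in-camera white-balance step discussed above; the thesis proposes learned corrections rather than this heuristic, and the input image here is synthetic.

```python
# Hedged sketch: grey-world diagonal white balance as a simple WB baseline.
import numpy as np

def gray_world_white_balance(image: np.ndarray) -> np.ndarray:
    """Scale each channel so the image's mean colour becomes neutral grey."""
    image = image.astype(np.float64)
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means     # per-channel diagonal gains
    balanced = image * gains
    return np.clip(balanced, 0, 255).astype(np.uint8)

# Example with a synthetic image carrying a strong colour cast.
rng = np.random.default_rng(1)
cast_image = (rng.random((64, 64, 3)) * [255, 180, 120]).astype(np.uint8)
print(gray_world_white_balance(cast_image).reshape(-1, 3).mean(axis=0))
```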

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model allows the quality of the interaction to be evaluated quantitatively, using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory into the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source low-cost robotic head platform, where gaze is the social signal considered.

    Efficient image duplicate detection based on image analysis

    This thesis is about the detection of duplicated images. More precisely, the developed system is able to discriminate possibly modified copies of original images from other unrelated images. The proposed method is referred to as content-based since it relies only on content analysis techniques rather than on image tagging as done in watermarking. The proposed content-based duplicate detection system classifies a test image by associating it with a label that corresponds to one of the original known images. The classification is performed in four steps. In the first step, the test image is described using global statistics about its content. In the second step, the most likely original images are efficiently selected using a spatial indexing technique called the R-Tree. The third step consists in using binary detectors to estimate the probability that the test image is a duplicate of the original images selected in the second step. Indeed, each original image known to the system is associated with an adapted binary detector, based on a support vector classifier, that estimates the probability that a test image is one of its duplicates. Finally, the fourth and last step consists in choosing the most probable original by picking the one with the highest estimated probability. Comparative experiments have shown that the proposed content-based image duplicate detector greatly outperforms detectors using the same image description but based on simpler distance functions rather than a classification algorithm. Additional experiments were carried out to compare the proposed system with existing state-of-the-art methods. Accordingly, it also outperforms the perceptual distance function method, which uses similar statistics to describe the image. While the proposed method is slightly outperformed by the key-point method, it is five to ten times less complex in terms of computational requirements. Finally, note that the nature of this thesis is essentially exploratory, since it is one of the first attempts to apply machine learning techniques to the relatively recent field of content-based image duplicate detection.
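    The four-step pipeline above can be compressed into a small sketch with simplified stand-ins: colour-histogram statistics as the global descriptor, a brute-force nearest-neighbour shortlist in place of the R-Tree index, and one probabilistic SVM per original image as the binary duplicate detector. All data is synthetic and none of this is the thesis implementation.

```python
# Hedged sketch of describe -> shortlist -> per-original SVM -> pick-most-probable.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def describe(image: np.ndarray) -> np.ndarray:
    """Step 1: global content statistics (a per-channel intensity histogram)."""
    return np.concatenate(
        [np.histogram(image[..., c], bins=8, range=(0, 1), density=True)[0]
         for c in range(3)]
    )

rng = np.random.default_rng(2)
originals = [rng.random((32, 32, 3)) for _ in range(10)]
descriptors = np.stack([describe(img) for img in originals])

# Step 3 (training): one binary detector per original, trained on noisy copies
# of that original (positives) versus the other originals (negatives).
detectors = []
for i in range(len(originals)):
    positives = np.stack([
        describe(np.clip(originals[i] + rng.normal(0, 0.05, originals[i].shape), 0, 1))
        for _ in range(8)
    ])
    negatives = np.delete(descriptors, i, axis=0)
    X = np.vstack([positives, negatives])
    y = np.r_[np.ones(len(positives)), np.zeros(len(negatives))]
    detectors.append(SVC(probability=True).fit(X, y))

# Steps 2 and 4 (query): shortlist likely originals, then pick the most probable.
index = NearestNeighbors(n_neighbors=3).fit(descriptors)
query = describe(np.clip(originals[4] + rng.normal(0, 0.05, originals[4].shape), 0, 1))
_, candidates = index.kneighbors(query[None, :])
probs = {j: detectors[j].predict_proba(query[None, :])[0, 1] for j in candidates[0]}
best = max(probs, key=probs.get)
print("best matching original:", best, "p =", round(probs[best], 3))
```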
