6 research outputs found

    Flexible Hardware Architectures for Retinal Image Analysis

    ABSTRACT Millions of people all around the world are affected by diabetes. Several ocular complications such as diabetic retinopathy are caused by diabetes, which can lead to irreversible vision loss or even blindness if not treated. Regular comprehensive eye exams by eye doctors are required to detect the diseases at earlier stages and permit their treatment. As a preventive solution, a screening protocol involving the use of digital fundus images was adopted. This allows eye doctors to monitor changes in the retina to detect any presence of eye disease. This solution made regular examinations widely available, even to populations in remote and underserved areas. 
With the resulting large number of retinal images, automated techniques to process them have become indispensable. Automated eye disease detection techniques have been widely addressed by the research community and have now reached a high level of maturity, which has enabled, among other things, the deployment of telemedicine solutions. In this thesis, we address the problem of processing a high volume of retinal images in a reasonable time in a telemedicine screening context. This is mandatory to allow the practical use of the developed techniques in a clinical setting. We focus on two steps of the retinal image processing pipeline. The first step is retinal image quality assessment. The second step is retinal blood vessel segmentation. The evaluation of the quality of retinal images after acquisition is essential to the proper functioning of any automated retinal image processing system. The role of this step is to classify the acquired images according to their quality, which allows an automated system to request a new acquisition when an image is of poor quality. Several algorithms for evaluating the quality of retinal images have been proposed in the literature. However, even though accelerating this task is required, especially to enable mobile retinal image capture systems, it has not yet been addressed in the literature. In this thesis, we target an algorithm that computes image features to allow their classification as bad, medium, or good quality. We identified the computation of image features as a repetitive task that requires acceleration. We were particularly interested in accelerating the Run-Length Matrix (RLM) algorithm. We proposed a first, fully software implementation in the form of an embedded system based on Xilinx's Zynq technology. To accelerate the feature computation, we designed a co-processor able to compute the features in parallel, implemented on the programmable logic of the Zynq FPGA. 
We achieved an acceleration of 30.1× over the software implementation for the feature-computation part of the RLM algorithm. Retinal blood vessel segmentation is a key task in the retinal image processing pipeline. Blood vessels and their characteristics are good indicators of retinal health. In addition, their segmentation can also help to segment the red lesions that are indicators of diabetic retinopathy. Several techniques have been proposed in the literature to segment retinal blood vessels. Hardware architectures have also been proposed to accelerate some of these techniques. The existing architectures lack performance and programming flexibility, especially for high-resolution images. In this thesis, we targeted two techniques: matched filtering and line operators. The matched filtering technique was targeted mainly because of its popularity. For this technique, we proposed two different architectures: a custom hardware architecture implemented on an FPGA, and an Application-Specific Instruction-set Processor (ASIP) based architecture. The custom hardware architecture was optimized for area and timing to achieve higher performance than existing implementations, and it outperforms all of them in terms of throughput. For the ASIP-based architecture, we identified two bottlenecks related to data access and to the computational intensity of the algorithm. We designed two specific instructions added to the processor datapath, making the ASIP 7.7× faster in execution time than its baseline architecture. The second technique for blood vessel segmentation is the Multi-Scale Line Detector (MSLD) algorithm. The MSLD algorithm was selected because of its performance and its potential to detect small blood vessels. However, the algorithm works at multiple scales, which makes it memory-intensive. 
To solve this problem and allow accelerated execution, we proposed a memory-efficient algorithm designed and implemented on an FPGA. The proposed architecture drastically reduces the memory requirements of the algorithm through computation reuse and SW/HW co-design. The two hardware architectures proposed for retinal blood vessel segmentation were made flexible so that they can process both low- and high-resolution images. This was achieved by developing a specific compiler able to generate a low-level HDL description of the algorithm from a set of algorithm parameters. The compiler enabled us to optimize both performance and development time. In this thesis, we introduce two novel architectures which are, to the best of our knowledge, the only ones able to process both low- and high-resolution images.
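The run-length texture features at the heart of the quality-assessment step can be sketched in software. The following Python sketch is a minimal illustration, not the thesis's Zynq implementation: the function names, the restriction to horizontal runs, and the choice of two emphasis features are all assumptions. It builds a gray-level run-length matrix and derives short-run and long-run emphasis from it:

```python
import numpy as np

def run_length_matrix(img, levels=8):
    """Horizontal gray-level run-length matrix (GLRLM).
    rlm[g, r-1] counts runs of length r at quantized gray level g."""
    # Quantize the image to a small number of gray levels.
    q = np.floor(img.astype(float) / 256 * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    max_run = q.shape[1]                      # a run cannot exceed the row width
    rlm = np.zeros((levels, max_run), dtype=int)
    for row in q:
        run = 1
        for j in range(1, len(row)):
            if row[j] == row[j - 1]:
                run += 1                      # run continues
            else:
                rlm[row[j - 1], run - 1] += 1 # close the finished run
                run = 1
        rlm[row[-1], run - 1] += 1            # close the last run of the row
    return rlm

def rlm_features(rlm):
    """Two classic GLRLM texture features."""
    r = np.arange(1, rlm.shape[1] + 1)
    n_runs = rlm.sum()
    sre = (rlm / (r ** 2)).sum() / n_runs     # short-run emphasis
    lre = (rlm * (r ** 2)).sum() / n_runs     # long-run emphasis
    return sre, lre
```

Because each image row is processed independently, the per-row loop is exactly the kind of repetitive, parallelizable work the abstract describes offloading to a co-processor on the Zynq programmable logic.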

    A Feasibility Study to Develop an Integrated Diabetic Retinopathy Screening Programme in the Western Province of Sri Lanka

    Background: Diabetic retinopathy (DR) is a common microvascular complication of diabetes mellitus which can lead to sight loss if not detected and treated in time. Objectives: This study aimed to assess the feasibility of integrating DR screening (DRS) services into free public-sector health care in Sri Lanka. The objectives were to identify barriers to accessing DRS, to determine the most appropriate DRS modality, and to assess the acceptability of a health educational intervention (HEI). Methods: The study was conducted using mixed methods. Barriers were assessed through a systematic literature search and qualitative studies. A systematic literature review and meta-analysis was conducted to assess the diagnostic accuracy of DRS using digital retinal imaging. Based on the results of the formative stages, a local context-specific DRS modality was defined and validated at a tertiary-level medical clinic by trained physician graders. Finally, an HEI was adapted and its acceptability assessed using a participatory approach. Results: The formative studies revealed lack of knowledge and awareness of DR, lack of skilled human resources, and lack of DRS imaging infrastructure as the main barriers. In the meta-analysis, the highest sensitivity was observed for the mydriatic more-than-two-field strategy (92%, 95% CI 90-94%). In the validation study, the sensitivity for the defined referable DR was 88.7% for grader 1 and 92.5% for grader 2, using mydriatic imaging. The specificity was 94.9% for grader 1 and 96.4% for grader 2. The overall acceptability of the HEI material was satisfactory. Conclusions: Knowing the barriers to accessing DRS is a prerequisite for developing a DRS program. A non-mydriatic two-field strategy is a more pragmatic approach to implementing DRS programs in low-income non-ophthalmic settings, with dilatation of the pupils of those who have ungradable images. The process of adapting the HEI was not simply translation into the local language but rather a tailored approach for the local context.
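The grader-accuracy figures reported above follow from standard 2×2 screening-test arithmetic. The Python sketch below is illustrative only: the function name is invented, the example counts are not the study's data, and the normal-approximation (Wald) interval is just one of several interval choices the study could have used:

```python
import math

def screening_accuracy(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with normal-approximation 95% CIs.
    Returns ((estimate, lo, hi), (estimate, lo, hi))."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)  # Wald interval half-width
        return p, max(0.0, p - half), min(1.0, p + half)
    sens = prop_ci(tp, tp + fn)  # true positives among those with referable DR
    spec = prop_ci(tn, tn + fp)  # true negatives among those without
    return sens, spec
```

For instance, a grader who correctly refers 90 of 100 patients with referable DR has a sensitivity of 90%, the same order as the 88.7% reported for grader 1.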

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing and, in particular, on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper, we cast this theory in a setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal considered.
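One simple way to quantify how effective a recognition phase is — the kind of statistical measurement the abstract alludes to — is agreement between the signals an interactant intends to convey and those the other party recognizes. This Python sketch is an assumed illustration, not the paper's actual metric: it computes raw accuracy and Cohen's kappa over paired label sequences:

```python
from collections import Counter

def recognition_agreement(intended, recognized):
    """Accuracy and Cohen's kappa between intended and recognized signals."""
    n = len(intended)
    # Observed agreement: fraction of trials where recognition matched intent.
    po = sum(a == b for a, b in zip(intended, recognized)) / n
    # Chance agreement expected from the two marginal label distributions.
    ci, cr = Counter(intended), Counter(recognized)
    pe = sum(ci[label] * cr[label] for label in ci) / (n * n)
    kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0
    return po, kappa
```

Kappa corrects raw accuracy for chance, which matters when one signal class (e.g. mutual gaze) dominates the interaction.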

    Telemedicine

    Telemedicine is a rapidly evolving field as new technologies are implemented, for example wireless sensors and high-quality data transmission. Internet-based applications such as counseling, clinical consultation support, and home care monitoring and management are increasingly being realized, improving access to high-level medical care in underserved areas. The 23 chapters of this book present manifold examples of telemedicine, treating both theoretical and practical foundations and application scenarios.

    Aspectos do rastreamento do glaucoma auxiliados por técnicas automatizadas em imagens com menor qualidade do disco óptico

    Glaucoma is an optic neuropathy whose progression can lead to blindness. It represents the leading cause of irreversible visual loss worldwide for men and women. Early detection through screening programs carried out by specialists is based on the characteristics of the optic papilla, ophthalmic biomarkers (especially eye pressure), and subsidiary exams, emphasizing the visual field and optical coherence tomography (OCT). After recognizing the cases, treatment is carried out to stop the progression of the disease and improve patients' quality of life. However, these screening programs have limitations, particularly in places far from the large specialized treatment centers: a lack of essential equipment and technical personnel to offer screening to the entire population, a lack of means of transport to these centers, and a lack of information and knowledge about the disease, compounded by its asymptomatic progression. This thesis develops innovative approaches to contribute to the automation of glaucoma screening using portable and cheaper devices, considering the real needs of clinicians during screening. For this, systematic reviews were carried out on the methods and equipment to support automatic glaucoma screening and on the applicable deep learning methods for segmentation and classification. 
A survey of medical issues related to glaucoma screening was carried out and associated with the field of artificial intelligence to make automated methodologies more effective. In addition, a private dataset was created, with videos and retina images acquired using a smartphone coupled with a low-cost lens, for glaucoma screening, and evaluated with state-of-the-art methods. Automatic glaucoma detection methods based on deep learning segmentation of the optic disc and cup were evaluated and analyzed on public databases of retinal images. Deep learning classification methods were evaluated on public retinal image databases and on a private database of low-cost images. Finally, mosaicking and object detection techniques were evaluated on low-quality images for pre-processing images acquired by smartphones coupled with low-cost lenses.
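A common quantity derived from the disc and cup segmentations that such networks produce is the vertical cup-to-disc ratio (CDR), a standard glaucoma indicator. The following Python sketch is an illustration under assumed mask conventions, not the thesis's code:

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks.
    Each mask is a 2-D boolean array; height is counted along rows."""
    def height(mask):
        rows = np.flatnonzero(mask.any(axis=1))   # rows containing the structure
        return 0 if rows.size == 0 else rows[-1] - rows[0] + 1
    d = height(disc_mask)
    if d == 0:
        raise ValueError("empty disc mask")
    return height(cup_mask) / d
```

Clinically, larger ratios indicate more cupping; thresholds vary, but values above roughly 0.6 are often flagged for specialist referral.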

    Psychophysiological indices of recognition memory

    It has recently been found that during recognition memory tests participants’ pupils dilate more when they view old items compared to novel items. This thesis sought to replicate this novel ‘‘Pupil Old/New Effect’’ (PONE) and to determine its relationship to implicit and explicit mnemonic processes, the veracity of participants’ responses, and the analogous Event-Related Potential (ERP) old/new effect. Across 9 experiments, pupil size was measured with a video-based eye-tracker during a variety of recognition tasks and, in the case of Experiment 8, with concurrent electroencephalography (EEG). The main findings of this thesis are that:
    - the PONE occurs in a standard explicit test of recognition memory but not in “implicit” tests of either perceptual fluency or artificial grammar learning;
    - the PONE is present even when participants are asked to give false behavioural answers in a malingering task, or are asked not to respond at all;
    - the PONE is present when attention is divided both at learning and during recognition;
    - the PONE is accompanied by a posterior ERP old/new effect;
    - the PONE does not occur when participants are asked to read previously encountered words without making a recognition decision;
    - the PONE does not occur if participants preload an “old/new” response;
    - the PONE is not enhanced by repetition during learning.
    These findings are discussed in the context of current models of recognition memory and other psychophysiological indices of mnemonic processes. It is argued that, together, these findings suggest that the increase in pupil size which occurs when participants encounter previously studied items is not under conscious control and may reflect primarily recollective processes associated with recognition memory.
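The PONE itself reduces to a simple contrast: baseline-corrected pupil size on old-item trials minus that on new-item trials. The Python sketch below illustrates that computation; the array layout, the baseline window, and the function name are assumptions, not the thesis's analysis pipeline:

```python
import numpy as np

def pupil_old_new_effect(trials, labels, baseline_n=10):
    """Mean baseline-corrected pupil size for old vs. new trials.

    trials: (n_trials, n_samples) pupil-diameter traces;
    labels: sequence of 'old'/'new' per trial;
    baseline_n: samples at the start of each trace used as the
    pre-stimulus baseline."""
    trials = np.asarray(trials, dtype=float)
    labels = np.asarray(labels)
    # Subtract each trial's pre-stimulus baseline, then average the remainder.
    corrected = trials - trials[:, :baseline_n].mean(axis=1, keepdims=True)
    scores = corrected[:, baseline_n:].mean(axis=1)
    old_mean = scores[labels == "old"].mean()
    new_mean = scores[labels == "new"].mean()
    return old_mean - new_mean  # positive -> pupils dilated more for old items
```

Baseline correction matters here because absolute pupil diameter drifts with luminance and arousal; only the within-trial change is informative about recognition.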