
    Feedback and surround modulated boundary detection

    Other grants: CERCA Programme / Generalitat de Catalunya. Edges are key components of any visual scene, to the extent that we can recognise objects merely by their silhouettes. The human visual system captures edge information through neurons in the visual cortex that are sensitive to both intensity discontinuities and particular orientations. The "classical approach" assumes that these cells respond only to the stimulus present within their receptive fields; however, recent studies demonstrate that surrounding regions and inter-areal feedback connections significantly influence their responses. In this work we propose a biologically inspired edge detection model in which orientation-selective neurons are represented by the first derivative of a Gaussian function, resembling double-opponent cells in the primary visual cortex (V1). Our model accounts for four kinds of receptive-field surround, i.e. full, far, iso- and orthogonal-orientation, whose contributions are contrast-dependent. The output signal from V1 is pooled in its perpendicular direction by larger V2 neurons employing a contrast-variant centre-surround kernel. We further introduce a feedback connection from higher-level visual areas to lower ones. On three benchmark datasets, our model shows a substantial improvement over current non-learning, biologically inspired state-of-the-art algorithms while remaining competitive with learning-based methods.
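
    As a rough illustration of the model's front end only, the sketch below applies a bank of oriented first-derivative-of-Gaussian filters to an image, the building block the abstract describes for orientation-selective V1 neurons. The surround modulation, V2 pooling, and feedback stages are not reproduced here, and all function names and parameter values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def gaussian_derivative_kernel(sigma, theta, size=None):
    """First derivative of a 2-D Gaussian, oriented at angle theta (radians)."""
    if size is None:
        size = int(6 * sigma) | 1          # odd width covering ~3 sigma per side
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    # Rotate coordinates so the derivative is taken across orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return -xr / sigma**2 * g              # derivative of the Gaussian along xr

def oriented_edge_response(image, sigma=2.0, n_orientations=8):
    """Maximum absolute response over a bank of oriented filters."""
    responses = [
        np.abs(ndimage.convolve(image.astype(float),
                                gaussian_derivative_kernel(sigma, t)))
        for t in np.linspace(0, np.pi, n_orientations, endpoint=False)
    ]
    return np.max(responses, axis=0)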

    Deep into the Eyes: Applying Machine Learning to improve Eye-Tracking

    Eye-tracking has been an active research area with applications in personal and behavioral studies, medical diagnosis, virtual reality, and mixed reality applications. Improving the robustness, generalizability, accuracy, and precision of eye-trackers while maintaining privacy is crucial. Unfortunately, many existing low-cost portable commercial eye trackers suffer from signal artifacts and a low signal-to-noise ratio. These trackers are highly dependent on low-level features such as pupil edges or diffused bright spots in order to precisely localize the pupil and corneal reflection. As a result, they are not reliable for studying eye movements that require high precision, such as microsaccades, smooth pursuit, and vergence. Additionally, these methods suffer from reflective artifacts and occlusion of the pupil boundary by the eyelid, and often require a manual update of person-dependent parameters to identify the pupil region. In this dissertation, I demonstrate (I) a new method to improve precision while maintaining the accuracy of head-fixed eye trackers by combining velocity information from iris textures across frames with position information, (II) a generalized semantic segmentation framework for identifying eye regions, with a further extension to identify ellipse fits on the pupil and iris, (III) a data-driven rendering pipeline to generate a temporally contiguous synthetic dataset for use in many eye-tracking applications, and (IV) a novel strategy to preserve privacy in eye videos captured as part of the eye-tracking process. My work also provides the foundation for future research by addressing critical questions such as the suitability of synthetic datasets for improving eye-tracking performance in real-world applications, and ways to improve the precision of future commercial eye trackers with improved camera specifications.
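
    To picture point (II), the hedged sketch below shows one common way to obtain an ellipse fit from a segmented pupil mask using OpenCV (version 4 assumed). The dissertation extends the segmentation framework itself to produce the fit, so this post-hoc version, with the assumed helper name fit_pupil_ellipse, only approximates the idea.

```python
import cv2
import numpy as np

def fit_pupil_ellipse(pupil_mask):
    """Fit an ellipse to the largest connected pupil region in a binary mask.

    pupil_mask: uint8 array where segmented pupil pixels are non-zero
    (e.g. the pupil class output by a semantic segmentation network).
    Returns ((cx, cy), (major, minor), angle_deg), or None if no fit is possible.
    """
    contours, _ = cv2.findContours(pupil_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                   # cv2.fitEllipse needs >= 5 points
        return None
    return cv2.fitEllipse(largest)
```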

    Evolving Fuzzy Classifiers: Application to Incremental Learning of Handwritten Gesture Recognition Systems

    In this paper, we present a new method to design customizable self-evolving fuzzy rule-based classifiers. The presented approach combines an incremental clustering algorithm with a fuzzy adaptation method in order to learn and maintain the model. We use this method to build an evolving handwritten gesture recognition system. The self-adaptive nature of this system allows it to start its learning process with only a few training samples, to continuously adapt and evolve according to any new data, and to remain robust when a new unseen class is introduced at any moment in the life-long learning process.
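
    As a minimal sketch of the general idea, not the paper's algorithm, the class below combines incremental clustering with a simple fuzzy membership rule: prototypes act as rule centres, a new sample either updates its nearest prototype or spawns a new one, and an unseen class can be introduced at any moment. The class name, the radius threshold, and the inverse-distance membership are all assumptions for illustration.

```python
import numpy as np

class EvolvingPrototypeClassifier:
    """Illustrative evolving, prototype-based fuzzy classifier (a sketch)."""

    def __init__(self, radius=1.0):
        self.radius = radius   # distance beyond which a new prototype is created
        self.protos = {}       # label -> list of [centre, sample_count]

    def learn(self, x, label):
        x = np.asarray(x, dtype=float)
        protos = self.protos.setdefault(label, [])
        if protos:
            dists = [np.linalg.norm(x - c) for c, _ in protos]
            i = int(np.argmin(dists))
            if dists[i] <= self.radius:
                c, n = protos[i]
                protos[i] = [c + (x - c) / (n + 1), n + 1]  # running-mean update
                return
        protos.append([x.copy(), 1])       # new prototype: new rule or new class

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        # Fuzzy membership: inverse squared distance to each class's closest prototype.
        def membership(label):
            d = min(np.linalg.norm(x - c) for c, _ in self.protos[label])
            return 1.0 / (1e-9 + d**2)
        return max(self.protos, key=membership)
```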

    Automated retinal analysis

    Diabetes is a chronic disease affecting over 2% of the population in the UK [1]. Long-term complications of diabetes can affect many different systems of the body, including the retina of the eye. In the retina, diabetes can lead to a disease called diabetic retinopathy, one of the leading causes of blindness in the working population of industrialised countries. The risk of visual loss from diabetic retinopathy can be reduced if treatment is given at the onset of sight-threatening retinopathy. To detect early indicators of the disease, the UK National Screening Committee has recommended that diabetic patients receive annual screening by digital colour fundal photography [2]. Manually grading retinal images is a subjective and costly process requiring highly skilled staff. This thesis describes an automated diagnostic system based on image processing and neural network techniques, which analyses digital fundus images so that early signs of sight-threatening retinopathy can be identified. Within retinal analysis, this research has concentrated on the development of four algorithms: optic nerve head segmentation, lesion segmentation, image quality assessment, and vessel width measurement. This research amalgamated these four algorithms with two existing techniques to form an integrated diagnostic system. When used as a 'pre-filtering' tool, the diagnostic system successfully reduced the number of images requiring human grading by 74.3%; this was achieved by identifying and excluding images without sight-threatening maculopathy from manual screening.
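
    The 'pre-filtering' role can be pictured with a small hedged sketch: a triage rule over per-image analysis outputs. The field and function names are assumptions for illustration and do not come from the thesis; the real system combines the outputs of all the algorithms above.

```python
from dataclasses import dataclass

@dataclass
class RetinalAnalysis:
    """Illustrative per-image outputs of the screening pipeline."""
    good_quality: bool     # from the image quality assessment stage
    lesions_found: bool    # lesion segmentation found candidate pathology

def needs_human_grading(r: RetinalAnalysis) -> bool:
    """Sketch of a pre-filtering triage rule: only good-quality images with no
    detected lesions are excluded; everything else goes to a human grader."""
    return (not r.good_quality) or r.lesions_found

print(needs_human_grading(RetinalAnalysis(True, False)))   # False: safe to exclude
print(needs_human_grading(RetinalAnalysis(False, False)))  # True: ungradeable -> human
```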

    Personalizable Pen-Based Interface Using Life-Long Learning

    In this paper, we present a new method to design customizable self-evolving fuzzy rule-based classifiers. The presented approach combines an incremental clustering algorithm with a fuzzy adaptation method in order to learn and maintain the model. We use this method to build an evolving handwritten gesture recognition system that can be integrated into an application to provide personalization capabilities. Experiments on an on-line gesture database were performed considering various user personalization scenarios. They show that the proposed evolving gesture recognition system continuously adapts and evolves according to new data from learned classes, and remains robust when new unseen classes are introduced at any moment during the life-long learning process.
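
    Reusing the EvolvingPrototypeClassifier sketch from the earlier abstract, a personalization scenario might look as follows. The gesture labels and 2-D feature vectors are entirely made up; real gesture features would be higher-dimensional.

```python
clf = EvolvingPrototypeClassifier(radius=1.0)

# Initial training on two of the user's gestures.
for x in [[0.0, 0.1], [0.1, 0.0]]:
    clf.learn(x, "circle")
for x in [[5.0, 5.1], [5.1, 4.9]]:
    clf.learn(x, "swipe")

print(clf.predict([0.05, 0.05]))   # -> "circle"

# The user introduces a brand-new gesture class mid-stream:
clf.learn([9.0, 0.0], "zigzag")
print(clf.predict([8.8, 0.2]))     # -> "zigzag"
```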

    Machine learning methods for sign language recognition: a critical review and analysis.

    Sign language is an essential tool for bridging the communication gap between hearing and hearing-impaired people. However, the diversity of over 7000 present-day sign languages, with variability in motion, hand shape, and the position of body parts, makes automatic sign language recognition (ASLR) a complex task. To overcome this complexity, researchers are investigating better ways of developing ASLR systems and have demonstrated remarkable success. This paper analyses the research published on intelligent systems in sign language recognition over the past two decades. A total of 649 publications related to decision support and intelligent systems for sign language recognition (SLR) were extracted from the Scopus database and analysed using the bibliometric software VOSviewer to (1) obtain the publications' temporal and regional distributions, and (2) map the cooperation networks between affiliations and authors and identify productive institutions in this context. Moreover, reviews of techniques for vision-based sign language recognition are presented, and the various feature extraction and classification techniques used in SLR to achieve good results are discussed. The literature review shows the importance of incorporating intelligent solutions into sign language recognition systems and reveals that a perfect intelligent system for sign language recognition is still an open problem. Overall, it is expected that this study will facilitate knowledge accumulation and the creation of intelligence-based SLR, and provide readers, researchers, and practitioners with a roadmap to guide future directions.
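
    As a generic template of the feature-extraction-plus-classification approach such reviews survey, not any specific system from the paper, the sketch below pools per-frame features over a sign clip and trains an SVM. All data, labels, and names are synthetic stand-ins; real systems would use hand-shape and motion descriptors per frame.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def clip_features(frames):
    """Toy clip-level features: mean and std of per-frame feature vectors."""
    f = np.asarray(frames, dtype=float)    # shape: (n_frames, n_features)
    return np.concatenate([f.mean(axis=0), f.std(axis=0)])

# Entirely synthetic stand-in data: 10 clips per sign, 20 frames of 4 features.
rng = np.random.default_rng(0)
X_clips = [rng.normal(loc=k, size=(20, 4)) for k in (0, 1) for _ in range(10)]
y = ["hello"] * 10 + ["thanks"] * 10

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit([clip_features(c) for c in X_clips], y)
print(model.predict([clip_features(rng.normal(loc=1, size=(20, 4)))]))  # likely "thanks"
```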