2 research outputs found

    Extracting decision-making features from the unstructured eye movements of clinicians on glaucoma OCT reports and developing AI models to classify expertise

    This study investigated the eye movement patterns of ophthalmologists with varying expertise levels during the assessment of optical coherence tomography (OCT) reports for glaucoma detection. Objectives included evaluating eye gaze metrics and patterns as a function of ophthalmic education, deriving novel features from eye-tracking data, and developing binary classification models for disease detection and expertise differentiation. Thirteen ophthalmology residents, fellows, and clinicians specializing in glaucoma participated in the study. Junior residents had less than 1 year of experience, senior residents had 2–3 years of experience, and the expert group consisted of fellows and faculty with 3 to more than 30 years of experience. Each participant was presented with a set of 20 Topcon OCT reports (10 healthy and 10 glaucomatous) and was asked to determine the presence or absence of glaucoma and to rate their confidence in the diagnosis. Each participant's eye movements were recorded with a Pupil Labs Core eye tracker as they assessed the reports. Expert ophthalmologists exhibited more refined and focused fixations, particularly on specific regions of the OCT reports such as the retinal nerve fiber layer (RNFL) probability map and the circumpapillary RNFL b-scan. Binary classification models built on the derived features reached accuracies of up to 94.0% in differentiating expert from novice clinicians. The derived features and trained models hold promise for improving the accuracy of glaucoma detection and for distinguishing between expert and novice ophthalmologists, with implications for ophthalmic education and the development of effective diagnostic tools.
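    The abstract does not specify the feature set or classifier, so the following is a minimal, hypothetical sketch of how gaze-derived metrics could feed a binary expertise classifier. The feature names, the logistic regression model, and the placeholder data are illustrative assumptions, not the authors' pipeline.

        # Hypothetical sketch: expert-vs-novice classification from gaze features.
        # Feature names, model choice, and data are placeholders, not the paper's method.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        # One row per report reading; columns could be metrics such as fixation
        # time on the RNFL probability map, fixation count on the circumpapillary
        # RNFL b-scan, and mean saccade amplitude.
        X = rng.random((260, 3))          # 13 readers x 20 reports (placeholder data)
        y = rng.integers(0, 2, 260)       # 1 = expert, 0 = novice (placeholder labels)

        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"Mean cross-validated accuracy: {scores.mean():.3f}")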

    Attention Biased Speeded Up Robust Features (AB-SURF): A Neurally-Inspired Object Recognition Algorithm for a Wearable Aid for the Visually-Impaired

    Humans recognize objects effortlessly in spite of changes in scale, position, and illumination; emulating this ability in machines remains a challenge. This paper describes computer vision algorithms aimed at helping visually-impaired people locate and recognize objects. Our neurally-inspired computer vision algorithm, called Attention Biased Speeded Up Robust Features (AB-SURF), harnesses features that characterize human visual attention to make the recognition task more tractable. An attention-biasing algorithm selects the most task-driven salient regions in an image, and the SURF object recognition algorithm is then applied to this narrowed subsection of the original image. Testing on images containing 5 different objects yields accuracies ranging from 80% to 100%. Furthermore, testing on images containing 10 objects yields accuracies between 63% and 96% for the 5 objects that occupy the largest area within the image subwindows chosen by attention biasing. A five-fold speed-up is attained using AB-SURF compared with the time estimated for sliding-window recognition on the same images.
    Index Terms: object recognition, visual attention, neurally-inspired computer vision, visual aids for the blind
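    As a rough illustration of the two-stage idea described above (select an attention-worthy subwindow, then run feature-based recognition only inside it), the sketch below uses gradient energy as a crude stand-in for the paper's attention-biasing model and ORB in place of SURF, since SURF is patented and available only in opencv-contrib builds. The window size, step, and match threshold are arbitrary assumptions.

        # Illustrative sketch of the AB-SURF pipeline shape: restrict feature matching
        # to the most "attention-worthy" subwindow instead of scanning the whole frame.
        # Gradient energy and ORB are stand-ins; this is not the paper's implementation.
        import cv2
        import numpy as np

        def most_salient_window(gray, win=128, step=32):
            """Pick the subwindow with the highest gradient energy (a crude saliency proxy)."""
            gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
            gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
            energy = gx * gx + gy * gy
            best, best_xy = -1.0, (0, 0)
            for y in range(0, gray.shape[0] - win + 1, step):
                for x in range(0, gray.shape[1] - win + 1, step):
                    s = float(energy[y:y + win, x:x + win].sum())
                    if s > best:
                        best, best_xy = s, (x, y)
            x, y = best_xy
            return x, y, win

        def recognize_in_window(scene_gray, object_gray, min_matches=10):
            """Match the object's features only inside the selected subwindow of the scene."""
            x, y, w = most_salient_window(scene_gray)
            roi = scene_gray[y:y + w, x:x + w]
            orb = cv2.ORB_create()
            kp1, des1 = orb.detectAndCompute(object_gray, None)
            kp2, des2 = orb.detectAndCompute(roi, None)
            if des1 is None or des2 is None:
                return False
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(des1, des2)
            return len(matches) >= min_matches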