
    Magnetic Resonance Imaging of Optic Nerve Traction During Adduction in Primary Open-Angle Glaucoma With Normal Intraocular Pressure.

    Purpose: We used magnetic resonance imaging (MRI) to ascertain effects of optic nerve (ON) traction in adduction, a phenomenon proposed as neuropathic in primary open-angle glaucoma (POAG).
    Methods: Seventeen patients with POAG and maximal IOP ≤ 20 mm Hg, and 31 controls, underwent MRI in central gaze and 20° to 30° abduction and adduction. Optic nerve and sheath area centroids permitted computation of midorbital lengths versus minimum paths.
    Results: Average mean deviation (±SEM) was -8.2 ± 1.2 dB in the 15 patients with POAG having interpretable perimetry. In central gaze, ON path length in POAG was significantly more redundant (104.5% ± 0.4% of geometric minimum) than in controls (102.9% ± 0.4%, P = 2.96 × 10⁻⁴). In both groups the ON became significantly straighter in adduction (28.6 ± 0.8° in POAG, 26.8 ± 1.1° in controls) than in central gaze and abduction. In adduction, the ON in POAG straightened to 102.0% ± 0.2% of minimum path length versus 104.5% ± 0.4% in central gaze (P = 5.7 × 10⁻⁷), compared with controls, who straightened to 101.6% ± 0.1% from 102.9% ± 0.3% in central gaze (P = 8.7 × 10⁻⁶); and globes retracted 0.73 ± 0.09 mm in POAG, but only 0.07 ± 0.08 mm in controls (P = 8.8 × 10⁻⁷). Both effects were confirmed in age-matched controls and remained significant after correction for significant effects of age and axial globe length (P = 0.005).
    Conclusions: Although tethering and elongation of the ON and sheath are normal in adduction, adduction is associated with abnormally great globe retraction in POAG without elevated IOP. Traction in adduction may cause mechanical overloading of the ON head and peripapillary sclera, thus contributing to or resulting from the optic neuropathy of glaucoma independent of IOP.
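
    To make the path-redundancy figures in the Methods concrete (e.g. ON path length as a percentage of the geometric minimum), the following is a minimal sketch, assuming per-slice ON area centroids are available as ordered 3-D coordinates; the array layout, example values, and function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def path_redundancy(centroids: np.ndarray) -> float:
    """Return optic-nerve path length as a percentage of its geometric minimum.

    `centroids` is an (N, 3) array of ON area centroids, one per image plane,
    ordered from the globe-ON junction toward the orbital apex (illustrative layout).
    """
    segments = np.diff(centroids, axis=0)                         # vectors between consecutive centroids
    actual_length = np.linalg.norm(segments, axis=1).sum()        # piecewise (actual) path length
    minimum_length = np.linalg.norm(centroids[-1] - centroids[0]) # straight-line (minimum) path
    return 100.0 * actual_length / minimum_length

# Hypothetical centroids (mm) for one gaze position: a slightly sinuous path
example = np.array([[0, 0, 0], [2, 0.6, 0], [4, 0.9, 0.2], [6, 0.5, 0.1], [8, 0, 0]])
print(f"ON path is {path_redundancy(example):.1f}% of the geometric minimum")
```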

    An Efficient Machine Learning Approach for Prediction of Conjunctiva Hyperemia Assessment using Feature Extraction Methods

    The human eye is one of the most intricate sense organs. Protecting the eyes against disorders that can cause vision loss if left untreated is crucial to maintaining good sight. Early detection of eye diseases is therefore essential to prevent unintended consequences and to control a disease's continued progression. Conjunctivitis is one such eye condition, characterized by conjunctival inflammation that results in symptoms such as hyperemia (redness) due to increased blood flow. With the best treatments, modern techniques, and early, precise diagnosis by professionals, it can be cured or greatly reduced. The proper diagnosis of the underlying cause of visual problems is frequently postponed, or never carried out, because of a shortage of diagnostic experts, leading to insufficient or delayed corrective treatment. To diagnose and evaluate conjunctivitis, segmentation methods are essential for locating and measuring hyperemic regions. In the present study, segmentation techniques are applied along with feature extraction techniques to provide an effective machine learning framework for the prediction of eye problems. Using the discrete cosine transform (DCT), the segmented regions of interest are converted into feature vectors. These feature vectors are then used to train machine learning classifiers, including random forest and neural networks, which achieve a promising accuracy of 95.92%. This approach enables ophthalmologists to make more objective and accurate assessments, aiding in disease severity evaluation.
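
    As a rough illustration of the pipeline the abstract describes (segmented ROI → DCT feature vector → classifier), here is a hedged sketch using SciPy and scikit-learn; the ROI size, number of retained coefficients, and synthetic labels are assumptions for demonstration, not the study's actual configuration.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.ensemble import RandomForestClassifier

def dct_features(roi: np.ndarray, k: int = 8) -> np.ndarray:
    """Flatten the k x k low-frequency block of the 2-D DCT of a grayscale ROI."""
    coeffs = dctn(roi.astype(float), norm="ortho")
    return coeffs[:k, :k].ravel()

# Hypothetical training data: segmented conjunctiva ROIs with hyperemia labels
rng = np.random.default_rng(0)
rois = rng.random((40, 64, 64))           # stand-in for real segmented ROIs
labels = rng.integers(0, 2, size=40)      # 0 = normal, 1 = hyperemic (illustrative)

X = np.stack([dct_features(r) for r in rois])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```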

    Zero-Shot Segmentation of Eye Features Using the Segment Anything Model (SAM)

    The advent of foundation models signals a new era in artificial intelligence. The Segment Anything Model (SAM) is the first foundation model for image segmentation. In this study, we evaluate SAM's ability to segment features from eye images recorded in virtual reality setups. The increasing requirement for annotated eye-image datasets presents a significant opportunity for SAM to redefine the landscape of data annotation in gaze estimation. Our investigation centers on SAM's zero-shot learning abilities and the effectiveness of prompts like bounding boxes or point clicks. Our results are consistent with studies in other domains, demonstrating that SAM's segmentation effectiveness can be on par with specialized models depending on the feature, with prompts improving its performance, evidenced by an IoU of 93.34% for pupil segmentation in one dataset. Foundation models like SAM could revolutionize gaze estimation by enabling quick and easy image segmentation, reducing reliance on specialized models and extensive manual annotation.
    Comment: 14 pages, 8 figures, 1 table, submitted to ETRA 2024: ACM Symposium on Eye Tracking Research & Applications
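
    A minimal sketch of the kind of point-prompted SAM segmentation and IoU evaluation the abstract refers to, assuming the open-source segment-anything package and a downloaded ViT-H checkpoint; the image array, click coordinates, and ground-truth mask below are placeholders, not data from the study.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Placeholders standing in for a real VR eye image and its annotated pupil mask
eye_image = np.zeros((480, 640, 3), dtype=np.uint8)
gt_pupil_mask = np.zeros((480, 640), dtype=bool)

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(eye_image)                 # expects an HxWx3 uint8 RGB array

# One positive point click roughly on the pupil (illustrative coordinates)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=False,
)
pred_mask = masks[0]

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

print("pupil IoU:", iou(pred_mask, gt_pupil_mask))
```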

    Handbook of Vascular Biometrics


    Improving Iris Recognition through Quality and Interoperability Metrics

    The ability to identify individuals based on their iris is known as iris recognition. Over the past decade, iris recognition has garnered much attention because of its strong performance in comparison with other mainstream biometrics such as fingerprint and face recognition. Performance of iris recognition systems is driven by application scenario requirements. Standoff distance, subject cooperation, underlying optics, and illumination are a few examples of these requirements, which dictate the nature of the images an iris recognition system has to process. Traditional iris recognition systems, dubbed "stop and stare," operate under highly constrained conditions. This ensures that the captured image is of sufficient quality, so that the success of the subsequent processing stages (segmentation, encoding, and matching) is not compromised. When acquisition constraints are relaxed, such as for surveillance or iris on the move, the fidelity of these subsequent processing steps lessens.
    In this dissertation we propose a multi-faceted framework for mitigating the difficulties associated with non-ideal iris images. We develop and investigate a comprehensive iris image quality metric that is predictive of iris matching performance. The metric is composed of photometric measures such as defocus, motion blur, and illumination, but also contains domain-specific measures such as occlusion and gaze angle. These measures are then combined through a fusion rule based on Dempster-Shafer theory. Related to iris segmentation, which is arguably one of the most important tasks in iris recognition, we develop metrics that evaluate the precision of the pupil and iris boundaries. Furthermore, we illustrate three methods which take advantage of the proposed segmentation metrics for rectifying incorrect segmentation boundaries. Finally, we look at the issue of iris image interoperability and demonstrate that techniques from the field of hardware fingerprinting can be utilized to improve iris matching performance when images captured from distinct sensors are involved.
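
    The Dempster-Shafer fusion rule mentioned above can be sketched as follows; the two quality measures, their mass assignments, and the two-hypothesis frame ({good, poor}) are illustrative assumptions rather than the dissertation's actual configuration.

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two Dempster-Shafer mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                       # mass assigned to conflicting evidence
    # Normalise by the non-conflicting mass (Dempster's rule)
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

GOOD, POOR = frozenset({"good"}), frozenset({"poor"})
THETA = GOOD | POOR                                   # full frame of discernment (ignorance)

# Hypothetical masses derived from a defocus measure and an occlusion measure
m_defocus   = {GOOD: 0.7, POOR: 0.2, THETA: 0.1}
m_occlusion = {GOOD: 0.5, POOR: 0.3, THETA: 0.2}
print(dempster_combine(m_defocus, m_occlusion))
```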

    Feasibility of smartphone colorimetry of the face as an anaemia screening tool for infants and young children in Ghana

    Background: Anaemia affects approximately a quarter of the global population. When anaemia occurs during childhood, it can increase susceptibility to infectious diseases and impair cognitive development. This research uses smartphone-based colorimetry to develop a non-invasive technique for screening for anaemia in a previously understudied population of infants and young children in Ghana.
    Methods: We propose a colorimetric algorithm for screening for anaemia which uses a novel combination of three regions of interest: the lower eyelid (palpebral conjunctiva), the sclera, and the mucosal membrane adjacent to the lower lip. These regions are chosen to have minimal skin pigmentation occluding the blood chromaticity. As part of the algorithm development, different methods were compared for (1) accounting for varying ambient lighting and (2) choosing a chromaticity metric for each region of interest. In comparison to some prior work, no specialist hardware (such as a colour reference card) is required for image acquisition.
    Results: Sixty-two patients under 4 years of age were recruited as a convenience clinical sample at Korle Bu Teaching Hospital, Ghana. Forty-three of these had quality images for all regions of interest. Using a naïve Bayes classifier, this method was capable of screening for anaemia (<11.0 g/dL haemoglobin concentration) versus healthy blood haemoglobin concentration (≥11.0 g/dL) with a sensitivity of 92.9% (95% CI 66.1% to 99.8%) and a specificity of 89.7% (72.7% to 97.8%) when acting on unseen data, using only an affordable smartphone and no additional hardware.
    Conclusion: These results add to the body of evidence suggesting that smartphone colorimetry is likely to be a useful tool for making anaemia screening more widely available. However, there remains no consensus on the optimal method for image preprocessing or feature extraction, especially across diverse patient populations.
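
    A hedged sketch of the core idea (per-ROI chromaticity features fed to a naïve Bayes classifier), assuming three ROIs per subject; the chromaticity definition, ROI sizes, and synthetic labels are stand-ins, since the paper's exact preprocessing and metric choices are not reproduced here.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def mean_chromaticity(roi_rgb: np.ndarray) -> np.ndarray:
    """Mean normalised chromaticity (r, g) of an ROI, reducing sensitivity to brightness."""
    rgb = roi_rgb.reshape(-1, 3).astype(float)
    totals = rgb.sum(axis=1, keepdims=True) + 1e-9
    chroma = rgb / totals                      # (r, g, b) with r + g + b = 1
    return chroma.mean(axis=0)[:2]             # keep r and g; b is then redundant

# Hypothetical data: three ROIs (conjunctiva, sclera, lip mucosa) for each of 50 subjects
rng = np.random.default_rng(1)
rois = rng.integers(0, 256, size=(50, 3, 32, 32, 3), dtype=np.uint8)
anaemic = rng.integers(0, 2, size=50)          # 1 = Hb < 11.0 g/dL (illustrative labels)

X = np.array([np.concatenate([mean_chromaticity(r) for r in subject]) for subject in rois])
clf = GaussianNB().fit(X, anaemic)
print("predicted class for first subject:", clf.predict(X[:1])[0])
```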

    Peripapillary and macular choroidal thickness in glaucoma.

    Purpose: To compare choroidal thickness (CT) between individuals with and without glaucomatous damage and to explore the association of peripapillary and submacular CT with glaucoma severity using spectral-domain optical coherence tomography (SD-OCT).
    Methods: Ninety-one eyes of 20 normal subjects and 43 glaucoma patients from the UCLA SD-OCT Imaging Study were enrolled. Imaging was performed using Cirrus HD-OCT. Choroidal thickness was measured at four predetermined points in the macular and peripapillary regions, and compared between glaucoma and control groups before and after adjusting for potential confounding variables.
    Results: The average (± standard deviation) mean deviation (MD) on visual fields was -0.3 (±2.0) dB in controls and -3.5 (±3.5) dB in glaucoma patients. Age, axial length, and their interaction were the most significant factors affecting CT on multivariate analysis. Adjusted average CT (corrected for age, axial length, their interaction, gender, and lens status), however, was not different between glaucoma patients and the control group (P = 0.083) except in the temporal parafoveal region (P = 0.037); nor was choroidal thickness related to glaucoma severity (r = -0.187, P = 0.176 for correlation with MD; r = -0.151, P = 0.275 for correlation with average nerve fiber layer thickness).
    Conclusions: Choroidal thickness of the macular and peripapillary regions is not decreased in glaucoma. Anatomical measurements with SD-OCT do not support the possible influence of the choroid on the pathophysiology of glaucoma.
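
    The covariate-adjusted group comparison described in the Results can be sketched with an ordinary least-squares model; the variable names and simulated values below are purely illustrative, and the paper's full adjustment also includes gender and lens status.

```python
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf

# Hypothetical dataset: choroidal thickness (um) with illustrative covariates
rng = np.random.default_rng(2)
n = 91
df = pd.DataFrame({
    "ct": rng.normal(250, 60, n),           # choroidal thickness
    "age": rng.uniform(40, 85, n),
    "axial_length": rng.normal(24, 1.2, n),
    "glaucoma": rng.integers(0, 2, n),      # 0 = control, 1 = glaucoma
})

# Group effect on CT adjusted for age, axial length, and their interaction
model = smf.ols("ct ~ glaucoma + age * axial_length", data=df).fit()
print("adjusted group difference:", model.params["glaucoma"], "P =", model.pvalues["glaucoma"])
```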