
    A Clinical-oriented Multi-level Contrastive Learning Method for Disease Diagnosis in Low-quality Medical Images

    Representation learning offers a conduit to elucidate distinctive features within the latent space and to interpret deep models. However, the randomness of lesion distribution and the complexity of low-quality factors in medical images pose great challenges for models extracting key lesion features. Disease diagnosis methods guided by contrastive learning (CL) have shown significant advantages in lesion feature representation. Nevertheless, the effectiveness of CL is highly dependent on the quality of the positive and negative sample pairs. In this work, we propose a clinical-oriented multi-level CL framework that aims to enhance the model's capacity to extract lesion features and to discriminate between lesion and low-quality factors, thereby enabling more accurate disease diagnosis from low-quality medical images. Specifically, we first construct multi-level positive and negative pairs to enhance the model's comprehensive recognition of lesion features by integrating information from different levels and qualities of medical images. Moreover, to improve the quality of the learned lesion embeddings, we introduce a dynamic hard-sample mining method based on self-paced learning. The proposed CL framework is validated on two public medical image datasets, EyeQ and Chest X-ray, demonstrating superior performance compared with other state-of-the-art disease diagnosis methods.
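The multi-level pair construction described above builds on a standard contrastive objective in which positives are pulled toward an anchor and negatives pushed away. A minimal sketch of an InfoNCE-style loss (this is a generic illustration, not the authors' implementation; all names are hypothetical):

```python
import numpy as np

def info_nce_loss(anchor, positives, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor embedding.

    anchor: (d,) array; positives: (p, d); negatives: (n, d).
    Embeddings are L2-normalized so dot products are cosine similarities.
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    a = normalize(anchor)
    pos = normalize(positives)
    neg = normalize(negatives)

    pos_sim = np.exp(pos @ a / temperature)  # similarity to each positive
    neg_sim = np.exp(neg @ a / temperature)  # similarity to each negative
    # Each positive is contrasted against the pool of negatives.
    losses = -np.log(pos_sim / (pos_sim + neg_sim.sum()))
    return losses.mean()
```

The loss shrinks as positives align with the anchor and grows as negatives do, which is the property hard-sample mining exploits: pairs with high loss are the informative ones.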

    How Generalizable Are Foundation Models When Applied to Different Demographic Groups and Settings?

    RETFound is a retinal image–based foundation artificial intelligence (AI) model that can be fine-tuned for downstream tasks. However, its generalizability to Asian populations remains unclear. In this study, we fine-tuned RETFound on an Asian-specific dataset. We then evaluated the performance of RETFound versus a conventional Vision Transformer model (pretrained on ImageNet) in diagnosing glaucoma and coronary heart disease and in predicting the 3-year risk of stroke in an Asian population. When fine-tuned on the full dataset, RETFound showed no significant improvement over the conventional Vision Transformer model (areas under the curve [AUCs] of 0.863, 0.628, and 0.557 vs. 0.853, 0.621, and 0.543, respectively; all P≥0.2). In scenarios with limited training data (fine-tuning on ≤25% of the full dataset), RETFound showed a slight advantage (a maximum AUC increase of 0.03), but these improvements were not statistically significant (all P≥0.2). These findings indicate the challenges foundation AI models face in adapting to diverse demographics, emphasizing the need for more diverse data in current foundation models and the importance of global collaboration on foundation model research.

    EyeFound: A Multimodal Generalist Foundation Model for Ophthalmic Imaging

    Artificial intelligence (AI) is vital in ophthalmology, tackling tasks such as diagnosis, classification, and visual question answering (VQA). However, existing AI models in this domain often require extensive annotation and are task-specific, limiting their clinical utility. While recent developments have brought about foundation models for ophthalmology, they are limited by the need to train separate weights for each imaging modality, preventing a comprehensive representation of multimodal features. This highlights the need for versatile foundation models capable of handling various tasks and modalities in ophthalmology. To address this gap, we present EyeFound, a multimodal foundation model for ophthalmic images. Unlike existing models, EyeFound learns generalizable representations from unlabeled multimodal retinal images, enabling efficient model adaptation across multiple applications. Trained on 2.78 million images from 227 hospitals across 11 ophthalmic modalities, EyeFound facilitates generalist representations and diverse multimodal downstream tasks, even for detecting challenging rare diseases. It outperforms the earlier RETFound model in diagnosing eye diseases, predicting systemic disease incidents, and zero-shot multimodal VQA. EyeFound provides a generalizable solution to improve model performance and lessen the annotation burden on experts, facilitating widespread clinical AI applications for retinal imaging.

    Innovative care models: expanding nurses' and optometrists' roles in ophthalmology

    The expanding demands of healthcare necessitate novel methods of increasing the supply of trained professionals to enhance the delivery of care services. One means of doing so is to expand allied health professionals' scope of practice. This paper explores the ethics of two examples of such expansion in ophthalmology, comparing the widely accepted practice of nurses administering intravitreal injections with the relatively less prevalent practice of optometrists functioning as physician extenders. We conducted a literature review of empirical research into both practices and conclude that nurses administering intravitreal injections is ethically justified. With adequate standardized training, optometrists can also function as primary eye care providers to improve accessibility to eye care. We provide an algorithm for the ethical introduction of expanded allied health care roles.

    Top 100 cited articles in ophthalmic epidemiology between 2006 and 2016

    AIM: To identify the most-cited articles in ophthalmic epidemiology over the last decade. METHODS: We performed a cited-reference search on articles included in the ISI Web of Science database using the terms “Epidemi*” AND “ophthalm*” AND “population*” for the years 2006 to 2016. The top 100 most-cited articles (T100) in ophthalmic epidemiology were shortlisted and analysed using bibliometrics. RESULTS: These top 100 articles in ophthalmic epidemiology were cited between 61 and 333 times. Of the T100 articles, 36% originated from the United States, and 34% were published in the journal Ophthalmology. The three major topics identified were age-related macular degeneration (AMD, n=23), glaucoma (n=16) and visual impairment (n=12). The top-cited article, published in 2008, was a study on outdoor activities and their association with the prevalence of myopia in school-aged children. CONCLUSION: This bibliometric analysis provides useful insights into developments in ophthalmic epidemiology over the past decade and can help in recognizing the quality of the research, discoveries, and trends steering the field.

    Evolution of Future Medical AI Models — From Task-Specific, Disease-Centric to Universal Health

    Medical artificial intelligence (MAI) has evolved from traditional machine learning to deep learning and from supervised methodologies to unsupervised learning paradigms. Recently, the focus has shifted from task-specific to generalized medical artificial intelligence (GMAI) models. These new artificial intelligence (AI) models and algorithms still need to be translated to clinical use in various settings. This article discusses the foreseeable transition from specialized MAI models toward more universally applicable models. We introduce two concepts as new paradigms: universal medical artificial intelligence (UMAI) and universal health artificial intelligence (UHAI). UMAI models will be distinguished from GMAI by their capability to emulate critical aspects of human intelligence necessary in clinical practice, particularly physician empathy and intuition. UHAI further expands beyond addressing disease states, a domain of UMAI, and covers health maintenance and disease prevention, shifting from relying solely on traditional clinical data to integrating broader nonclinical data to allow for the incorporation of AI into a more holistic understanding of human health and disease origin. Outlined here are key research priorities and future pathways from GMAI to UMAI and, subsequently, UHAI, allowing AI to be more integrated, intuitive, and attuned to the needs of patients, physicians, and society.

    Association of Common SIX6 Polymorphisms With Peripapillary Retinal Nerve Fiber Layer Thickness: The Singapore Chinese Eye Study

    PURPOSE. Recently the common SIX6 missense variant rs33912345 was found to be highly associated with glaucoma. The aim of this study was to investigate the association between this SIX6 variant and peripapillary retinal nerve fiber layer (RNFL) thickness measured by spectral-domain optical coherence tomography (SD-OCT) in a population setting. METHODS. Study subjects were enrolled from the Singapore Chinese Eye Study (SCES), a population-based survey of Singaporean Chinese aged 40 years or older. Subjects underwent a comprehensive ocular examination. Spectral-domain OCT was used to measure RNFL thicknesses. Genotyping of SIX6 rs33912345 (Asn141His) was performed using the HumanExome BeadChip. RESULTS. A total of 2129 eyes from 1243 SCES subjects (mean age: 55.0 ± 7.4 years) with rs33912345 genotype data and SD-OCT images were included in the analysis. Of these, 26 eyes of 21 subjects had glaucoma. The frequency of the rs33912345 risk variant C (His141) was 80% in the study subjects. Each rs33912345 C allele was associated with a decrease of 1.44 µm in RNFL thickness after adjusting for age, sex, genetic principal components, and axial length (P = 0.001). These associations remained similar in the 2096 nonglaucoma eyes, in which each C allele was associated with a decrease of 1.39 µm in RNFL thickness (P = 0.001). The strongest association was observed in the superior RNFL sector (a decrease of 2.83 µm per risk allele, P < 0.001), followed by the inferior RNFL sector (a decrease of 2.24 µm per risk allele, P = 0.003), while the association did not reach significance in the nasal and temporal sectors. CONCLUSIONS. Nonglaucomatous individuals with the SIX6 missense variant have reduced RNFL thickness in regions known to be particularly affected in those with glaucoma. This may be the primary mechanism for the increased risk of POAG in individuals who carry the SIX6 His141 risk variant.

    Author Correction: Cross-ancestry genome-wide association analysis of corneal thickness strengthens link between complex and Mendelian eye diseases.

    Emmanuelle Souzeau, who contributed to analysis of data, was inadvertently omitted from the author list in the originally published version of this Article. This has now been corrected in both the PDF and HTML versions of the Article.

    Determinants of lamina cribrosa depth in healthy Asian eyes: the Singapore Epidemiology Eye Study

    Aim To investigate the determinants of lamina cribrosa depth (LCD) in healthy eyes of Chinese and Indian Singaporean adults. Methods The optic nerve head (ONH) of the right eye of 1396 subjects (628 Chinese and 768 Indian subjects) was imaged with optical coherence tomography (OCT, Spectralis, Heidelberg, Germany). LCD was defined as the distance from the Bruch’s membrane opening (LCD-BMO) or the peripapillary sclera (LCD-PPS) reference plane to the laminar surface. A linear regression model was used to evaluate the relationship between the LCD and its determinants. Results Both LCDs differed significantly between the two races (LCD-BMO: 421.95 (95% CI 365.32 to 491.79) µm in Chinese vs 430.39 (367.46–509.81) µm in Indians, p=0.021; and LCD-PPS: 353.34 (300.98–421.45) µm in Chinese vs 376.76 (313.39–459.78) µm in Indians, p<0.001). In the multivariable regression analysis, the LCD-PPS of the whole cohort was independently associated with female sex (β=−31.93, p<0.001), Indian ethnicity (β=21.39, p=0.004) (Chinese as the reference), axial length (AxL) (β=−6.68, p=0.032), retinal nerve fibre layer (RNFL) thickness (β=0.71, p=0.019), choroidal thickness (ChT) (β=0.41, p<0.001), vertical cup–disc ratio (VCDR) (β=24.42, p<0.001) and disc size (β=−60.75, p=0.001). For every 1-year increase in age, the LCD-PPS was deeper on average by 1.95 µm in Chinese subjects (p=0.01), but there was no association in Indian subjects (p=0.851). Conclusions The LCD was influenced by age, gender, race, AxL, RNFL thickness, ChT, VCDR and disc size. This normative LCD database may facilitate more accurate assessment of ONH cupping using OCT in Asian populations.
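A determinants analysis like the one above amounts to multivariable ordinary least squares on LCD-PPS, where each β is the change in depth (µm) per unit of the covariate, holding the others fixed. A minimal sketch on synthetic data (the covariates, effect sizes, and noise level are hypothetical stand-ins, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic covariates standing in for a subset of the study's predictors.
age = rng.normal(55, 8, n)            # years
female = rng.integers(0, 2, n)        # 1 = female, 0 = male
axial_len = rng.normal(23.5, 1.0, n)  # mm

# Simulated outcome: LCD-PPS in µm with assumed effects plus noise.
lcd_pps = 350 + 1.9 * age - 32 * female - 6.7 * axial_len + rng.normal(0, 30, n)

# OLS fit: design matrix with an intercept column, solved by least squares.
X = np.column_stack([np.ones(n), age, female, axial_len])
beta, *_ = np.linalg.lstsq(X, lcd_pps, rcond=None)
# beta[1] estimates the change in LCD-PPS (µm) per additional year of age,
# adjusted for sex and axial length, mirroring how the reported βs are read.
```

With enough subjects the recovered coefficients approach the simulated effects, which is why per-covariate βs with p-values can be reported directly from such a fit.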

    Cardiovascular disease risk assessment using a deep-learning-based retinal biomarker: a comparison with existing risk scores

    Aims: This study aims to evaluate the ability of a deep-learning-based cardiovascular disease (CVD) retinal biomarker, Reti-CVD, to identify individuals at intermediate and high risk for CVD. Methods and results: We defined the intermediate- and high-risk groups according to the Pooled Cohort Equation (PCE), QRISK3, and the modified Framingham Risk Score (FRS). Reti-CVD’s predictions were compared with the individuals identified as intermediate- and high-risk by these standard CVD risk assessment tools, and sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated to assess the results. In the UK Biobank, among 48 260 participants, 20 643 (42.8%) and 7192 (14.9%) were classified into the intermediate- and high-risk groups according to PCE and QRISK3, respectively. In the Singapore Epidemiology of Eye Diseases study, among 6810 participants, 3799 (55.8%) were classified into the intermediate- and high-risk groups according to the modified FRS. Reti-CVD identified the PCE-based intermediate- and high-risk groups with a sensitivity, specificity, PPV, and NPV of 82.7%, 87.6%, 86.5%, and 84.0%, respectively; the QRISK3-based groups with 82.6%, 85.5%, 49.9%, and 96.6%; and the modified-FRS-based groups with 82.1%, 80.6%, 76.4%, and 85.5%. Conclusion: The retinal photograph biomarker (Reti-CVD) was able to identify individuals at intermediate and high risk for CVD, in accordance with existing risk assessment tools.
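The four figures reported for each risk score all derive from a 2×2 confusion matrix of Reti-CVD's classification against the reference tool. A minimal sketch of the definitions (the counts in the usage example are hypothetical, not taken from the study):

```python
def binary_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts.

    tp/fp/fn/tn: true-positive, false-positive, false-negative, and
    true-negative counts against the reference risk classification.
    """
    sensitivity = tp / (tp + fn)  # fraction of reference-positives flagged
    specificity = tn / (tn + fp)  # fraction of reference-negatives cleared
    ppv = tp / (tp + fp)          # precision among those flagged as at-risk
    npv = tn / (tn + fn)          # reliability of a negative call
    return sensitivity, specificity, ppv, npv

# Hypothetical counts: 80 of 100 at-risk and 80 of 100 low-risk agree.
print(binary_metrics(tp=80, fp=20, fn=20, tn=80))  # → (0.8, 0.8, 0.8, 0.8)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how prevalent the at-risk group is, which is why the QRISK3 comparison (14.9% high-risk) shows a much lower PPV than the others.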