3 research outputs found

    Generative Artificial Intelligence Through ChatGPT and Other Large Language Models in Ophthalmology

    No full text
    The rapid progress of large language models (LLMs) driving generative artificial intelligence applications heralds significant opportunities in health care. We conducted a review up to April 2023 on Google Scholar, Embase, MEDLINE, and Scopus using the terms “large language models,” “generative artificial intelligence,” “ophthalmology,” “ChatGPT,” and “eye,” selecting articles based on relevance to this review. From a clinical viewpoint specific to ophthalmologists, we explore the perspectives of different stakeholders, including patients, physicians, and policymakers, on potential LLM applications in ophthalmic education, research, and clinical domains. We also highlight the foreseeable challenges of implementing LLMs in clinical practice, including concerns about accuracy, interpretability, perpetuation of bias, and data security. As LLMs continue to mature, it is essential for stakeholders to jointly establish standards for best practice to safeguard patient safety. Financial Disclosure(s): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

    A Datasheet for the INSIGHT Birmingham, Solihull, and Black Country Diabetic Retinopathy Screening Dataset

    No full text
    Purpose: Diabetic retinopathy (DR) is the most common microvascular complication associated with diabetes mellitus (DM), affecting approximately 40% of this patient population. Early detection of DR is vital to ensure monitoring of disease progression and prompt sight-saving treatment as required. This article describes the data contained within the INSIGHT Birmingham, Solihull, and Black Country Diabetic Retinopathy Screening Dataset. Design: Dataset descriptor for routinely collected eye screening data. Participants: All diabetic patients aged 12 years and older attending annual digital retinal photography-based screening within the Birmingham, Solihull, and Black Country Eye Screening Programme. Methods: The INSIGHT Health Data Research Hub for Eye Health is a National Health Service (NHS)–led ophthalmic bioresource that provides researchers with safe access to anonymized, routinely collected data from contributing NHS hospitals to advance research for patient benefit. This report describes the INSIGHT Birmingham, Solihull, and Black Country DR Screening Dataset, a dataset of anonymized images and linked screening data derived from the United Kingdom’s largest regional DR screening program. Main Outcome Measures: This dataset consists of routinely collected data from the eye screening program. The data primarily comprise retinal photographs with the associated DR grading data. Additional data such as corresponding demographic details, information regarding patients’ diabetic status, and visual acuity data are also available. Further details regarding available data points are provided in the supplementary information and on the INSIGHT webpage referenced below. Results: At the time of this analysis (December 31, 2019), the dataset comprised 6 202 161 images from 246 180 patients, with a dataset inception date of January 1, 2007. The dataset includes 1 360 547 grading episodes ranging from R0M0 to R3M1. Conclusions: This dataset descriptor article summarizes the content of the dataset, how it has been curated, and what its potential uses are. Data are available through a structured application process for research studies that support discovery, clinical evidence analyses, and innovation in artificial intelligence technologies for patient benefit. Further information regarding the data repository and contact details can be found at https://www.insight.hdrhub.org/. Financial Disclosure(s): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
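
    The grading episodes above follow the UK diabetic eye screening convention of combined retinopathy and maculopathy codes (R0 to R3 with M0 or M1). As a hypothetical illustration only, the Python sketch below shows how such combined grade codes might be parsed and flagged for referable disease; the table layout, column names, and referral rule are assumptions for illustration and are not taken from the INSIGHT data dictionary.

        # Hypothetical sketch: parsing combined DR grade codes (R0M0 to R3M1) and
        # flagging referable disease. Column names and the referral rule are
        # illustrative assumptions, not INSIGHT specifications.
        import re
        import pandas as pd

        GRADE_PATTERN = re.compile(r"^R(?P<r>[0-3])M(?P<m>[01])$")

        def parse_grade(grade: str) -> tuple[int, int]:
            """Split a combined grade such as 'R2M1' into (retinopathy, maculopathy)."""
            match = GRADE_PATTERN.match(grade.strip().upper())
            if match is None:
                raise ValueError(f"Unrecognised DR grade: {grade!r}")
            return int(match.group("r")), int(match.group("m"))

        def is_referable(grade: str) -> bool:
            """Assumed rule: R2 or worse, or any maculopathy (M1), counts as referable."""
            r, m = parse_grade(grade)
            return r >= 2 or m == 1

        # Example on a toy episode table; a real analysis would use the field names
        # defined in the INSIGHT data dictionary.
        episodes = pd.DataFrame({"patient_id": [1, 1, 2], "grade": ["R0M0", "R2M1", "R1M0"]})
        episodes["referable"] = episodes["grade"].map(is_referable)
        print(episodes)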

    SynthEye: Investigating the Impact of Synthetic Data on Artificial Intelligence-assisted Gene Diagnosis of Inherited Retinal Disease

    No full text
    Purpose: Rare disease diagnosis is challenging for medical image-based artificial intelligence because of the natural class imbalance in datasets, which leads to biased prediction models. Inherited retinal diseases (IRDs) are a research domain particularly affected by this issue. This study investigates the applicability of synthetic data in improving artificial intelligence-enabled diagnosis of IRDs using generative adversarial networks (GANs). Design: Diagnostic study of gene-labeled fundus autofluorescence (FAF) IRD images using deep learning. Participants: Moorfields Eye Hospital (MEH) dataset of 15 692 FAF images obtained from 1800 patients with a confirmed genetic diagnosis in 1 of 36 IRD genes. Methods: A StyleGAN2 model is trained on the IRD dataset to generate 512 × 512 resolution images. Convolutional neural networks are trained for classification using different synthetically augmented datasets, including real IRD images plus 1800 and 3600 synthetic images, and a fully rebalanced dataset. We also perform an experiment with only synthetic data. All models are compared against a baseline convolutional neural network trained only on real data. Main Outcome Measures: We evaluated synthetic data quality using a Visual Turing Test conducted with 4 ophthalmologists from MEH. Synthetic and real images were compared using feature space visualization, similarity analysis to detect memorized images, and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) score for no-reference quality evaluation. Convolutional neural network diagnostic performance was determined on a held-out test set using the area under the receiver operating characteristic curve (AUROC) and Cohen’s kappa (κ). Results: An average true recognition rate of 63% and fake recognition rate of 47% were obtained from the Visual Turing Test; thus, a considerable proportion of the synthetic images were classified as real by clinical experts. Similarity analysis showed that the synthetic images were not copies of the real images, indicating that the GAN was able to generalize rather than memorize. However, BRISQUE score analysis indicated that synthetic images were of significantly lower quality overall than real images (P < 0.05). Comparing the rebalanced model (RB) with the baseline (R), no significant change in the average AUROC and κ was found (R-AUROC = 0.86 [0.85-0.88], RB-AUROC = 0.88 [0.86-0.89], R-κ = 0.51 [0.49-0.53], and RB-κ = 0.52 [0.50-0.54]). The model trained only on synthetic data (S) achieved performance similar to the baseline (S-AUROC = 0.86 [0.85-0.87], S-κ = 0.48 [0.46-0.50]). Conclusions: Synthetic generation of realistic IRD FAF images is feasible. Synthetic data augmentation does not deliver improvements in classification performance; however, synthetic data alone deliver performance similar to real data and hence may be useful as a proxy for real data. Financial Disclosure(s): Proprietary or commercial disclosure may be found after the references.
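
    The diagnostic models above are compared using multi-class AUROC and Cohen's kappa on a held-out test set. The following is a minimal sketch, assuming scikit-learn and random stand-in predictions for a 36-class problem, of how those two outcome measures can be computed; it does not reproduce the study's models or data.

        # Minimal sketch of the reported outcome measures (one-vs-rest macro AUROC
        # and Cohen's kappa) using scikit-learn. Labels and scores are random
        # stand-ins for a trained CNN's held-out test-set outputs.
        import numpy as np
        from sklearn.metrics import cohen_kappa_score, roc_auc_score

        n_classes, n_per_class = 36, 14                          # 36 IRD gene classes, as in the study
        y_true = np.repeat(np.arange(n_classes), n_per_class)    # ground-truth gene labels

        rng = np.random.default_rng(0)
        logits = rng.normal(size=(y_true.size, n_classes))
        y_prob = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax scores
        y_pred = y_prob.argmax(axis=1)                           # hard gene prediction per image

        auroc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
        kappa = cohen_kappa_score(y_true, y_pred)
        print(f"AUROC = {auroc:.3f}, kappa = {kappa:.3f}")

    Interval estimates such as those quoted in the Results could be layered on top of these point estimates, for example by bootstrap resampling of the held-out test set.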