
    Interventions to promote medication adherence for chronic diseases in India: a systematic review

    Introduction: Cost-effective interventions that improve medication adherence are urgently needed to address the epidemic of non-communicable diseases (NCDs) in India. However, in low- and middle-income countries such as India, there is little analysis evaluating the effectiveness of adherence-improving strategies. We conducted the first systematic review of interventions aimed at improving medication adherence for chronic diseases in India.
    Methods: A systematic search of MEDLINE, Web of Science, Scopus, and Google Scholar was conducted. Following a PRISMA-compliant, pre-defined methodology, randomized controlled trials were included which: involved subjects with NCDs; were located in India; used any intervention with the aim of improving medication adherence; and measured adherence as a primary or secondary outcome.
    Results: The search strategy yielded 1,552 unique articles, of which 22 met the inclusion criteria. Interventions assessed by these studies included education-based interventions (n = 12), combinations of education-based interventions with regular follow-up (n = 4), and technology-based interventions (n = 2). The non-communicable diseases most commonly evaluated were respiratory disease (n = 3), type 2 diabetes (n = 6), cardiovascular disease (n = 8), and depression (n = 2).
    Conclusions: Although the primary studies supporting these conclusions were of mixed methodological quality, patient education by community health workers (CHWs) and pharmacists represents a promising intervention to improve medication adherence, with further benefit from regular follow-up. These interventions warrant systematic evaluation in high-quality RCTs and implementation as part of wider health policy. A concrete sketch of the screening step is given below.
    Systematic review registration: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022345636, identifier: CRD42022345636
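    To make the screening step concrete, the following is a minimal sketch of how the review's stated inclusion criteria might be applied programmatically to deduplicated search records. The record fields and criteria encodings are hypothetical illustrations, not the review's actual pipeline.
```python
# Hypothetical sketch of PRISMA-style record screening. Field names
# ("design", "country", etc.) are invented for illustration.

def meets_inclusion_criteria(record: dict) -> bool:
    """Apply the review's stated inclusion criteria to one record."""
    return (
        record.get("design") == "RCT"                # randomized controlled trial
        and record.get("country") == "India"         # conducted in India
        and record.get("population") == "NCD"        # subjects with an NCD
        and record.get("targets_adherence", False)   # intervention aims at adherence
        and record.get("measures_adherence", False)  # adherence is an outcome
    )

def screen(records: list[dict]) -> list[dict]:
    # Deduplicate on a stable identifier (DOI here), then filter.
    unique = {r["doi"]: r for r in records if r.get("doi")}.values()
    return [r for r in unique if meets_inclusion_criteria(r)]
```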

    Large language models approach expert-level clinical knowledge and reasoning in ophthalmology: A head-to-head cross-sectional study

    Large language models (LLMs) underlie remarkable recent advances in natural language processing, and they are beginning to be applied in clinical contexts. We aimed to evaluate the clinical potential of state-of-the-art LLMs in ophthalmology using a more robust benchmark than raw examination scores. We trialled GPT-3.5 and GPT-4 on 347 ophthalmology questions; GPT-3.5, GPT-4, PaLM 2, LLaMA, expert ophthalmologists, and doctors in training were then trialled on a mock examination of 87 questions. Performance was analysed with respect to question subject and type (first-order recall and higher-order reasoning). Masked ophthalmologists graded the accuracy, relevance, and overall preference of GPT-3.5 and GPT-4 responses to the same questions. The performance of GPT-4 (69%) was superior to that of GPT-3.5 (48%), LLaMA (32%), and PaLM 2 (56%). GPT-4 compared favourably with expert ophthalmologists (median 76%, range 64–90%), ophthalmology trainees (median 59%, range 57–63%), and unspecialised junior doctors (median 43%, range 41–44%). Low agreement between LLMs and doctors reflected idiosyncratic differences in knowledge and reasoning, with overall consistency across subjects and types (p > 0.05). All ophthalmologists preferred GPT-4 responses over GPT-3.5 and rated the accuracy and relevance of GPT-4 as higher (p < 0.05). LLMs are approaching expert-level knowledge and reasoning skills in ophthalmology. Given their comparable or superior performance relative to trainee-grade ophthalmologists and unspecialised junior doctors, state-of-the-art LLMs such as GPT-4 may provide useful medical advice and assistance where access to expert ophthalmologists is limited. Clinical benchmarks provide useful assays of LLM capabilities in healthcare before clinical trials can be designed and conducted.
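    As an illustration of the kind of grading such a benchmark involves, the sketch below scores multiple-choice responses against an answer key and computes raw pairwise agreement between respondents. All names, answers, and values are invented for illustration; the study's actual grading procedure is described in its methods.
```python
# Minimal sketch: accuracy against an answer key, plus raw pairwise
# agreement between respondents. All data below is hypothetical.
from itertools import combinations

def accuracy(answers: list[str], key: list[str]) -> float:
    return sum(a == k for a, k in zip(answers, key)) / len(key)

def pairwise_agreement(a: list[str], b: list[str]) -> float:
    """Fraction of questions on which two respondents chose the same option."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

key = ["B", "D", "A", "C"]  # hypothetical answer key
responses = {
    "GPT-4":   ["B", "D", "A", "A"],
    "GPT-3.5": ["B", "C", "A", "A"],
    "Expert":  ["B", "D", "C", "C"],
}

for name, ans in responses.items():
    print(f"{name}: accuracy {accuracy(ans, key):.2f}")

for (n1, a1), (n2, a2) in combinations(responses.items(), 2):
    print(f"{n1} vs {n2}: agreement {pairwise_agreement(a1, a2):.2f}")
```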

    Trialling a Large Language Model (ChatGPT) in General Practice With the Applied Knowledge Test: Observational Study Demonstrating Opportunities and Limitations in Primary Care

    Background: Large language models exhibiting human-level performance in specialized tasks are emerging; examples include Generative Pretrained Transformer 3.5, which underlies the processing of ChatGPT. Rigorous trials are required to understand the capabilities of emerging technology, so that innovation can be directed to benefit patients and practitioners.
    Objective: Here, we evaluated the strengths and weaknesses of ChatGPT in primary care using the Membership of the Royal College of General Practitioners Applied Knowledge Test (AKT) as a medium.
    Methods: AKT questions were sourced from a web-based question bank and 2 AKT practice papers. In total, 674 unique AKT questions were inputted to ChatGPT, with the model's answers recorded and compared to correct answers provided by the Royal College of General Practitioners. Each question was inputted twice in separate ChatGPT sessions, with answers on repeated trials compared to gauge consistency. Subject difficulty was gauged by referring to examiners' reports from 2018 to 2022. Novel explanations from ChatGPT, defined as information provided that was not inputted within the question or multiple answer choices, were recorded. Performance was analyzed with respect to subject, difficulty, question source, and novel model outputs to explore ChatGPT's strengths and weaknesses.
    Results: Average overall performance of ChatGPT was 60.17%, below the mean passing mark of the last 2 years (70.42%). Accuracy differed between question sources (P=.04 and .06). ChatGPT's performance varied with subject category (P=.02 and .02), but the variation did not correlate with difficulty (Spearman ρ=−0.241 and −0.238; P=.19 and .20).The proclivity of ChatGPT to provide novel explanations did not affect accuracy (P>.99 and .23). A sketch of these two analyses follows below.
    Conclusions: Large language models are approaching human expert-level performance, although further development is required to match the performance of qualified primary care physicians in the AKT. Validated high-performance models may serve as assistants or autonomous clinical tools to ameliorate the general practice workforce crisis.
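    The two quantitative checks reported above, consistency across repeated sessions and rank correlation between accuracy and difficulty, can be sketched as follows. All inputs are hypothetical placeholders; `scipy.stats.spearmanr` computes the rank correlation.
```python
# Hedged sketch of two analyses from the abstract: answer consistency
# across repeated sessions, and Spearman correlation between per-subject
# accuracy and difficulty. All input values below are hypothetical.
from scipy.stats import spearmanr

# One entry per question: the model's answer in each of two separate sessions.
trial_1 = ["A", "C", "B", "D", "A"]
trial_2 = ["A", "C", "D", "D", "A"]
consistency = sum(a == b for a, b in zip(trial_1, trial_2)) / len(trial_1)
print(f"Answer consistency across sessions: {consistency:.2f}")

# Per-subject accuracy vs. examiner-reported difficulty (both hypothetical).
subject_accuracy   = [0.72, 0.55, 0.61, 0.48, 0.66]
subject_difficulty = [0.30, 0.45, 0.40, 0.52, 0.35]
rho, p = spearmanr(subject_accuracy, subject_difficulty)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```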