5 research outputs found

    AI and Ethics: A Systematic Review of the Ethical Considerations of Large Language Model Use in Surgery Research

    Introduction: As large language models receive greater attention in medical research, investigation of the ethical considerations is warranted. This review explores the surgery literature to identify ethical concerns surrounding these artificial intelligence models and to evaluate how autonomy, beneficence, nonmaleficence, and justice are represented within these discussions, providing insights to guide further research and practice. Methods: A systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Five electronic databases were searched in October 2023. Eligible studies included surgery-related articles that focused on large language models and contained adequate ethical discussion. Study details, including specialty and ethical concerns, were collected. Results: The literature search yielded 1179 articles, of which 53 met the inclusion criteria. Plastic surgery, orthopedic surgery, and neurosurgery were the most represented surgical specialties. Autonomy was the most explicitly cited ethical principle. The most frequently discussed ethical concern was accuracy (n = 45, 84.9%), followed by bias, patient confidentiality, and responsibility. Conclusion: The ethical implications of using large language models in surgery are complex and evolving. Integrating these models into surgery necessitates continuous ethical discourse to ensure responsible and ethical use, balancing technological advancement with human dignity and safety.
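
    The concern frequencies this review reports (e.g., accuracy discussed in n = 45 of 53 included studies, 84.9%) amount to a simple tally over coded study records. The sketch below shows one way such a tally could be computed; the study IDs and concern labels are illustrative placeholders, not data from the review.

```python
from collections import Counter

# Illustrative records only: each included study is tagged with the ethical
# concerns it discusses. The actual review coded 53 articles.
included_studies = [
    {"id": "S01", "concerns": {"accuracy", "bias"}},
    {"id": "S02", "concerns": {"accuracy", "patient confidentiality"}},
    {"id": "S03", "concerns": {"accuracy", "responsibility"}},
]

def concern_frequencies(studies):
    """Return each concern as (n, % of included studies), most frequent first."""
    total = len(studies)
    counts = Counter(c for s in studies for c in s["concerns"])
    return {c: (n, 100 * n / total) for c, n in counts.most_common()}

for concern, (n, pct) in concern_frequencies(included_studies).items():
    print(f"{concern}: n = {n} ({pct:.1f}%)")
```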

    Artificial-Intelligence-Based Clinical Decision Support Systems in Primary Care: A Scoping Review of Current Clinical Implementations

    Primary Care Physicians (PCPs) are the first point of contact in healthcare. Because PCPs face the challenge of managing diverse patient populations while maintaining up-to-date medical knowledge and health records, this study explores the current outcomes and effectiveness of implementing Artificial Intelligence-based Clinical Decision Support Systems (AI-CDSSs) in Primary Healthcare (PHC). Following the PRISMA-ScR guidelines, we systematically searched five databases (PubMed, Scopus, CINAHL, IEEE, and Google Scholar) and manually searched related articles. Only AI-powered CDSSs targeted at physicians and tested in real clinical PHC settings were included. Of 421 articles, 6 met our criteria. We found AI-CDSSs from the US, the Netherlands, Spain, and China whose primary tasks included diagnosis support, management and treatment recommendations, and complication prediction. Secondary objectives included lessening physician work burden and reducing healthcare costs. While promising, the outcomes were hindered by physicians’ perceptions and cultural settings. This study underscores the potential of AI-CDSSs to improve clinical management, patient satisfaction, and safety while reducing physician workload. However, further work is needed to explore the broad spectrum of applications these new AI-CDSSs have in real-world PHC clinical settings and to measure their clinical outcomes.

    Comparative Analysis of Artificial Intelligence Virtual Assistant and Large Language Models in Post-Operative Care

    In postoperative care, patient education and follow-up are pivotal for enhancing the quality of care and satisfaction. Artificial intelligence virtual assistants (AIVA) and large language models (LLMs) like Google BARD and ChatGPT-4 offer avenues for addressing patient queries using natural language processing (NLP) techniques. However, the accuracy and appropriateness of the information vary across these platforms, necessitating a comparative study to evaluate their efficacy in this domain. We conducted a study comparing AIVA (using Google Dialogflow) with ChatGPT-4 and Google BARD, assessing accuracy, knowledge gap, and response appropriateness. AIVA demonstrated superior performance, with significantly higher accuracy (mean: 0.9) and a lower knowledge gap (mean: 0.1) than BARD and ChatGPT-4. Additionally, AIVA’s responses received higher Likert scores for appropriateness. Our findings suggest that specialized AI tools like AIVA are more effective than general-purpose LLMs at delivering precise and contextually relevant information for postoperative care. While ChatGPT-4 shows promise, its performance varies, particularly in verbal interactions. This underscores the importance of tailored AI solutions in healthcare, where accuracy and clarity are paramount. Our study highlights the need for further research and for customized AI solutions that address specific medical contexts and improve patient outcomes.
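
    As a rough illustration of the comparison described above, the sketch below scores each platform’s answers per question, reports mean accuracy, a knowledge gap treated here as the complement of accuracy (an assumption, not necessarily the study’s definition), and Likert appropriateness, and compares the ordinal Likert ratings with a rank-based test (also an assumption, since the abstract does not name the statistical test). All scores are made-up placeholders.

```python
import numpy as np
from scipy import stats

# Placeholder per-question scores (not the study's data): 1/0 for correct vs.
# incorrect answers, and 1-5 Likert ratings for response appropriateness.
scores = {
    "AIVA":      {"correct": np.array([1, 1, 1, 0, 1, 1, 1, 1, 1, 1]),
                  "likert":  np.array([5, 5, 4, 5, 4, 5, 5, 4, 5, 5])},
    "ChatGPT-4": {"correct": np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1]),
                  "likert":  np.array([4, 3, 4, 4, 3, 4, 5, 3, 4, 4])},
}

for name, s in scores.items():
    acc = s["correct"].mean()
    # Knowledge gap is treated here as 1 - accuracy (an assumption).
    print(f"{name}: accuracy = {acc:.2f}, knowledge gap = {1 - acc:.2f}, "
          f"median Likert = {np.median(s['likert']):.1f}")

# Likert ratings are ordinal, so a rank-based test is used for the comparison.
u, p = stats.mannwhitneyu(scores["AIVA"]["likert"], scores["ChatGPT-4"]["likert"])
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```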

    Artificial Intelligence Support for Informal Patient Caregivers: A Systematic Review

    This study aims to explore how artificial intelligence can help ease the burden on caregivers, filling a gap in current research and healthcare practice created by the growing challenge of an aging population and increased reliance on informal caregivers. We searched Google Scholar, PubMed, Scopus, IEEE Xplore, and Web of Science, focusing on AI and caregiving. Our inclusion criteria were studies in which AI supports informal caregivers, excluding those that use AI solely for data collection. Adhering to PRISMA 2020 guidelines, we eliminated duplicates and screened for relevance. Of 947 initially identified articles, 10 met our criteria, focusing on AI’s role in aiding informal caregivers. These studies, conducted between 2012 and 2023, were globally distributed, with 80% employing machine learning. Validation methods varied, with hold-out being the most frequent. Reported accuracies ranged from 71.60% to 99.33%. Specific methods, such as SCUT in conjunction with neural networks and LibSVM, achieved accuracies between 93.42% and 95.36% and F-measures from 93.30% to 95.41%. AUC values indicated variability in model performance, ranging from 0.50 to 0.85 in select models. Our review highlights AI’s role in aiding informal caregivers, showing promising results despite differing approaches. AI tools provide smart, adaptive support, improving caregivers’ effectiveness and well-being.
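
    The sketch below illustrates the kind of hold-out evaluation and metrics (accuracy, F-measure, AUC) tabulated across the reviewed studies: one train/test split, one classifier, then the three scores. The synthetic dataset and the SVM classifier (echoing the LibSVM-style models mentioned) are assumptions for illustration, not a reproduction of any reviewed model.

```python
# Hold-out validation sketch: a single train/test split, then the metrics the
# review reports (accuracy, F-measure, AUC). Data and model are synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)  # simple hold-out split

model = SVC(probability=True, random_state=0).fit(X_train, y_train)
y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]

print(f"Accuracy:  {accuracy_score(y_test, y_pred):.2%}")
print(f"F-measure: {f1_score(y_test, y_pred):.2%}")
print(f"AUC:       {roc_auc_score(y_test, y_prob):.2f}")
```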