Opportunities and Challenges for ChatGPT and Large Language Models in Biomedicine and Health
ChatGPT has drawn considerable attention from both the general public and
domain experts with its remarkable text generation capabilities. This has
subsequently led to the emergence of diverse applications in the field of
biomedicine and health. In this work, we examine the diverse applications of
large language models (LLMs), such as ChatGPT, in biomedicine and health.
Specifically, we explore the areas of biomedical information retrieval, question
answering, medical text summarization, information extraction, and medical
education, and investigate whether LLMs possess the transformative power to
revolutionize these tasks or whether the distinct complexities of the biomedical
domain present unique challenges. Following an extensive literature survey, we
find that significant advances have been made in text generation tasks,
surpassing the previous state-of-the-art methods. For other applications, the
advances have been modest. Overall, LLMs have not yet revolutionized
biomedicine, but recent rapid progress indicates that such
methods hold great potential to provide valuable means for accelerating
discovery and improving health. We also find that the use of LLMs, like
ChatGPT, in the fields of biomedicine and health entails various risks and
challenges, including fabricated information in their generated responses, as
well as legal and privacy concerns associated with sensitive patient data. We
believe this first-of-its-kind survey can provide a comprehensive overview to
biomedical researchers and healthcare practitioners on the opportunities and
challenges associated with using ChatGPT and other LLMs for transforming
biomedicine and health.
Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework
This study conducts a thorough examination of the research stream focusing on
AI risks in healthcare, aiming to explore the distinct genres within this
domain. A selection criterion based on journal ranking and impact factor was
employed to carefully analyze 39 articles and identify three primary genres of
AI risks prevalent in healthcare: clinical data risks, technical risks, and
socio-ethical risks. The research seeks to provide a
valuable resource for future healthcare researchers, furnishing them with a
comprehensive understanding of the complex challenges posed by AI
implementation in healthcare settings. By categorizing and elucidating these
genres, the study aims to facilitate the development of empirical qualitative
and quantitative research, fostering evidence-based approaches to address
AI-related risks in healthcare effectively. This endeavor contributes to
building a robust knowledge base that can inform the formulation of risk
mitigation strategies, ensuring safe and efficient integration of AI
technologies in healthcare practices. Thus, it is important to study AI risks
in healthcare to build better, more efficient AI systems and to mitigate risks.
MLCapsule: Guarded Offline Deployment of Machine Learning as a Service
With the widespread use of machine learning (ML) techniques, ML as a service
has become increasingly popular. In this setting, an ML model resides on a
server and users can query it with their data via an API. However, if the
user's input is sensitive, sending it to the server is undesirable and
sometimes not even legally possible. Equally, the service provider does not
want to share the model by sending it to the client, as doing so would
jeopardize its intellectual property and pay-per-query business model.
In this paper, we propose MLCapsule, a guarded offline deployment of machine
learning as a service. MLCapsule executes the model locally on the user's side
and therefore the data never leaves the client. Meanwhile, MLCapsule offers the
service provider the same level of control and security of its model as the
commonly used server-side execution. In addition, MLCapsule is applicable to
offline applications that require local execution. Beyond protecting against
direct model access, we couple the secure offline deployment with defenses
against advanced attacks on machine learning models such as model stealing,
reverse engineering, and membership inference.
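The core idea of guarded offline deployment can be sketched in a few lines: the model executes on the client, so private inputs never leave the device, while a guard object mediates every query on the provider's behalf. The sketch below is purely illustrative (the class name, the toy scoring function, and the query-budget mechanism are assumptions for exposition; the actual system relies on hardware-backed isolation rather than a plain Python wrapper):

```python
# Minimal sketch of guarded local inference (hypothetical, not the
# paper's implementation): the model function stays on the client and
# inference runs on local data, while the guard enforces the provider's
# pay-per-query policy.

class MLCapsuleGuard:
    def __init__(self, model_fn, query_budget):
        self._model_fn = model_fn       # never exposed to the user directly
        self._remaining = query_budget  # provider-issued quota

    def query(self, x):
        if self._remaining <= 0:
            raise PermissionError("query budget exhausted; renew with provider")
        self._remaining -= 1
        return self._model_fn(x)        # inference on local, private data


# Usage with a stand-in linear scorer as the "model":
guard = MLCapsuleGuard(lambda x: sum(x) > 1.0, query_budget=2)
print(guard.query([0.4, 0.9]))  # True  -- data never left the client
print(guard.query([0.1, 0.2]))  # False -- second and final allowed query
```

A third call would raise `PermissionError`, mimicking the metering a server-side API would otherwise provide.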
Privacy-preserving data sharing infrastructures for medical research: systematization and comparison
Background: Data sharing is considered a crucial part of modern medical research. Unfortunately, despite its advantages, it often faces obstacles, especially data privacy challenges. As a result, various approaches and infrastructures have been developed that aim to ensure that patients and research participants remain anonymous when data is shared. However, privacy protection typically comes at a cost, e.g. restrictions regarding the types of analyses that can be performed on shared data. What is lacking is a systematization making the trade-offs taken by different approaches transparent. The aim of the work described in this paper was to develop a systematization for the degree of privacy protection provided and the trade-offs taken by different data sharing methods. Based on this contribution, we categorized popular data sharing approaches and identified research gaps by analyzing combinations of promising properties and features that are not yet supported by existing approaches.
Methods: The systematization consists of different axes. Three axes relate to privacy protection aspects and were adopted from the popular Five Safes Framework: (1) safe data, addressing privacy at the input level, (2) safe settings, addressing privacy during shared processing, and (3) safe outputs, addressing privacy protection of analysis results. Three additional axes address the usefulness of approaches: (4) support for de-duplication, to enable the reconciliation of data belonging to the same individuals, (5) flexibility, to be able to adapt to different data analysis requirements, and (6) scalability, to maintain performance with increasing complexity of shared data or common analysis processes.
Results: Using the systematization, we identified three different categories of approaches: distributed data analyses, which exchange anonymous aggregated data, secure multi-party computation protocols, which exchange encrypted data, and data enclaves, which store pooled individual-level data in secure environments for access for analysis purposes. We identified important research gaps, including a lack of approaches enabling the de-duplication of horizontally distributed data or providing a high degree of flexibility.
Conclusions: There are fundamental differences between different data sharing approaches and several gaps in their functionality that may be interesting to investigate in future work. Our systematization can make the properties of privacy-preserving data sharing infrastructures more transparent and support decision makers and regulatory authorities with a better understanding of the trade-offs taken.
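Of the three categories identified, secure multi-party computation can be illustrated with additive secret sharing, a standard building block in which each site splits its private value into random-looking shares that only reveal the aggregate when combined. This is a minimal sketch of the general technique, not tied to any specific infrastructure surveyed (the party count and patient-count values are illustrative):

```python
# Additive secret sharing: three sites jointly compute a sum of private
# patient counts without any site revealing its own count.
import random

PRIME = 2**61 - 1  # all arithmetic is modulo a large prime

def share(value, n_parties):
    """Split a value into n additive shares that sum to value mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Each site holds one private count (illustrative values).
private_counts = [120, 45, 310]
n = len(private_counts)

# Each site splits its count and distributes one share per site.
all_shares = [share(v, n) for v in private_counts]

# Each site sums the shares it received; each partial sum is still
# indistinguishable from random on its own.
partial_sums = [sum(all_shares[i][j] for i in range(n)) % PRIME
                for j in range(n)]

# Only combining all partial sums reveals the aggregate.
total = sum(partial_sums) % PRIME
print(total)  # 475, i.e. 120 + 45 + 310
```

Real deployments add authenticated channels and protection against malicious parties, which is where the flexibility and scalability trade-offs discussed above arise.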
Cybersecurity Vulnerabilities in Medical Devices: A Complex Environment and Multifaceted Problem
The increased connectivity to existing computer networks has exposed medical devices to cybersecurity vulnerabilities from which they were previously shielded. For the prevention of cybersecurity incidents, it is important to recognize the complexity of the operational environment as well as to catalog the technical vulnerabilities. Cybersecurity protection is not just a technical issue; it is a richer and more intricate problem to solve. A review of the factors that contribute to such a potentially insecure environment, together with the identification of the vulnerabilities, is important for understanding why these vulnerabilities persist and what the solution space should look like. This multifaceted problem must be viewed from a systemic perspective if adequate protection is to be put in place and patient safety concerns addressed. This requires technical controls, governance, resilience measures, consolidated reporting, context expertise, regulation, and standards. It is evident that a coordinated, proactive approach to address this complex challenge is essential. In the interim, patient safety is under threat.
Fair and equitable AI in biomedical research and healthcare: Social science perspectives
Artificial intelligence (AI) offers opportunities but also challenges for biomedical research and healthcare. This position paper shares the results of the international conference “Fair medicine and AI” (online 3–5 March 2021). Scholars from science and technology studies (STS), gender studies, and ethics of science and technology formulated opportunities, challenges, and research and development desiderata for AI in healthcare. AI systems and solutions, which are being rapidly developed and applied, may have undesirable and unintended consequences, including the risk of perpetuating health inequalities for marginalized groups. Socially robust development and implementation of AI in healthcare require urgent investigation. There is a particular dearth of studies in human-AI interaction and how this may best be configured to dependably deliver safe, effective, and equitable healthcare. To address these challenges, we need to establish diverse and interdisciplinary teams equipped to develop and apply medical AI in a fair, accountable, and transparent manner. We formulate the importance of including social science perspectives in the development of intersectionally beneficent and equitable AI for biomedical research and healthcare, in part by strengthening AI health evaluation.
Beyond Protecting Genetic Privacy: Understanding Genetic Discrimination Through its Disparate Impact on Racial Minorities
At the very end of the last century, scientists produced the first draft of the whole human genetic sequence. But that was just the first step; the hard work of the first few decades of this century will be to learn more about how to apply genetic information to improve health. As the pace of technological development accelerates and we learn more about what genetic variations mean about individual human characteristics and health risks, so too do the risks and consequences of the misuse of such information become more significant. The principal answer to this challenge has been to safeguard privacy by constructing legal and technical barriers that conceal and anonymize genetic information. While it may be a worthwhile objective, ultimately privacy protections will likely fail in practice. If this is so, how can we prevent genetic information from being used to categorize, stigmatize, and subordinate? This Note approaches this problem by analyzing the African American experience with genetic discrimination in the United States. African Americans have confronted the adverse consequences of genetic research in ways that can serve as a foundation to understand future threats posed to racial minorities and everyone in society, as genetic testing increases in prevalence and the privacy of genetic information cannot be protected. Studying the real history of genetic discrimination, rather than merely speculating about what may happen, can point toward policy solutions that go beyond “genetic privacy.” As genetic information becomes more plentiful and valuable, policies to prevent the misuse of that information will benefit everyone, regardless of race or ethnicity.