How does Generative AI Affect Patients' Rights? A Focus on Privacy, Autonomy, and Justice
Abstract
Healthcare systems are facing constant change due to demographic shifts (a rapidly aging population), technological developments, global pandemics, and shifts in social paradigms. These changes are increasingly being analysed through the lens of patients’ rights, which are central to ethical and legal discussions in healthcare. A significant change in healthcare today is the growing use of generative artificial intelligence (AI) in clinical practice. This research analyses the potential risks that the use of generative AI systems poses to fundamental patients’ rights. Using a mixed methodology combining a literature review and semi-structured interviews with experts and stakeholders, the study identifies three main areas of risk, each associated with a fundamental value: the right to medical data protection (privacy), the right to equal access to healthcare (justice), and the right to informed consent (autonomy). The report concludes with a discussion of the findings and presents legal and ethical recommendations to promote the benefits of generative AI in healthcare.
1. Introduction
The increasing digitalization of healthcare is reshaping how healthcare professionals handle clinical tasks and patient interactions. This technological shift is accelerated by the systemic pressures healthcare faces today, including an aging population and workforce shortages. Generative artificial intelligence (GenAI) can help healthcare providers with clinical documentation, decision-making, and patient communication through automated processes. At the same time, the rapid integration of GenAI models into healthcare raises ethical and legal concerns. For example, general-purpose AI models are already being used in clinical practice without being subject to high-risk regulatory requirements. This produces regulatory gaps that challenge the protection of fundamental patients’ rights in real-world clinical settings.
This report focuses on three main patients’ rights: the right to privacy, the right to equitable access, and the right to informed consent. These rights are represented in bioethical and legal frameworks for the protection of patients. The question guiding this study is the following: How does the use of generative AI in healthcare impact patients’ rights, particularly regarding privacy, justice, and autonomy? While the analysis is framed within the EU context, the concepts and findings remain relevant for broader global discussions. By identifying key risks, such as unauthorized access to health data, limitations of anonymization techniques, algorithmic bias, and digital informed consent, this study contributes to the growing body of research on AI in healthcare and the protection of patients’ rights.
2. Context
2.1. What is Generative AI?
Generative artificial intelligence (GenAI) is a broad category of AI that, in addition to recognizing and predicting patterns, can generate new content such as text, images, and sound based on input and training data.[1] GenAI differs from traditional AI in two key ways: dynamic context and scale of use. While traditional AI is typically designed for specific contexts and predefined tasks, GenAI has a kind of “flexibility” and “creativity” that lets the model acquire capabilities it was never explicitly trained for and adapt to different contexts and uses.[2] In this sense, GenAI is a single tool with multiple uses and applications.[3]
Because of this high adaptability, the complex learning algorithms behind GenAI are harder to interpret, which makes the system less transparent. Moreover, because of its probabilistic nature, a GenAI model asked the same question twice may produce inconsistent outputs.
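To make the probabilistic point concrete, the minimal sketch below (illustrative only, not any vendor's actual API) samples a "next token" from a softmax distribution with a temperature parameter. Because the choice is random, two runs with identical input can produce different outputs, which is exactly why a chatbot can answer the same question differently.

```python
# Minimal sketch of temperature-based token sampling: the mechanism
# behind inconsistent outputs. Vocabulary and scores are invented.
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits, temperature=0.8):
    """Sample one token index from temperature-scaled softmax probabilities."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical next-token candidates after the prompt "The likely diagnosis is"
vocab = ["pneumonia", "bronchitis", "asthma", "unclear"]
logits = [2.1, 1.9, 1.2, 0.4]

for run in (1, 2):
    print(f"run {run}: {vocab[sample_next_token(logits)]}")  # runs may differ
```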
A specific category of GenAI is large language models (LLMs), which are designed to generate human-like text. These models belong to the field of natural language processing (NLP), the technology that allows computers to understand and process human language (Google Translate is one example). LLMs are trained on enormous text datasets, which allow the model to learn language patterns and generate text on its own.[4]
GenAI has gained significant attention since the release of ChatGPT, a chatbot made publicly available by the American organization OpenAI in November 2022. Its ease of use and free accessibility drove widespread adoption,[5] including in healthcare settings.[6]
2.2. Generative AI in Healthcare
In healthcare, traditional AI systems are used in several areas. For example, in radiology, they automate the detection and classification of medical images.[7] In emergency departments and intensive care units (ICUs), AI is used as a decision support system: the Pacmed Critical model at Leiden University Medical Centre (UMC) in the Netherlands is a machine learning model that predicts readmission or death after ICU discharge.[8] AI is also used in patient monitoring to track physiological changes and provide predictive analytics: MS Sherpa is an application for multiple sclerosis that uses digital biomarkers to monitor symptom progression and disease activity.[9]
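As a rough illustration of what such a decision-support model does, the sketch below fits a classifier on synthetic data to map discharge features to a readmission-risk probability. This is a generic, minimal example; the features, data, and model are invented for illustration and do not reflect Pacmed Critical's actual method.

```python
# Toy sketch of an ICU readmission-risk predictor on synthetic data.
# Illustrates the *kind* of model described above, not Pacmed's algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic features per patient: [age, length_of_stay_days, last_heart_rate]
X = rng.normal(loc=[65.0, 4.0, 85.0], scale=[12.0, 2.0, 10.0], size=(500, 3))
# Synthetic outcome loosely tied to the features, for demonstration only
y = (0.02 * X[:, 0] + 0.15 * X[:, 1] + 0.01 * X[:, 2]
     + rng.normal(0.0, 0.5, 500)) > 2.8

model = LogisticRegression(max_iter=1000).fit(X, y)

new_patient = np.array([[72.0, 6.0, 92.0]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated readmission risk: {risk:.1%}")  # a decision aid, not a verdict
```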
GenAI offers new possibilities, mainly aimed at reducing administrative burdens, for instance by automatically creating clinical documents such as discharge letters, referral letters, and clinical notes.[10] For example, the UMC Utrecht (Netherlands) has developed an application that uses a Generative Pre-trained Transformer (GPT) to generate draft discharge letters.[11] GenAI is also being used to transcribe and summarize conversations between doctors and patients: “Autoscriber,” at the Leiden UMC research department (Netherlands), is a digital scribe system that automatically records, transcribes, and summarizes the clinical encounter.[12] Beyond administrative tasks, GenAI can assist with clinical decision-making by generating diagnostic and treatment recommendations based on patient data.[13] It also supports medical research activities such as assisting in systematic reviews.[14] GenAI is further used to automatically answer patients’ questions about their care. For example, at the Elisabeth-TweeSteden Hospital (Netherlands), a chatbot called “Eliza” answers patients’ medical questions.[15]
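As a sketch of how such a documentation tool might be wired together (the prompt wording, field names, and `call_llm` endpoint below are all hypothetical placeholders; the source does not describe the UMC Utrecht implementation), a draft generator can combine structured patient fields into a prompt and route the model's output back to a clinician for review:

```python
# Hypothetical sketch of a draft-discharge-letter generator.
# `call_llm` is a placeholder for whatever approved model endpoint an
# institution actually uses; nothing here reflects a real deployment.
DISCHARGE_PROMPT = """You are drafting a discharge letter for a clinician to review.
Patient summary: {summary}
Admission reason: {reason}
Treatment given: {treatment}
Write a concise draft discharge letter and flag any uncertainty explicitly."""

def call_llm(prompt: str) -> str:
    # Placeholder: connect to a compliant, institutionally approved model here.
    raise NotImplementedError

def draft_discharge_letter(summary: str, reason: str, treatment: str) -> str:
    prompt = DISCHARGE_PROMPT.format(summary=summary, reason=reason,
                                     treatment=treatment)
    draft = call_llm(prompt)
    # The output is only a draft: a clinician must verify every clinical
    # claim before the letter enters the patient record.
    return draft
```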
2.3. Current Use of Generative AI in Healthcare
The use of GenAI in healthcare is rapidly increasing, changing how healthcare providers manage clinical tasks and patient interactions. Recent empirical studies reveal that more than half of healthcare providers use ChatGPT or similar general-purpose LLMs to assist with clinical documentation, patient communication, clinical decision-making, research, and more.[16] These studies also show that despite this widespread use of GenAI, most healthcare providers lack the required knowledge and awareness of the risks of using such tools in general, and for clinical tasks specifically.[17] This lack of comprehension is probably because GenAI has only recently become popular and widespread, which makes it difficult to fully understand and assess the risks and scale of these technologies for society.
This gap in understanding GenAI’s risks is reflected in healthcare institutions. For example, a survey on AI use in Dutch hospitals found that GenAI was used in 57 percent of hospitals, with applications such as automatic transcription, document summarisation, and text generation.[18] The same study revealed critical issues: in only 29 percent of hospitals was it clear how frequently AI models are retested, retrained, and calibrated to correct for errors such as hallucinations[19] and data drift.[20] In more than half of the hospitals (52 percent), it is unknown whether, and if so how frequently, such practices occur at all, and in 11 percent, AI models are never retrained. Moreover, only 30 percent of hospitals reported having an AI policy describing the frameworks, standards, and guidelines for the use of AI.[21]
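For readers wondering what "retesting and calibrating for data drift" can look like in practice, below is a minimal sketch assuming a hospital keeps a reference sample of the data a model was validated on: a two-sample Kolmogorov-Smirnov test flags when recent inputs have drifted away from that reference. The feature, threshold, and synthetic numbers are illustrative, not clinical guidance.

```python
# Minimal data-drift check: compare recent model inputs against the
# reference distribution the model was validated on. Synthetic data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

reference = rng.normal(loc=5.0, scale=1.0, size=1000)  # validation-era lab values
recent = rng.normal(loc=5.6, scale=1.1, size=1000)     # this month's inputs

stat, p_value = ks_2samp(reference, recent)
if p_value < 0.01:
    print(f"Possible data drift (KS={stat:.3f}, p={p_value:.4f}): "
          f"trigger a retesting and recalibration review.")
else:
    print("No significant drift detected for this feature.")
```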
Another survey found that 76 percent of physicians reported using general-purpose LLMs, like ChatGPT, for clinical decision-making.[22] More than 60 percent of primary care doctors reported using them to check drug interactions, more than half for diagnosis support, nearly half for clinical documentation, and more than 40 percent for treatment planning. Additionally, 70 percent use general-purpose LLMs for patient education and literature search.
These findings show a mismatch between the growing use of GenAI in clinical practice and the governance needed to ensure its responsible use. While GenAI has the potential to enhance efficiency and accuracy in clinical tasks, integrating it without the necessary knowledge, governance, and legal and ethical oversight can lead to harmful consequences for patients, such as data protection violations, automation bias, unclear accountability, healthcare inequality, incorrect clinical decisions, and the spread of misinformation.[23]
2.4. Regulatory Landscape
At the European Union (EU) level, efforts to regulate the safe use of AI in healthcare are currently fragmented. This means there is not one regulatory framework solely dedicated to governing the use of AI in healthcare. Instead, different laws cover different parts of the issue, including the European Union AI Act,[24] the General Data Protection Regulation,[25] and the Medical Devices Regulation.[26]
2.4.1. The European Union AI Act
In August 2024, the Artificial Intelligence (AI) Act entered into force. The AI Act is an EU regulation that sets rules for the development, introduction to the market, and deployment of AI systems. It adopts a risk-based approach: depending on the application and use of a system, it falls into the minimal, limited, high, or unacceptable risk category. The higher the risk, the stricter the regulatory requirements (e.g., risk management, data governance, human oversight).[27]
Medical devices like AI diagnostic tools are classified as high-risk systems due to their direct implications for health outcomes. By contrast, most GenAI systems, like ChatGPT, fall under the category of general-purpose AI systems, which means they can be classified as either high-risk or low-risk depending on their application.[28] The actual risk of a GenAI system therefore depends on how and where it is used.[29]
Large GenAI systems (like ChatGPT, Bard, and DALL-E) are considered to pose systemic risks due to their widespread adoption; however, they are not always classified as high-risk applications.[30] This means that, in practice (as shown above), healthcare providers can and do use these systems for clinical tasks without the systems being subject to the requirements for high-risk medical devices. While these tools are fast and have access to vast amounts of data, they are relatively new, freely available, and not specifically designed or trained for medical use. Without appropriate oversight and awareness, this creates the potential for unacceptable risks to patient care.
Moreover, the AI Act is presented as a horizontal regulation, which means that it applies across all sectors and industries rather than focusing on the unique needs, risks, and ethical concerns of the healthcare sector.[31] As argued later, the increasing use of digital healthcare presents new risks to patients’ rights, which will require additional and tailored protections.
2.4.2. Medical Device Regulation
The Medical Device Regulation (MDR) is a binding EU regulation that governs the use of medical devices in clinical settings. It is also risk-based, with requirements depending on a device’s intended purpose.
The MDR provides strict rules for GenAI systems intended for clear medical purposes, such as diagnosis. However, not all applications of GenAI are considered medical devices under the MDR, even when used for clinical tasks.[32] For example, using GenAI to facilitate communication between patients and practitioners, summarize clinical reports, or generate referral letters is not defined as a medical purpose, so these applications do not fall under the MDR. Consequently, if healthcare providers use GenAI for such “non-medical purposes,” there is no regulatory guidance on critical issues like patient privacy and legal responsibility.[33]
GenAI systems are highly adaptable and can be used for many different purposes. Because of this versatility, the MDR and similar regulations built around a defined intended purpose face particular challenges. Many GenAI systems, such as ChatGPT, are not specifically designed for medical settings, yet healthcare providers use them for clinical tasks. This leads to a regulatory gap: the technology is being used in practice but lacks adequate regulation, which undermines the trustworthiness of these tools in clinical settings and poses unacceptable risks to patients’ rights.
2.4.3. General Data Protection Regulation
The use of GenAI in healthcare settings often involves handling large volumes of sensitive data such as medical records, scan images, and lab results. The management of this data is regulated by the General Data Protection Regulation (GDPR), the EU’s framework for protecting data privacy. The GDPR classifies health data as a special category of sensitive information that requires additional protection, and it grants data subjects specific rights, including the right to informed consent, the right to access their data, the right to rectification, and the right to be forgotten.[34] Patient data falls under this category, and the GDPR provides strong protections, reinforcing the principle of medical confidentiality by limiting the use and amount of such data strictly to the purpose of direct care. In practice, this means a hospital cannot use patient data to train an AI algorithm or share it with an external vendor without obtaining explicit informed consent or meeting a legal exemption.
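To see concretely why these rules bite, here is a minimal sketch of pseudonymization, assuming a keyed hash is used to replace a patient identifier before any secondary use. The key handling and field names are invented for illustration; note that under the GDPR, pseudonymized data still counts as personal data, so a step like this by itself does not create a lawful basis for training an AI model on patient records.

```python
# Sketch of pseudonymization: strip direct identifiers and replace the
# patient ID with a keyed, non-reversible token. Illustrative only;
# the GDPR still treats pseudonymized data as personal data.
import hashlib
import hmac

SECRET_KEY = b"store-this-in-a-key-vault"  # placeholder; never hard-code in production

def pseudonymize_id(patient_id: str) -> str:
    """Replace a patient identifier with a keyed HMAC-SHA256 token."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "NL-1234567", "name": "J. Jansen", "lab_result_crp": 42.0}

pseudonymized = {
    "patient_token": pseudonymize_id(record["patient_id"]),
    "lab_result_crp": record["lab_result_crp"],  # direct identifiers dropped
}
print(pseudonymized)
```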
While GDPR compliance is relatively clear for GenAI systems developed by a healthcare organisation itself, it becomes challenging for general-purpose GenAI systems, like ChatGPT, where the GDPR’s influence is weaker than for models explicitly designed to process personal data.[35] This creates a regulatory grey area for the use of general-purpose GenAI systems in healthcare settings regarding compliance with sensitive data protection standards.
3. Patients’ Rights
Healthcare systems are constantly facing changes (a rapidly ageing population, scientific and technological developments, global pandemics, shifts in social paradigms, etc.). These changes are increasingly being analysed through the lens of patients’ rights.[36]
A significant change in healthcare today is the growing use of AI in medical tasks. This technological shift is likely to change traditional patients’ rights into what may soon be recognized as digital patients’ rights.[37]
The field of patients’ rights lies at the intersection of ethics and health law, bringing together moral imperatives and legal protections. Patients’ rights are a special category of human rights aimed at protecting the dignity of the individual who is in a vulnerable state of illness.[38] Since nearly all humans become patients,[39] and patients are among the most vulnerable groups in society,[40] their rights are uniquely defined and crucially important.[41]
The position of the patient is especially vulnerable because of their illness, which can cause insecurity and fear. Moreover, the patient is in an unequal position compared to the doctor, who is learned, skilled, and experienced in topics about which the patient often knows little or nothing, yet which are extremely important to the patient, since their health may depend on them.[42] Besides this information asymmetry, the patient-practitioner interaction is of a critical and private nature, leaving the patient highly dependent on the practitioner to obtain adequate assistance.[43] This imbalance creates a clear potential for abuse of power (intentional or not) and shows why special attention to protecting the patient is necessary.
3.1. Legal Protection for Patients’ Rights
Over the past decades, patients’ rights have been recognized in a variety of different documents (Declarations, Charters, Laws) at the international, regional, and national levels.[44] Examples of these regulatory efforts include:
European Convention for the Protection of Human Rights and Fundamental Freedoms (1950)
International Covenant on Civil and Political Rights (1966)
Declaration on the Promotion of Patients’ Rights in Europe (WHO, 1994)
Declaration of Lisbon of the World Medical Association (1995)
Wet op de Geneeskundige Behandelingsovereenkomst (WGBO) (Medical Treatment Agreement Act) (1995)
Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine (Oviedo Convention) (1997)
European Charter of Patients’ Rights (2002)
These documents are crucial in providing a framework to protect the dignity, freedom, self-determination, and respect of patients. However, as Herranz notes,[45] they are highly diverse and target different audiences; some are universal, others regional or national in scope. While this diversity reflects the growing global importance of patients’ rights, it also creates a fragmented landscape. Both patients and healthcare providers may find it difficult to understand the specific rights and obligations that apply in their context because these rights are dispersed across documents and jurisdictions.[46]
4. Methodology
4.1. Study Design and Population
This qualitative study consisted of semi-structured interviews with key experts and stakeholders. Stakeholder mapping was conducted through a desk review of documents. This process identified five relevant stakeholder groups: (1) patients, (2) healthcare providers, (3) healthcare organizations, (4) AI and data experts, and (5) ethics and legal experts.
Participants were selected based on the following criteria: being 18 years or older, having the capacity to give informed consent, being knowledgeable about the use of AI in healthcare (this criterion did not apply to patient participants), and the ability to communicate in English. Each interview began with a short case study to provide participants with a concrete scenario to consider while answering the questions. Presenting a case study enabled a more focused discussion and helped participants reflect on specific risks.[47]
4.2. Tools
Member States of the European Union (EU) do not share a single binding document protecting patients’ rights. Instead, these rights are spread across multiple pieces of legislation. Although the rights are widely recognized in the EU, each country applies its own medical regulations depending on its context and traditional norms.[48] It is nonetheless possible to identify a set of fundamental patients’ rights that are widely recognised across all EU Member States.[49] This study focused on three of these fundamental rights to guide the development of interview questions. The rights were selected based on existing European frameworks (including the European Convention on Human Rights; the Charter of Fundamental Rights of the European Union; and the European Convention on Human Rights and Biomedicine, or the Oviedo Convention), as well as Dutch legislation (Burgerlijk Wetboek). The selected rights are:
(1) The right to autonomy and informed consent: patients must be able to make informed decisions about their care.[50]
(2) The right to privacy and medical data protection: personal health data must be kept secure and confidential.[51]
(3) The right to access to healthcare and non-discrimination: care must be accessible to all, regardless of background, and without unfair barriers.[52]
This research also draws on the classic bioethical framework proposed by Tom Beauchamp and James Franklin Childress[53] to identify ethical guidelines that can support the responsible use of AI in healthcare and help safeguard these patients’ rights. The principles include:
(1) Principle of justice: in healthcare ethics, justice refers to distributive justice, under which all patients must be treated equally. Every patient should receive the same quality of care (a uniform standard of quality) regardless of who they are, and persons with greater need should be entitled to greater healthcare services when this causes no discernible direct injury to others with lesser need. In the context of AI, this raises important questions: Is access to AI-driven healthcare tools equitable? Are certain groups being left behind due to cost, location, or bias in algorithms? Justice also requires rejecting discrimination and ensuring that health technologies are available to all who need them. This principle is public and legislated.
(2) Principle of non-maleficence: this principle means “do no harm.” It is rooted in the Hippocratic tradition and updated in modern medicine to include preventing harm from unnecessary medical interventions (quaternary prevention). Applied to AI, it asks: could the use of AI lead to the misdiagnosis of a patient, reinforce bias, or erode trust in care? If AI tools cause harm through poor design, overreliance, or misuse, they can breach this core ethical obligation. It is a principle of the public sphere, and non-compliance is punishable by law.
(3) Principle of autonomy: respect for autonomy requires that patients be able to make informed, voluntary decisions about their own care.
Evaluation of the Impact of the 1.5 MAX Initiative on Climate Change Education (CCE) in Malawi Secondary Schools: An Education for Sustainable Development Framework Approach
This qualitative case study evaluates the impact of the 1.5 MAX initiative on Climate Change Education (CCE) in Malawian secondary schools through the dual lens of Education for Sustainable Development and decolonial theory. Malawi’s curricula prioritize Western agricultural models over Indigenous knowledge, resulting in fragmented implementation due to teacher training gaps, resource shortages, and a stark divide between students’ climate knowledge and actionable engagement. While the 1.5 MAX initiative enhances climate awareness and practical skills through interactive methods, its effectiveness is constrained by limited teacher preparedness, curricular misalignment, and systemic resource limitations. The research highlights the importance of integrating Indigenous knowledge and adapting content to local contexts for greater relevance and effectiveness. By applying a decolonial lens, this research critiques the dominance of Western epistemologies in global educational initiatives and advocates for the co-creation of knowledge that centers local agency and context-specific solutions. While demonstrating the potential of international educational initiatives to complement local curricula, the study underscores the need for sustainable support systems and expanded teacher training. Future research should assess the long-term impacts of such interventions and explore strategies for aligning global practices with local needs, while dismantling colonial legacies to foster a more equitable and inclusive educational landscape.
Understanding the Threat of Victor’s Justice: The Case of Transitional Justice in Post-Genocide Rwanda
The Rwandan genocide of 1994 remains a chilling reminder of the depths of cruelty and violence that humans can inflict upon one another. While Rwanda has since emerged as a symbol of successful post-conflict recovery, the scars of the genocide continue to fester beneath the surface. This paper delves into the concept of Victors' Justice in the context of the Rwandan genocide and the Transitional Justice efforts that followed, with a specific focus on the actions of the International Criminal Tribunal for Rwanda (ICTR) and the Rwandan Patriotic Front (RPF). Victors' Justice, a term fraught with ethical implications, emerges as a central theme in this analysis, highlighting how it manifested in the proceedings and outcomes of the ICTR. Employing a theoretical approach and drawing upon the work of experts in the field, this research rigorously examines the dynamics of Victors' Justice and its enduring impact on Rwandan society.
First, the paper establishes the foundational concepts of Victors' Justice and Transitional Justice, tracing their historical roots and relevance to the ICTR. Then, by providing the historical context for the Rwandan genocide, it elucidates the complex power dynamics leading up to the massacre and the establishment of the International Criminal Tribunal. Furthermore, it delves into the accusations of Victors' Justice, analyzing the actions of the RPF during 1994, its interference with the ICTR's operations, and the injustices witnessed in national courts. Finally, it explores the challenges of Transitional Justice and Social Reconciliation in Rwanda, including restrictions on freedom of expression, persecution of political opposition, and mechanisms of social control.
This paper synthesizes the findings and data accumulated throughout the study. It offers recommendations to address the social and ethnic divisions that persist in Rwanda, emphasizing accountability, political freedom, and the significance of historical narratives in fostering true reconciliation. This research contributes to a deeper understanding of the complex dynamics in post-genocide societies and the implications of Victors' Justice for pursuing lasting peace and justice.
New Jersey Teachers’ Professional Learning About Climate Change
During the 2022-23 academic year, New Jersey became the first state in the United States to adopt learning standards that support climate change education K-12 across all subject areas, offering an ideal context for exploring the relationship between education and climate change. Although New Jersey has provided funding to support teachers in teaching about climate change, little is known about teachers’ preparedness to implement developmentally appropriate climate change instruction in K-12 settings. This study utilizes interviews from 50 New Jersey teachers who participated in a classroom observation study conducted during the 2023-24 academic year to describe their professional learning related to climate change. Though professional learning varied considerably across the dataset, most respondents indicated that self-directed learning was their primary mode of professional development about climate change, followed by attendance at workshops or webinars. Several participants reported having no access to professional development provided by their school or district on the topic, despite the introduction of standards. When asked about plans for future professional development related to climate change, the majority of interviewees asserted that they had plans, but these varied with their grade bands. The findings suggest that more coherent professional learning opportunities are needed to support teachers in integrating climate change into their teaching. More mechanisms should be implemented to acknowledge teachers’ self-directed learning on climate change.
Replacing Seclusion & Restraint Practices in Psychiatry With Sensory Rooms
The use of seclusion and restraint (S/R) in acute psychiatric inpatient settings persists as a controversial practice, causing significant harm to patients and stress to staff. This policy brief examines the ethical, financial, and systemic implications of S/R and advocates for replacing S/R with sensory rooms—an evidence-based approach fostering emotion regulation, patient autonomy, and trauma-informed care. Recognizing that eliminating S/R may not be immediately feasible, this brief proposes an incremental approach through a hypothetical pilot program at Jackson Behavioral Health Hospital: converting an isolation room, or a room where a patient receives intervention separately from other patients, on each psychiatric inpatient unit into a sensory room, alongside incentives to reduce overall S/R usage. Sensory rooms can then be evaluated as a humane and cost-effective alternative to S/R practices. This policy brief aims to advance knowledge on patient-centered interventions in mental health care and underscores the ethical imperatives and financial incentives for legislative and organizational policy reform in psychiatric care.
Keywords: seclusion, restraint, sensory rooms, psychiatric inpatient care, policy reform, trauma-informed care, social justice
Merger Law is Not — and Should not Be — In a Time Capsule: Andrew Finch
The NYSBA 2024 William Howard Taft Lecture
Beyond Income and Education: Unveiling the True Catalysts of Green Behavior in Pakistan and South Asia: A Demand-Side Analysis
There is extensive literature on the progress of green alternatives in Pakistan, but there is no evaluation of how the people of Pakistan will respond to these proposed solutions. After conducting a literature review on green alternatives, this paper employs the Theory of Planned Behavior (TPB) framework. It utilizes data from the World Values Survey (WVS) in conjunction with logistic regression to assess the viability of sustainable practices in Pakistan and whether specific demographic groups, such as women, highly educated individuals, and high-income citizens, exhibit a greater inclination to adopt sustainable practices. Our regression analysis indicates that people’s income, religiosity level, and age do not affect their likelihood of adopting sustainable practices. In contrast, their attitude towards free market ideology, self-provision, and cultural values such as power distance and global connectedness have a significant impact. The paper shows that, unlike other South Asian systems, Pakistan’s education system does not instill environmental values in its people. Women in South Asia are less likely to adopt sustainable practices than men. These findings offer valuable insights for policymakers and financial institutions, guiding a nuanced restructuring of green alternative approaches in Pakistan and South Asia.
Effects of Anxiety on Attention-Based Tasks in a College Population
Previous literature suggests that trait anxiety may lead to diminished global processing, and therefore, a local processing bias (Basso et al., 1996), which may contribute to a narrowed scope of attention and impaired cognitive flexibility. Additionally, there is conflicting data on how anxiety interacts with performance on the Stroop task (e.g., Ursache & Cybele Raver, 2014). To understand this relationship, the authors used the State-Trait Anxiety Inventory (STAI) to divide participants into groups based on their levels of anxiety. Specifically, the researchers explored the effects of state and trait anxiety on college students’ attention using the Navon task and the Stroop task. The Navon task was used to compare the performance of people with high and low trait anxiety, utilizing two t-tests to analyze local and global processing. Four groups were created for the Stroop task: high trait/low state, low trait/high state, high trait/high state, and low trait/low state, which were compared through an ANOVA. No statistically significant differences were found in performance on the Stroop and Navon tasks based on state or trait anxiety. This may be due to the age range of participants and the lack of clinical elevation of these factors. The findings suggest that moderate levels of anxiety may not impact attention drastically in a college population.