AI-driven decision support systems and epistemic reliance: a qualitative study on obstetricians' and midwives' perspectives on integrating AI-driven CTG into clinical decision making
Background: Given that AI-driven decision support systems (AI-DSS) are intended to assist in medical decision making, it is essential that clinicians are willing to incorporate AI-DSS into their practice. This study takes as a case study the use of AI-driven cardiotocography (CTG), a type of AI-DSS, in the context of intrapartum care. Focusing on obstetricians' and midwives' perspectives on the ethical and trust-related issues of incorporating AI-driven tools into their practice, this paper explores the conditions that AI-driven CTG must fulfill for clinicians to feel justified in incorporating this assistive technology into their decision making about interventions in labor.
Methods: This study is based on semi-structured interviews conducted online with eight obstetricians and five midwives based in England. Participants were asked about their current processes for deciding when to intervene in labor, how AI-driven CTG might enhance or disrupt those processes, and what it would take for them to trust this kind of technology. Interviews were transcribed verbatim and analyzed thematically, with NVivo software used to organize recurring codes and identify the issues that mattered most to participants. Topics and themes repeated across interviews form the basis of the analysis and conclusions of this paper.
Results: Four major themes emerged from our interviews with obstetricians and midwives regarding the conditions that AI-driven CTG must fulfill: (1) the importance of accurate and efficient risk assessments; (2) the capacity for personalization and individualized medicine; (3) the relative insignificance of the type of institution that develops the technology; and (4) the need for transparency in the development process.
Conclusions: Accuracy, efficiency, the capacity for personalization, transparency, and clear evidence of improved outcomes are the conditions that clinicians deem necessary for AI-DSS to be considered reliable and therefore worthy of incorporation into the decision-making process. Importantly, healthcare professionals considered themselves the epistemic authorities in the clinical context and the bearers of responsibility for delivering appropriate care. What mattered to them, therefore, was being able to evaluate the reliability of AI-DSS on their own terms and to have confidence in implementing them in their practice.
Bridging the Digital Divide: The UNBIASED national study to unravel the impact of ethnicity and deprivation on diabetes technology disparities in the United Kingdom
While diabetes technology offers significant clinical and quality-of-life benefits to people with type 1 diabetes, persistent inequalities in technology use based on ethnicity and deprivation are becoming increasingly evident. To date, there is limited research into the challenges, barriers, and concerns around accessing and using diabetes technology felt by end-users from racially minoritised and socioeconomically disadvantaged groups. Their views are often under-represented in the literature, and healthcare professionals' perspectives on barriers to technology access have also been neglected. This article explores the nuanced relationship between ethnicity, socioeconomic status, and technology access. By understanding the parallels between health and technology inequalities, we can pave the way for targeted interventions to bridge the digital gap and create a more inclusive technological landscape. The UNBIASED study, currently being conducted across England, is exploring the lived experiences of under-represented children and young people with type 1 diabetes regarding the (lack of) utilisation of life-changing diabetes technologies. The study is also consulting healthcare professionals, who can act as gatekeepers to technology, with the ultimate goal of identifying and dismantling existing barriers and inequities in access. By synthesising the perspectives of both people with type 1 diabetes and healthcare providers, this research seeks to develop inclusive, practical, and implementable solutions to foster improved access to cutting-edge diabetes technologies within the National Health Service (NHS).
Another world is possible: reconceptualizing the "safe space" metaphor at a feminist safer space in New York City
My thesis seeks to reconceptualize the "safe space" metaphor at a self-described "safer space" in New York City, Pocketbooks. Typically understood as places of comfort, safe spaces are often disparaged for encouraging emotional fragility and stymying intellectual growth. However, the potential of sites like these to offer cultural critique and provoke new ways of thinking about safety (and violence) has been overlooked. Pocketbooks, a feminist bookstore in a hyper-gentrifying neighborhood of Manhattan, is one such place. Although Mayor Giuliani's "quality of life" laws are often credited with making NYC the safest it has ever been, Pocketbooks positions itself as a "safer space" from a city (and society) that has become increasingly unsafe.
With a view to (re)conceptualizing safe spaces and interrogating the meaning(s) of safety in neoliberal America, my thesis poses the following ethnographic question: How does the Pocketbooks community (re)imagine, negotiate, and enact notions of safety to build a micro-society that is, in their words, "equitable, cooperative, and free"? Integrating fifteen months of fieldwork with a variety of literatures, including but not limited to the anthropologies of violence and counterpublics, I attempt to reconceptualize the safe space metaphor. I argue that Pocketbooks acts as a feminist counterpublic whereby members of subordinated and sociospatially excluded populations can find community and self-expression, acquire skills to reduce the extent of their marginalization elsewhere, develop language to articulate their identities and experiences, and even unlearn violent habitus. Moreover, in an attempt to translate utopian imaginaries into practice, Pocketbooks becomes a place to think from and about safety, and a place to consider how to forge justice in an unequal world. My research challenges the dominant discourse that neglects the structural nature of violence and overlooks safe spaces as creative sites of cultural critique and production, contestation, and hope.
Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care
Background: Despite the recognition that developing artificial intelligence (AI) that is trustworthy is necessary for public acceptability and the successful implementation of AI in healthcare contexts, the perspectives of key stakeholders are often absent from discourse on the ethical design, development, and deployment of AI. This study explores the perspectives of birth parents and mothers on the introduction of AI-based cardiotocography (CTG) in the context of intrapartum care, focusing on issues pertaining to trust and trustworthiness.
Methods: Seventeen semi-structured interviews were conducted with birth parents and mothers based on a speculative case study. Interviewees were based in England and were pregnant and/or had given birth in the last two years. Transcribed interviews were analyzed thematically using NVivo. Major recurring themes acted as the basis for identifying the values most important to this population group for evaluating the trustworthiness of AI.
Results: Three themes pertaining to the perceived trustworthiness of AI emerged from interviews: (1) trustworthy AI-developing institutions, (2) trustworthy data from which AI is built, and (3) trustworthy decisions made with the assistance of AI. We found that birth parents and mothers trusted public institutions over private companies to develop AI, that they evaluated the trustworthiness of data by how representative it is of all population groups, and that they perceived trustworthy decisions as being mediated by humans even when supported by AI.
Conclusions: The ethical values that underpin birth parents' and mothers' perceptions of trustworthy AI include fairness and reliability, as well as practices like patient-centered care, the promotion of publicly funded healthcare, holistic care, and personalized medicine. Ultimately, these are also the ethical values that people want to protect in the healthcare system. Therefore, trustworthy AI is best understood not as a list of design features but in relation to how it undermines or promotes the ethical values that matter most to its end users. An ethical commitment to these values when creating AI in healthcare contexts opens up new challenges and possibilities for the design and deployment of AI.