
    Self-fulfilling Prophecy in Practical and Automated Prediction

    A self-fulfilling prophecy is, roughly, a prediction that brings about its own truth. Although true predictions are hard to fault, self-fulfilling prophecies are often regarded with suspicion. In this article, we vindicate this suspicion by explaining what self-fulfilling prophecies are and what is problematic about them, paying special attention to how their problems are exacerbated through automated prediction. Our descriptive account of self-fulfilling prophecies articulates the four elements that define them. Based on this account, we begin our critique by showing that typical self-fulfilling prophecies arise due to mistakes about the relationship between a prediction and its object. Such mistakes—along with other mistakes in predicting or in the larger practical endeavor—are easily overlooked when the predictions turn out true. Thus, we note that self-fulfilling prophecies prompt no error signals; truth shrouds their mistakes from humans and machines alike. Consequently, self-fulfilling prophecies create several obstacles to accountability for the outcomes they produce. We conclude our critique by showing how failures of accountability, and the associated failures to make corrections, explain the connection between self-fulfilling prophecies and feedback loops. By analyzing the complex relationships between accuracy and other evaluatively significant features of predictions, this article sheds light both on the special case of self-fulfilling prophecies and on the ethics of prediction more generally.
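
    The "no error signals" point lends itself to a small illustration. The sketch below is hypothetical throughout: the coin-flip model, the 0.3 base rate, and the helper names are illustrative assumptions, not anything taken from the article. It shows that once acting on a poor-outcome prediction brings that outcome about, every such prediction comes true, so even a model that contains no information produces no error signal that could prompt correction.

    import random

    random.seed(0)

    def uninformative_model(_case):
        # Deliberately carries no information: a coin flip.
        return random.random() < 0.5  # True means "poor outcome" is predicted

    def share_of_poor_predictions_borne_out(n_cases, self_fulfilling):
        hits = positives = 0
        for case in range(n_cases):
            if not uninformative_model(case):
                continue
            positives += 1
            if self_fulfilling:
                actual_poor = True  # acting on the prediction fulfils it
            else:
                actual_poor = random.random() < 0.3  # assumed base rate
            hits += actual_poor
        return hits / positives

    print("borne out, no self-fulfilment:", share_of_poor_predictions_borne_out(10_000, False))
    print("borne out, self-fulfilling:   ", share_of_poor_predictions_borne_out(10_000, True))
    # Roughly 0.30 versus exactly 1.00: truth shrouds the model's mistakes.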

    Can we learn from hidden mistakes? Self-fulfilling prophecy and responsible neuroprognostic innovation

    A self-fulfilling prophecy (SFP) in neuroprognostication occurs when a patient in coma is predicted to have a poor outcome, and life-sustaining treatment is withdrawn on the basis of that prediction, thus directly bringing about a poor outcome (viz. death) for that patient. In contrast to the predominant emphasis in the bioethics literature, we look beyond the moral issues raised by the possibility that an erroneous prediction might lead to the death of a patient who otherwise would have lived. Instead, we focus on the problematic epistemic consequences of neuroprognostic SFPs in settings where research and practice intersect. When this sort of SFP occurs, the problem is that physicians and researchers are never in a position to notice whether their original prognosis was correct or incorrect, since the patient dies anyway. Thus, SFPs keep us from distinguishing false positives from true positives, inhibiting proper assessment of novel prognostic tests. This epistemic problem of SFPs thus impedes learning, but ethical obligations of patient care make it difficult to avoid SFPs. We then show how the impediment to catching false positive indicators of poor outcome distorts research on novel techniques for neuroprognostication, allowing biases to persist in prognostic tests. Finally, we highlight the particular risk that a precautionary bias towards early withdrawal of life-sustaining treatment may be amplified. We conclude with guidelines on how researchers can mitigate the epistemic problems of SFPs, to achieve more responsible innovation of neuroprognostication for patients in coma.
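
    The censoring at work here can be made concrete with a small simulation; all rates below are illustrative assumptions, not figures from the paper. When treatment is withdrawn on every poor-prognosis test result, the patients who could expose false positives never get the chance to recover, so the test's observed error rate is zero regardless of its true error rate.

    import random

    random.seed(1)

    N = 100_000
    WOULD_RECOVER_RATE = 0.4    # assumed share who would recover with treatment
    FALSE_POSITIVE_RATE = 0.15  # assumed share of recoverable patients the test flags

    true_fp = observed_fp = flagged = 0
    for _ in range(N):
        would_recover = random.random() < WOULD_RECOVER_RATE
        # Assume perfect sensitivity for simplicity: every non-recoverable
        # patient is flagged; recoverable patients are flagged at the FP rate.
        test_says_poor = True if not would_recover else random.random() < FALSE_POSITIVE_RATE
        if test_says_poor:
            flagged += 1
            true_fp += would_recover
            # Treatment is withdrawn on the test result, so the recorded
            # outcome is poor (death) no matter what would have happened.
            recorded_outcome_poor = True
            observed_fp += not recorded_outcome_poor  # always adds 0

    print("true false-positive rate among flagged:    ", true_fp / flagged)
    print("observed false-positive rate among flagged:", observed_fp / flagged)
    # About 0.09 true versus 0.00 observed: the test looks flawless in the data.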

    Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist

    The widespread use of ChatGPT and other emerging technologies powered by generative artificial intelligence (GenAI) has drawn much attention to potential ethical issues, especially in high-stakes applications such as healthcare, but ethical discussions have yet to translate into operationalisable solutions. Furthermore, ongoing ethical discussions often neglect other types of GenAI that have been used to synthesise data (e.g., images) for research and practical purposes, which has resolved some ethical issues and exposed others. We conduct a scoping review of ethical discussions on GenAI in healthcare to comprehensively analyse gaps in the current research, and further propose to reduce these gaps by developing a checklist for comprehensive assessment and transparent documentation of ethical discussions in GenAI research. The checklist can be readily integrated into the current peer review and publication system to enhance GenAI research, and may be used for ethics-related disclosures for GenAI-powered products, healthcare applications of such products, and beyond.

    Towards clinical AI fairness: A translational perspective

    Artificial intelligence (AI) has demonstrated the ability to extract insights from data, but the issue of fairness remains a concern in high-stakes fields such as healthcare. Despite extensive discussion and efforts in algorithm development, AI fairness and clinical concerns have not been adequately addressed. In this paper, we discuss the misalignment between technical and clinical perspectives on AI fairness, highlight the barriers to translating AI fairness into healthcare, advocate multidisciplinary collaboration to bridge the knowledge gap, and provide possible solutions to address the clinical concerns pertaining to AI fairness.

    The War Within: Battling Polarization, Reductionism, and Superficiality - A critical analysis of truth-telling in war reporting

    This thesis analyzes specific challenges concerning 'truth-telling' that war reporters face when reporting on international conflict. For this purpose, truth is examined in accordance with journalistic principles outlined in codes of ethics, with a focus on objectivity and fairness. The aim is to discover ways to improve the application of these principles, in order to battle epistemic errors and the effects they entail: polarization, reductionism, and superficiality. The study concludes that providing context and nuance is crucial, but that codes, although essential, are insufficient to help journalists decide what is relevant and what is not. An approach grounded in virtue ethics is recommended, in which phronesis (or practical wisdom) can inspire responsible journalists to comply with the spirit, rather than the letter, of the principles.

    Responsible Prediction Under Critical Uncertainty: an epistemic analysis of neuroprognostic innovation practices after cardiac arrest

    The purpose of this research is to ensure that neuroprognostic innovation improves prognostic practices for patients in coma after cardiac arrest while keeping potential downsides to a minimum. Ensuring responsible research, development, and implementation of the innovative technology demands that we identify, analyze, and address the potential challenges of neuroprognostic innovation. The main aim of this research is thus to investigate how to ensure responsible innovation of prognostication in postanoxic coma. The novel use of continuous EEG monitoring (cEEG) in the prognosis of postanoxic coma serves as my case study, as it is the best object of investigation for answering the main research question. Aside from being the most promising prognostic tool for this patient group, cEEG faces many challenges that established as well as future prognostic tools face too.

    Liminal Innovation Practices: questioning three common assumptions in responsible innovation

    Although the concept of Responsible Innovation (RI) has been applied to different types of innovations, three common assumptions have remained the same. First, emerging technologies require assessment because of their radical novelty and unpredictability. Second, early assessment is necessary to impact the innovation trajectory. Third, anticipation of unknowns is needed to prepare for the unpredictable. I argue that these assumptions do not hold for liminal innovation practices in clinical settings, which are defined by continuous transition on both sides of the threshold between experiment and implementation, and between research and care. First of all, technologies at the center of liminal innovation practices have different characteristics from those typically attributed to emerging technologies. Additionally, the innovation trajectory is significantly different, allowing continuous assessment and shaping long after implementation. Finally, these differences demand a reorientation in RI approaches for these cases, away from anticipation of the unknown and uncertain, and towards observation of the known and predictable.

    Deconstructing self-fulfilling outcome measures in infertility treatment

    The typical outcome measure in infertility treatment is the (cumulative) healthy live birth rate per patient or per cycle. This means that those who end the treatment trajectory with a healthy baby in their arms are considered to be successful and those who do not are considered to have failed. In this article, we argue that when the healthy live birth standard is adopted as the outcome measure defining a successful fertility treatment, it becomes an interpretative self-fulfilling prophecy: those who achieve the goal consider themselves successful and those who do not consider themselves failures. This is regardless of the fact that having children is only one out of many ways to alleviate the suffering related to infertility and that stopping fertility treatment can also be a positive decision to move on to other goals, rather than a form of "giving up," "dropping out," "nonadherence," or failure. We suggest that those seeking fertility treatment would be better served by an alternative outcome measure, which can be equally self-fulfilling, according to which a successful treatment is one in which people leave the clinic released from the suffering that accompanied their status as infertile when they first entered the clinic. This new outcome measure still implies that walking out with a healthy baby is a positive outcome. What changes is that walking out without a baby can also be a positive outcome, rather than being marked exclusively as a failure.

    Chasing Certainty After Cardiac Arrest: Can a Technological Innovation Solve a Moral Dilemma?

    When information on a coma patient’s expected outcome is uncertain, a moral dilemma arises in clinical practice: if life-sustaining treatment is continued, the patient may survive with unacceptably poor neurological prospects, but if it is withdrawn, a patient who could have recovered may die. Continuous electroencephalogram monitoring (cEEG) is expected to substantially improve neuroprognostication for patients in coma after cardiac arrest. This raises the expectation that decisions about whether or not to withdraw will become easier. This paper investigates that expectation, exploring cEEG's impacts when it becomes part of a socio-technical network in an Intensive Care Unit (ICU). Based on observations in two ICUs in the Netherlands and one in the USA that had cEEG implemented for research, we interviewed 25 family members, healthcare professionals, and surviving patients. The analysis focuses on (a) the way patient outcomes are constructed, (b) the kind of decision support these outcomes provide, and (c) how cEEG affects communication between professionals and relatives. We argue that cEEG can take away the dilemma or decrease its intensity in some cases, while increasing uncertainty in others. It also raises new concerns. Since its actual impacts furthermore hinge on how cEEG is designed and implemented, we end with recommendations for ensuring its responsible development and implementation.