
    An ‘Ethical Black Box’, Learning From Disagreement in Shared Control Systems

    Shared control, where a human user cooperates with an algorithm to operate a device, has the potential to greatly expand access to powered mobility, but it also raises unique ethical challenges. A shared-control wheelchair may perform actions that do not reflect its user’s intent in order to protect their safety, causing frustration or distrust in the process. Unlike physical accidents, there is currently no framework for investigating or adjudicating these events, which reduces our capability to improve the shared-control algorithm’s user experience. In this paper we suggest a system based on the idea of an ‘ethical black box’ that records the sensor context of sub-critical disagreements and collision risks, allowing human investigators to examine them in retrospect and assess whether the algorithm has taken control from the user without justification.
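    The recording mechanism the abstract describes lends itself to a simple event-log design. The following is a minimal sketch, not the authors' implementation: the names (DisagreementEvent, EthicalBlackBox), the divergence threshold, and the command representation are all assumptions made for illustration.

```python
# Hypothetical sketch of an 'ethical black box' recorder for a shared-control
# device. All names and thresholds here are illustrative, not from the paper.
from dataclasses import dataclass
from collections import deque
import time

@dataclass
class DisagreementEvent:
    timestamp: float
    user_command: tuple[float, float]    # (linear, angular) velocity the user requested
    issued_command: tuple[float, float]  # velocity the shared controller actually issued
    sensor_context: dict                 # e.g. proximity/laser readings at that moment
    collision_risk: float                # controller's own risk estimate in [0, 1]

class EthicalBlackBox:
    """Bounded log of sub-critical disagreements for retrospective review."""

    def __init__(self, divergence_threshold: float = 0.2, capacity: int = 10_000):
        self.divergence_threshold = divergence_threshold
        self.events = deque(maxlen=capacity)

    def record(self, user_cmd, issued_cmd, sensor_context, collision_risk):
        # Log only when the controller meaningfully overrides the user.
        divergence = sum(abs(u - a) for u, a in zip(user_cmd, issued_cmd))
        if divergence > self.divergence_threshold:
            self.events.append(DisagreementEvent(
                timestamp=time.time(),
                user_command=tuple(user_cmd),
                issued_command=tuple(issued_cmd),
                sensor_context=sensor_context,
                collision_risk=collision_risk,
            ))
```

    A bounded deque keeps the log flight-recorder-like: routine old events are overwritten while recent disagreements survive for investigators to examine.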

    Emotion, deliberation, and the skill model of virtuous agency

    A recent skeptical challenge denies that deliberation is essential to virtuous agency: what looks like genuine deliberation is just a post hoc rationalization of a decision already made by automatic mechanisms (Haidt 2001; Doris 2015). Annas’s account of virtue seems well-equipped to respond: by modeling virtue on skills, she can agree that virtuous actions are deliberation-free while insisting that their development requires significant thought. But Annas’s proposal is flawed: it over-intellectualizes deliberation’s developmental role and under-intellectualizes its significance once virtue is acquired. Doing better requires paying attention to a distinctive form of anxiety, one that functions to engage deliberation in the face of decisions that automatic mechanisms alone cannot resolve.

    A library of logic models to explain how interventions to reduce diagnostic error work

    OBJECTIVES: We aimed to create a library of logic models for interventions to reduce diagnostic error. This library can be used by those developing, implementing, or evaluating an intervention to improve patient care, to understand what needs to happen, and in what order, if the intervention is to be effective. METHODS: To create the library, we modified an existing method for generating logic models. We defined five ordered activities to include in each model: pre-intervention; implementation of the intervention; post-implementation, but before the immediate outcome can occur; the immediate outcome (usually behavior change); and post-immediate outcome, but before a reduction in diagnostic errors can occur. We also included reasons for lack of progress through the model. Relevant information was extracted about existing evaluations of interventions to reduce diagnostic error, identified by updating a previous systematic review. RESULTS: Data were synthesized to create logic models for four types of intervention, addressing five causes of diagnostic error at seven stages of the diagnostic pathway. In total, 46 interventions from 43 studies were included and 24 different logic models were generated. CONCLUSIONS: We used a novel approach to create a freely available library of logic models. The models highlight the importance of attending to what needs to occur before and after intervention delivery if the intervention is to be effective. Our work provides a useful starting point for intervention developers, helps evaluators identify intermediate outcomes, and provides a method to enable others to generate libraries for interventions targeting other errors.
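    To make the five-stage structure concrete, here is a hypothetical encoding of one logic model as a data structure. The schema, field names, and example entries are invented for illustration and are not the authors' library.

```python
# Illustrative only: the paper specifies five ordered activities per logic model;
# this LogicModel encoding is a hypothetical schema, not the authors' format.
from dataclasses import dataclass, field

STAGES = (
    "pre-intervention",
    "implementation",
    "post-implementation (before immediate outcome)",
    "immediate outcome (usually behavior change)",
    "post-immediate outcome (before error reduction)",
)

@dataclass
class LogicModel:
    intervention: str
    cause_of_error: str
    # What must happen at each stage, plus recorded reasons for lack of progress.
    activities: dict = field(default_factory=lambda: {s: [] for s in STAGES})
    barriers: dict = field(default_factory=lambda: {s: [] for s in STAGES})

model = LogicModel(intervention="decision-support checklist",
                   cause_of_error="premature closure")
model.activities["implementation"].append("checklist embedded in electronic record")
model.barriers["immediate outcome (usually behavior change)"].append("clinicians ignore prompt")
```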

    One Size Does Not Fit All: Meeting the Health Care Needs of Diverse Populations

    Proposes a framework for meeting patients' cultural and linguistic needs: policies and procedures that support cultural competence, data collection, population-tailored services, and internal and external collaborations. Includes a self-assessment tool.

    Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

    Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to fine-tune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
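    For readers new to RLHF, the sketch below shows the standard Bradley–Terry preference loss commonly used to fit a reward model from human comparisons. It is generic background, not code from the paper, and the reward values in the example are placeholders.

```python
# Minimal sketch of the preference-modelling step at the core of RLHF, under the
# standard Bradley-Terry assumption. Rewards here stand in for a learned model.
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood that the labeller prefers `chosen` over `rejected`.

    RLHF typically fits a reward model r(x, y) by minimising this loss over human
    comparison data, then optimises the policy against r (usually with a KL
    penalty toward the pretrained model).
    """
    # P(chosen preferred) = sigmoid(r_chosen - r_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# The loss is small when the reward model ranks the preferred answer higher.
print(preference_loss(2.0, 0.5))   # ~0.20
print(preference_loss(0.5, 2.0))   # ~1.70
```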

    Operator-based approaches to harm minimisation in gambling: summary, review and future directions

    In this report we give critical consideration to the nature and effectiveness of harm minimisation in gambling. We identify gambling-related harm as both personal (e.g., health, wellbeing, relationships) and economic (e.g., financial) harm that occurs from exceeding one’s disposable income or disposable leisure time. We have elected to use the term ‘harm minimisation’ as the most appropriate term for reducing the impact of problem gambling, given its breadth in regard to both the range of goals it seeks to achieve and the range of means by which they may be achieved. The extent to which an employee can proactively identify a problem gambler in a gambling venue is uncertain. Research suggests that indicators do exist, such as sessional information (e.g., duration or frequency of play) and negative emotional responses to gambling losses. However, the practical implications of requiring employees to identify and interact with customers suspected of experiencing harm are questionable, particularly as employees may not possess the necessary clinical intervention skills. Based on emerging evidence, behavioural indicators identifiable in industry-held data could be used to identify customers experiencing harm. A programme of research is underway in Great Britain and in other jurisdictions.
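    As a rough illustration of how behavioural indicators in industry-held data might flag customers for human follow-up, consider the sketch below. Every threshold, field name, and weighting choice is invented for the example and is not drawn from the report.

```python
# Hypothetical indicator-based flagging from industry-held session data;
# thresholds and field names are invented, not taken from the report.
from dataclasses import dataclass

@dataclass
class Session:
    duration_minutes: float
    sessions_this_week: int
    net_loss: float
    distress_observed: bool   # e.g. negative emotional response to losses

def flag_for_review(s: Session) -> bool:
    """Flag a session for human follow-up, not automated intervention."""
    indicators = [
        s.duration_minutes > 180,     # unusually long play
        s.sessions_this_week > 10,    # high frequency
        s.net_loss > 500,             # spend beyond a plausible leisure budget
        s.distress_observed,
    ]
    return sum(indicators) >= 2       # require multiple converging indicators

print(flag_for_review(Session(240, 12, 80.0, False)))  # True: long + frequent play
```

    Requiring several converging indicators reflects the report's caution that single signals are weak evidence and that flagged customers still need skilled human assessment.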