
    Data as symptom: Doctors’ responses to patient-provided data in general practice

    People are increasingly able to generate their own health data through new technologies such as wearables and online symptom checkers. However, generating data is one thing; interpreting them is another. General practitioners (GPs) are likely to be the first to help with interpretation. Policymakers in the European Union are investing heavily in infrastructures that give GPs access to patient measurements, but there may be a disconnect between policy ambitions and the everyday practices of GPs. To investigate this, we conducted semi-structured interviews with 23 Danish GPs. According to the GPs, patients relatively rarely bring data to them. GPs mostly recall three types of patient-generated data that patients bring for interpretation: heart and sleep measurements from wearables and results from online symptom checkers. However, they also spoke extensively about data work prompted by patient queries concerning measurements from the GPs' own online Patient Reported Outcome system and online access to laboratory results. We juxtapose GP reflections on these five data types and contrast policy ambitions with everyday practices. These data require substantial recontextualization work before GPs ascribe them evidential value and act on them. Even when perceived as actionable, patient-provided data are not approached as measurements, as policy frameworks suggest. Rather, GPs treat them as analogous to symptoms, that is, as subjective evidence rather than authoritative measures. Drawing on Science and Technology Studies (STS) literature, we suggest that GPs must be part of the conversation with policymakers and digital entrepreneurs about when and how to integrate patient-generated data into healthcare infrastructures.

    Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier

    This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks that analyze images of skin lesions. The co-design of trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising from the future use of the AI system were investigated. This paper is a first report on co-designing in the early design phase. Our results can also serve as guidance for the early-phase development of other, similar AI tools.

    On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls

    Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC) recently published ethics guidelines for what it terms "trustworthy" AI. These guidelines are aimed at a variety of stakeholders, especially practitioners, and are meant to guide them toward more ethical and more robust applications of AI. In line with the efforts of the EC, AI ethics scholarship focuses increasingly on converting abstract principles into actionable recommendations. However, the interpretation, relevance, and implementation of trustworthy AI depend on the domain and the context in which the AI system is used. The main contribution of this paper is to demonstrate how to apply the general AI HLEG trustworthy AI guidelines in practice in the healthcare domain. To this end, we present a best practice of assessing the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls. The AI system under assessment is currently in use in the city of Copenhagen in Denmark. The assessment was carried out by an independent team composed of philosophers, policy makers, social scientists, and technical, legal, and medical experts. By leveraging an interdisciplinary team, we aim to expose the complex trade-offs and the necessity of such thorough human review when tackling socio-technical applications of AI in healthcare. For the assessment, we use a process for assessing trustworthy AI, called Z-Inspection®, to identify specific challenges and potential ethical trade-offs when we consider AI in practice.