    Solidarity and reciprocity during the COVID-19 pandemic: a longitudinal qualitative interview study from Germany

    Background: While solidarity practices were important in mitigating the Coronavirus Disease 2019 (COVID-19) pandemic, their limits became evident as the pandemic progressed. Taking a longitudinal approach, this study analyses German residents’ changing perceptions of solidarity practices during the COVID-19 pandemic and examines potential reasons for these changes. Methods: Adults living in Germany were interviewed in April 2020 (n = 46), October 2020 (n = 43) and October 2021 (n = 40) as part of the SolPan Research Commons, a large-scale, international, qualitative, longitudinal study uniquely situated in a major global public health crisis. Interviews were analysed using qualitative content analysis. Results: While solidarity practices were prominently discussed and positively evaluated in April 2020, this initial enthusiasm waned in October 2020 and October 2021. Yet, participants still perceived solidarity as important for managing the pandemic and called for institutionalized forms of solidarity in October 2020 and October 2021. Reasons for these changing perceptions of solidarity included (i) increasing personal and societal costs of acting in solidarity, (ii) COVID-19 policies that hindered solidarity practices, and (iii) a perceived lack of reciprocity, as participants felt that solidarity practices from the state did not match their individual efforts. Conclusions: Maintaining solidarity contributes to maximizing public health during a pandemic. Institutionalized forms of solidarity to support those most in need contribute to perceived reciprocity among individuals, which might increase their motivation to act in solidarity. Thus, rather than calling for individual solidarity during times of crisis, authorities should consider implementing sustainable, solidarity-based social support systems that go beyond immediate crisis management.

    Unmet Needs in Children With Attention Deficit Hyperactivity Disorder—Can Transcranial Direct Current Stimulation Fill the Gap? Promises and Ethical Challenges

    Attention deficit hyperactivity disorder (ADHD) is a disorder most frequently diagnosed in children and adolescents. Although ADHD can be effectively treated with psychostimulants, a significant proportion of patients discontinue treatment because of adverse events or insufficient improvement of symptoms. In addition, cognitive abilities that are frequently impaired in ADHD are not directly targeted by medication. Therefore, additional treatment options, especially to improve cognitive abilities, are needed. Because of its relatively easy application, well-established safety, and low cost, transcranial direct current stimulation (tDCS) is a promising additional treatment option. Further research is needed to establish efficacy and to integrate this treatment into the clinical routine. In particular, limited evidence regarding the use of tDCS in children, lack of clear translational guidelines, and general challenges in conducting research with vulnerable populations pose a number of practical and ethical challenges to tDCS intervention studies. In this paper, we identify and discuss ethical issues related to research on tDCS and its potential therapeutic use for ADHD in children and adolescents. Relevant ethical issues in tDCS research for pediatric ADHD center on safety, risk/benefit ratio, information and consent, labeling problems, and nonmedical use. Following an analysis of these issues, we developed a list of recommendations that can guide clinicians and researchers in conducting ethically sound research on tDCS in pediatric ADHD.

    Improving shared decision-making about cancer treatment through design-based data-driven decision-support tools and redesigning care paths: an overview of the 4D PICTURE project

    Background: Patients with cancer often have to make complex decisions about treatment, with the options varying in risk profiles and effects on survival and quality of life. Moreover, inefficient care paths make it hard for patients to participate in shared decision-making. Data-driven decision-support tools have the potential to empower patients, support personalized care, improve health outcomes and promote health equity. However, decision-support tools currently seldom consider quality of life or individual preferences, and their use in clinical practice remains limited, partly because they are not well integrated in patients' care paths. Aim and objectives: The central aim of the 4D PICTURE project is to redesign patients' care paths and develop and integrate evidence-based decision-support tools to improve decision-making processes in cancer care delivery. This article presents an overview of this international, interdisciplinary project. Design, methods and analysis: In co-creation with patients and other stakeholders, we will develop data-driven decision-support tools for patients with breast cancer, prostate cancer and melanoma. We will support treatment decisions by using large, high-quality datasets with state-of-the-art prognostic algorithms. We will further develop a conversation tool, the Metaphor Menu, using text mining combined with citizen science techniques and linguistics, incorporating large datasets of patient experiences, values and preferences. We will also further develop a promising methodology, MetroMapping, to redesign care paths. We will evaluate MetroMapping and these integrated decision-support tools, and ensure their sustainability using the Nonadoption, Abandonment, Scale-Up, Spread, and Sustainability (NASSS) framework. We will explore the generalizability of MetroMapping and the decision-support tools for other types of cancer and across other EU member states. Ethics: Through an embedded ethics approach, we will address social and ethical issues. Discussion: Improved care paths integrating comprehensive decision-support tools have the potential to empower patients, their significant others and healthcare providers in decision-making and improve outcomes. This project will strengthen health care at the system level by improving its resilience and efficiency.

    Improving the cancer patient journey and respecting personal preferences: an overview of the 4D PICTURE project. The 4D PICTURE project aims to help cancer patients, their families and healthcare providers better understand their options. It supports their treatment and care choices, at each stage of disease, by drawing on large amounts of evidence from different types of European data. The project involves experts from many different specialist areas who are based in nine European countries. The overall aim is to improve the cancer patient journey and ensure personal preferences are respected.
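
    To make the text-mining step concrete: the short Python sketch below shows the kind of frequency-based phrase extraction that could surface candidate metaphors from patient narratives for a tool like the Metaphor Menu. It is a loose illustration only; the sample data, the regex heuristic, and all names are hypothetical assumptions, and the actual 4D PICTURE pipeline (text mining combined with linguistics and citizen-science annotation) is not reproduced here.

    ```python
    # Minimal sketch of a metaphor-candidate extraction step.
    # All narratives and the heuristic below are hypothetical; real
    # pipelines would use linguistic annotation, not a single regex.
    import re
    from collections import Counter

    narratives = [
        "The chemo felt like a storm passing through my body.",
        "Treatment was a long road with no map.",
        "Every scan felt like rolling the dice again.",
    ]

    # Naive heuristic: "like a/the ..." and "was a/the ..." phrases
    # often signal similes and metaphors in patient accounts.
    pattern = re.compile(r"\b(?:like|was) (?:a|the) (\w+(?: \w+)?)",
                         re.IGNORECASE)

    candidates = Counter()
    for text in narratives:
        for match in pattern.findall(text):
            candidates[match.lower()] += 1

    # The most frequent candidates would then go to human annotators.
    for phrase, count in candidates.most_common():
        print(f"{phrase}: {count}")
    ```

    In practice, candidate phrases like these would be reviewed by citizen-science annotators rather than used directly in a patient-facing tool.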

    Democratic research: Setting up a research commons for a qualitative, comparative, longitudinal interview study during the COVID-19 pandemic

    The sudden and dramatic advent of the COVID-19 pandemic led to urgent demands for timely, relevant, yet rigorous research. This paper discusses the origin, design, and execution of the SolPan research commons, a large-scale, international, comparative, qualitative research project that sought to respond to the need for knowledge among researchers and policymakers in times of crisis. Its organization as a research commons is characterized by the solidaristic attitude of its members and by organizational features through which the study's research data and knowledge are shared and jointly owned. As such, the project is peer-governed, rooted in (idealist) social values of academia, and aims at providing tools and benefits for its members. In this paper, we discuss challenges and solutions for qualitative studies that seek to operate as research commons.

    FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

    Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices were defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account in proof-of-concept stages to facilitate future translation of medical AI towards clinical practice.
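
    As a rough illustration of how a development team might track the six guiding principles named above, the Python sketch below arranges them as a simple self-assessment checklist. The checklist fields and the example tool name are assumptions made for this sketch; the guideline itself defines 28 best practices that go well beyond such a structure.

    ```python
    # Illustrative sketch only: the six FUTURE-AI guiding principles
    # from the abstract, arranged as a self-assessment checklist.
    # The fields below are hypothetical, not part of the guideline.
    from dataclasses import dataclass, field

    PRINCIPLES = (
        "Fairness", "Universality", "Traceability",
        "Usability", "Robustness", "Explainability",
    )

    @dataclass
    class PrincipleCheck:
        principle: str
        addressed: bool = False
        notes: str = ""

    @dataclass
    class SelfAssessment:
        tool_name: str
        checks: list = field(
            default_factory=lambda: [PrincipleCheck(p) for p in PRINCIPLES]
        )

        def open_items(self):
            # Principles not yet addressed for this tool.
            return [c.principle for c in self.checks if not c.addressed]

    assessment = SelfAssessment("sepsis-risk-model")  # hypothetical tool
    assessment.checks[0].addressed = True  # e.g., subgroup audit done
    print(assessment.open_items())
    ```

    A checklist like this would only track coverage; the substantive work lies in the guideline's best practices for each principle across the AI lifecycle.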

    Blood Donation, Payment, and Non-Cash Incentives: Classical Questions Drawing Renewed Interest

    Blood is scarce, and ensuring a sufficient blood supply remains difficult for many countries. Payment for blood as a strategy to increase donations has remained highly controversial for decades, and the debate about the ethical issues of paying donors has become somewhat stuck. At least from a policy perspective, it is important to find a compromise that allows for devising and implementing acceptable and successful policies to increase the blood supply. In this paper, such a compromise is developed from both a theoretical and an empirical perspective: implementing well-designed non-cash incentives, which cut across the rigid dichotomy of altruistic donation versus payment for donation. For this compromise to work, more attention needs to be paid to donation motives, choice architecture, and the setting in which blood donation takes place.