Digital endpoints in clinical trials: emerging themes from a multi-stakeholder Knowledge Exchange event
Background: Digital technologies, such as wearable devices and smartphone applications (apps), can enable the decentralisation of clinical trials by measuring endpoints in people's chosen locations rather than in traditional clinical settings. Digital endpoints can allow high-frequency and sensitive measurements of health outcomes, in contrast to visit-based endpoints, which provide an episodic snapshot of a person's health. However, there are underexplored challenges in this emerging space that require interdisciplinary and cross-sector collaboration. A multi-stakeholder Knowledge Exchange event was organised to facilitate conversations across silos within this research ecosystem.
Methods: A survey was sent to an initial list of stakeholders to identify potential discussion topics. Additional stakeholders were identified through iterative discussions on perspectives that needed representation. Co-design meetings with attendees were held to discuss the scope, format and ethos of the event. The event itself featured a cross-disciplinary selection of talks, a panel discussion, small-group discussions facilitated via a rolling seating plan, and audience participation via Slido. A transcript was generated from the day, which, together with the output from Slido, provided a record of the day's discussions. Finally, meetings were held after the event to identify the key challenges for digital endpoints that emerged, as well as reflections and recommendations for dissemination.
Results: Several challenges for digital endpoints were identified in the following areas: patient adherence and acceptability; algorithms and software for devices; design, analysis and conduct of clinical trials with digital endpoints; the environmental impact of digital endpoints; and the need for ongoing ethical support. Learnings for next-generation events include the need to include additional stakeholder perspectives, such as those of funders and regulators, and the need for additional resources and facilitation to allow patient and public contributors to engage meaningfully during the event.
Conclusions: The event emphasised the importance of consortium building and highlighted the critical role that collaborative, multi-disciplinary, cross-sector efforts play in driving innovation in research design and strategic partnership building. This necessitates enhanced recognition by funders to support multi-stakeholder projects with patient involvement, standardised terminology, and the use of open-source software.
FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework is based on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline that provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at the proof-of-concept stage to facilitate the future translation of medical AI into clinical practice.
The interrelation of scientific, ethical, and translational challenges for precision medicine with multimodal biomarkers – A qualitative expert interview study in dermatology research
This qualitative study examines the impact of the scientific, ethical, and translational challenges of precision medicine for atopic dermatitis and psoriasis. The study explores how these challenges affect biomarker research for inflammatory skin diseases, as identified by stakeholders including patient board representatives, pharmaceutical industry partners, and postdoctoral and senior researchers from multiple disciplines in biomarker research. We recruited expert participants from within, and associated with, the international Biomarkers in Atopic Dermatitis and Psoriasis (BIOMAP) consortium to ensure representation of its different organizational units. The study followed the COREQ checklist. The interviews were conducted using GDPR-compliant online platforms, and the pseudonymized transcripts were analyzed using Atlas.ti. We analyzed the interviews with respect to participants' personal experiences, by topic, and by stakeholder group to identify the main themes presented in this article. The findings were presented to peers and to the wider BIOMAP audience and discussed, and a draft was circulated within the consortium for feedback. In this study, we identify and discuss the interrelation of challenges that are relevant to improving precision medicine with multimodal biomarkers. We show how scientific challenges can interrelate with ethical and translational issues, explain these interdependencies, and articulate the epistemic and social factors of interdisciplinary collaboration. Based on our findings, we suggest that including patient representatives' perspectives is crucial for highly interrelated and widely diverse research. The proposed integrative perspective is beneficial for all involved stakeholders. Effective communication of science requires reflection on the tension between scientific uncertainty and the goals of precision medicine. Furthermore, we show how changing the perception of the diseases atopic dermatitis and psoriasis can benefit patients beyond medical practice.
You Can’t Have AI Both Ways: Balancing Health Data Privacy and Access Fairly
Artificial intelligence (AI) in healthcare promises to make healthcare safer, more accurate, and more cost-effective. Public and private actors have been investing significant amounts of resources into the field. However, to benefit from data-intensive medicine, particularly from AI technologies, one must first and foremost have access to data. It has been previously argued that the conventionally used "consent or anonymize" approach undermines data-intensive medicine and, worse, may ultimately harm patients. Yet this is still a dominant approach in European countries and is framed as an either-or choice. In this paper, we contrast the different data governance approaches in the EU and their advantages and disadvantages in the context of healthcare AI. We detail the ethical trade-offs inherent to data-intensive medicine, particularly the balancing of data privacy and data access, and the subsequent prioritization between AI and other effective health interventions. If countries wish to allocate resources to AI, they also need to make corresponding efforts to improve (secure) data access. We conclude that it is unethical to invest significant amounts of public funds into AI development whilst at the same time limiting data access through strict privacy measures, as this constitutes a waste of public resources. The "AI revolution" in healthcare can only realise its full potential if a fair, inclusive engagement process spells out the values underlying (trans)national data governance policies and their impact on AI development, and priorities are set accordingly.
Ethical layering in AI-driven polygenic risk scores—New complexities, new challenges
Researchers aim to develop polygenic risk scores as a tool to prevent and more effectively treat serious diseases, disorders and conditions such as breast cancer, type 2 diabetes mellitus and coronary heart disease. Recently, machine learning techniques, in particular deep neural networks, have increasingly been developed to create polygenic risk scores using electronic health records as well as genomic and other health data. While the use of artificial intelligence for polygenic risk scores may enable greater accuracy, performance and prediction, it also presents a range of increasingly complex ethical challenges. The ethical and social issues of many polygenic risk score applications in medicine have been widely discussed. However, in the literature and in practice, the ethical implications of their confluence with the use of artificial intelligence have not yet been sufficiently considered. Based on a comprehensive review of the existing literature, we argue that this stands in need of urgent consideration for research and subsequent translation into the clinical setting. Considering the many ethical layers involved, we first give a brief overview of the development of artificial intelligence-driven polygenic risk scores, their associated ethical and social implications, and challenges in artificial intelligence ethics, and finally explore the potential complexities of polygenic risk scores driven by artificial intelligence. We point out emerging complexities regarding fairness, challenges in building trust, explaining and understanding artificial intelligence and polygenic risk scores, as well as regulatory uncertainties and further challenges. We strongly advocate taking a proactive approach to embedding ethics in the research and implementation processes for polygenic risk scores driven by artificial intelligence.
Funding: This manuscript benefited from funding from INTERVENE (INTERnational consortium for integratiVE geNomics prEdiction), a project that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 101016775. PM received funding from the Academy of Finland (Flagship programme: Finnish Center for Artificial Intelligence FCAI; grants 336033 and 352986).
Anxiety at the first radiotherapy session for non-metastatic breast cancer: Key communication and communication-related predictors.
BACKGROUND AND PURPOSE: Patients may experience clinically relevant anxiety at their first radiotherapy (RT) session. To date, studies have not investigated the key communication and communication-related predictors of this clinically relevant anxiety during and around the RT simulation.
MATERIAL AND METHODS: Breast cancer patients (n=227) completed visual analog scale (VAS) assessments of anxiety before and after their first RT session. Clinically relevant anxiety was defined as having pre- and post-first-RT-session VAS scores ≥ 4 cm. Communication during the RT simulation was assessed with content analysis software (LaComm), and communication-related variables around the RT simulation were assessed with questionnaires.
RESULTS: Clinically relevant anxiety at the first RT session was predicted by lower self-efficacy to communicate with the RT team (OR=0.65; p=0.020), the perception of lower support received from the RT team (OR=0.70; p=0.020), lower knowledge of RT-associated side effects (OR=0.95; p=0.057), and higher use of emotion-focused coping (OR=1.09; p=0.013).
CONCLUSIONS: This study provides RT team members with information about potential communication strategies that may be used to reduce patient anxiety at the first RT session.