Requirements and expectations of high-quality biomarkers for atopic dermatitis and psoriasis in 2021: a two-round Delphi survey among international experts
Background: Chronic inflammatory skin diseases such as atopic dermatitis (AD) and psoriasis (PSO) present major challenges in health care. Biomarkers that identify disease trajectories and predict response to treatment therefore warrant substantial research attention, yet the requirements such biomarkers must fulfil to serve as practical clinical tools have not been adequately investigated.
Aim: To identify the core elements of high-quality AD and PSO biomarkers and to prepare recommendations for current biomarker research.
Method: A cross-sectional two-round Delphi survey was conducted from August to October 2019 and from October to November 2020. All participants were members of the BIOMAP project, an EU-funded consortium of clinicians, researchers, patient organizations and pharmaceutical industry partners. The first round consisted of three open-ended questions; responses were analysed qualitatively, and 26 closed statements were developed from them. In the second round, 'agreement' was assumed when ≥ 70% of participants rated a statement ≥ 5 points on a 7-point Likert scale. Priority classification was based on mean scores (at or above the 60th percentile = high).
Results: Twenty-one and twenty-six individuals participated in rounds one and two, respectively. Of the 26 statements included in round two, 18 achieved agreement (8 concerning performance, 8 concerning purpose and 2 concerning current obstacles). Seven statements were classified as high priority, e.g. those concerning reliability, clinical validity, a high positive predictive value, and prediction of therapeutic response and disease progression. Another seven statements were assigned medium priority, e.g. those concerning analytical validity, prediction of comorbidities and the therapeutic algorithm. Four statements received low priority, e.g. those concerning cost effectiveness and prediction of disease flares.
Conclusion: The core requirements that experts agreed are essential for high-quality AD and PSO biomarkers now require rapid validation. Biomarkers can then be assessed against these prioritized requirements.
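For concreteness, the round-two decision rules can be expressed in a few lines of code. The sketch below is illustrative only (Python; classify_statements is a hypothetical helper, and the 30th-percentile cut separating medium from low priority is an assumption — the abstract specifies only the 60th-percentile cut for high priority). It applies the agreement and priority rules to a statements-by-participants matrix of Likert scores.

```python
import numpy as np

def classify_statements(scores, agree_share=0.70, agree_cutoff=5,
                        high_pct=60, low_pct=30):
    """Apply the round-2 Delphi rules to a (statements x participants)
    array of 7-point Likert scores.

    'Agreement' holds when >= 70% of participants rate a statement
    >= 5. Priority is assigned from each statement's mean score
    relative to percentiles of all statement means; the 30th-percentile
    cut for 'low' is an assumption, not stated in the abstract.
    """
    scores = np.asarray(scores)
    agreement = (scores >= agree_cutoff).mean(axis=1) >= agree_share
    means = scores.mean(axis=1)
    hi_cut = np.percentile(means, high_pct)
    lo_cut = np.percentile(means, low_pct)
    priority = np.where(means >= hi_cut, "high",
                        np.where(means >= lo_cut, "medium", "low"))
    return agreement, priority

# Toy example: 3 statements rated by 26 participants (random data).
rng = np.random.default_rng(0)
toy = rng.integers(1, 8, size=(3, 26))
agreement, priority = classify_statements(toy)
print(agreement, priority)
```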
Improving shared decision-making about cancer treatment through design-based data-driven decision-support tools and redesigning care paths: an overview of the 4D PICTURE project
Background: Patients with cancer often have to make complex decisions about treatment, with options that vary in risk profile and in their effects on survival and quality of life. Moreover, inefficient care paths make it hard for patients to participate in shared decision-making. Data-driven decision-support tools have the potential to empower patients, support personalized care, improve health outcomes and promote health equity. However, current decision-support tools seldom consider quality of life or individual preferences, and their use in clinical practice remains limited, partly because they are not well integrated into patients' care paths.
Aim and objectives: The central aim of the 4D PICTURE project is to redesign patients' care paths and to develop and integrate evidence-based decision-support tools that improve decision-making processes in cancer care delivery. This article presents an overview of this international, interdisciplinary project.
Design, methods and analysis: In co-creation with patients and other stakeholders, we will develop data-driven decision-support tools for patients with breast cancer, prostate cancer and melanoma. We will support treatment decisions by using large, high-quality datasets with state-of-the-art prognostic algorithms. We will further develop a conversation tool, the Metaphor Menu, using text mining combined with citizen-science techniques and linguistics, incorporating large datasets of patient experiences, values and preferences. We will also further develop a promising methodology, MetroMapping, to redesign care paths. We will evaluate MetroMapping and these integrated decision-support tools, and ensure their sustainability, using the Nonadoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework. We will explore the generalizability of MetroMapping and the decision-support tools to other types of cancer and across other EU member states.
Ethics: Through an embedded ethics approach, we will address social and ethical issues.
Discussion: Improved care paths that integrate comprehensive decision-support tools have the potential to empower patients, their significant others and healthcare providers in decision-making and to improve outcomes. The project will also strengthen health care at the system level by improving its resilience and efficiency.
Plain-language summary: The 4D PICTURE project aims to help cancer patients, their families and healthcare providers better understand their options. It supports their treatment and care choices, at each stage of disease, by drawing on large amounts of evidence from different types of European data. The project involves experts from many different specialist areas who are based in nine European countries. The overall aim is to improve the cancer patient journey and ensure personal preferences are respected.
Challenging the Moral Status of Blood Donation
The World Health Organisation encourages blood donation to be voluntary and unremunerated, a system already operated in the UK. Drawing on public documents and videos, this paper argues that blood donation is regarded and presented as altruistic and supererogatory. In advertisements, donation is presented as something undertaken for the benefit of others, attracting considerable gratitude from recipients and the collecting organisation. It is argued that regarding blood donation as an act of supererogation is wrongheaded, and an alternative account of blood donation as a moral obligation is presented. Two arguments are offered in support of this position. First, the principle of beneficence, understood within a broad consequentialist framework, obliges donation where the benefit to the recipient is large and the cost to the donor relatively small; this argument can be applied, with differing levels of normativity, to various acts of donation. Second, the wrongness of free riding requires individuals to contribute to collective systems from which they benefit. Alone and in combination, these arguments present moral reasons for donation, recognised in communication strategies elsewhere. Research is required to evaluate the potential effects on donation of a campaign that presents blood donation as a moral obligation. Of wider importance, however, is the recognition that other-regarding considerations in relation to our own as well as others' health give rise not only to a range of choices but also to obligations.
FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey and online consensus meetings. The FUTURE-AI framework rests on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline that provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at proof-of-concept stages to facilitate the future translation of medical AI towards clinical practice.
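The framework's shape (six named principles, each unpacked into best practices spanning the AI lifecycle) lends itself to a simple checklist representation. The sketch below is purely illustrative: the principle names come from the abstract, but the Checklist class, its mark/report methods and the example practice texts are hypothetical placeholders, not the consortium's actual 28 recommendations or any official FUTURE-AI tooling.

```python
from dataclasses import dataclass, field

# The six FUTURE-AI guiding principles, as named in the abstract.
PRINCIPLES = ["Fairness", "Universality", "Traceability",
              "Usability", "Robustness", "Explainability"]

@dataclass
class Checklist:
    """Minimal self-assessment: which practices under each principle
    a project has addressed. Practice texts are placeholders, not
    the guideline's actual best practices."""
    done: dict = field(default_factory=dict)

    def mark(self, principle: str, practice: str) -> None:
        # Record one addressed practice under a known principle.
        if principle not in PRINCIPLES:
            raise ValueError(f"unknown principle: {principle}")
        self.done.setdefault(principle, set()).add(practice)

    def report(self) -> None:
        # Print coverage per principle, in the framework's order.
        for p in PRINCIPLES:
            n = len(self.done.get(p, set()))
            print(f"{p:15s} {n} practice(s) addressed")

cl = Checklist()
cl.mark("Fairness", "evaluate performance across patient subgroups")
cl.mark("Traceability", "log model version and training-data provenance")
cl.report()
```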
Neuroscience and social problems: the case of neuropunishment
Neuroscientific interventions are increasingly proposed as solutions to social problems, beyond their application in biomedicine. For example, there is growing interest, particularly from outside commentators, in harnessing neuroscientific advances as an alternative method of punishing criminal offenders. Such neuropunishments are seen as a potentially more effective, less costly and more humane alternative to incarceration, with better overall results for offenders, communities and societies. This article considers whether neuroscience as a field should engage more actively with such proposals, and whether more research should be done to explore the use of neurointerventions for punishment. It concludes that neuroscientists, and those working at the intersection of neuroscience and the clinic, should actively shape these debates.