Which computable biomedical knowledge objects will be regulated? Results of a UK workshop discussing the regulation of knowledge libraries and software as a medical device
Introduction: Our aim was to understand when knowledge objects in a computable biomedical knowledge library are likely to be subject to regulation as a medical device in the United Kingdom. Methods: A briefing paper was circulated to a multi-disciplinary group of 25 participants, including regulators, lawyers and others with insights into device regulation. A one-day workshop was then convened to discuss questions relating to our aim, and a discussion paper was drafted by the lead authors and circulated to the other authors for comments and contributions. Results: This article reports on those deliberations and describes how UK device regulators are likely to treat the different kinds of knowledge objects that may be stored in computable biomedical knowledge libraries. While our focus is the likely approach of UK regulators, our analogies and analysis will also be relevant to the approaches taken by regulators elsewhere. We include a table examining the regulatory implications of each of the four knowledge levels described by Boxwala in 2011, and we propose an additional level. Conclusions: If a knowledge object is described as directly executable for a medical purpose, such as providing decision support, it will generally be in scope of UK regulation as “software as a medical device.” However, if the knowledge object consists of an algorithm, a ruleset, pseudocode or some other representation that is not directly executable, and its developers make no claim that it can be used for a medical purpose, it is unlikely to be subject to regulation. We expect regulators in other countries to apply similar reasoning.
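To make the executable/non-executable distinction concrete, consider a minimal illustrative sketch in Python (the rule, threshold and names are hypothetical, not taken from the paper). The same clinical logic appears first as a structured but non-executable ruleset, roughly Boxwala's Level 3, and then as a directly executable decision-support function, roughly Level 4, which is the form most likely to fall in scope as software as a medical device.

from typing import Optional

# Level 3-style knowledge object: a structured ruleset stored as data.
# Nothing here executes by itself, so on the workshop's reasoning it is
# unlikely to be in scope of device regulation.
SEPSIS_RULESET = {
    "condition": "heart_rate > 100 and temperature_c > 38.0",
    "recommendation": "consider sepsis screening",
}

# Level 4-style knowledge object: the same logic packaged as directly
# executable decision support. Offered for a medical purpose, this would
# generally qualify as "software as a medical device".
def sepsis_alert(heart_rate: float, temperature_c: float) -> Optional[str]:
    if heart_rate > 100 and temperature_c > 38.0:
        return "consider sepsis screening"
    return None

print(sepsis_alert(heart_rate=112, temperature_c=38.4))  # consider sepsis screening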
Publisher Correction: Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI (Nature Medicine, (2022), 28, 5, (924-933), 10.1038/s41591-022-01772-9)
In the version of this article initially published, a list of the DECIDE-AI expert group members and their affiliations was omitted and has now been included in the HTML and PDF versions of the article. *A list of authors and their affiliations appears online.
The Political Economy of European Wine Regulations
The EU wine market is heavily regulated. Despite the many distortions these regulations create in the wine market, reforming them has proven difficult. This paper analyses the political economy mechanisms that created the existing set of wine regulations. We document the historical origins of the regulations and relate them to the political pressures that resulted from international integration, technological innovation and economic development.
Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI
A growing number of artificial intelligence (AI)-based clinical decision support systems are showing promising performance in preclinical, in silico evaluation, but few have yet demonstrated real benefit to patient care. Early-stage clinical evaluation is important to assess an AI system’s actual clinical performance at small scale, ensure its safety, evaluate the human factors surrounding its use, and pave the way to further large-scale trials. However, the reporting of these early studies remains inadequate. The present statement provides a multi-stakeholder, consensus-based reporting guideline for the Developmental and Exploratory Clinical Investigations of DEcision support systems driven by Artificial Intelligence (DECIDE-AI). We conducted a two-round, modified Delphi process to collect and analyse expert opinion on the reporting of early clinical evaluation of AI systems. Experts were recruited from 20 pre-defined stakeholder categories, and the final composition and wording of the guideline were determined at a virtual consensus meeting. The checklist and the Explanation & Elaboration (E&E) sections were refined based on feedback from a qualitative evaluation process. In total, 123 experts participated in the first Delphi round, 138 in the second round, 16 in the consensus meeting and 16 in the qualitative evaluation. The DECIDE-AI reporting guideline comprises 17 AI-specific reporting items (made up of 28 subitems) and 10 generic reporting items, with an E&E paragraph provided for each. Through consultation and consensus with a range of stakeholders, we developed a guideline of key items that should be reported in early-stage clinical studies of AI-based decision support systems in healthcare. By providing an actionable checklist of minimal reporting items, the DECIDE-AI guideline will facilitate the appraisal of these studies and the replicability of their findings.
Revealing transparency gaps in publicly available Covid-19 datasets used for medical artificial intelligence development: a systematic review
Background: Throughout the Covid-19 pandemic, artificial intelligence (AI) models were developed in response to significant resource constraints affecting healthcare systems. Previous systematic reviews demonstrate that healthcare datasets often have significant limitations, contributing to bias in any AI health technologies they are used to develop. This systematic review aimed to characterise the composition and reporting of datasets created throughout the Covid-19 pandemic, and to highlight key deficiencies which could affect downstream AI models. Methods: A systematic search of MEDLINE identified articles describing datasets used for AI health technology development, and Google Dataset Search was used to identify additional datasets. Studies were screened for eligibility, and datasets collated for analysis. After deduplication and exclusion of datasets not related to Covid-19, or not containing data relating to individual humans, dataset documentation was assessed for completeness of metadata reporting, dataset composition, the means of data access and any restrictions, ethical considerations, and other factors. Findings: 192 datasets were analysed. Metadata were often incomplete or absent: only 48% of datasets’ documentation described the country where data originated, 43% reported the age of individuals included, and under 25% reported sex, gender, race, ethnicity or any other attributes. Most datasets provided no information on data labelling, ethical review, or consent for data sharing. Many datasets reproduced data from other datasets, sometimes without linking to the original source, and we found multiple cases where paediatric chest X-ray images from before the Covid-19 pandemic were reproduced in datasets without acknowledgement. Interpretation: This review highlights substantial deficiencies in the documentation of many Covid-19 datasets. It is imperative to balance data availability with data quality in future health emergencies, or we risk developing biased AI health technologies which do more harm than good. Funding: This review was funded by The NHS AI Lab and The Health Foundation, and supported by the National Institute for Health and Care Research (AI_HI200014).
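As a rough sketch of the kind of metadata completeness audit described above (the field names and scoring are hypothetical, not the review's actual assessment instrument):

# Hypothetical completeness audit over a dataset's documentation fields.
REQUIRED_FIELDS = [
    "country_of_origin", "age", "sex_or_gender", "race_or_ethnicity",
    "labelling_method", "ethical_review", "consent_for_sharing",
]

def metadata_completeness(documentation: dict) -> float:
    # Fraction of the required metadata fields that the documentation
    # actually reports (missing or empty values do not count).
    reported = sum(1 for field in REQUIRED_FIELDS
                   if documentation.get(field) not in (None, ""))
    return reported / len(REQUIRED_FIELDS)

example = {"country_of_origin": "UK", "age": "0-17", "sex_or_gender": None}
print(f"{metadata_completeness(example):.0%}")  # 29%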