Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: the STARD-AI protocol
INTRODUCTION: The Standards for Reporting of Diagnostic Accuracy (STARD) statement was developed to improve the completeness and transparency of reporting in studies investigating diagnostic test accuracy. However, in its current form, STARD 2015 does not address the issues and challenges raised by artificial intelligence (AI)-centred interventions. As such, we propose an AI-specific version of the STARD checklist (STARD-AI), which focuses on the reporting of AI diagnostic test accuracy studies. This paper describes the methods that will be used to develop STARD-AI. METHODS AND ANALYSIS: The development of the STARD-AI checklist can be distilled into six stages. (1) A project organisation phase has been undertaken, during which a Project Team and a Steering Committee were established. (2) An item generation process has been completed following a literature review, a patient and public involvement and engagement exercise, and an online scoping survey of international experts. (3) A three-round modified Delphi consensus methodology is under way, which will culminate in a teleconference consensus meeting of experts. (4) Thereafter, the Project Team will draft the initial STARD-AI checklist and the accompanying documents. (5) A piloting phase among expert users will be undertaken to identify items that are either unclear or missing; this process, consisting of surveys and semistructured interviews, will contribute towards the explanation and elaboration document. (6) On finalisation of the manuscripts, the group's efforts will turn towards an organised dissemination and implementation strategy to maximise end-user adoption. ETHICS AND DISSEMINATION: Ethical approval has been granted by the Joint Research Compliance Office at Imperial College London (reference number: 19IC5679). A dissemination strategy will be aimed towards five groups of stakeholders: (1) academia, (2) policy, (3) guidelines and regulation, (4) industry, and (5) public and non-specific stakeholders. We anticipate that dissemination will take place in Q3 of 2021.
STARD for Abstracts: Essential items for reporting diagnostic accuracy studies in journal or conference abstracts
Many abstracts of diagnostic accuracy studies are currently insufficiently informative. We extended the STARD (Standards for Reporting Diagnostic Accuracy) statement by developing a list of essential items that authors should consider when reporting diagnostic accuracy studies in journal or conference abstracts. After a literature review of published guidance for reporting biomedical studies, we identified 39 items potentially relevant to report in an abstract. We then selected essential items through a two-round web-based survey among the 85 members of the STARD Group, followed by discussions within an executive committee. Seventy-three STARD Group members responded (86%), with a 100% completion rate. STARD for Abstracts is a list of 11 quintessential items, to be reported in every abstract of a diagnostic accuracy study. We provide examples of complete reporting and have developed template text for writing informative abstracts.
Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Standards for Reporting of Diagnostic Accuracy
BACKGROUND: To comprehend the results of diagnostic accuracy studies, readers must understand the design, conduct, analysis, and results of such studies. That goal can be achieved only through complete transparency from authors. OBJECTIVE: To improve the accuracy and completeness of reporting of studies of diagnostic accuracy, allowing readers to assess the potential for bias in a study and to evaluate its generalisability. METHODS: The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a two-day consensus meeting, with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. RESULTS: The search for published guidelines on diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. The consensus meeting shortened the list to 25 items, using evidence on bias whenever available. A prototypical flow diagram provides information about the method of patient recruitment, the order of test execution, and the numbers of patients undergoing the test under evaluation, the reference standard, or both. CONCLUSIONS: Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of clinicians, researchers, reviewers, journals, and the public.
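For orientation only (this is not part of the STARD checklist itself): the core quantities a STARD-compliant report is organised around can be read off the 2x2 cross-tabulation of index-test results against the reference standard. A minimal Python sketch, in which the variable names and counts (tp, fp, fn, tn) are purely hypothetical example data, not from any study:

    # Hypothetical 2x2 counts: index test result vs. reference standard
    tp, fp, fn, tn = 90, 15, 10, 85  # illustrative numbers only

    sensitivity = tp / (tp + fn)  # true positive rate: 0.90
    specificity = tn / (tn + fp)  # true negative rate: 0.85
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value

    print(f"Sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, "
          f"PPV {ppv:.2f}, NPV {npv:.2f}")

Note that the predictive values, unlike sensitivity and specificity, depend on disease prevalence in the study sample, which is one reason the STARD flow diagram asks for the full patient counts rather than summary measures alone.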
Prospective validation study of transorbital Doppler ultrasound imaging for the detection of transient cerebral microemboli
STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies
Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies.
Generation of data on within-subject biological variation in laboratory medicine: An update
In recent decades, the study of biological variation of laboratory analytes has received increased attention. The reasons for this interest are related to the potential practical applications of such knowledge. Biological variation data allow the derivation of important parameters for the interpretation and use of laboratory tests, such as the index of individuality, used to evaluate the utility of population reference intervals for test interpretation; the estimate of significant change in a timed series of results from an individual; the number of specimens required to obtain an accurate estimate of the homeostatic set point of the analyte; and the analytical performance specifications that assays should fulfil for their application in the clinical setting. It is, therefore, essential to derive biological variation information experimentally in an accurate and reliable way. Currently, a dated guideline for the production of biological variation data and a more recent checklist to assist in the correct preparation of publications related to biological variation studies are available. Here, we update and integrate, with examples, the available guideline for biological variation data production, to help researchers comply with the recommendations of the checklist for drafting manuscripts on biological variation. In particular, we focus on the distribution of the data, an essential aspect to consider when deriving biological variation estimates. Indeed, the difficulty of deriving reliable estimates of biological variation for analytes whose measured concentrations are not normally distributed is increasingly evident.
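As a worked illustration of the parameters this abstract lists, a minimal Python sketch using the conventional formulas from the biological variation literature; the coefficients of variation below (cv_a, cv_i, cv_g) are hypothetical values, and the formulas shown are the textbook ones, not necessarily those of the updated guideline described here:

    import math

    # Hypothetical coefficients of variation (%), not from any study:
    cv_a = 2.0   # analytical CV
    cv_i = 5.0   # within-subject biological CV
    cv_g = 15.0  # between-subject biological CV

    # Index of individuality: low values (below roughly 0.6) suggest that
    # population reference intervals are of limited value for the analyte.
    ii = cv_i / cv_g

    # Reference change value (95%, two-sided): the smallest difference
    # between two serial results in one individual that is unlikely to be
    # explained by analytical plus within-subject variation alone.
    z = 1.96
    rcv = math.sqrt(2) * z * math.sqrt(cv_a**2 + cv_i**2)

    # Number of specimens needed to estimate the homeostatic set point
    # within +/- d percent with 95% confidence.
    d = 5.0
    n = (z * math.sqrt(cv_a**2 + cv_i**2) / d) ** 2

    print(f"II = {ii:.2f}, RCV = {rcv:.1f}%, n = {math.ceil(n)} specimens")

With these example inputs the sketch gives II = 0.33, an RCV of about 14.9%, and 5 specimens; all three quantities depend on the distributional assumptions the abstract highlights, since the formulas above presume approximately normally distributed results.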
