
    Artificial intelligence in digital pathology: a diagnostic test accuracy systematic review and meta-analysis

    Ensuring the diagnostic performance of AI models before clinical use is key to the safe and successful adoption of these technologies. Studies reporting AI applied to digital pathology images for diagnostic purposes have increased rapidly in number in recent years. The aim of this work is to provide an overview of the diagnostic accuracy of AI on digital pathology images across all areas of pathology. This systematic review and meta-analysis included diagnostic accuracy studies using any type of artificial intelligence applied to whole slide images (WSIs) in any disease type, with diagnosis through histopathological assessment and/or immunohistochemistry as the reference standard. Searches were conducted in PubMed, EMBASE and CENTRAL in June 2022. Risk of bias and concerns of applicability were assessed using the QUADAS-2 tool; data extraction was conducted by two investigators and meta-analysis was performed using a bivariate random-effects model. Of 2976 studies identified, 100 were included in the review, equating to over 152,000 WSIs and representing many disease types, and 48 were included in the full meta-analysis. These 48 studies reported a mean sensitivity of 96.3% (CI 94.1-97.7) and mean specificity of 93.3% (CI 90.5-95.4) for AI. There was substantial heterogeneity in study design, and all 100 included studies had at least one area at high or unclear risk of bias. This review provides a broad overview of AI performance across applications in whole slide imaging. However, there is huge variability in study design and available performance data, with details of study conduct and dataset composition frequently missing. Overall, AI offers good accuracy when applied to WSIs but requires more rigorous evaluation of its performance.
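    The pooled sensitivity and specificity above come from a bivariate random-effects model. As a rough illustration of the underlying idea (not the authors' actual analysis), the sketch below pools logit-transformed sensitivities from hypothetical per-study counts using a simpler univariate DerSimonian-Laird random-effects model; a full bivariate model (e.g., a Reitsma-type model) would additionally estimate the between-study correlation of sensitivity and specificity. All study counts in the example are invented.

```python
import numpy as np

def pool_logit(successes, totals):
    """DerSimonian-Laird random-effects pooling on the logit scale.

    Illustrative simplification: the review's bivariate model jointly
    models sensitivity and specificity; here we pool one of them.
    """
    # Continuity-corrected proportions and within-study variances
    p = (successes + 0.5) / (totals + 1.0)
    y = np.log(p / (1 - p))  # logit-transformed study estimates
    v = 1.0 / (successes + 0.5) + 1.0 / (totals - successes + 0.5)

    # Fixed-effect pooled estimate, then DerSimonian-Laird tau^2
    w = 1.0 / v
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)

    # Random-effects pooled estimate, back-transformed to a proportion
    w_re = 1.0 / (v + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    return expit(y_re), (expit(y_re - 1.96 * se), expit(y_re + 1.96 * se))

# Hypothetical per-study counts: true positives out of diseased cases
tp = np.array([90, 45, 190])
diseased = np.array([95, 50, 200])
sens, ci = pool_logit(tp, diseased)
print(f"pooled sensitivity {sens:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}")
```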

    Evaluation of Polycyclic Aromatic Hydrocarbon Pollution From the HMS Royal Oak Shipwreck and Effects on Sediment Microbial Community Structure

    Despite many shipwrecks containing oil, there is a paucity of studies investigating their impact on the surrounding environment. This study evaluates the potential effect of the World War II shipwreck HMS Royal Oak on surrounding benthic sediments in Scapa Flow, Scotland. HMS (Her Majesty's Ship) Royal Oak sank in 1939, subsequently leaked oil in the 1960s and 1990s, and is estimated to still hold 697 tonnes of fuel oil. In this study, sediments were analysed over a 17.5 cm depth profile along a 50–950 m cruciform transect away from the shipwreck. Analysis of polycyclic aromatic hydrocarbons (PAHs) revealed low concentrations (205.91 ± 50.15 μg kg⁻¹ of dry sediment), which did not differ significantly with either distance from the shipwreck or sediment depth. PAH concentrations were well below the effects-range low (ERL) for the OSPAR (Oslo/Paris Convention for the Protection of the Marine Environment of the North-East Atlantic) maritime area. The average Pyrogenic Index in sediments around HMS Royal Oak was 1.06 (±0.34), indicating the PAHs were pyrogenic rather than petrogenic in origin. Moreover, analysis of sediment microbiomes revealed no significant differences in bacterial community structure with distance from the shipwreck, with extremely low levels of obligate hydrocarbonoclastic bacteria (OHCB; 0.21% ± 0.54%). Both lines of evidence suggest that the sampled sediments are not currently being impacted by petrogenic hydrocarbons and show no long-term impact from previous oil spills from HMS Royal Oak.
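    The Pyrogenic Index cited above is a concentration ratio. In one common formulation (after Wang et al.), it is the sum of the unsubstituted (parent) 3-6-ring EPA priority PAHs divided by the sum of the five alkylated PAH homologue series, with values around 0.8 or above read as pyrogenic (combustion-derived) and much lower values as petrogenic (oil-derived). The sketch below computes it from hypothetical concentrations; the analyte lists and values are illustrative assumptions, not data from this study.

```python
# Hypothetical PAH concentrations in ug/kg dry sediment (illustrative
# values, not measurements from the HMS Royal Oak study).
parent_pahs = {            # unsubstituted 3-6-ring EPA priority PAHs (subset)
    "fluoranthene": 18.2,
    "pyrene": 16.9,
    "benzo[a]pyrene": 7.4,
    "benzo[ghi]perylene": 5.1,
}
alkylated_series = {       # five alkylated homologue series (C0-C4 sums)
    "naphthalenes": 12.3,
    "phenanthrenes": 14.8,
    "dibenzothiophenes": 3.2,
    "fluorenes": 6.6,
    "chrysenes": 8.0,
}

def pyrogenic_index(parents, alkylated):
    """Ratio of parent 3-6-ring PAHs to alkylated PAH series.

    Petrogenic assemblages are dominated by alkylated homologues and
    give low values; pyrogenic assemblages give values near or above ~0.8.
    """
    return sum(parents.values()) / sum(alkylated.values())

pi = pyrogenic_index(parent_pahs, alkylated_series)
source = "pyrogenic" if pi > 0.8 else "petrogenic/mixed"
print(f"Pyrogenic Index = {pi:.2f} -> likely {source}")
```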

    Guidelines for clinical trials using artificial intelligence - SPIRIT-AI and CONSORT-AI

    The rapidly growing use of artificial intelligence in pathology presents a challenge in terms of study reporting and methodology. The existing guidelines for the design (SPIRIT) and reporting (CONSORT) of clinical trials have been extended with the aim of ensuring production of the highest quality evidence in this field. We explore these new guidelines and their relevance and application to pathology as a specialty.

    Reporting of Artificial Intelligence Diagnostic Accuracy Studies in Pathology Abstracts: Compliance with STARD for Abstracts Guidelines

    Artificial intelligence (AI) research is transforming the range of tools and technologies available to pathologists, leading to potentially faster, personalized and more accurate diagnoses for patients. However, to see these tools used for patient benefit, and to achieve this safely, the implementation of any algorithm must be underpinned by high-quality evidence from research that is understandable, replicable, usable and inclusive of the details needed for critical appraisal of potential bias. Evidence suggests that reporting guidelines can improve the completeness of reporting of research, especially where awareness of the guidelines is good. The quality of evidence provided by abstracts alone is profoundly important, as abstracts influence the decision of a researcher to read a paper, attend a conference presentation or include a study in a systematic review. AI abstracts at two international pathology conferences were assessed to establish completeness of reporting against the STARD for Abstracts criteria. This reporting guideline is for abstracts of diagnostic accuracy studies and comprises a checklist of 11 essential items required for satisfactory reporting of such an investigation. A total of 3488 abstracts were screened from the United States & Canadian Academy of Pathology annual meeting 2019 and the 31st European Congress of Pathology (ESP Congress). Of these, 51 AI diagnostic accuracy abstracts were identified and assessed against the STARD for Abstracts criteria for completeness of reporting. Completeness of reporting was suboptimal across the 11 essential criteria: a mean of 5.8 (SD 1.5) items was detailed per abstract. Inclusion varied across the different checklist items, with all abstracts including study objectives and no abstracts including a registration number or registry. Greater use and awareness of the STARD for Abstracts criteria could improve completeness of reporting, and further consideration is needed for areas where AI studies are vulnerable to bias.
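    As an illustration of this kind of compliance audit (the item labels and extraction data below are hypothetical paraphrases, not the study's actual extraction sheet), each abstract can be encoded as a set of booleans against the 11 STARD for Abstracts items, from which per-abstract totals and per-item inclusion rates follow directly:

```python
from statistics import mean, stdev

# Hypothetical shorthand for the 11 STARD for Abstracts items.
ITEMS = [
    "objectives", "design", "participants", "setting", "index_test",
    "reference_standard", "sample_size", "accuracy_estimates",
    "precision", "conclusions", "registration",
]

# Hypothetical extraction: which items each screened abstract reported.
abstracts = [
    {item: item in {"objectives", "design", "index_test",
                    "accuracy_estimates", "conclusions"} for item in ITEMS},
    {item: item in {"objectives", "participants", "index_test",
                    "reference_standard", "sample_size",
                    "accuracy_estimates"} for item in ITEMS},
]

# Per-abstract completeness, summarized as in the study's mean (SD) figure
totals = [sum(a.values()) for a in abstracts]
print(f"items per abstract: mean {mean(totals):.1f} (SD {stdev(totals):.1f})")

# Per-item inclusion rate across abstracts
for item in ITEMS:
    rate = sum(a[item] for a in abstracts) / len(abstracts)
    print(f"{item:20s} reported in {rate:4.0%} of abstracts")
```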

    Survey of liver pathologists to assess attitudes towards digital pathology and artificial intelligence

    Aims: A survey of members of the UK Liver Pathology Group (UKLPG) was conducted, comprising consultant histopathologists from across the UK who report liver specimens and participate in the UK National Liver Pathology External Quality Assurance scheme. The aim of this study was to understand the attitudes and priorities of liver pathologists towards digital pathology and artificial intelligence (AI). Methods: The survey was distributed to all full consultant members of the UKLPG via email. It comprised 50 questions, with 48 multiple-choice questions and 2 free-text questions at the end, covering a range of topics and concepts pertaining to the use of digital pathology and AI in liver disease. Results: Forty-two consultant histopathologists completed the survey, representing 36% of fully registered members of the UKLPG (42/116). Questions examining digital pathology showed that respondents agreed with the utility of digital pathology for primary diagnosis (83%; 34/41), second opinions (90%; 37/41), research (85%; 35/41) and training and education (95%; 39/41). Fatty liver diseases were an area of demand for AI tools, with 80% in agreement (33/41), followed by neoplastic liver diseases, with 59% in agreement (24/41). Participants were concerned about AI development without pathologist involvement (73%; 30/41); however, 63% (26/41) disagreed when asked whether AI would replace pathologists. Conclusions: This study outlines current interest, priorities for research and concerns around digital pathology and AI for liver pathologists. The majority of UK liver pathologists are in favour of the application of digital pathology and AI in clinical practice, research and education.

    Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI

    A growing number of artificial intelligence (AI)-based clinical decision support systems are showing promising performance in preclinical, in silico evaluation, but few have yet demonstrated real benefit to patient care. Early-stage clinical evaluation is important to assess an AI system's actual clinical performance at small scale, ensure its safety, evaluate the human factors surrounding its use and pave the way to further large-scale trials. However, the reporting of these early studies remains inadequate. The present statement provides a multi-stakeholder, consensus-based reporting guideline for the Developmental and Exploratory Clinical Investigations of DEcision support systems driven by Artificial Intelligence (DECIDE-AI). We conducted a two-round, modified Delphi process to collect and analyze expert opinion on the reporting of early clinical evaluation of AI systems. Experts were recruited from 20 pre-defined stakeholder categories. The final composition and wording of the guideline were determined at a virtual consensus meeting, and the checklist and the Explanation & Elaboration (E&E) sections were refined based on feedback from a qualitative evaluation process. In total, 123 experts participated in the first round of the Delphi process, 138 in the second round, 16 in the consensus meeting and 16 in the qualitative evaluation. The DECIDE-AI reporting guideline comprises 17 AI-specific reporting items (made up of 28 subitems) and ten generic reporting items, with an E&E paragraph provided for each. Through consultation and consensus with a range of stakeholders, we developed a guideline comprising the key items that should be reported in early-stage clinical studies of AI-based decision support systems in healthcare. By providing an actionable checklist of minimal reporting items, the DECIDE-AI guideline will facilitate the appraisal of these studies and the replicability of their findings.
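    The abstract does not state the consensus rules used, but a modified Delphi round typically has each expert rate every candidate item (often on a 1-9 scale) and retains items whose ratings clear a pre-agreed agreement threshold. The sketch below illustrates that general pattern with hypothetical ratings, hypothetical item names and a hypothetical threshold of 80% of ratings in the 7-9 range; none of these reflect the actual DECIDE-AI process parameters.

```python
# Hypothetical Delphi round tally: each candidate reporting item receives
# 1-9 ratings from the expert panel; an item reaches consensus for
# inclusion if >= 80% of ratings fall in the 7-9 ("critical") range.
# Ratings, item names and threshold are illustrative assumptions.
ratings = {
    "intended_use":        [8, 9, 7, 8, 9, 6, 8, 9, 7, 8],
    "human_factors":       [7, 8, 9, 9, 8, 8, 7, 9, 8, 7],
    "training_data_shift": [5, 6, 7, 4, 8, 6, 5, 7, 6, 5],
}

THRESHOLD = 0.80  # hypothetical pre-agreed consensus threshold

for item, scores in ratings.items():
    agreement = sum(7 <= s <= 9 for s in scores) / len(scores)
    verdict = "retain" if agreement >= THRESHOLD else "revise / re-rate"
    print(f"{item:20s} agreement {agreement:4.0%} -> {verdict}")
```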