
    Efficacy, utility, and validity in Computed Tomography head reporting by radiographers

    Introduction: Demand for Computed Tomography (CT) head imaging has increased exponentially within the National Health Service (NHS), coinciding with a limited consultant radiologist workforce and resulting in time-critical CT reporting delays for patients. The safety and effectiveness of the NHS improvement initiative to increase reporting capacity with radiographers is not yet established. Aim: To establish the diagnostic accuracy (efficacy) of trained radiographers reporting CT head examinations; their role in the patient pathway (clinical utility); the beneficial outcomes of radiographers’ reports (validity); and an economic assessment of the role. Methods: A literature review using validated critique frameworks assessing methodological quality (QUADAS-2, CASP, CHEERS) and reporting (STARD, StaRI) of studies of radiographers reporting CT head examinations established the ‘knowledge gap’ in the evidence and the requirement for research rigour. A further literature review identified an efficacy framework to structure the pragmatic mixed-method research strategy. Seven studies assessed diagnostic accuracy, radiographers’ roles within the NHS, and economic evaluation against the same frameworks to demonstrate research rigour. Results: Radiographers trained to report CT head scans demonstrated an efficacy level (AUC 0.98) equivalent to that of consultant radiologists. Radiographers communicated actionable reports and advice to multidisciplinary teams, aiding clinicians’ decisions on medical interventions and surgical referrals and evidencing clinical utility. Cross-sectional surveys demonstrated that radiographers’ scope of practice spanned all patient groups and all referral pathways: trauma, health screening, disease diagnosis, staging, and treatment monitoring. The role was cost-effective (up to £328,865 per annum, per radiographer) and contributed a cost-benefit, attesting to the validity of the role within the patient pathway and healthcare system. 
Conclusion: These novel findings evidence that trained CT head reporting radiographers’ efficacy is equivalent to that of radiologists, with beneficial implications for service design and delivery: the workforce can be expanded safely to potentially reduce reporting delays. An emerging theme from the findings underscores the need for robust study design to generate translational evidence for clinical practice.
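The efficacy measure quoted above (AUC 0.98) is the area under the ROC curve: the probability that a randomly chosen abnormal scan receives a higher confidence score than a randomly chosen normal one. A minimal sketch of that calculation, using hypothetical labels and scores (not the study's data):

```python
# Illustrative sketch only: AUC as the diagnostic-accuracy measure the
# abstract reports. The labels and confidence scores below are hypothetical.

def auc(labels, scores):
    """Fraction of abnormal/normal pairs in which the abnormal case
    received the higher confidence score (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = abnormality present on the reference-standard read, 0 = normal
labels = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
# a reporting radiographer's confidence that each scan is abnormal
scores = [0.95, 0.90, 0.80, 0.30, 0.10, 0.85, 0.78, 0.20, 0.75, 0.05]

print(auc(labels, scores))  # 0.96: 24 of the 25 abnormal/normal pairs ranked correctly
```

An AUC of 1.0 would mean every abnormal case was ranked above every normal case; the 0.98 reported for trained radiographers indicates near-perfect discrimination on the study's reference standard.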

    Pathologists' first opinions on barriers and facilitators of computational pathology adoption in oncological pathology: an international study.

    Computational pathology (CPath) algorithms detect, segment or classify cancer in whole slide images, approaching or even exceeding the accuracy of pathologists. Several challenges must be overcome before these algorithms can be used in practice. We therefore aimed to explore international perspectives on the future role of CPath in oncological pathology, focusing on opinions and first experiences regarding barriers and facilitators. We conducted an international explorative eSurvey and semi-structured interviews with pathologists, utilizing an implementation framework to classify potential influencing factors. The eSurvey results showed remarkable variation in opinions regarding the attitude towards, understandability of, and validation of CPath. The interview results showed that barriers centred on the quality of the available evidence, while most facilitators concerned strengths of CPath. A lack of consensus was present for multiple factors, such as what constitutes sufficient validation of CPath, the preferred function of CPath within the digital workflow, and the timing of CPath's introduction in pathology education. The diversity of opinions illustrates the variety of factors influencing CPath adoption. A next step would be to quantitatively determine the factors important for adoption and to initiate validation studies. Both should include clear case descriptions and be conducted among a more homogeneous panel of pathologists based on subspecialization.

    An evaluation of a checklist in Musculoskeletal (MSK) radiographic image interpretation when using Artificial Intelligence (AI)

    Background: AI is being used increasingly in image interpretation tasks, but there are challenges to its optimal use in reporting environments. Human reliance on technology and bias can cause decision errors. Trust issues exist amongst radiologists and radiographers, both as over-reliance on AI (automation bias) and as reluctance to use AI for decision support. A checklist, used alongside the AI to mitigate such biases, may optimise the use of AI technologies and promote good decision hygiene. Method: A checklist to be used in image interpretation with AI assistance was developed. Participants interpreted 20 examinations with AI assistance and then re-interpreted the 20 examinations with AI and the checklist. The MSK images were presented to radiographers as patient examinations to replicate the image interpretation task in clinical practice. The diagnosis and the confidence level in that diagnosis were collected following each interpretation. Participants' perceptions of the checklist were investigated via a questionnaire. Results: Data collection and analysis are underway and will be completed at the European Congress of Radiology in Vienna, March 2023. The impact of using a checklist in image interpretation with AI will be evaluated: changes in accuracy and confidence will be investigated and the results presented, and participant feedback will be analysed to determine perceptions and the impact of the checklist. Conclusion: A novel checklist has been developed to aid the interpretation of images when using AI. The checklist has been tested for its use in assisting radiographers in MSK image interpretation when using AI.

    The impact of AI on radiographic image reporting – perspectives of the UK reporting radiographer population

    Background: It is predicted that medical imaging services will be greatly impacted by AI in the future. Developments in computer vision have allowed AI to be used for assisted reporting. Studies have investigated radiologists' opinions of AI for image interpretation (Huisman et al., 2019 a/b), but there remains a paucity of information on reporting radiographers' opinions on this topic. Method: A survey was developed by AI expert radiographers and promoted via LinkedIn/Twitter and professional networks for radiographers from all specialities in the UK. A sub-analysis was performed for reporting radiographers only. Results: 411 responses to the full survey were gathered (Rainey et al., 2021), with 86 responses from reporting radiographers included in the data analysis. 10.5% of respondents were using AI tools as part of their reporting role. 59.3% and 57% would not be confident in explaining an AI decision to other healthcare practitioners and to patients and carers, respectively. 57% felt that an affirmation from AI would increase confidence in their diagnosis. Only 3.5% would not seek a second opinion following disagreement from AI. A moderate level of trust in AI was reported: mean score = 5.28 (0 = no trust; 10 = absolute trust). 'Overall performance/accuracy of the system', 'visual explanation (heatmap/ROI)', and 'indication of the confidence of the system in its diagnosis' were suggested as measures to increase trust. Conclusion: AI may impact reporting professionals' confidence in their diagnoses. Respondents are not confident in explaining an AI decision to key stakeholders, and UK radiographers do not yet fully trust AI. Improvements are suggested.

    An evaluation of a training tool and study day in chest image interpretation

    Background: Using expert consensus, the research team developed a digital tool which proved useful when teaching radiographers how to interpret chest images. The training tool included A) a search strategy training tool and B) an educational tool to communicate the search strategies using eye tracking technology. This training tool has the potential to improve interpretation skills for other healthcare professionals. Methods: To investigate this, 31 healthcare professionals (nurses and physiotherapists) were recruited and randomised either to receive access to the training tool (intervention group) or not (control group) for a period of 4-6 weeks. Participants were asked to interpret different sets of 20 chest images before and after the intervention period. A study day was then provided to all participants, following which they were again asked to interpret a different set of 20 chest images (n=1860 interpretations). Each participant was asked to complete a questionnaire on their perceptions of the training provided. Results: Data analysis is in progress. 50% of participants did not have experience in image interpretation prior to the study. The study day and training tool were useful in improving image interpretation skills, although participants' perceptions of the tool's usefulness in aiding image interpretation varied. Conclusion: This training tool has the potential to improve patient diagnosis and reduce healthcare costs.

    Translation of quantitative MRI analysis tools for clinical neuroradiology application

    Quantification of imaging features can assist radiologists by reducing subjectivity, aiding detection of subtle pathology, and increasing reporting consistency. Translation of quantitative image analysis techniques to clinical use is currently uncommon and challenging. This thesis explores translation of quantitative imaging support tools for clinical neuroradiology use. I have proposed a translational framework for development of quantitative imaging tools, using dementia as an exemplar application. This framework emphasises the importance of clinical validation, which is not currently prioritised. Aspects of the framework were then applied to four disease areas: hippocampal sclerosis (HS) as a cause of epilepsy; dementia; multiple sclerosis (MS) and gliomas. A clinical validation study for an HS quantitative report showed that when image interpreters used the report, they were more accurate and confident in their assessments, particularly for challenging bilateral cases. A similar clinical validation study for a dementia reporting tool found improved sensitivity for all image interpreters and increased assessment accuracy for consultant radiologists. These studies indicated benefits from quantitative reports that contextualise a patient’s results with appropriate normative reference data. For MS, I addressed a technical translational challenge by applying lesion and brain quantification tools to standard clinical image acquisitions which do not include a conventional T1-weighted sequence. Results were consistent with those from conventional sequence inputs and therefore I pursued this concept to establish a clinically applicable normative reference dataset for development of a quantitative reporting tool for clinical use. I focused on current radiology reporting of gliomas to establish which features are commonly missed and may be important for clinical management decisions. 
This informs both the potential utility of a quantitative report for gliomas and its design and content. I have identified numerous translational challenges for quantitative reporting and explored how to address these for several applications across clinical neuroradiology.

    An investigation into error detection and recovery in UK National Health Service screening programmes

    The purpose of this thesis is to gain an understanding of the problems that may impede the detection and recovery of NHS laboratory screening errors. This is done by developing an accident analysis technique that isolates and further analyzes error handling activities, and applying it in four case studies: four recent incidents in which laboratory errors in NHS screening programmes resulted in multiple misdiagnoses over months or even years. These errors produced false yet plausible test results, and were thus masked and almost impossible to detect in isolated cases. The technique is based on a theoretical framework that draws upon cognitive science and systems engineering to explore the impact of plausibility on the entire process of error recovery. The four analyses are then integrated and compared to produce a set of conclusions and recommendations. The main output of this work is the “Screening Error Recovery Model”, a model which captures and illustrates the different kinds of activities that took place during the organizational incident response to these four incidents. The model can be used to analyze and design error recovery procedures in complex, inter-organizational settings such as the NHS and its primary/secondary care structure.