70 research outputs found

    International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) - the propagation of knowledge in ultrasound for the improvement of OB/GYN care worldwide: experience of basic ultrasound training in Oman.

    BACKGROUND: The aim of this study was to evaluate the effectiveness of a new ISUOG (International Society of Ultrasound in Obstetrics and Gynecology) Outreach Teaching and Training Program delivered in Muscat, Oman. METHODS: Quantitative assessments of knowledge and practical skills were administered before and after an ultrasound course for sonologists attending the ISUOG Outreach Course, which took place in November 2017 in Oman. Trainees were selected from each region of the country following a national vetting process conducted by the Oman Ministry of Health. Twenty-eight participants were included in the analysis. Pre- and post-training practical and theoretical scores were evaluated and compared. RESULTS: Participants achieved statistically significant improvements, on average by 47% (p < 0.001), in both theoretical knowledge and practical skills. Specifically, the mean score in the theoretical knowledge test increased significantly from 55.6% (± 14.0%) to 81.6% (± 8.2%), while in the practical test the mean score increased from 44.6% (± 19.5%) to 65.7% (± 23.0%) (p < 0.001). Performance improved post-course in 27/28 participants (96.4%) in the theoretical test (range: 14 to 200%) and in 24/28 trainees (85.7%) in the practical skills test (range: 5 to 217%). CONCLUSION: Application of the ISUOG Basic Training Curriculum and Outreach Teaching and Training Course improved the theoretical knowledge and practical skills of local health personnel. Long-term re-evaluation is, however, considered imperative to ascertain and ensure knowledge retention.
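    The reported average improvement of 47% is the relative gain of the post-course mean over the pre-course mean, which can be reproduced from the means in the abstract. A minimal sketch, using only figures stated above:

```python
# Relative pre-to-post improvement, using the mean scores reported in the
# abstract (theoretical: 55.6% -> 81.6%; practical: 44.6% -> 65.7%).

def relative_gain(pre: float, post: float) -> float:
    """Percent improvement of the post-course mean over the pre-course mean."""
    return (post - pre) / pre * 100

theory = relative_gain(55.6, 81.6)     # ~46.8%
practical = relative_gain(44.6, 65.7)  # ~47.3%
print(f"theoretical: +{theory:.1f}%, practical: +{practical:.1f}%")
```

    Both gains land close to 47%, consistent with the "on average by 47%" figure.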

    Simulation-based assessment of upper abdominal ultrasound skills

    Background: Ultrasound is a safe and effective diagnostic tool used within several specialties. However, the quality of ultrasound scans relies on sufficiently skilled clinician operators. The aim of this study was to explore the validity of automated assessments of upper abdominal ultrasound skills using an ultrasound simulator. Methods: Twenty-five novices and five experts were recruited, all of whom completed an assessment program for the evaluation of upper abdominal ultrasound skills on a virtual reality simulator. The program included five modules that assessed different organ systems using automated simulator metrics. We used Messick's framework to explore the validity evidence of these simulator metrics to determine the contents of a final simulator test. We used the contrasting groups method to establish a pass/fail level for the final simulator test. Results: Thirty-seven of the 60 metrics were able to discriminate between novices and experts (p < 0.05). The median simulator score of the final simulator test including the metrics with validity evidence was 26.68% (range: 8.1–40.5%) for novices and 85.1% (range: 56.8–91.9%) for experts. The internal structure was assessed by Cronbach's alpha (0.93) and the intraclass correlation coefficient (0.89). The pass/fail level was determined to be 50.9%. This pass/fail criterion found no passing novices or failing experts. Conclusions: This study collected validity evidence for simulation-based assessment of upper abdominal ultrasound examinations, which is the first step toward competency-based training. Future studies may examine how competency-based training in the simulated setting translates into improvements in clinical performances.
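    The contrasting groups method sets the cut score where the score distributions of the two groups cross. A minimal sketch of the idea, fitting a normal distribution to each group and scanning for the crossing point between the two means; the score lists are illustrative, not the study's raw data:

```python
import math
import statistics

def normal_pdf(x: float, mu: float, sigma: float) -> float:
    """Density of a normal distribution at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def contrasting_groups_cutoff(novice_scores, expert_scores) -> float:
    """Pass/fail level = intersection of the fitted novice and expert densities."""
    mu_n, sd_n = statistics.mean(novice_scores), statistics.stdev(novice_scores)
    mu_e, sd_e = statistics.mean(expert_scores), statistics.stdev(expert_scores)
    # Scan the interval between the two group means for the crossing point.
    candidates = [mu_n + i * (mu_e - mu_n) / 1000 for i in range(1001)]
    return min(candidates,
               key=lambda x: abs(normal_pdf(x, mu_n, sd_n) - normal_pdf(x, mu_e, sd_e)))

# Hypothetical simulator scores (percent): the cutoff falls between the groups.
cutoff = contrasting_groups_cutoff([20, 25, 30, 35, 40], [70, 75, 80, 85, 90])
print(f"pass/fail level: {cutoff:.1f}%")
```

    With equal spreads the crossing point is the midpoint of the two means; with unequal spreads, as in the study, it shifts toward the narrower distribution.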

    2021 international consensus statement on optical coherence tomography for basal cell carcinoma: image characteristics, terminology and educational needs

    Background Despite the widespread use of optical coherence tomography (OCT) for imaging of keratinocyte carcinoma, we lack an expert consensus on the characteristic OCT features of basal cell carcinoma (BCC), an internationally vetted set of OCT terms to describe various BCC subtypes, and an educational needs assessment. Objectives To identify relevant BCC features in OCT images, propose terminology based on inputs from an expert panel and identify content for a BCC-specific curriculum for OCT trainees. Methods Over three rounds, we conducted a Delphi consensus study on BCC features and terminology between March and September 2020. In the first round, experts were asked to propose BCC subtypes discriminable by OCT, provide OCT image features for each proposed BCC subtype and suggest content for a BCC-specific OCT training curriculum. If agreement on a BCC-OCT feature exceeded 67%, the feature was accepted and included in a final review. In the second round, experts re-evaluated features with less than 67% agreement and ranked the ten most relevant BCC OCT image features for the superficial, nodular, and infiltrative and morpheaform BCC subtypes. In the final round, experts received the OCT-BCC consensus list for a final review, comments and confirmation. Results The Delphi included six key opinion leaders and 22 experts. Consensus was found on terminology for three OCT BCC image features: (i) hyporeflective areas, (ii) hyperreflective areas and (iii) ovoid structures. Further, the participants ranked the ten most relevant image features for nodular, superficial, infiltrative and morpheaform BCC. The target group and the key components for a curriculum for OCT imaging of BCC have been defined. Conclusion We have established a set of OCT image features for BCC and preferred terminology. A comprehensive curriculum based on the expert suggestions will help implement OCT imaging of BCC in clinical and research settings.
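    The Delphi acceptance rule above reduces to a simple threshold check: a feature is accepted when agreement exceeds 67% of the panel. A minimal sketch; the feature names and vote counts below are hypothetical, and the 28-member panel size comes from the reported six key opinion leaders plus 22 experts:

```python
def accepted(votes_for: int, panel_size: int, threshold: float = 0.67) -> bool:
    """Delphi rule: accept a feature when agreement exceeds the threshold."""
    return votes_for / panel_size > threshold

panel = 28  # 6 key opinion leaders + 22 experts
features = {  # hypothetical vote tallies for illustration
    "hyporeflective areas": 26,
    "hyperreflective areas": 24,
    "peritumoural bright rim": 15,
}
for name, votes in features.items():
    verdict = "accepted" if accepted(votes, panel) else "re-evaluate next round"
    print(f"{name}: {verdict}")
```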

    Generalisability of deep learning models in low-resource imaging settings: A fetal ultrasound study in 5 African countries

    Most artificial intelligence (AI) research has been concentrated in high-income countries, where imaging data, IT infrastructures and clinical expertise are plentiful. However, progress has been slower in limited-resource environments where medical imaging is needed. For example, in Sub-Saharan Africa the rate of perinatal mortality is very high due to limited access to antenatal screening. In these countries, AI models could be implemented to help clinicians acquire fetal ultrasound planes for diagnosis of fetal abnormalities. So far, deep learning models have been proposed to identify standard fetal planes, but there is no evidence of their ability to generalise in centres with limited access to high-end ultrasound equipment and data. This work investigates different strategies to reduce the domain-shift effect for a fetal plane classification model trained on a high-resource clinical centre and transferred to a new low-resource centre. To that end, a classifier trained with 1,792 patients from Spain is first evaluated on a new centre in Denmark in optimal conditions with 1,008 patients and is later optimised to reach the same performance in five African centres (Egypt, Algeria, Uganda, Ghana and Malawi) with 25 patients each. The results show that a transfer learning approach can be a solution to integrate small-size African samples with existing large-scale databases in developed countries. In particular, the model can be re-aligned and optimised to boost the performance on African populations by increasing the recall to 0.92 ± 0.04 while at the same time maintaining a high precision across centres. This framework shows promise for building new AI models generalisable across clinical centres with limited data acquired in challenging and heterogeneous conditions, and calls for further research to develop new solutions for usability of AI in countries with fewer resources.
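    The per-centre performance figures quoted above (recall 0.92 ± 0.04 with high precision) follow the standard definitions. A minimal sketch of those two metrics for a single positive class; the plane labels below are hypothetical, not study data:

```python
def recall_precision(y_true, y_pred, positive):
    """Recall and precision for one target class (e.g. one fetal plane)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision

# Hypothetical predictions for the "brain" standard plane:
r, p = recall_precision(
    ["brain", "brain", "abdomen", "brain"],
    ["brain", "abdomen", "abdomen", "brain"],
    positive="brain",
)
print(f"recall={r:.2f}, precision={p:.2f}")
```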

    Involvement in teaching improves learning in medical students: a randomized cross-over study

    Background: Peer-assisted learning has many purported benefits, including preparing students as educators, improving communication skills and reducing faculty teaching burden. Comparatively little is known, however, about the effects of teaching on the learning outcomes of peer educators in medical education. Methods: One hundred and thirty-five first-year medical students were randomly allocated to 11 small groups for the Gastroenterology/Hematology Course at the University of Calgary. For each of 22 sessions, two students were randomly selected from each group to be peer educators. Students were surveyed to estimate time spent preparing as peer educator versus group member. Students completed an end-of-course 94-question multiple choice exam. A paired t-test was used to compare performance on clinical presentations for which students were peer educators to those for which they were not. Results: Preparation time increased from a mean (SD) of 36 (33) minutes at baseline to 99 (60) minutes when acting as peer educators (Cohen's d = 1.3; p < 0.001). The mean score (SD) for clinical presentations in which students were peer educators was 80.7% (11.8), compared to 77.6% (6.9) for those in which they were not (d = 0.33; p < 0.01). Conclusion: Our results suggest that involvement in teaching small group sessions improves medical students' knowledge acquisition and retention.
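    The effect size for the preparation-time result can be reproduced from the means and SDs in the abstract using the pooled-SD form of Cohen's d (one common variant for two equal-sized conditions):

```python
import math

def cohens_d(mean1: float, sd1: float, mean2: float, sd2: float) -> float:
    """Cohen's d with a pooled SD for two equal-sized conditions."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean2 - mean1) / pooled_sd

# Baseline 36 (SD 33) min vs peer educator 99 (SD 60) min:
d = cohens_d(36, 33, 99, 60)
print(f"d = {d:.2f}")  # ~1.30, matching the reported d = 1.3
```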

    FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

    Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices were defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account in proof-of-concept stages to facilitate future translation of medical AI towards clinical practice.
