64 research outputs found
Evidence-based practice educational intervention studies: A systematic review of what is taught and how it is measured
Abstract
Background: Despite the established interest in evidence-based practice (EBP) as a core competence for clinicians, evidence for how best to teach and evaluate EBP remains weak. We sought to systematically assess coverage of the five EBP steps, review the outcome domains measured, and assess the properties of the instruments used in studies evaluating EBP educational interventions.
Methods: We conducted a systematic review of controlled studies (i.e. studies with a separate control group) which had investigated the effect of EBP educational interventions. We used a citation analysis technique and tracked the forward and backward citations of the index articles (i.e. the systematic reviews and primary studies included in an overview of the effect of EBP teaching) using Web of Science until May 2017. We extracted information on intervention content (grouped into the five EBP steps) and the outcome domains assessed. We also searched the literature for published reliability and validity data on the EBP instruments used.
Results: Of 1831 records identified, 302 full-text articles were screened, and 85 included. Of these, 46 (54%) studies were randomised trials, 51 (60%) included postgraduate-level participants, and 63 (75%) taught medical professionals. EBP Step 3 (critical appraisal) was the most frequently taught step (63 studies; 74%). Only 10 (12%) of the studies taught content which addressed all five EBP steps. Of the 85 studies, 52 (61%) evaluated EBP skills, 39 (46%) knowledge, 35 (41%) attitudes, 19 (22%) behaviours, 15 (18%) self-efficacy, and 7 (8%) measured reactions to EBP teaching delivery. Of the 24 instruments used in the included studies, 6 were high-quality (achieved ≥3 types of established validity evidence), and these were used in 14 (29%) of the 52 studies that measured EBP skills; 14 (41%) of the 39 studies that measured EBP knowledge; and 8 (26%) of the 35 studies that measured EBP attitude.
Conclusions: Most EBP educational interventions which have been evaluated in controlled studies focused on teaching only some of the EBP steps (predominantly critical appraisal of evidence) and did not use high-quality instruments to measure outcomes. Educational packages and instruments which address all EBP steps are needed to improve EBP teaching.
Learning from crowds in digital pathology using scalable variational Gaussian processes
This work was supported by the Agencia Estatal de Investigacion of the Spanish Ministerio de Ciencia e Innovacion under contract PID2019-105142RB-C22/AEI/10.13039/501100011033, and the United States National Institutes of Health National Cancer Institute Grants U01CA220401 and U24CA19436201. P.M.'s contribution was made mostly before joining Microsoft Research, when he was supported by La Caixa Banking Foundation (ID 100010434, Barcelona, Spain) through La Caixa Fellowship for Doctoral Studies LCF/BQ/ES17/11600011.
The volume of labeled data is often the primary determinant of success in developing machine
learning algorithms. This has increased interest in methods for leveraging crowds to scale data
labeling efforts, and methods to learn from noisy crowd-sourced labels. The need to scale labeling is
acute but particularly challenging in medical applications like pathology, due to the expertise required
to generate quality labels and the limited availability of qualified experts. In this paper we investigate
the application of Scalable Variational Gaussian Processes for Crowdsourcing (SVGPCR) in digital
pathology. We compare SVGPCR with other crowdsourcing methods using a large multi-rater dataset in which pathologists, pathology residents, and medical students annotated tissue regions in breast cancer. Our study shows that SVGPCR is competitive with equivalent methods trained using gold-standard pathologist-generated labels, and that SVGPCR meets or exceeds the performance of other crowdsourcing methods based on deep learning. We also show how SVGPCR can effectively learn the class-conditional reliabilities of individual annotators, and demonstrate that Gaussian-process classifiers have performance comparable to similar deep learning methods. These results suggest that SVGPCR can meaningfully engage non-experts in pathology labeling tasks, and that the class-conditional reliabilities estimated by SVGPCR may assist in matching annotators to tasks where they perform well.
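The core idea of learning class-conditional annotator reliabilities from redundant, noisy labels can be illustrated with a much simpler model than SVGPCR. The sketch below is hypothetical and is not the variational Gaussian-process model from the paper: it is a plain Dawid-Skene-style EM that alternates between estimating per-item class posteriors and per-annotator confusion matrices, on made-up data.

```python
# Illustrative only: Dawid-Skene-style EM for per-annotator class-conditional
# reliabilities (NOT the SVGPCR model itself). Data and names are hypothetical.

def dawid_skene(labels, n_classes, n_iters=20):
    """labels: list of (item_id, annotator_id, label) triples.
    Returns (per-item class posteriors, per-annotator confusion matrices)."""
    items = sorted({i for i, _, _ in labels})
    annotators = sorted({a for _, a, _ in labels})

    # Initialise item posteriors from normalised vote counts (soft majority vote).
    post = {i: [0.0] * n_classes for i in items}
    for i, _, l in labels:
        post[i][l] += 1.0
    for i in items:
        s = sum(post[i])
        post[i] = [p / s for p in post[i]]

    for _ in range(n_iters):
        # M-step: class priors and annotator confusion matrices from posteriors.
        prior = [0.0] * n_classes
        for i in items:
            for c in range(n_classes):
                prior[c] += post[i][c]
        prior = [p / len(items) for p in prior]

        conf = {a: [[1e-6] * n_classes for _ in range(n_classes)]
                for a in annotators}  # conf[a][true][given]
        for i, a, l in labels:
            for c in range(n_classes):
                conf[a][c][l] += post[i][c]
        for a in annotators:
            for c in range(n_classes):
                s = sum(conf[a][c])
                conf[a][c] = [v / s for v in conf[a][c]]

        # E-step: recompute item posteriors under the current model.
        for i in items:
            post[i] = list(prior)
        for i, a, l in labels:
            for c in range(n_classes):
                post[i][c] *= conf[a][c][l]
        for i in items:
            s = sum(post[i])
            post[i] = [p / s for p in post[i]]

    return post, conf
```

In a toy run with two reliable annotators and one who labels everything as class 0, the recovered confusion matrix for the unreliable annotator exposes exactly the kind of class-conditional weakness the abstract proposes using for annotator-task matching; SVGPCR additionally couples this with a scalable variational GP classifier over the inputs.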
Patients' satisfaction with information at discharge
Background: Adequate patient knowledge and engagement with their condition and its management can reduce re-hospitalisations and improve outcomes after acute admission for circulatory system disease. Aim: To evaluate the perceptions of cardio- or cerebrovascular patients of their satisfaction with discharge processes and to determine if this differs by demographic groups. Methods: A sample of 536 eligible public hospital inpatients was extracted from a consumer experience surveillance system. Questions relating to the discharge process were analysed using descriptive statistics to compare patient satisfaction levels against demographic variables. Results: Dissatisfaction rates were highest within the 'Written information provided' (37.8%) and 'Danger signals communicated' (34.7%) categories. Women and people aged ≥80 were more likely to express dissatisfaction. Conclusion: Although respondents were largely satisfied, there are important differences in the characteristics of those who were dissatisfied. The communication of important discharge information to older people and women was less likely to meet their perceived needs.
An objective comparison of detection and segmentation algorithms for artefacts in clinical endoscopy
We present a comprehensive analysis of the submissions to the first edition of the Endoscopy Artefact Detection challenge (EAD). Using crowd-sourcing, this initiative is a step towards understanding the limitations of existing state-of-the-art computer vision methods applied to endoscopy and promoting the development of new approaches suitable for clinical translation. Endoscopy is a routine imaging technique for the detection, diagnosis and treatment of diseases in hollow organs: the esophagus, stomach, colon, uterus and the bladder. However, the nature of these organs prevents imaged tissues from being free of imaging artefacts such as bubbles, pixel saturation, organ specularity and debris, all of which pose substantial challenges for any quantitative analysis. Consequently, the potential for improved clinical outcomes through accurate quantitative assessment of the abnormal mucosal surfaces observed in endoscopy videos is presently unrealized. The EAD challenge promotes awareness of and addresses this key bottleneck by investigating methods that can accurately classify, localize and segment artefacts in endoscopy frames as critical prerequisite tasks. Using a diverse, curated, multi-institutional, multi-modality, multi-organ dataset of video frames, the accuracy and performance of 23 algorithms were objectively ranked for artefact detection and segmentation. The ability of methods to generalize to unseen datasets was also evaluated. The best-performing methods (top 15%) propose deep learning strategies to reconcile variabilities in artefact appearance with respect to size, modality, occurrence and organ type. However, no single method outperformed the others across all tasks. Detailed analyses reveal the shortcomings of current training strategies and highlight the need to develop new optimal metrics that accurately quantify the clinical applicability of methods.
Establishing a library of resources to help people understand key concepts in assessing treatment claims: the 'Critical thinking and Appraisal Resource Library' (CARL)
Background
People are frequently confronted with untrustworthy claims about the effects of treatments. Uncritical acceptance of these claims can lead to poor, and sometimes dangerous, treatment decisions, and wasted time and money. Resources to help people learn to think critically about treatment claims are scarce and widely scattered. Furthermore, very few learning-resources have been assessed to see if they improve knowledge and behavior.
Objectives
Our objectives were to develop the Critical thinking and Appraisal Resource Library (CARL). This library was to be in the form of a database containing learning resources for those who are responsible for encouraging critical thinking about treatment claims, and was to be made available online. We wished to include resources for groups we identified as 'intermediaries' of knowledge, i.e. teachers of schoolchildren, undergraduates and graduates, for example those teaching evidence-based medicine, or those communicating treatment claims to the public. In selecting resources, we wished to draw particular attention to those resources that had been formally evaluated, for example, by the creators of the resource or independent research groups.
Methods
CARL was populated with learning-resources identified from a variety of sources: two previously developed but unmaintained inventories; systematic reviews of learning-interventions; online and database searches; and recommendations by members of the project group and its advisors. The learning-resources in CARL were organised by the 'Key Concepts' needed to judge the trustworthiness of treatment claims, and were made available online by the James Lind Initiative in Testing Treatments interactive (TTi) English (www.testingtreatments.org/category/learning-resources). TTi English also incorporated the database of Key Concepts and the Claim Evaluation Tools developed through the Informed Health Choices (IHC) project (informedhealthchoices.org).
Results
We have created a database of resources called CARL, which currently contains over 500 open-access learning-resources in a variety of formats: text, audio, video, webpages, cartoons, and lesson materials. These are aimed primarily at 'intermediaries', that is, 'teachers', 'communicators', 'advisors', and 'researchers', as well as at independent 'learners'. The resources included in CARL are currently accessible at www.testingtreatments.org/category/learning-resources.
Conclusions
We hope that ready access to CARL will help to promote the critical thinking about treatment claims that is needed to improve healthcare choices.
- …