
    Current Reporting Practices of ATLAS.ti Users in Published Research Studies

    Scholars investigating Computer-Assisted Qualitative Data Analysis Software (CAQDAS) have noted that we know relatively little about how researchers use packages such as ATLAS.ti in their practice. We report findings of a content analysis of 321 empirical articles, published between 1994 and 2013, on the use of data analysis software. The purpose of this analysis was to characterize both who is reporting the use of CAQDAS tools and how they are reporting that use in their publications. Studies were analysed for subject discipline, researcher country of origin, overall methodological approach, and use of the software in different phases of the research process. We found that researchers were predominantly from the health sciences (69%) and published in health sciences journals (66%). Forty-eight percent of corresponding authors were from the United States, with 43 countries represented. Interviews and focus groups were the most common data sources used; most studies did not identify a particular methodology beyond “qualitative”. Few studies (13%) provided any details on their use of ATLAS.ti beyond mentioning that it was used, and 97.5% of the articles used it only for data analysis. We encourage researchers to provide more detail as to their use of ATLAS.ti and to explore its potential to support aspects of their study beyond data analysis.

    Errors in the administration of intravenous medications in hospital and the role of correct procedures and nurse experience

    Background: Intravenous medication administrations have a high incidence of error but there is limited evidence of associated factors or error severity. Objective: To measure the frequency, type and severity of intravenous administration errors in hospitals and the associations between errors, procedural failures and nurse experience. Methods: Prospective observational study of 107 nurses preparing and administering 568 intravenous medications on six wards across two teaching hospitals. Procedural failures (e.g., checking patient identification) and clinical intravenous errors (e.g., wrong intravenous administration rate) were identified and categorised by severity. Results: Of 568 intravenous administrations, 69.7% (n=396; 95% CI 65.9 to 73.5) had at least one clinical error and 25.5% (95% CI 21.2 to 29.8) of these were serious. Four error types (wrong intravenous rate, mixture, volume, and drug incompatibility) accounted for 91.7% of errors. Wrong rate was the most frequent and accounted for 95 of 101 serious errors. Error rates and severity decreased with clinical experience. Each year of experience, up to 6 years, reduced the risk of error by 10.9% and serious error by 18.5%. Administration by bolus was associated with a 312% increased risk of error. Patient identification was only checked in 47.9% of administrations but was associated with a 56% reduction in intravenous error risk. Conclusions: Intravenous administrations have a higher risk and severity of error than other medication administrations. A significant proportion of errors suggest skill and knowledge deficiencies, with errors and severity reducing as clinical experience increases. A proportion of errors are also associated with routine violations which are likely to be learnt workplace behaviours. Both areas suggest specific targets for intervention.
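    Illustrative arithmetic only: if the reported 10.9% per-year reduction in error risk compounds multiplicatively over the six years (an assumption the abstract does not state), the cumulative effect works out to roughly halving the risk:

    ```python
    # The abstract reports that each year of experience, up to 6 years,
    # reduced the risk of an IV administration error by 10.9%.
    # Assuming (hypothetically) that the reduction compounds multiplicatively:
    per_year_reduction = 0.109
    residual_risk_ratio = (1 - per_year_reduction) ** 6   # risk remaining after 6 years
    cumulative_reduction = 1 - residual_risk_ratio

    print(f"Residual risk ratio after 6 years: {residual_risk_ratio:.3f}")   # 0.500
    print(f"Cumulative risk reduction: {cumulative_reduction:.1%}")          # 50.0%
    ```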

    Implementing Learning Design to support web-based learning

    Preprint, AusWeb04 Conference, July, Australia. In this paper we consider an initial implementation of a system for managing and using IMS Learning Design (LD) to represent online learning activities. LD has been suggested (Koper & Olivier, 2004) as a flexible way to represent and encode learning materials, especially suited to online and web-based learning, while remaining neutral to the pedagogy being applied. As such it offers a chance to address a gap between the preparation of learning materials and their eventual use by students by providing a formal description of the approach, roles and services needed for a particular unit of learning. The potential in learning design that most interests us is its scope for the exchange of validated and formalised designs, thereby encouraging reuse. Until full implementations exist this potential cannot be explored, and it is hard to predict whether learning design will prove valuable in describing either full courses or isolated activities. The initial work is therefore to implement a system for managing, validating and inspecting learning designs, building on collaboration between the Institute of Educational Technology at the Open University UK (OUUK) and the Educational Technology Expertise Centre (OTEC) at the Open University of the Netherlands (OUNL), which produced the CopperCore Learning Design engine (http://coppercore.org/), released as Open Source.

    Utility of a Quality of Assessment for Learning score for evaluating written feedback in postgraduate anesthesiology training: a generalizability and decision study

    Background: Competency based residency programs depend on high quality feedback from the assessment of entrustable professional activities (EPA). The Quality of Assessment for Learning (QuAL) score is a tool developed to rate the quality of narrative comments in workplace-based assessments; it has validity evidence for scoring the quality of narrative feedback provided to emergency medicine residents, but it is unknown whether the QuAL score is reliable in the assessment of narrative feedback in other postgraduate programs. Methods: Fifty sets of EPA narratives from a single academic year at our competency based medical education post-graduate anesthesia program were selected by stratified sampling within defined parameters [e.g. resident gender and stage of training, assessor gender, Competency By Design training level, and word count (≄17 or <17 words)]. Two competency committee members and two medical students rated the quality of narrative feedback using a utility score and QuAL score. We used Kendall’s tau-b coefficient to compare the perceived utility of the written feedback to the quality assessed with the QuAL score. The authors used generalizability and decision studies to estimate the reliability and generalizability coefficients. Results: Both the faculty’s utility scores and QuAL scores (r = 0.646, p < 0.001) and the trainees’ utility scores and QuAL scores (r = 0.667, p < 0.001) were moderately correlated. Results from the generalizability studies showed that utility scores were reliable with two raters for both faculty (Epsilon=0.87, Phi=0.86) and trainees (Epsilon=0.88, Phi=0.88). Conclusions: The QuAL score is correlated with faculty- and trainee-rated utility of anesthesia EPA feedback. Both faculty and trainees can reliably apply the QuAL score to anesthesia EPA narrative feedback. This tool has the potential to be used for faculty development and program evaluation in Competency Based Medical Education. Other programs could consider replicating our study in their specialty.
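    A minimal sketch of the correlation analysis described above. Kendall's tau-b is appropriate here because short ordinal rating scales produce many tied ranks, which the tau-b variant corrects for. The scores below are invented for illustration, not the study's data:

    ```python
    # Hypothetical example: comparing rater "utility" ratings against QuAL scores
    # with Kendall's tau-b (scipy's default kendalltau variant handles ties).
    from scipy.stats import kendalltau

    utility = [3, 4, 2, 5, 4, 1, 3, 5]   # hypothetical utility ratings
    qual    = [2, 4, 2, 5, 3, 1, 3, 4]   # hypothetical QuAL scores

    tau, p_value = kendalltau(utility, qual)
    print(f"Kendall's tau-b = {tau:.3f} (p = {p_value:.3f})")
    ```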

    Validity evidence for the Quality of Assessment for Learning score: a quality measure for supervisor comments in competency-based medical education

    Background: Competency based medical education (CBME) relies on supervisor narrative comments contained within entrustable professional activities (EPA) for programmatic assessment, but the quality of these supervisor comments is unassessed. There is validity evidence supporting the QuAL (Quality of Assessment for Learning) score for rating the usefulness of short narrative comments in direct observation. Objective: We sought to establish validity evidence for the QuAL score to rate the quality of supervisor narrative comments contained within an EPA by surveying the key end-users of EPA narrative comments: residents, academic advisors, and competence committee members. Methods: In 2020, the authors randomly selected 52 de-identified narrative comments from two emergency medicine EPA databases using purposeful sampling. Six collaborators (two residents, two academic advisors, and two competence committee members) were recruited from each of four EM Residency Programs (Saskatchewan, McMaster, Ottawa, and Calgary) to rate these comments with a utility score and the QuAL score. Correlations between utility and QuAL scores were calculated using Pearson’s correlation coefficient. Sources of variance and reliability were calculated using a generalizability study. Results: All collaborators (n = 24) completed the full study. The QuAL score had a high positive correlation with the utility score amongst the residents (r = 0.80) and academic advisors (r = 0.75) and a moderately high correlation amongst competence committee members (r = 0.68). The generalizability study found that the major source of variance was the comment, indicating the tool performs well across raters. Conclusion: The QuAL score may serve as an outcome measure for program evaluation of supervisors, and as a resource for faculty development.

    Creating a dashboard to meet the needs of residents in a competency-based training program: a design-based research project

    Background: Canadian specialty programs are implementing Competence By Design, a competency-based medical education (CBME) program which requires frequent assessments of entrustable professional activities. To be used for learning, the large amount of assessment data needs to be interpreted by residents, but little work has been done to determine how visualizing and interacting with these data can be supported. Within the University of Saskatchewan emergency medicine residency program, we sought to determine how our residents’ CBME assessment data should be presented to support their learning and to develop a dashboard that meets our residents’ needs. Methods: We utilized a design-based research process to identify and address resident needs surrounding the presentation of their assessment data. Data were collected within the emergency medicine residency program at the University of Saskatchewan via four resident focus groups held over 10 months. Focus group discussions were analyzed using a grounded theory approach to identify resident needs. This guided the development of a dashboard which contained elements (data, analytics, and visualizations) that support their interpretation of the data. The identified needs are described using quotes from the focus groups as well as visualizations of the dashboard elements. Results: Resident needs were classified under three themes: (1) Provide guidance through the assessment program, (2) Present workplace-based assessment data, and (3) Present other assessment data. Seventeen dashboard elements were designed to address these needs. Conclusions: Our design-based research process identified resident needs and developed dashboard elements to meet them. This work will inform the creation and evolution of CBME assessment dashboards designed to support resident learning.

    An Ecological Study of Anterior Cruciate Ligament Reconstruction, Part 1: Clinical Tests Do Not Correlate With Return-to-Sport Outcomes

    BACKGROUND: Additional high-quality prospective studies are needed to better define the objective criteria used in relation to return-to-sport decisions after synthetic (ligament advanced reinforcement system [LARS]) and autograft (hamstring tendon [2ST/2GR]) anterior cruciate ligament (ACL) reconstruction in active populations. PURPOSE: To prospectively investigate and describe the recovery of objective clinical outcomes after autograft (2ST/2GR) and synthetic (LARS) ACL reconstructions, as well as to investigate the relationship between these clinimetric test outcomes and return-to-sport activity (Tegner activity scale [TAS] score) at 12 and 24 months postoperatively. STUDY DESIGN: Case series; Level of evidence, 4. METHODS: A total of 64 patients who underwent ACL reconstruction (32 LARS, 32 2ST/2GR autograft) and 32 healthy reference participants were assessed for joint laxity (KT-1000 arthrometer), clinical outcome (2000 International Knee Documentation Committee [IKDC] knee examination), and activity (TAS score) preoperatively and at 12, 16, 20, and 24 weeks and 12 and 24 months postoperatively. RESULTS: There was no significant correlation observed between clinical results using the 2000 IKDC knee examination and TAS score at 24 months (r_s = 0.188, P = .137), nor were results for side-to-side difference (r_s = 0.030, P = .814) or absolute KT-1000 arthrometer laxity of the surgical leg at 24 months postoperatively (r_s = 0.076, P = .553) correlated with return-to-sport activity. Nonetheless, return-to-sport rates within the surgical cohort were 81% at 12 months and 83% at 24 months, respectively. No statistically significant differences were observed between physiological laxity of the uninjured knee within the surgical group compared with healthy knees within the reference group (P = .522). CONCLUSION: The results indicate that although relatively high levels of return-to-sport outcomes were achieved at 24 months compared with those previously reported in the literature, correlations between objective clinical tests and return-to-sport outcomes may not occur. Clinical outcome measures may provide suitable baseline information; however, the results of this study suggest that clinicians may need to place greater emphasis on other outcome measures when seeking to objectively promote safe return to sport.
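    The correlations reported above (r_s) are Spearman rank correlations, suited to ordinal measures such as IKDC grades and TAS scores. A minimal sketch of the computation, using invented values rather than the study's data:

    ```python
    # Hypothetical example: Spearman rank correlation between ordinal clinical
    # grades and an activity score (ties are handled by average ranks).
    from scipy.stats import spearmanr

    ikdc_grade = [1, 2, 2, 3, 1, 4, 2, 3]  # hypothetical ordinal IKDC grades
    tas_score  = [6, 5, 7, 6, 4, 7, 5, 6]  # hypothetical Tegner activity scores

    r_s, p_value = spearmanr(ikdc_grade, tas_score)
    print(f"Spearman r_s = {r_s:.3f} (p = {p_value:.3f})")
    ```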
