
    When Less is More: Validating a Brief Scale to Rate Interprofessional Team Competencies

    Get PDF
    Background: There is a need for validated and easy-to-apply behavior-based tools for assessing interprofessional team competencies in clinical settings. The seven-item observer-based Modified McMaster-Ottawa scale was developed for the Team Objective Structured Clinical Encounter (TOSCE) to assess individual and team performance in interprofessional patient encounters. Objective: We aimed to improve scale usability for clinical settings by reducing the number of items while maintaining generalizability, and to explore the minimum number of observed cases required to achieve modest generalizability for giving feedback. Design: We administered a two-station TOSCE in April 2016 to 63 students split into 16 newly-formed teams, each consisting of four professions. The stations were of similar difficulty. We trained sixteen faculty to rate two teams each. We examined individual and team performance scores using generalizability (G) theory and principal component analysis (PCA). Results: The seven-item scale shows modest generalizability (.75) for individual scores. PCA revealed multicollinearity and singularity among scale items, and we identified three potential items for removal. Reducing the items for individual scores from seven to four (measuring Collaboration, Roles, Patient/Family-centeredness, and Conflict Management) changed scale generalizability from .75 to .73. Performance assessment with two cases is associated with reasonable generalizability (.73). Students in newly-formed interprofessional teams show a learning curve after one patient encounter. Team scores from a two-station TOSCE demonstrate low generalizability whether the scale consists of four (.53) or seven (.55) items. Conclusion: The four-item Modified McMaster-Ottawa scale for assessing individual performance in interprofessional teams retains the generalizability and validity of the seven-item scale. Observation of students in teams interacting with two different patients provides reasonably reliable ratings for giving feedback. The four-item scale has potential for assessing individual student skills and the impact of interprofessional education (IPE) curricula in clinical practice settings.
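
    As an illustrative sketch only (not code or data from the study), the PCA-based item screening described above might look like the following in Python; the simulated ratings matrix, item names, and 0-2 scoring are hypothetical placeholders.

        import numpy as np
        import pandas as pd
        from sklearn.decomposition import PCA

        # Simulated ratings: one row per student, one column per scale item.
        rng = np.random.default_rng(0)
        n_students, n_items = 63, 7
        items = [f"item_{i + 1}" for i in range(n_items)]
        ratings = pd.DataFrame(rng.integers(0, 3, size=(n_students, n_items)), columns=items)

        # Highly correlated item pairs suggest redundancy (multicollinearity).
        print(ratings.corr().round(2))

        # Components with near-zero explained variance point to singularity;
        # items loading mainly on such components are candidates for removal.
        pca = PCA()
        pca.fit(ratings)
        print("explained variance ratios:", pca.explained_variance_ratio_.round(3))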

    Working with interpreters: how student behavior affects quality of patient interaction when using interpreters

    Get PDF
    Background: Despite the prevalence of medical interpreting in the clinical environment, few medical professionals receive training in best practices for using an interpreter. We designed and implemented an educational workshop on using interpreters as part of the cultural competency curriculum for second-year medical students (MSIIs) at the David Geffen School of Medicine at UCLA. The purpose of this study is two-fold: first, to evaluate the effectiveness of the workshop and, second, if deficiencies are found, to investigate whether those deficiencies affected the quality of the patient encounter when using an interpreter. Methods: A total of 152 MSIIs completed the 3-hour workshop and, 8 weeks later, a 1-station objective structured clinical examination to assess their skills. Descriptive statistics and independent-sample t-tests were used to assess workshop effectiveness. Results: Based on a passing score of 70%, 39.4% of the class failed. Two skills seemed particularly problematic: assuring confidentiality (missed by 50%) and positioning the interpreter (missed by 70%). While addressing confidentiality did not have a significant impact on standardized patient satisfaction, interpreter position did. Conclusion: Instructing the interpreter to sit behind the patient helps sustain eye contact between clinician and patient, while assuring confidentiality is a tenet of quality clinical encounters. Teaching students and faculty to emphasize both is warranted to improve cross-language clinical encounters.
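
    A minimal sketch, not the study's analysis, of an independent-samples t-test comparing standardized-patient satisfaction by interpreter positioning; the satisfaction scores and group sizes below are hypothetical placeholders.

        from scipy import stats

        # Hypothetical standardized-patient satisfaction scores (1-5 scale).
        positioned_correctly = [4.5, 4.2, 4.8, 4.6, 4.4, 4.7]
        positioned_incorrectly = [3.9, 4.1, 3.8, 4.0, 4.2, 3.7]

        t_stat, p_value = stats.ttest_ind(positioned_correctly, positioned_incorrectly)
        print(f"t = {t_stat:.2f}, p = {p_value:.3f}")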

    Accuracy of Professional Self-Reports: Medical Student Self-Report and the Scoring of Professional Competence

    No full text
    Self-report is currently used as an indicator of professional practice in a variety of fields, including medicine and education. It is therefore important to consider how well self-report captures actual professional practice. This study investigated how well professionals' self-reports of behavior agreed with an expert observer's reports of those same behaviors. While this study explored self-report in the context of medical professionals, the topic is equally important to the measurement of teacher practices. This study investigated agreement between: 1) medical student self-report and expert rater documentation of a clinical encounter; and 2) standardized patient (SP; an actor highly trained to portray a patient) and expert rater documentation of medical student performance. Additionally, this study investigated whether levels of agreement depended on the context and content of behaviors, features of the examination, or characteristics of the professional. Performance data were analyzed from a stratified random sample of 75 fourth-year medical students who completed a clinical competence examination in 2012. Students rotated through a series of 15-minute encounters, called stations, interviewing a standardized patient in each. Medical students were instructed to: 1) obtain the patient's history; 2) conduct a physical examination; and 3) discuss potential diagnoses. Ratings of student performance were collected from the medical student self-reports, the SP checklists, and the expert rater's documentation of the encounters. Analyses focused on the 4-7 behavioral items in each of the three stations studied that were considered critical to patient care. Comparison of the three sources of ratings revealed marked differences. Most importantly, medical students' self-reports did not agree highly with the expert's reports. Medical students both under-reported and over-reported a substantial number of critical action items, with the level of agreement varying by station and the nature of the behavior. Because of the tendency to under-report behaviors, using self-report to score performance would result in a large number of students being falsely identified as failing the examination. This study discusses causes of medical student under- and over-reporting and recommends strategies for improvement. The study also addresses implications of the findings for the use of self-report among teachers, citing specific examples.
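
    The agreement comparison described above can be illustrated with a short Python sketch; the checklist responses, the use of Cohen's kappa, and the over/under-report tallies are assumptions for illustration rather than the study's actual scoring procedure.

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical checklist items: 1 = behavior reported/observed, 0 = not.
        self_report = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
        expert_rater = [1, 0, 0, 1, 1, 1, 1, 0, 0, 1]

        pairs = list(zip(self_report, expert_rater))
        percent_agreement = sum(s == e for s, e in pairs) / len(pairs)
        kappa = cohen_kappa_score(self_report, expert_rater)

        # Over-report: student says done, expert did not observe it; under-report: the reverse.
        over_reported = sum(s == 1 and e == 0 for s, e in pairs)
        under_reported = sum(s == 0 and e == 1 for s, e in pairs)

        print(f"percent agreement = {percent_agreement:.2f}, Cohen's kappa = {kappa:.2f}")
        print(f"over-reported items = {over_reported}, under-reported items = {under_reported}")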

    Working with interpreters: how student behavior affects quality of patient interaction when using interpreters (research article)

    No full text

    Adapting the McMaster-Ottawa scale and developing behavioral anchors for assessing performance in an interprofessional Team Observed Structured Clinical Encounter

    No full text
    Background: Current scales for interprofessional team performance do not provide adequate behavioral anchors for performance evaluation. The Team Observed Structured Clinical Encounter (TOSCE) provides an opportunity to adapt and develop an existing scale for this purpose. We aimed to test the feasibility of using a retooled scale to rate performance in a standardized patient encounter and to assess faculty ability to accurately rate both individual students and teams. Methods: The 9-point McMaster-Ottawa Scale developed for a TOSCE was converted to a 3-point scale with behavioral anchors. Students from four professions were trained a priori to perform, as individuals and as teams of four, at three different levels. Blinded faculty raters were trained to use the scale to evaluate individual and team performances. Generalizability (G) theory was used to analyze the ability of faculty to accurately rate individual students and teams using the retooled scale. Results: Sixteen faculty, in groups of four, rated four student teams, each participating in the same TOSCE station. Faculty expressed comfort rating up to four students in a team within a 35-min timeframe. Accuracy of faculty raters varied (38–81% for individuals, 50–100% for teams), with errors in the direction of over-rating individual, but not team, performance. There was no consistent pattern of error across raters. Conclusion: The TOSCE can be administered as an evaluation method for interprofessional teams. However, faculty demonstrate a ‘leniency error’ in rating students, even with prior training using behavioral anchors. To improve consistency, we recommend two trained faculty raters per station.
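
    A hedged sketch of how rater accuracy and leniency against the scripted (a priori) performance levels could be quantified; the 0/1/2 level coding and all ratings below are hypothetical placeholders, not data from the study.

        # Hypothetical scripted performance levels (0/1/2) and one faculty member's ratings.
        scripted_levels = [0, 1, 2, 1, 0, 2, 1, 2]
        faculty_ratings = [1, 1, 2, 2, 0, 2, 2, 2]

        pairs = list(zip(faculty_ratings, scripted_levels))
        accuracy = sum(r == s for r, s in pairs) / len(pairs)
        mean_signed_error = sum(r - s for r, s in pairs) / len(pairs)

        print(f"accuracy = {accuracy:.0%}")
        print(f"mean signed error = {mean_signed_error:+.2f} (positive values indicate leniency)")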

    When less is more: validating a brief scale to rate interprofessional team competencies

    Get PDF
    Background: There is a need for validated and easy-to-apply behavior-based tools for assessing interprofessional team competencies in clinical settings. The seven-item observer-based Modified McMaster-Ottawa scale was developed for the Team Objective Structured Clinical Encounter (TOSCE) to assess individual and team performance in interprofessional patient encounters. Objective: We aimed to improve scale usability for clinical settings by reducing the number of items while maintaining generalizability, and to explore the minimum number of observed cases required to achieve modest generalizability for giving feedback. Design: We administered a two-station TOSCE in April 2016 to 63 students split into 16 newly-formed teams, each consisting of four professions. The stations were of similar difficulty. We trained sixteen faculty to rate two teams each. We examined individual and team performance scores using generalizability (G) theory and principal component analysis (PCA). Results: The seven-item scale shows modest generalizability (.75) for individual scores. PCA revealed multicollinearity and singularity among scale items, and we identified three potential items for removal. Reducing the items for individual scores from seven to four (measuring Collaboration, Roles, Patient/Family-centeredness, and Conflict Management) changed scale generalizability from .75 to .73. Performance assessment with two cases is associated with reasonable generalizability (.73). Students in newly-formed interprofessional teams show a learning curve after one patient encounter. Team scores from a two-station TOSCE demonstrate low generalizability whether the scale consists of four (.53) or seven (.55) items. Conclusion: The four-item Modified McMaster-Ottawa scale for assessing individual performance in interprofessional teams retains the generalizability and validity of the seven-item scale. Observation of students in teams interacting with two different patients provides reasonably reliable ratings for giving feedback. The four-item scale has potential for assessing individual student skills and the impact of IPE curricula in clinical practice settings. Abbreviations: IPE: interprofessional education; SP: standardized patient; TOSCE: Team Objective Structured Clinical Encounter.
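
    As a sketch of the generalizability logic described above (not the study's actual G-study), a relative G coefficient for a persons-by-cases design can be computed from variance components; the component values below are hypothetical placeholders chosen only to show how averaging over a second observed case raises generalizability.

        # Hypothetical variance components for a persons (students) x cases design.
        var_person = 0.40          # variance between students (the "signal")
        var_person_x_case = 0.30   # person-by-case interaction plus residual error

        def g_coefficient(n_cases):
            """Relative G coefficient when averaging scores over n_cases cases."""
            return var_person / (var_person + var_person_x_case / n_cases)

        for n in (1, 2, 3):
            print(f"{n} case(s): G = {g_coefficient(n):.2f}")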

    Cognitive decline in Huntington's disease expansion gene carriers

    No full text

    Reduced Cancer Incidence in Huntington's Disease: Analysis in the Registry Study

    No full text
    Background: People with Huntington's disease (HD) have been observed to have lower rates of cancer. Objective: To investigate the relationship between age of onset of HD, CAG repeat length, and cancer diagnosis. Methods: Data were obtained from the European Huntington's Disease Network REGISTRY study for 6540 subjects. Population cancer incidence was ascertained from the GLOBOCAN database to obtain standardised incidence ratios of cancers in the REGISTRY subjects. Results: 173/6528 HD REGISTRY subjects had had a cancer diagnosis. The age-standardised incidence ratio of all cancers in the REGISTRY HD population was 0.26 (CI 0.22-0.30). Individual cancers showed lower age-standardised incidence ratios compared with the control population, with prostate and colorectal cancers showing the lowest ratios. There was no effect of CAG length on the likelihood of cancer, but a cancer diagnosis within the last year was associated with a greatly increased rate of HD onset (hazard ratio 18.94, p < 0.001). Conclusions: Cancer is less common than expected in the HD population, confirming previous reports. However, this does not appear to be related to CAG length in HTT. A recent diagnosis of cancer increases the risk of HD onset at any age, likely due to increased investigation following a cancer diagnosis.
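
    A minimal sketch of a standardised incidence ratio (SIR) calculation of the kind described above: observed cases divided by the cases expected from age-specific population rates. The observed count comes from the abstract, but the age strata, person-years, and population rates are hypothetical, so the printed SIR will not match the study's estimate.

        # Observed cases from the abstract; person-years and population incidence
        # rates per age band are hypothetical placeholders.
        observed_cases = 173

        age_strata = [          # (person-years in cohort, population incidence per person-year)
            (12000, 0.0015),    # younger adults
            (18000, 0.0045),    # middle-aged adults
            (9000, 0.0110),     # older adults
        ]

        expected_cases = sum(person_years * rate for person_years, rate in age_strata)
        sir = observed_cases / expected_cases
        print(f"expected cases = {expected_cases:.1f}, SIR = {sir:.2f}")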