
    Assessing Faculty and Student Interpretations of AACP Survey Items with Cognitive Interviewing

    Objective. To use cognitive interviewing techniques to determine faculty and student interpretations of a subset of items from the AACP faculty and graduating student surveys. Methods. Students and faculty were interviewed individually in a private room. The interviewer asked each respondent for his/her interpretation of 15 randomly selected items from the graduating student survey or 20 items from the faculty survey. Results. While many items were interpreted consistently by respondents, the researchers identified several items that were either difficult to interpret or produced differing interpretations. Conclusion. Several interpretational inconsistencies and ambiguities were discovered that could compromise the usefulness of certain survey items.

    Misuses of Regression and ANCOVA in Educational Research


    P Value Problems


    The Social Psychology of Biased Self-Assessment

    Objective: To describe the psychological mechanisms that underlie biased self-assessment and suggest pedagogical techniques to counter them. Findings: Since the psychological mechanisms that underlie biased self-assessment operate below awareness, strategies that attempt to address bias directly are unlikely to succeed. A more effective approach may be to structure students' learning experiences in ways that prevent the unconscious biasing mechanisms from operating efficiently. Summary: Given the importance of accurate self-knowledge for professional students and clinicians, as well as the difficulty of attaining it, an understanding of the psychological mechanisms that contribute to the most common forms of biased self-assessment is essential for creating and implementing effective mitigation strategies.

    The Application of Classification Trees to Pharmacy School Admissions

    In recent years, the American Association of Colleges of Pharmacy (AACP) has encouraged the application of big data analytic techniques to pharmaceutical education. Indeed, the 2013-2014 Academic Affairs Committee Report included a Learning Analytics in Pharmacy Education section that reviewed the potential benefits of adopting big data techniques.1 Likewise, the 2014-2015 Argus Commission Report discussed uses for big data analytics in the classroom, practice, and admissions.2 While both of these reports were thorough, neither discussed specific analytic techniques. Consequently, this commentary will introduce classification trees, with a particular emphasis on their use in admissions. With electronic applications, pharmacy schools and colleges now have access to detailed applicant records containing thousands of observations. With declining applications nationwide, admissions analytics may be more important than ever.3
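    The commentary does not include code, but the technique is straightforward to sketch. Below is a minimal, hypothetical illustration of fitting a classification tree to simulated applicant records with scikit-learn; the feature names (prerequisite GPA, PCAT composite) and the synthetic outcome are invented for illustration and are not drawn from the commentary.

```python
# Hypothetical sketch: a classification tree on simulated applicant data.
# Feature names, the outcome definition, and all values are invented;
# the commentary does not specify a dataset or modeling library.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
gpa = rng.uniform(2.0, 4.0, n)     # prerequisite GPA (hypothetical)
pcat = rng.uniform(200, 600, n)    # PCAT composite score (hypothetical)
X = np.column_stack([gpa, pcat])

# Invented outcome: 1 = completed the first professional year, with noise.
y = ((gpa > 3.0) & (pcat > 400)).astype(int)
flip = rng.random(n) < 0.1
y = np.where(flip, 1 - y, y)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

# Trees yield human-readable decision rules, one reason they are
# attractive for admissions work.
print(export_text(tree, feature_names=["gpa", "pcat"]))
print("held-out accuracy:", tree.score(X_test, y_test))
```

    The printed rules show the appeal for admissions committees: each split is a readable threshold on a single applicant measure, unlike the opaque coefficients of many other models.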

    Assessing the Inter-Rater Reliability and Accuracy of Pharmacy Faculty's Bloom's Taxonomy Classifications

    Objective. To identify the inter-rater reliability and accuracy of pharmacy faculty members' classification of exam questions based on Bloom's Taxonomy. Methods. Faculty at a college of pharmacy were given six example exam questions to assign to the appropriate Bloom's level. Results. Inter-rater reliability and accuracy were both low, at 0.25 and 46.0%, respectively. Accuracy increased to 81.8% when the six Bloom's levels were collapsed to three. Conclusions. Both inter-rater reliability and accuracy were low. Faculty members' misclassifications suggested a three-tier combination of the Bloom's levels that would optimally improve accuracy: Knowledge, Comprehension/Application, and Analysis/Synthesis/Evaluation. Faculty development should also be considered to improve accuracy and reliability.
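    The abstract does not name the agreement statistic used. As one plausible reading, the sketch below computes Fleiss' kappa (a common agreement measure for several raters) and simple accuracy against an answer key; the ratings, rater count, and key are invented, and the six-to-three collapse mirrors the grouping the authors suggest.

```python
# Hedged sketch: multi-rater agreement and accuracy for Bloom's-level
# classifications. The study does not state which agreement statistic
# it used; Fleiss' kappa is one common choice for several raters.
# The ratings, rater count, and answer key below are invented.
import numpy as np
from statsmodels.stats import inter_rater as irr

# Rows = six exam questions, columns = four hypothetical faculty raters.
# Levels coded 0-5: Knowledge, Comprehension, Application, Analysis,
# Synthesis, Evaluation.
ratings = np.array([
    [0, 0, 1, 0],
    [1, 2, 2, 1],
    [2, 2, 3, 2],
    [3, 4, 3, 3],
    [4, 4, 5, 4],
    [5, 5, 5, 4],
])
counts, _ = irr.aggregate_raters(ratings)  # questions x categories
print("Fleiss' kappa:", irr.fleiss_kappa(counts, method="fleiss"))

key = np.array([0, 1, 2, 3, 4, 5])         # assumed gold-standard key
print("6-level accuracy:", (ratings == key[:, None]).mean())

# Collapse to the three tiers the authors propose: Knowledge,
# Comprehension/Application, Analysis/Synthesis/Evaluation.
tier = np.array([0, 1, 1, 2, 2, 2])
print("3-tier accuracy:", (tier[ratings] == tier[key][:, None]).mean())
```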

    Comparing Student Performance on the Old vs New Versions of the NAPLEX

    Objective. To determine whether the new 2016 version of the North American Pharmacist Licensure Examination (NAPLEX) affected scores when controlling for student performance on other measures, using data from one institution. Methods. There were 201 records from the classes of 2014-2016. Doubly robust estimation using weighted propensity scores was used to compare NAPLEX scaled scores and pass rates while accounting for student performance on other measures. Of the potential predictors of student performance considered (Pharmacy Curricular Outcomes Assessment (PCOA) scores, scaled composite scores from the Pharmacy College Admission Test (PCAT), and P3 grade point average (GPA)), only PCOA and P3 GPA were found to be appropriate for propensity scoring. Results. The weighted NAPLEX scaled scores did not drop significantly from the old (2014-2015) to the new (2016) version of the NAPLEX. The change in pass rates between the new and old versions was also non-significant. Conclusion. Using data from one institution, the new version of the NAPLEX itself did not have a significant effect on NAPLEX scores or first-time pass rates when controlling for student performance on other measures. Colleges are encouraged to repeat this analysis with pooled data and larger sample sizes.
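    The abstract names the method (doubly robust estimation with weighted propensity scores) but not its implementation. The sketch below shows one standard doubly robust estimator, augmented inverse propensity weighting (AIPW), on simulated data; the covariates, cohort assignment, and outcome model are hypothetical and only mirror the measures the study mentions (PCOA, P3 GPA, NAPLEX scaled score).

```python
# Hedged sketch: augmented inverse propensity weighting (AIPW), one
# standard doubly robust estimator. The study's exact specification is
# not given in the abstract; all data below are simulated, and only the
# variable names (PCOA, P3 GPA, NAPLEX score) mirror the study.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 201                                  # matches the study's record count
pcoa = rng.normal(350, 30, n)            # PCOA scaled score (simulated)
p3_gpa = rng.normal(3.3, 0.3, n)         # P3 GPA (simulated)
X = np.column_stack([pcoa, p3_gpa])
t = rng.random(n) < 0.4                  # True = took the new 2016 NAPLEX
y = 0.3 * pcoa + 20 * p3_gpa + rng.normal(0, 10, n)  # NAPLEX scaled score

# Propensity model: probability of being in the new-version cohort.
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

# Outcome regressions fit separately within each cohort.
mu1 = LinearRegression().fit(X[t], y[t]).predict(X)
mu0 = LinearRegression().fit(X[~t], y[~t]).predict(X)

# AIPW estimate of the average effect of the new version on scores.
ate = (np.mean(t * (y - mu1) / ps + mu1)
       - np.mean(~t * (y - mu0) / (1 - ps) + mu0))
print("estimated effect of the new NAPLEX version:", ate)
```

    A doubly robust estimator remains consistent if either the propensity model or the outcome model is correctly specified, which makes it a reasonable choice for small observational samples such as a single institution's records.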

    The Validation of an OSCE Assessment to Measure Student Pharmacist Pre-APPE Competencies

    Abstract available in the American Journal of Pharmaceutical Education.

    Appendix 3: Comparison of Pharmacists’ Scoring of Fall Risk to Other Fall Risk Assessments

    Deidentified participant case IDs linked to the deidentified pharmacists' scoring. Appendix to the article: Panus, P.C., Covert, K.L., Odle, B.L., Karpen, S.C., Walls, Z.F., and Hall, C.D. (2022) Comparison of pharmacists' scoring of fall risk to other fall risk assessments. Journal of the American Pharmacists Association, 62, 505-511. https://doi.org/10.1016/j.japh.2021.11.00