Clinical audit project in undergraduate medical education curriculum: An assessment validation study
Objectives: To evaluate the merit of the Clinical Audit Project (CAP) in an assessment program for undergraduate medical education using a systematic assessment validation framework.
Methods: A cross-sectional assessment validation study at one medical school in Western Australia, with retrospective qualitative analysis of the design, development, implementation and outcomes of the CAP, and quantitative analysis of assessment data from four cohorts of medical students (2011-2014).
Results: The CAP is fit for purpose, with clear external and internal alignment to expected medical graduate outcomes. Substantive validity in students’ and examiners’ response processes is ensured through relevant methodological and cognitive processes. Multiple validity features are built into the design, planning and implementation of the CAP. There is evidence of high internal consistency reliability of CAP scores (Cronbach’s alpha > 0.8) and inter-examiner consistency reliability (intra-class correlation > 0.7). Aggregation of CAP scores is psychometrically sound, with high internal consistency indicating one common underlying construct. Significant but moderate correlations between CAP scores and scores from other assessment modalities indicate validity of extrapolation and alignment between the CAP and the overall target outcomes for medical graduates. Standard setting, score equating and fair decision rules justify the consequential validity of CAP score interpretation and use.
Conclusions: This study provides evidence demonstrating that the CAP is a meaningful and valid component of the assessment program. This systematic validation framework can be adopted at all levels of assessment in medical education, from an individual assessment modality to the validation of an assessment program as a whole.
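The internal-consistency figure the abstract reports (Cronbach’s alpha > 0.8) can be made concrete with a short calculation. Below is a minimal sketch in Python, assuming a hypothetical students-by-items score matrix generated for illustration; it is not the study’s actual CAP data or analysis code.

```python
# A minimal sketch of the internal-consistency check reported above,
# assuming a hypothetical score matrix: rows = students, columns = CAP items.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data only: 100 students x 6 marking criteria, where all items
# reflect one shared construct plus noise (hence a high expected alpha).
rng = np.random.default_rng(0)
trait = rng.normal(size=(100, 1))                 # shared underlying construct
scores = trait + 0.5 * rng.normal(size=(100, 6))  # items = construct + noise
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

A high alpha on such a matrix is what licenses the abstract’s claim that aggregating item scores taps one common underlying construct; the inter-examiner intra-class correlation would be computed analogously over examiner-by-student ratings.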
Practitioner Track Proceedings of the 6th International Learning Analytics & Knowledge Conference (LAK16)
Practitioners spearhead a significant portion of learning analytics, relying on implementation and experimentation rather than on traditional academic research. Both approaches help to improve the state of the art. The LAK conference has created a practitioner track for submissions, which first ran in 2015 as an alternative to the researcher track.
The primary goal of the practitioner track is to share thoughts and findings that stem from learning analytics project implementations. While both large and small implementations are considered, all practitioner track submissions are required to relate to initiatives that are designed for large-scale and/or long-term use (as opposed to research-focused initiatives). Other guidelines include:
• Implementation track record: The project should have been used by an institution or have been deployed on a learning site. There are no hard guidelines about user numbers or how long the project has been running.
• Learning/education related: Submissions have to describe work that addresses learning/academic analytics, either at an educational institution or in an area (such as corporate training, health care or informal learning) where the goal is to improve the learning environment or learning outcomes.
• Institutional involvement: Neither submissions nor presentations have to include a named person from an academic institution. However, all submissions have to include information collected from people who have used the tool or initiative in a learning environment (such as faculty, students, administrators and trainees).
• No sales pitches: While submissions from commercial suppliers are welcome, reviewers do not accept overt (or covert) sales pitches. Reviewers look for evidence that a presentation will take into account challenges faced, problems that have arisen, and/or user feedback that needs to be addressed.
Submissions are limited to 1,200 words, including an abstract, a summary of deployment with end users, and a full description. Most papers in the proceedings are therefore short and often informal, although some authors chose to extend their papers once they had been accepted.
Papers accepted in 2016 fell into two categories.
• Practitioner Presentations: Presentation sessions are designed to focus on deployment of a single learning analytics tool or initiative.
• Technology Showcase: The Technology Showcase event enables practitioners to demonstrate new and emerging learning analytics technologies that they are piloting or deploying.
Both types of paper are included in these proceedings.
Creating and validating self-efficacy scales for students
Purpose: student radiographers must possess certain abilities to progress in their training; these can be assessed in various ways. Bandura’s social cognitive theory identifies self-efficacy as a key psychological construct with regard to how people adapt to environments where new skills are developed. Use of this construct is common in the health care literature, but little has been noted within the radiographic literature. The authors sought to develop a self-efficacy scale for student radiographers.
Method: the scale was developed following a standard format. An initial pool of 80 items was generated and psychometric analysis was used to reduce this to 68 items. Radiography students drawn from 7 universities (N=198) participated in validating the scale.
Results: the psychometric properties of the scale were examined using analysis of variance (ANOVA), factor analysis and item analysis. ANOVA demonstrated an acceptable level of known group validity: first-year, second-year, and third-year students all scored significantly differently (P=.035) from one another. Factor analysis identified the most significant factor as confidence in image appraisal. The scale was refined using item and factor analysis to produce the final 25-item scale.
Conclusion: this is the first published domain-specific self-efficacy scale validated specifically for student radiographers. In its current format it may have pedagogical utility. The authors are currently extending the work to add to the scale’s validity and embedding it into student training to assess its predictive value.
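The known-group validity check reported above rests on a one-way ANOVA across year groups. Below is a minimal sketch, assuming hypothetical cohort score distributions invented for illustration rather than the study’s data.

```python
# A minimal sketch of the known-groups check: do year cohorts differ
# significantly on their self-efficacy scale totals? Data are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
year1 = rng.normal(70, 10, 60)   # hypothetical scale totals per cohort
year2 = rng.normal(75, 10, 70)
year3 = rng.normal(80, 10, 68)

f_stat, p_value = stats.f_oneway(year1, year2, year3)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p < .05 supports group differences
```

A significant result here, as the abstract reports (P=.035), indicates that the scale distinguishes between groups that theory predicts should differ, which is the essence of known-group validity.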
RiPLE: Recommendation in Peer-Learning Environments Based on Knowledge Gaps and Interests
Various forms of Peer-Learning Environments are increasingly being used in post-secondary education, often to help build repositories of student-generated learning objects. However, large classes can result in an extensive repository, which can make it more challenging for students to search for suitable objects that both reflect their interests and address their knowledge gaps. Recommender Systems for Technology Enhanced Learning (RecSysTEL) offer a potential solution to this problem by providing sophisticated filtering techniques to help students find the resources they need in a timely manner. Here, a new RecSysTEL for Recommendation in Peer-Learning Environments (RiPLE) is presented. The approach uses a collaborative filtering algorithm based upon matrix factorization to create personalized recommendations for individual students that address their interests and their current knowledge gaps. The approach is validated using both synthetic and real data sets. The results are promising, indicating that RiPLE is able to provide sensible personalized recommendations for both regular and cold-start users under reasonable assumptions about parameters and user behavior.
Comment: 25 pages, 7 figures. The paper is accepted for publication in the Journal of Educational Data Mining.
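The abstract names matrix factorization as the collaborative filtering engine. Below is a minimal sketch of that general technique using plain stochastic gradient descent on synthetic interaction data; the loss, hyperparameters and variable names are assumptions for illustration, not the authors’ RiPLE implementation.

```python
# A minimal sketch of matrix-factorization collaborative filtering:
# factor a sparse student x learning-object matrix into latent factors,
# then rank unseen objects by predicted relevance. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_students, n_objects, k = 50, 200, 8

# Sparse interaction matrix; ~5% of entries observed (ratings 1-5).
R = np.full((n_students, n_objects), np.nan)
mask = rng.random(R.shape) < 0.05
R[mask] = rng.integers(1, 6, size=mask.sum())

P = 0.1 * rng.normal(size=(n_students, k))   # latent student factors
Q = 0.1 * rng.normal(size=(n_objects, k))    # latent object factors
lr, reg = 0.01, 0.1                          # learning rate, L2 penalty
rows, cols = np.where(mask)

for _ in range(50):                          # SGD over observed entries only
    for u, i in zip(rows, cols):
        err = R[u, i] - P[u] @ Q[i]          # prediction error
        pu = P[u].copy()                     # update both from old values
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * pu - reg * Q[i])

scores = P @ Q.T                             # predicted relevance for all pairs
scores[mask] = -np.inf                       # hide already-seen objects
top5 = np.argsort(scores[0])[::-1][:5]       # top recommendations, student 0
print("recommended objects for student 0:", top5)
```

The regularization term is the standard way such models cope with sparse data; handling cold-start users, as RiPLE claims to, typically requires additional side information beyond this bare factorization.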