    The Motivational Thought Frequency scales for increased physical activity and reduced high-energy snacking

    The Motivational Thought Frequency (MTF) Scale has previously demonstrated a coherent four-factor internal structure (Intensity, Incentives Imagery, Self-Efficacy Imagery, Availability) in the control of alcohol and the effective self-management of diabetes. The current research tested the factorial structure and concurrent associations of versions of the MTF for increasing physical activity (MTF-PA) and reducing high-energy snacking (MTF-S). Study 1 examined the internal structure of the MTF-PA and its concurrent relationship with retrospective reports of vigorous physical activity. Study 2 attempted to replicate these results, also testing the internal structure of the MTF-S and examining whether higher MTF-S scores were found in participants scoring more highly on a screening test for eating disorders. In Study 1, 626 participants completed the MTF-PA online and reported minutes of activity in the previous week. In Study 2, 313 participants undertook an online survey that also included the MTF-S and the Eating Attitudes Test (EAT-26). The studies replicated acceptable fit for the four-factor structure of both the MTF-PA and the MTF-S. Significant associations of the MTF-PA with recent vigorous activity and of the MTF-S with EAT-26 scores were observed, although the associations were stronger in Study 1. Strong preliminary support for both the MTF-PA and MTF-S was obtained, although more data on their predictive validity are needed. The association of the MTF-S with potential eating disorder illustrates that high scores may not always be beneficial to health maintenance.
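
    The "acceptable fit" reported above comes from a confirmatory factor analysis (CFA) of the four-factor model. As a minimal sketch of such an analysis, the snippet below fits a four-factor CFA in Python; the semopy package, the item names (i1-i12), and the three-items-per-factor layout are all assumptions for illustration, since the abstract does not specify the software or item assignments.

        # Hedged sketch: four-factor CFA in the spirit of the MTF analyses.
        # Item names and factor-item assignments are hypothetical.
        import pandas as pd
        import semopy

        model_desc = """
        Intensity           =~ i1 + i2 + i3
        IncentivesImagery   =~ i4 + i5 + i6
        SelfEfficacyImagery =~ i7 + i8 + i9
        Availability        =~ i10 + i11 + i12
        """

        df = pd.read_csv("mtf_pa_responses.csv")  # hypothetical item-level data

        model = semopy.Model(model_desc)
        model.fit(df)

        # Fit indices (CFI, TLI, RMSEA, ...) are what "acceptable fit" refers to
        print(semopy.calc_stats(model).T)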

    Application of validity theory and methodology to patient-reported outcome measures (PROMs): building an argument for validity

    Mitigating the Effect of Language in the Assessment of Science: A study of English-language learners in primary classrooms in the United Kingdom

    Children from homes where English is not the primary language constitute a significant and increasing proportion of classrooms worldwide. Providing these English language learners (ELLs) with equitable assessment opportunities is a challenge. We analyse the performance of 485 students aged 7-11 years, both native English speakers and ELLs, across 5 UK schools on standardized science assessment tasks. Logistic regression with random effects assesses the impact of English language proficiency, and its interactions with question traits, on performance. The traits investigated were: question focus; need for active language production; presence/absence of visuals; and question difficulty. Results demonstrated that, while ELLs persistently performed more poorly, the gap to their native-speaking peers depended significantly on assessment traits. ELLs were particularly disadvantaged when responses required active language production and/or when they were assessed on specific scientific vocabulary. Visual prompts did not help ELL performance. There was no evidence of an interaction between topic difficulty and language ability, suggesting that lower ELL performance is not related to the capacity to understand advanced topics. We propose that assessment should permit flexibility in language choice for ELLs with low English language proficiency, while simultaneously recommending that subject-specific teaching of scientific language begin at lower stages of schooling.
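
    As a rough illustration of the modelling approach described above, the sketch below fits a logistic regression of item-level correctness on ELL status and its interactions with question traits, using statsmodels. All column names are hypothetical, and the published analysis additionally included random effects (e.g. per pupil and per question), which are omitted here for brevity.

        # Hedged sketch: fixed-effects core of the ELL x question-trait analysis.
        # One row per pupil-question pair; column names are hypothetical.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("science_item_responses.csv")

        model = smf.logit(
            "correct ~ ell * active_production + ell * has_visual + ell * difficulty",
            data=df,
        ).fit()

        # A significant ell:active_production term would indicate that questions
        # requiring active language production widen the ELL performance gap.
        print(model.summary())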

    Using Differential Item Functioning to evaluate potential bias in a high stakes postgraduate knowledge based assessment

    BACKGROUND: Fairness is a critical component of defensible assessment. Candidates should perform according to ability, without influence from background characteristics such as ethnicity or sex. However, performance differs by candidate background in many assessment environments. Many potential causes of such differences exist, and examinations must be routinely analysed to ensure they do not present inappropriate progression barriers for any candidate group. By analysing the individual questions of an examination through techniques such as Differential Item Functioning (DIF), we can test whether a subset of unfair questions explains group-level differences. Such items can then be revised or removed.
    METHODS: We used DIF to investigate fairness for 13,694 candidates sitting a major international summative postgraduate examination in internal medicine. We compared (a) ethnically white UK graduates against ethnically non-white UK graduates and (b) male UK graduates against female UK graduates. DIF was used to test 2773 questions across 14 sittings.
    RESULTS: Of the 2773 questions, eight (0.29%) showed notable DIF after correcting for multiple comparisons: seven medium effects and one large effect. Blinded analysis of these questions by a panel of clinician assessors identified no plausible explanations for the differences. These questions were removed from the question bank, and we present them here to share knowledge of questions with DIF. They did not significantly impact the overall performance of the cohort. Group-level differences in performance between the groups we studied in this examination cannot be explained by a subset of unfair questions.
    CONCLUSIONS: DIF helps explore fairness in assessment at the question level. This is especially important in high-stakes assessment, where a small number of unfair questions may adversely impact the passing rates of some groups. However, very few questions exhibited notable DIF, so differences in passing rates for the groups we studied cannot be explained by unfairness at the question level.
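
    The abstract does not name the specific DIF technique used; one standard screening method is the Mantel-Haenszel procedure, which stratifies candidates by total score and pools the group-by-correctness odds ratios across strata. The sketch below applies it to a single question using statsmodels; the column names and the five-stratum split are assumptions for illustration.

        # Hedged sketch: Mantel-Haenszel DIF screening for one exam question.
        # Expected columns (hypothetical): group (0/1), total_score, q1_correct (0/1).
        import numpy as np
        import pandas as pd
        from statsmodels.stats.contingency_tables import StratifiedTable

        df = pd.read_csv("exam_responses.csv")

        # Build a 2x2 (group x correct) table within each total-score stratum
        strata = pd.qcut(df["total_score"], 5, duplicates="drop")
        tables = []
        for _, s in df.groupby(strata):
            tab = pd.crosstab(s["group"], s["q1_correct"])
            if tab.shape == (2, 2):
                tables.append(tab.to_numpy())

        st = StratifiedTable(tables)
        delta = -2.35 * np.log(st.oddsratio_pooled)  # ETS delta metric

        # ETS convention: |delta| < 1 negligible (A), 1-1.5 moderate (B), > 1.5
        # large (C), roughly the "medium" and "large" effect labels used above.
        print(f"Pooled MH odds ratio = {st.oddsratio_pooled:.2f}, delta = {delta:.2f}")
        print(st.test_null_odds())  # Cochran-Mantel-Haenszel test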