
    The Motivational Thought Frequency scales for increased physical activity and reduced high-energy snacking

    The Motivational Thought Frequency (MTF) Scale has previously demonstrated a coherent four-factor internal structure (Intensity, Incentives Imagery, Self-Efficacy Imagery, Availability) in the control of alcohol and the effective self-management of diabetes. The current research tested the factorial structure and concurrent associations of versions of the MTF for increasing physical activity (MTF-PA) and reducing high-energy snacks (MTF-S). Study 1 examined the internal structure of the MTF-PA and its concurrent relationship with retrospective reports of vigorous physical activity. Study 2 attempted to replicate these results, also testing the internal structure of the MTF-S and examining whether higher MTF-S scores were found in participants scoring more highly on a screening test for eating disorders. In Study 1, 626 participants completed the MTF-PA online and reported minutes of activity in the previous week. In Study 2, 313 participants undertook an online survey that also included the MTF-S and the Eating Attitudes Test (EAT-26). The studies replicated the acceptable fit of the four-factor structure for both the MTF-PA and the MTF-S. Significant associations of the MTF-PA with recent vigorous activity and of the MTF-S with EAT-26 scores were seen, although the associations were stronger in Study 1. Strong preliminary support for both the MTF-PA and the MTF-S was obtained, although more data on their predictive validity are needed. The association of the MTF-S with a potential eating disorder illustrates that high scores may not always be beneficial to health maintenance.
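
    The four-factor check described above is a standard confirmatory factor analysis. As a rough illustration only, the sketch below specifies such a model in Python with the semopy package; the factor labels follow the abstract, but the item names (int1, inc1, se1, avail1, ...) are hypothetical placeholders, since the actual MTF items are not listed here.

```python
# Minimal CFA sketch for a four-factor structure like the MTF's, using semopy's
# lavaan-style model syntax. Item column names are hypothetical placeholders.
import pandas as pd
import semopy

MODEL_DESC = """
Intensity           =~ int1 + int2 + int3
IncentivesImagery   =~ inc1 + inc2 + inc3
SelfEfficacyImagery =~ se1 + se2 + se3
Availability        =~ avail1 + avail2 + avail3
"""

def fit_four_factor_cfa(data: pd.DataFrame) -> pd.DataFrame:
    """Fit the CFA and return conventional fit indices (CFI, TLI, RMSEA, ...)."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)                  # maximum-likelihood estimation by default
    return semopy.calc_stats(model)  # one-row DataFrame of fit statistics

# Usage, assuming a CSV with one column per item:
# stats = fit_four_factor_cfa(pd.read_csv("mtf_pa_responses.csv"))
# print(stats[["CFI", "TLI", "RMSEA"]])
```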

    Linking tests of English for academic purposes to the CEFR: the score user’s perspective

    The Common European Framework of Reference for Languages (CEFR) is widely used in setting language proficiency requirements, including for international students seeking access to university courses taught in English. When different language examinations have been related to the CEFR, the process is claimed to help score users, such as university admissions staff, to compare and evaluate these examinations as tools for selecting qualified applicants. This study analyses the linking claims made for four internationally recognised tests of English widely used in university admissions. It uses the Council of Europe’s (2009) suggested stages of specification, standard setting, and empirical validation to frame an evaluation of the extent to which, in this context, the CEFR has fulfilled its potential to “facilitate comparisons between different systems of qualifications.” Findings show that testing agencies make little use of CEFR categories to explain test content; represent the relationships between their tests and the framework in different terms; and arrive at conflicting conclusions about the correspondences between test scores and CEFR levels. This raises questions about the capacity of the CEFR to communicate competing views of a test construct within a coherent overarching structure.

    Rasch scaling procedures for informing development of a valid Fetal Surveillance Education Program multiple-choice assessment

    Background: It is widely recognised that deficiencies in fetal surveillance practice continue to contribute significantly to the burden of adverse outcomes. This has prompted the development of evidence-based clinical practice guidelines by the Royal Australian and New Zealand College of Obstetricians and Gynaecologists and an associated Fetal Surveillance Education Program to deliver the associated learning. This article describes initial steps in the validation of a corresponding multiple-choice assessment of the relevant educational outcomes through a combination of item response modelling and expert judgement.
    Methods: The Rasch item response model was employed for item and test analysis and to empirically derive the substantive interpretation of the assessment variable. This interpretation was then compared to the hierarchy of competencies specified a priori by a team of eight subject-matter experts. Classical Test Theory analyses were also conducted.
    Results: A high level of agreement between the hypothesised and derived variable provided evidence of construct validity. Item and test indices from Rasch analysis and Classical Test Theory analysis suggested that the current test form was of moderate quality. However, the analyses made clear the required steps for establishing a valid assessment of sufficient psychometric quality. These steps included: increasing the number of items from 40 to 50 in the first instance, reviewing ineffective items, targeting new items to specific content and difficulty gaps, and formalising the assessment blueprint in light of empirical information relating item structure to item difficulty.
    Conclusion: The application of the Rasch model for criterion-referenced assessment validation with an expert stakeholder group is herein described. Recommendations for subsequent item and test construction are also outlined in this article.
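
    The Rasch model at the heart of this analysis is compact enough to state directly: the probability that person n answers item i correctly is exp(theta_n - b_i) / (1 + exp(theta_n - b_i)), where theta_n is person ability and b_i is item difficulty. The sketch below is a crude joint maximum-likelihood illustration in plain NumPy/SciPy, not the estimation software used in the study, and all data in it are invented for demonstration.

```python
# Rough Rasch-model sketch: joint maximum-likelihood estimation of person
# abilities and item difficulties with SciPy. Illustrative only; production
# Rasch software uses more careful estimation (e.g. conditional or marginal ML).
import numpy as np
from scipy.optimize import minimize

def rasch_prob(theta, b):
    """P(correct) under the Rasch model: logistic of (ability - difficulty)."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def neg_log_likelihood(params, responses):
    n_persons, n_items = responses.shape
    theta, b = params[:n_persons], params[n_persons:]
    p = rasch_prob(theta, b)
    eps = 1e-9  # guard against log(0)
    return -np.sum(responses * np.log(p + eps) + (1 - responses) * np.log(1 - p + eps))

def fit_rasch_jml(responses):
    """Return (abilities, centred difficulties) for a persons x items 0/1 matrix."""
    n_persons, n_items = responses.shape
    start = np.zeros(n_persons + n_items)
    result = minimize(neg_log_likelihood, start, args=(responses,), method="L-BFGS-B")
    theta, b = result.x[:n_persons], result.x[n_persons:]
    return theta, b - b.mean()  # centre difficulties to fix the scale origin

# Usage with invented data (40 items, matching the current test form length):
# rng = np.random.default_rng(0)
# responses = (rng.random((200, 40)) > 0.4).astype(int)
# abilities, difficulties = fit_rasch_jml(responses)
```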

    Creating an Instrument to Measure Student Response to Instructional Practices

    Background: Calls for the reform of education in science, technology, engineering, and mathematics (STEM) have inspired many instructional innovations, some research based. Yet adoption of such instruction has been slow. Research has suggested that students’ response may significantly affect an instructor’s willingness to adopt different types of instruction.
    Purpose: We created the Student Response to Instructional Practices (StRIP) instrument to measure the effects of several variables on student response to instructional practices. We discuss the step-by-step process for creating this instrument.
    Design/Method: The development process had six steps: item generation and construct development, validity testing, implementation, exploratory factor analysis, confirmatory factor analysis, and instrument modification and replication. We discuss pilot testing of the initial instrument, construct development, and validation using exploratory and confirmatory factor analyses.
    Results: This process produced 47 items measuring three parts of our framework. Types of instruction separated into four factors (interactive, constructive, active, and passive); strategies for using in-class activities into two factors (explanation and facilitation); and student responses to instruction into five factors (value, positivity, participation, distraction, and evaluation).
    Conclusions: We describe the design process and final results for our instrument, a useful tool for understanding the relationship between type of instruction and students’ response.
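
    The exploratory step in the development process above can be illustrated briefly. The sketch below runs an exploratory factor analysis with the factor_analyzer package, assuming a data frame with one column per survey item; the five-factor, oblique-rotation settings echo the student-response factors reported in the abstract, but the column names and settings are assumptions, not the authors' actual analysis.

```python
# Minimal EFA sketch for the exploratory step of instrument development,
# using the factor_analyzer package. Data and settings are illustrative.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def explore_structure(responses: pd.DataFrame, n_factors: int = 5) -> pd.DataFrame:
    """Run an EFA with an oblique rotation and return the rotated loading matrix."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
    fa.fit(responses)
    return pd.DataFrame(
        fa.loadings_,
        index=responses.columns,
        columns=[f"F{i + 1}" for i in range(n_factors)],
    ).round(2)

# Usage, assuming one column per Likert-type item:
# loadings = explore_structure(pd.read_csv("strip_items.csv"))
# print(loadings)  # loadings above roughly .40 guide each factor's interpretation
```

    A confirmatory follow-up on an independent sample would then refit the retained factors as a fixed measurement model before judging the instrument's structure settled.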