Practitioners' views and barriers to implementation of the Keeping Birth Normal tool: A pilot study
Background:
Poor implementation of evidence in practice has been reported as a reason behind the continued rise in unnecessary interventions in labour and birth. A validated tool can enable the systematic measurement of care to target interventions to support implementation of evidence. The Keeping Birth Normal tool has been developed to measure and support implementation of evidence to reduce unnecessary interventions in labour and birth.
Aims:
This pilot sought the views of midwives about the usefulness and relevance of the Keeping Birth Normal tool in measuring and supporting practice; it also identified barriers to implementation.
Methods:
Five midwives supported by five preceptors tested the tool on a delivery suite and birth centre in a local NHS Trust. Mixed methods were employed. Participants completed a questionnaire about the relevance and usefulness of the tool. Semi-structured interviews explored participants' experience of using the tool in practice.
Findings:
The domains and items in the tool were viewed as highly relevant to reducing unnecessary interventions. Not all midwives were open to their practice being observed, but those who were reported benefits from critical reflection and role-modelling to support implementation. An important barrier is a lack of expertise among preceptors to support the implementation of skills to reduce unnecessary interventions. This includes skills in the use of rating scales and critical reflection. Where expertise is available, there is a lack of protected time for such structured supportive activity. Norms in birth environments that do not promote normal birth are another important barrier.
Conclusions:
Midwives found the items in the tool relevant to evidence-informed skills to reduce unnecessary interventions and useful for measuring and supporting implementation. To validate and generalise these findings, further evidence about the quality of items needs to be gathered. Successful implementation of the tool requires preceptors skilled in care that reduces unnecessary interventions, using rating scales, role-modelling and critical reflection. Such structured preceptorship requires protected time and can only thrive in a culture that promotes normal birth.
The Motivational Thought Frequency scales for increased physical activity and reduced high-energy snacking
The Motivational Thought Frequency (MTF) Scale has previously demonstrated a coherent four-factor internal structure (Intensity, Incentives Imagery, Self-Efficacy Imagery, Availability) in control of alcohol and effective self-management of diabetes. The current research tested the factorial structure and concurrent associations of versions of the MTF for increasing physical activity (MTF-PA) and reducing high-energy snacks (MTF-S).
Study 1 examined the internal structure of the MTF-PA and its concurrent relationship with retrospective reports of vigorous physical activity. Study 2 attempted to replicate these results, also testing the internal structure of the MTF-S and examining whether higher MTF-S scores were found in participants scoring more highly on a screening test for eating disorder.
In Study 1, 626 participants completed the MTF-PA online and reported minutes of activity in the previous week. In Study 2, 313 participants undertook an online survey that also included the MTF-S and the Eating Attitudes Test (EAT-26).
The studies replicated acceptable fit for the four-factor structure on the MTF-PA and MTF-S. Significant associations of the MTF-PA with recent vigorous activity and of the MTF-S with EAT-26 scores were seen, although associations were stronger in Study 1.
Strong preliminary support for both the MTF-PA and MTF-S was obtained, although more data on their predictive validity are needed. Associations of the MTF-S with potential eating disorder illustrate that high scores may not always be beneficial to health maintenance.
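The concurrent associations reported above (e.g. MTF-PA scores with minutes of recent vigorous activity) amount to correlations between scale totals and behavioural reports. As a minimal stdlib sketch of that kind of check (not the authors' analysis code; the data below are invented for illustration), a Pearson correlation can be computed as:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance numerator and the two standard-deviation terms.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical MTF-PA totals paired with self-reported activity minutes.
mtf_pa = [12, 25, 31, 18, 40]
minutes = [30, 90, 120, 45, 150]
print(round(pearson_r(mtf_pa, minutes), 3))
```

In practice the published studies fitted confirmatory factor models and reported fit statistics; this sketch only illustrates the simpler concurrent-association step.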
Using Differential Item Functioning to evaluate potential bias in a high stakes postgraduate knowledge based assessment
BACKGROUND: Fairness is a critical component of defensible assessment. Candidates should perform according to ability without influence from background characteristics such as ethnicity or sex. However, performance differs by candidate background in many assessment environments. Many potential causes of such differences exist, and examinations must be routinely analysed to ensure they do not present inappropriate progression barriers for any candidate group. By analysing the individual questions of an examination through techniques such as Differential Item Functioning (DIF), we can test whether a subset of unfair questions explains group-level differences. Such items can then be revised or removed.
METHODS: We used DIF to investigate fairness for 13,694 candidates sitting a major international summative postgraduate examination in internal medicine. We compared (a) ethnically white UK graduates against ethnically non-white UK graduates and (b) male UK graduates against female UK graduates. DIF was used to test 2773 questions across 14 sittings.
RESULTS: Across 2773 questions, eight (0.29%) showed notable DIF after correcting for multiple comparisons: seven medium effects and one large effect. Blinded analysis of these questions by a panel of clinician assessors identified no plausible explanations for the differences. These questions were removed from the question bank and we present them here to share knowledge of questions with DIF. These questions did not significantly impact the overall performance of the cohort. Group-level differences in performance between the groups we studied in this examination cannot be explained by a subset of unfair questions.
CONCLUSIONS: DIF helps explore fairness in assessment at the question level. This is especially important in high-stakes assessment, where a small number of unfair questions may adversely impact the passing rates of some groups. However, very few questions exhibited notable DIF, so differences in passing rates for the groups we studied cannot be explained by unfairness at the question level.
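The Mantel-Haenszel procedure is one common way to flag DIF of the kind described above (the abstract does not state which DIF method the study used, so this is a generic sketch, not the authors' analysis). Candidates are stratified by total test score; within each stratum a 2x2 table of group (reference vs focal) by item correctness is formed, and a pooled odds ratio summarises whether one group has systematically higher odds of answering the item correctly at matched ability. The ETS delta transform is often applied to classify effect size:

```python
import math
from collections import defaultdict

def mantel_haenszel_dif(responses):
    """responses: iterable of (group, total_score, correct) tuples,
    with group in {"ref", "focal"} and correct in {0, 1}.
    Returns (MH odds ratio, ETS delta) for one item."""
    strata = defaultdict(lambda: {"A": 0, "B": 0, "C": 0, "D": 0})
    for group, score, correct in responses:
        cell = strata[score]  # stratify by matched total score
        if group == "ref":
            cell["A" if correct else "B"] += 1  # ref right / wrong
        else:
            cell["C" if correct else "D"] += 1  # focal right / wrong
    num = den = 0.0
    for c in strata.values():
        n = c["A"] + c["B"] + c["C"] + c["D"]
        if n == 0:
            continue
        num += c["A"] * c["D"] / n
        den += c["B"] * c["C"] / n
    or_mh = num / den
    # ETS delta metric: |delta| < 1 is negligible, 1-1.5 moderate, > 1.5 large.
    delta = -2.35 * math.log(or_mh)
    return or_mh, delta
```

When both groups have equal odds of success within every score stratum, the odds ratio is 1 and delta is 0, i.e. no DIF; correcting for multiple comparisons across thousands of items, as the study did, would be layered on top of per-item statistics like this.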
Mitigating the Effect of Language in the Assessment of Science: A study of English-language learners in primary classrooms in the United Kingdom
Children coming from homes where English is not the primary language constitute a significant and increasing proportion of classrooms worldwide. Providing these English language learners (ELLs) with equitable assessment opportunities is a challenge. We analyse the performance of 485 students, both English native speakers and ELLs, across 5 schools within the UK in the 7-11 year age group on standardized Science assessment tasks. Logistic regression with random effects assesses the impact of English language proficiency, and its interactions with question traits, on performance. Traits investigated were: question focus; need for active language production; presence/absence of visuals; and question difficulty. Results demonstrated that, while ELLs persistently performed more poorly, the gap to their native speaking peers depended significantly upon assessment traits. ELLs were particularly disadvantaged when responses required active language production and/or when assessed on specific scientific vocabulary. Visual prompts did not help ELL performance. There was no evidence of an interaction between topic difficulty and language ability, suggesting that lower ELL performance is not related to capacity to understand advanced topics. We propose that assessment should permit flexibility in language choice for ELLs with low English language proficiency, while simultaneously recommending that subject-specific teaching of scientific language begin at lower stages of schooling.
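The interaction the study tests (language proficiency by question trait) can be illustrated on the log-odds scale as a difference in differences: how much wider the native/ELL gap is on items requiring active language production than on other items. This stdlib sketch uses invented counts, not the study's data, and omits the random effects of the actual mixed-model analysis:

```python
import math

def log_odds(correct, total):
    """Log-odds of a correct response from simple counts."""
    p = correct / total
    return math.log(p / (1 - p))

def interaction_effect(native, ell):
    """native/ell: dicts mapping a question trait ("plain" or
    "productive") to (correct, total) counts. Returns the
    difference-in-differences of log-odds: how much larger the
    native-vs-ELL gap is on active-production items."""
    gap_plain = log_odds(*native["plain"]) - log_odds(*ell["plain"])
    gap_prod = log_odds(*native["productive"]) - log_odds(*ell["productive"])
    return gap_prod - gap_plain

# Hypothetical counts: the ELL deficit widens on productive items.
native = {"plain": (80, 100), "productive": (80, 100)}
ell = {"plain": (70, 100), "productive": (50, 100)}
print(round(interaction_effect(native, ell), 3))
```

A positive value corresponds to the pattern the study reports: the disadvantage for ELLs is concentrated in items demanding active language production. In a regression framing, this quantity is the coefficient on the proficiency-by-trait interaction term.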