50 research outputs found
Population‐based trends in invasive prenatal diagnosis for ultrasound‐based indications: two decades of change from 1994 to 2016
A retrospective population-based study of induction of labour trends and associated factors among Aboriginal and non-Aboriginal mothers in the Northern Territory between 2001 and 2012
Accuracy of postpartum haemorrhage data in the 2011 Victorian Perinatal Data Collection: Results of a validation study
Changes Over Time in Attitudes to Treatment and Survival Rate for Extremely Preterm Infants (23–27 Weeks' Gestational Age)
State‐wide utilization and performance of traditional and cell‐free DNA‐based prenatal testing pathways: the Victorian Perinatal Record Linkage (PeRL) study
Rasch scaling procedures for informing development of a valid Fetal Surveillance Education Program multiple-choice assessment
Abstract

Background: It is widely recognised that deficiencies in fetal surveillance practice continue to contribute significantly to the burden of adverse outcomes. This has prompted the development of evidence-based clinical practice guidelines by the Royal Australian and New Zealand College of Obstetricians and Gynaecologists, together with an associated Fetal Surveillance Education Program to deliver the related learning. This article describes initial steps in validating a corresponding multiple-choice assessment of the relevant educational outcomes, using a combination of item response modelling and expert judgement.

Methods: The Rasch item response model was employed for item and test analysis and to empirically derive the substantive interpretation of the assessment variable. This interpretation was then compared with the hierarchy of competencies specified a priori by a team of eight subject-matter experts. Classical Test Theory analyses were also conducted.

Results: A high level of agreement between the hypothesised and derived variable provided evidence of construct validity. Item and test indices from the Rasch and Classical Test Theory analyses suggested that the current test form was of moderate quality. However, the analyses made clear the steps required to establish a valid assessment of sufficient psychometric quality: increasing the number of items from 40 to 50 in the first instance, reviewing ineffective items, targeting new items to specific content and difficulty gaps, and formalising the assessment blueprint in light of empirical information relating item structure to item difficulty.

Conclusion: The application of the Rasch model to criterion-referenced assessment validation with an expert stakeholder group is described, and recommendations for subsequent item and test construction are outlined.
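The Rasch model described in the Methods above relates the probability of a correct response to the difference between a person's ability and an item's difficulty. As a rough illustration only (not the paper's actual analysis, which would use dedicated psychometric software), the following sketch fits person abilities and item difficulties to a small synthetic 0/1 response matrix by joint maximum likelihood gradient ascent; all function names and the toy data are hypothetical.

```python
import math

def rasch_p(theta, b):
    # Rasch (1PL) model: P(correct) = exp(theta - b) / (1 + exp(theta - b))
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def fit_rasch(responses, n_iter=200, lr=0.1):
    """Joint maximum likelihood estimation by gradient ascent.

    responses: persons x items matrix of 0/1 scores. Assumes no person
    or item has an all-correct / all-incorrect pattern (those estimates
    diverge under joint ML). Returns (abilities, difficulties).
    """
    n_persons = len(responses)
    n_items = len(responses[0])
    theta = [0.0] * n_persons   # person abilities
    b = [0.0] * n_items         # item difficulties
    for _ in range(n_iter):
        # d logL / d theta_i = sum_j (x_ij - p_ij)
        for i in range(n_persons):
            g = sum(responses[i][j] - rasch_p(theta[i], b[j])
                    for j in range(n_items))
            theta[i] += lr * g
        # d logL / d b_j = sum_i (p_ij - x_ij)
        for j in range(n_items):
            g = sum(rasch_p(theta[i], b[j]) - responses[i][j]
                    for i in range(n_persons))
            b[j] += lr * g
        # Fix the scale by centring difficulties at zero; shifting both
        # parameter sets by the same constant leaves theta - b unchanged.
        mean_b = sum(b) / n_items
        b = [bj - mean_b for bj in b]
        theta = [t - mean_b for t in theta]
    return theta, b

# Toy data: 5 candidates x 4 items (item 0 easiest, item 3 hardest).
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 1, 0],
    [1, 1, 0, 1],
]
theta, b = fit_rasch(responses)
print("abilities:", [round(t, 2) for t in theta])
print("difficulties:", [round(x, 2) for x in b])
```

Ordering the estimated item difficulties yields the empirical hierarchy that, in the study, was compared against the competency hierarchy specified a priori by the expert panel.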