
    Cyclical quality assurance of examinations is critical but causality needs to be attributed carefully

    The work of Khafagy and colleagues, reported in this issue, is a reminder of the need to undertake quality assurance activities for high-stakes examinations, including the individual items that make up the examination. Quality assurance provides evidence to the candidates taking the examination, and to those who rely on its results, of its validity and reliability. It is also important for another, often forgotten, group of stakeholders: the item writers.

    The dose-response relationship between training load and aerobic fitness in academy rugby union players

    Purpose: To identify the dose-response relationship between measures of training load (TL) and changes in aerobic fitness in academy rugby union players. Method: Training data from 10 academy rugby union players were collected during a 6-wk in-season period. Participants completed a lactate-threshold test that was used to assess VO2max, velocity at VO2max, velocity at 2 mmol/L (lactate threshold), and velocity at 4 mmol/L (onset of blood lactate accumulation; vOBLA) as measures of aerobic fitness. Internal-TL measures calculated were the Banister training impulse (bTRIMP), Edwards TRIMP, Lucia TRIMP, individualized TRIMP (iTRIMP), and session RPE (sRPE). External-TL measures calculated were total distance, PlayerLoad™, high-speed distance (>15 km/h), very-high-speed distance (>18 km/h), and individualized high-speed distance based on each player's vOBLA. Results: A second-order (quadratic) regression analysis found that bTRIMP (RÂČ = .78, P = .005) explained 78% of the variance in changes in VO2max and iTRIMP (RÂČ = .55, P = .063) explained 55%. All other HR-based internal-TL measures and sRPE explained less than 40% of the variance in fitness changes, and external-TL measures explained less than 42%. Conclusions: In rugby players, bTRIMP and iTRIMP display a curvilinear dose-response relationship with changes in maximal aerobic fitness.
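
    The Banister training impulse referenced above weights session duration by the fraction of heart-rate reserve used, with an exponential sex-specific multiplier. A minimal sketch of that calculation, assuming the commonly cited coefficients and purely illustrative heart-rate values:

```python
import math

def banister_trimp(duration_min, hr_avg, hr_rest, hr_max, male=True):
    """Banister training impulse (bTRIMP) for a single session.

    duration_min: session length in minutes
    hr_avg, hr_rest, hr_max: average, resting and maximal heart rate (beats/min)
    """
    # Fraction of heart-rate reserve used during the session
    delta_hr = (hr_avg - hr_rest) / (hr_max - hr_rest)
    # Sex-specific exponential weighting from Banister's original model
    y = 0.64 * math.exp(1.92 * delta_hr) if male else 0.86 * math.exp(1.67 * delta_hr)
    return duration_min * delta_hr * y

# Illustrative 60-minute session at an average of 155 beats/min
print(round(banister_trimp(60, hr_avg=155, hr_rest=50, hr_max=195), 1))
```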

    The predictive validity of the Living Goods selection tools for community health workers in Kenya: cohort study

    Background: Ensuring that selection processes for Community Health Workers (CHWs) are effective is important given the scale and scope of modern CHW programmes, yet these processes are relatively understudied. While community involvement in selection should never be eliminated entirely, complementary methods could help identify those most likely to be high-performing CHWs. This study evaluated the predictive validity of three written tests and two individual sections of a one-to-one interview used for selection into CHW posts in eight areas of Kenya. Methods: A cohort study of CHWs working for Living Goods in eight local areas of Kenya was undertaken. Data on selection scores, post-training assessment scores and subsequent on-the-job performance (number of household and pregnancy registrations, number of child assessments, proportion of on-time follow-ups and value of goods sold) were obtained for 547 CHWs. Kendall's tau-b correlations between each selection score and each performance outcome were calculated. Results: None of the correlations between selection scores and outcomes reached the 0.3 threshold for an "adequate" predictor of performance. Correlations were higher for the written components of the selection process than for the interview components, with some small negative correlations found for the latter. Conclusions: If the measures of performance included in this study are considered critical, further work is required to develop the CHW selection tools. This could include modifying the content of both tools or increasing the length of the written tests to make them more reliable, since a test that is not reliable cannot be valid. Other important outcomes not included in this study are retention in post and quality of care. Other CHW programme providers should consider evaluating their own selection tools in partnership with research teams.
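
    Kendall's tau-b is the rank correlation used here to relate each selection score to each performance outcome, with 0.3 taken as the threshold for an "adequate" predictor. A minimal sketch of that calculation, using hypothetical scores and variable names:

```python
from scipy.stats import kendalltau

# Hypothetical data: one entry per CHW
written_test_score = [62, 75, 58, 81, 70, 66, 74, 59]
household_registrations = [120, 150, 95, 160, 140, 110, 135, 100]

# Kendall's tau-b (scipy's default variant) adjusts for ties in either ranking
tau, p_value = kendalltau(written_test_score, household_registrations)
print(f"tau-b = {tau:.2f}, p = {p_value:.3f}")

# The study treats a correlation of at least 0.3 as an "adequate" predictor
print("adequate predictor" if abs(tau) >= 0.3 else "below the 0.3 threshold")
```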

    A library of logic models to explain how interventions to reduce diagnostic error work

    OBJECTIVES: We aimed to create a library of logic models for interventions to reduce diagnostic error. This library can be used by those developing, implementing, or evaluating an intervention to improve patient care, to understand what needs to happen, and in what order, if the intervention is to be effective. METHODS: To create the library, we modified an existing method for generating logic models. The following five ordered activities to include in each model were defined: preintervention; implementation of the intervention; postimplementation, but before the immediate outcome can occur; the immediate outcome (usually behavior change); and postimmediate outcome, but before a reduction in diagnostic errors can occur. We also included reasons for lack of progress through the model. Relevant information was extracted from existing evaluations of interventions to reduce diagnostic error, identified by updating a previous systematic review. RESULTS: Data were synthesized to create logic models for four types of intervention, addressing five causes of diagnostic error at seven stages in the diagnostic pathway. In total, 46 interventions from 43 studies were included and 24 different logic models were generated. CONCLUSIONS: We used a novel approach to create a freely available library of logic models. The models highlight the importance of attending to what needs to occur before and after intervention delivery if the intervention is to be effective. Our work provides a useful starting point for intervention developers, helps evaluators identify intermediate outcomes, and provides a method to enable others to generate libraries for interventions targeting other errors.
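
    One way to picture an entry in such a library is as the ordered sequence of the five activities plus the reasons progress can stall at each one. A minimal sketch of that structure, with hypothetical field names and an invented example entry rather than one of the 24 models actually generated:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """One entry in a library of logic models for a diagnostic-error intervention."""
    intervention_type: str
    error_cause: str
    pathway_stage: str
    # The five ordered activities described in the methods
    stages: tuple = (
        "preintervention",
        "implementation of the intervention",
        "postimplementation, before the immediate outcome",
        "immediate outcome (usually behavior change)",
        "postimmediate outcome, before a reduction in diagnostic errors",
    )
    # Reasons why progress through the model may stall, keyed by stage
    failure_reasons: dict = field(default_factory=dict)

# Hypothetical example entry (illustrative only)
model = LogicModel(
    intervention_type="decision support",
    error_cause="knowledge gap",
    pathway_stage="test interpretation",
    failure_reasons={"implementation of the intervention": "tool not integrated into workflow"},
)
print(model.stages[3])
```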

    Cost-effectiveness of health care service delivery interventions in low and middle income countries: a systematic review

    Background: Low- and middle-income countries (LMICs) face severe resource limitations yet bear the highest burden of disease. There is a growing evidence base on effective and cost-effective interventions for these diseases, but questions remain about the most cost-effective methods of delivering them. We aimed to review the scope, quality, and findings of economic evaluations of service delivery interventions in LMICs. Methods: We searched PubMed, MEDLINE, EconLit, and NHS EED for studies published between 1 January 2000 and 30 October 2016, with no language restrictions. We included all economic evaluations that reported incremental costs and benefits, or a summary measure of the two such as an incremental cost-effectiveness ratio (ICER). Studies were grouped by disease area and outcome measure, and permutation plots were produced for similar interventions. Quality was judged using the Drummond checklist. Results: Overall, 3818 potentially relevant abstracts were identified, of which 101 studies were selected for full-text review and 37 were included in the final review. Twenty-three studies reported on interventions we classed as "changing by whom and where care was provided", specifically interventions that entailed task-shifting from doctors to nurses or community health workers, or from facilities into the community. The evidence suggests this type of intervention is likely to be cost-effective or cost-saving. Nine studies reported on quality improvement initiatives, which were generally found to be cost-effective. Quality and methods differed widely, limiting the comparability of the studies and their findings. Conclusions: There is significant heterogeneity in the literature, both methodologically and in quality. This makes further comparisons difficult and limits the utility of the available evidence to decision makers.
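
    The incremental cost-effectiveness ratio used as a summary measure above is simply the extra cost per extra unit of health gained relative to a comparator. A minimal sketch, with purely illustrative numbers that are not drawn from the review:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect
    (e.g. cost per DALY averted or per QALY gained) versus a comparator."""
    delta_cost = cost_new - cost_old
    delta_effect = effect_new - effect_old
    if delta_effect <= 0:
        raise ValueError("new strategy is not more effective; ICER is not informative")
    return delta_cost / delta_effect

# Illustrative example: task-shifted delivery costs $12,000 and averts 40 DALYs,
# versus $10,000 and 25 DALYs for standard facility-based care
print(icer(12_000, 40, 10_000, 25))  # extra dollars per extra DALY averted
```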

    Formative evaluation of a training intervention for community health workers in South Africa: a before and after study

    Background: Community Health Workers (CHWs) have a crucial role in improving health in their communities, and their role is being expanded in many parts of the world. However, the effectiveness of CHWs is limited by poor training, and the education of CHWs has received little scientific attention. Methods: Our study was carried out in two districts of KwaZulu-Natal, South Africa. We developed and piloted an inexpensive two-day training intervention covering national government priorities: HIV/AIDS, sexually transmitted diseases and tuberculosis; and Women's Sexual and Reproductive Health and Rights. Sixty-four CHWs consented to participate in the main study, which measured knowledge gains using a modified Solomon design of four different testing schedules to distinguish between the effects of the intervention, the effects of testing, and any interaction between the two. We also measured confidence, satisfaction and costs. Results: Following the training intervention, improvements in knowledge scores were seen across topics and across districts. These changes in knowledge were statistically significant (p < 0.001) and of large magnitude (over 45 percentage points, or four standard deviations). However, the CHWs assigned to the test-test-train schedule in one district showed large gains in knowledge before receiving the training. All CHWs reported high levels of satisfaction with the training and marked improvements in their confidence in advising clients. The training cost around US$48 per CHW per day and has the potential to be cost-effective if the large gains in knowledge translate into improved field-based performance and thus health outcomes. Conclusion: Training CHWs can produce large improvements in knowledge with a short intervention. However, improvements seen in other studies could be due to test 'reactivity'. Further work is needed to assess the generalisability of our results, the retention of knowledge, and the extent to which improved knowledge is translated into improved practice.
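
    The Solomon-style design is what allows the effect of the training itself to be separated from the effect of repeated testing. A minimal sketch of the classic four-group comparison (the study used a modified variant with four testing schedules), on hypothetical post-test knowledge scores:

```python
import statistics as st

# Hypothetical post-test knowledge scores (%) in a Solomon-style 2x2 layout:
# (pre-tested?, trained?) -> scores for the CHWs in that cell
cells = {
    (True, True):   [88, 91, 86],   # pre-tested and trained
    (True, False):  [46, 50, 48],   # pre-tested only (testing effect alone)
    (False, True):  [85, 83, 89],   # trained only (training effect alone)
    (False, False): [42, 40, 44],   # neither (baseline)
}
means = {cell: st.mean(scores) for cell, scores in cells.items()}

# Training effect, averaged over pre-tested and non-pre-tested groups
training_effect = ((means[(True, True)] - means[(True, False)])
                   + (means[(False, True)] - means[(False, False)])) / 2
# Interaction: does pre-testing change the apparent size of the training effect?
interaction = ((means[(True, True)] - means[(True, False)])
               - (means[(False, True)] - means[(False, False)]))
print(f"training effect ≈ {training_effect:.1f} points, interaction ≈ {interaction:.1f}")
```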

    Revising ethical guidance for the evaluation of programmes and interventions not initiated by researchers

    Public health and service delivery programmes, interventions and policies (collectively, 'programmes') are typically developed and implemented for the primary purpose of effecting change rather than generating knowledge. Nonetheless, evaluations of these programmes may produce valuable learning that helps determine effectiveness and costs, as well as informing the design and implementation of future programmes. Such studies might be termed 'opportunistic evaluations', since they respond to emergent opportunities rather than being studies of interventions initiated or designed by researchers. However, current ethical guidance and registration procedures make little allowance for scenarios in which researchers have played no role in the development or implementation of a programme but nevertheless plan to conduct a prospective evaluation. We explore the limitations of the guidance and procedures with respect to opportunistic evaluations, providing a number of examples. We propose that one key distinction missing from current guidance is moral responsibility: researchers can only be held accountable for those aspects of a study over which they have control. We argue that requiring researchers to justify an intervention, programme or policy that would occur regardless of their involvement prevents or hinders research in the public interest without providing any further protection to research participants. We recommend that trial consent and ethics procedures allow for a clear separation of responsibilities for the intervention and the evaluation.

    Beyond synthesis: Augmenting systematic review procedures with practical principles to optimise impact and uptake in educational policy and practice

    Whilst systematic reviews, meta-analyses and other forms of synthesis are often positioned proudly atop the hierarchy of research evidence, their limited impact on educational policy and practice has been criticised. In this article, we analyse why systematic reviews do not benefit users of evidence more consistently and suggest how review teams can optimise the impact of their work. We introduce the Beyond Synthesis Impact Chain (BSIC), an integrated framework of practical strategies for enhancing the impact of systematic reviews. Focusing on examples from health professions education, we propose that review teams can optimise the impact of their work by employing strategies that 1) focus on practical problems and mindful planning in collaboration with users; 2) ensure reviews are relevant and syntheses reflexively account for users' needs; and 3) couch reports in terms that resonate with users' needs and increase access through targeted and strategic dissemination. We argue that combining practical principles with robust and transparent procedures can purposefully account for impact and foster the uptake of review evidence in educational policy and practice. For systematic review teams, this paper offers strategies for enhancing the practical utility and potential impact of systematic reviews and other forms of synthesis.
    • 
