    A formative evaluation of the South African Education and Environment Project Bridging Year Programme

    Many learners from disadvantaged schools struggle to obtain entrance into tertiary institutions. A Bridging Year Programme (BYP) designed by the South African Education and Environment Project (SAEP) seeks to address this problem by offering intensive tuition to post-high-school learners who have failed to gain sufficient points for entry into a tertiary institution. The BYP prepares these learners to re-write core National Senior Certificate (NSC) subjects and assists them in applying for entrance to a university or college. A formative evaluation was conducted to assess whether the programme is designed and implemented as intended, and whether programme design and delivery take into account evidence-based practices established in the literature for programmes of this nature. A review of programme records was undertaken, interviews were conducted with the programme manager and programme coordinator, and self-report questionnaires were administered to course tutors and programme beneficiaries. The results of the evaluation indicate that while the programme has the potential to set high standards of participation for beneficiaries and to provide them with personalised attention, and while learners are generally positive about their experience, a number of limitations are evident. These include, in particular, the need for better monitoring of learner compliance with their contractual obligations, improved quality assurance with regard to the teaching and learning programme, and better tutor preparation and training. Recommendations are provided for improved programme implementation, as well as for monitoring of programme standards, learner participation and performance, and tutor quality.

    Evaluator characteristics and programme evaluability decisions: an exploratory study of evaluation practice in South Africa, Brazil, the United Kingdom, and the United States of America

    Responding to recent calls in the literature for cross-country comparisons of evaluation practice, this simulation study investigated (a) evaluators' perspectives on what determines a programme's evaluability, (b) what criteria evaluators prioritise when assessing a programme's evaluability, and (c) the degree to which practice context (developing, developed, or both) and self-reported level of evaluation experience predict programme evaluability decisions. Valid responses from evaluators practising in the United States of America (n = 94), the United Kingdom (n = 30), Brazil (n = 91), and South Africa (n = 45) were analysed. Q factor analyses of data collected via a Q-sort task revealed four empirically distinct evaluability perspectives, the dominant two of which were labelled theory-driven and utilisation-focused. Correspondence analyses demonstrated that participants used different criteria to assess the evaluability of three fictitious evaluation scenarios. Multinomial regression analyses confirmed that practice context and level of experience did not predict the type of evaluability criterion prioritised in any of the scenarios. Evaluators practising in developed countries were more likely to characterise a programme with robust structural features, unfavourable stakeholder characteristics, and unfavourable logistical conditions as evaluable with high difficulty than as evaluable with medium difficulty. Evaluators with limited experience were more likely than not to embark on an evaluation of such a programme. This study represents the first empirical investigation of how evaluators from selected developed and developing countries assess programme evaluability.

    The role of monitoring and evaluation in six South African reading programmes

    In this article we focus on six reading programmes and ask: do these programmes work insofar as they improve the reading ability of programme participants? We apply programme evaluation methods and content to these programmes to answer this question. Specifically, we use an approach that identifies five levels of evaluation: programme need; programme theory; programme process and implementation; programme outcome and impact; and programme cost and efficiency. We then add appropriate evaluation questions and research designs applicable to each level. We conclude by providing suggestions to reading programme staff on how to improve monitoring (data collection) in order to strengthen the evaluability of their reading programmes.