
    Evaluation - the educational context

    Evaluation comes in many shapes and sizes. It can be as simple and as grounded in day-to-day work as a clinical teacher reflecting on a lost teaching opportunity and wondering how to do better next time, or as complex, top-down and politically charged as a major government-led evaluation of the use of teaching funds with the subtext of re-allocating them. Despite these multiple spectra of scale, perceived ownership, and financial and political implications, the underlying principles of evaluation are remarkably consistent. To evaluate well, it needs to be clear who is evaluating what and why; from this will come notions of how it needs to be done to ensure the evaluation is meaningful and useful. This paper seeks to illustrate what evaluation is, why it matters, where to start if you want to do it, and how to deal with evaluation that is external and imposed.

    Educational Program Evaluation Using CIPP Model

    There are many models that can be used to evaluate a program; one of the most commonly used is the context, input, process, product (CIPP) evaluation model, developed by Stufflebeam and Shinkfield in 1985. Context evaluation provides the rationale for implementing a selected program or curriculum. At a broad scale, context evaluation can cover the program's objectives, the policies that support the institution's vision and mission, the relevant environment, the identification of needs and opportunities, and the diagnosis of specific problems. Input evaluation provides information about the resources that can be used to achieve the program's objectives; it is used to find a problem-solving strategy and to plan and design the program. Process evaluation provides feedback to the individuals accountable for the activities of the program or curriculum. It is conducted by monitoring sources that could potentially cause failure, preparing preliminary information for planning decisions, and explaining what actually happened during implementation. Product evaluation measures and interprets the achievement of goals, covering both expected and unexpected impacts, and is conducted both during and after the program. Stufflebeam and Shinkfield suggest that product evaluation address four aspects: impact, effectiveness, sustainability, and transportability. Decisions are then made by comparing the findings in context, input, process, and product against standards or criteria that have been set previously.
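
    To make the final decision step concrete, the sketch below models comparing findings for each CIPP stage against previously set standards. All stage names, criteria, and thresholds here are illustrative assumptions, not taken from Stufflebeam and Shinkfield.

```python
# Minimal sketch of the CIPP decision step: compare observed findings for each
# stage against previously set standards. All names, criteria, and thresholds
# are illustrative assumptions, not from Stufflebeam and Shinkfield.

from dataclasses import dataclass

@dataclass
class StageResult:
    stage: str       # "context", "input", "process", or "product"
    criterion: str   # the standard being checked
    target: float    # previously set standard
    observed: float  # finding from the evaluation

def judge(results: list[StageResult]) -> dict[str, bool]:
    """Return, per stage, whether every criterion met its standard."""
    verdict: dict[str, bool] = {}
    for r in results:
        met = r.observed >= r.target
        verdict[r.stage] = verdict.get(r.stage, True) and met
    return verdict

if __name__ == "__main__":
    findings = [
        StageResult("context", "needs identified vs. objectives", 0.8, 0.9),
        StageResult("input", "resource adequacy score", 0.7, 0.6),
        StageResult("process", "activities delivered as planned", 0.9, 0.95),
        StageResult("product", "intended impact achieved", 0.75, 0.8),
    ]
    for stage, ok in judge(findings).items():
        print(f"{stage}: {'meets standard' if ok else 'below standard'}")
```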

    Study Of Indicators And Criteria For Evaluating The Effectiveness And Prognostication Of Educational Activities Of General Educational Institutions

    The system of indicators and criteria for assessing the quality and effectiveness of the educational activities of general educational institutions in different regions of Ukraine is analysed. A general description of existing monitoring and evaluation studies on improving the quality and effectiveness of education at the macro and micro levels is presented. The main groups of indicators correspond to four general criteria for assessing the quality of an institution's educational activities: the openness and accessibility of information about the institution; the comfort of the conditions in which educational activities are carried out; the benevolence, courtesy and competence of pedagogical staff; and satisfaction with the quality of the institution's educational activities.

    Evaluating complex digital resources

    Squires (1999) discussed the gap between the HCI (Human Computer Interaction) and educational computing communities in their very different approaches to evaluating educational software. This paper revisits that issue in the context of evaluating digital resources, focusing on two approaches to evaluation: an HCI and an educational perspective. Squires and Preece's HCI evaluation model is predictive, helping teachers decide whether or not to use educational software, whilst our own concern is with evaluating the use of learning technologies. It is suggested that the different approaches of the two communities relate in part to the different focus that each takes: in HCI the focus is typically on development and hence usability, whilst in education the concern is with learner and teacher use.

    Evaluation of usage patterns for web-based educational systems using web mining

    Virtual courses often separate teacher and student physically from one another, resulting in less direct feedback. The evaluation of virtual courses and other computer-supported educational systems is therefore of major importance in order to monitor student progress, guarantee the quality of the course, and enhance the learning experience for the student. We present a technique for the usage evaluation of Web-based educational systems, focussing on behavioural analysis based on Web mining technologies. Sequential patterns are extracted from Web access logs and compared to expected behaviour.
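
    As a rough illustration of the kind of analysis described, the sketch below extracts per-user click sequences from a simplified access log, counts frequent contiguous page subsequences as a crude stand-in for full sequential pattern mining, and flags deviations from an expected path. The log format, page names, and expected pattern are assumptions, not the authors' actual system.

```python
# Sketch of usage-pattern extraction from web access logs, in the spirit of
# the technique described above. The log format and the n-gram-based pattern
# counting are simplifying assumptions, not the authors' implementation.

from collections import Counter, defaultdict

# Assumed simplified log lines: "user_id page", in chronological order.
RAW_LOG = """\
alice intro
alice lesson1
alice quiz1
bob intro
bob quiz1
bob lesson1
alice lesson2
"""

def sessions(log: str) -> dict[str, list[str]]:
    """Group page requests into one click sequence per user."""
    seqs: dict[str, list[str]] = defaultdict(list)
    for line in log.splitlines():
        user, page = line.split()
        seqs[user].append(page)
    return seqs

def frequent_patterns(seqs: dict[str, list[str]], length: int = 2) -> Counter:
    """Count contiguous page subsequences of a given length across all users
    (a crude stand-in for full sequential pattern mining)."""
    counts: Counter = Counter()
    for seq in seqs.values():
        for i in range(len(seq) - length + 1):
            counts[tuple(seq[i:i + length])] += 1
    return counts

if __name__ == "__main__":
    expected = ("intro", "lesson1")  # behaviour the course designer expects
    for pattern, n in frequent_patterns(sessions(RAW_LOG)).most_common():
        flag = "expected" if pattern == expected else "deviation?"
        print(f"{' -> '.join(pattern)}: {n} ({flag})")
```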

    Monitoring in educational development projects : the development of a monitoring system

    Monitoring in education usually focuses on the monitoring of educational systems at different levels; the monitoring of educational projects has only recently received explicit attention. The paper first discusses the concepts of educational monitoring and evaluation. After that, the experience of developing a monitoring system in an educational development project is described as a case. These experiences, in combination with the literature on project monitoring in other contexts, provide a rich source of ideas, lessons learned, and problems to avoid when designing project monitoring.

    Methods for Evaluating Educational Programs – Does Writing Center Participation Affect Student Achievement?

    This paper evaluates the effectiveness of the introduction of a Writing Center at a university. The center's purpose is to provide subject-specific courses that aim to improve students' scientific writing skills. In order to deal with presumed self-perception biases of students in feedback surveys, we use different quantitative evaluation methods and compare the results to corresponding qualitative student surveys. Based on this evaluation, we present and discuss the validity of the approaches to evaluating educational programs. Although almost all students reported the writing courses to be helpful, we find no significant effect of course participation on students' grades. We attribute the difference in results between the quantitative methods and the qualitative surveys to the inappropriateness of student course evaluations for assessing the effectiveness of educational measures.
    Keywords: performance evaluation; educational programs; student evaluation; empirical methods
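
    As one simple illustration of the quantitative side of such an evaluation, the sketch below compares the grades of hypothetical course participants and non-participants with Welch's t-test. The data and grading scale are made up; the paper's actual dataset and methods are not reproduced here.

```python
# Sketch of a quantitative check like the one described above: compare grades
# of writing-course participants and non-participants with Welch's t-test.
# The grades below are made up for illustration; the paper's dataset and
# exact methods are not reproduced here.

import math
import statistics as st

# Hypothetical final grades (lower is better on a German-style 1.0-5.0 scale).
participants     = [2.0, 1.7, 2.3, 2.7, 2.0, 1.3, 2.3]
non_participants = [2.3, 2.0, 1.7, 2.7, 2.0, 2.3, 1.7]

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two samples with unequal variances."""
    va, vb = st.variance(a) / len(a), st.variance(b) / len(b)
    return (st.mean(a) - st.mean(b)) / math.sqrt(va + vb)

t = welch_t(participants, non_participants)
print(f"mean(participants)     = {st.mean(participants):.2f}")
print(f"mean(non-participants) = {st.mean(non_participants):.2f}")
print(f"Welch t = {t:.2f}  (|t| well below ~2 suggests no significant effect)")
```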

    Evaluation of special educational needs parent partnership schemes [RB34]
