Evaluation - the educational context
Evaluation comes in many shapes and sizes. It can be as simple and as grounded in day-to-day work as a clinical teacher reflecting on a lost teaching opportunity and wondering how to do it better next time, or as complex, top-down and politically charged as a major government-led evaluation of the use of teaching funds with the subtext of re-allocating them. Despite these multiple spectra of scale, perceived ownership, and financial and political implications, the underlying principles of evaluation are remarkably consistent. To evaluate well, it needs to be clear who is evaluating what and why. From this will come notions of how it needs to be done to ensure the evaluation is meaningful and useful. This paper seeks to illustrate what evaluation is, why it matters, where to start if you want to do it, and how to deal with evaluation that is external and imposed.
Educational Program Evaluation Using CIPP Model
There are many models of evaluation that can be used to evaluate a program. The most commonly used is the context, input, process, product (CIPP) evaluation model, developed by Stufflebeam and Shinkfield in 1985. Context evaluation provides the rationale for selecting a program or curriculum to be implemented. On a broad scale, context evaluation can address the program's objectives, the policies that support the vision and mission of the institution, the relevant environment, the identification of needs and opportunities, and the diagnosis of specific problems. Input evaluation provides information about the resources that can be used to achieve program objectives; it is used to find a problem-solving strategy and to plan and design the program. Process evaluation provides feedback to the individuals accountable for the activities of the program or curriculum. It is conducted by monitoring sources that could potentially cause failure, preparing preliminary information for planning decisions, and explaining the process as it actually happened. Product evaluation measures and interprets the achievement of goals, including the measurement of both expected and unexpected impacts, and is conducted both during and after the program. Stufflebeam and Shinkfield suggest that product evaluation address four aspects: impact, effectiveness, sustainability, and transportability. Decisions are made by comparing the findings/facts obtained for context, input, process and product against standards or criteria that have been set previously.
Study Of Indicators And Criteria For Evaluating The Effectiveness And Prognostication Of Educational Activities Of General Educational Institutions
The analysis of the system of indicators/criteria for assessing the quality/effectiveness of the educational activities of general educational institutions of different regions of Ukraine is carried out. A general description of the existing monitoring and evaluation studies of the problem of improving the quality/effectiveness of education at the macro and micro levels is presented. The main groups of indicators for evaluating the quality/effectiveness of the educational activity of general educational institutions are presented, namely indicators characterizing:
– a general criterion for assessing the quality of educational activities of general education institutions, regarding the openness and accessibility of information about institutions;
– a general criterion for assessing the quality of educational activities of general educational institutions, regarding the comfort of conditions in which educational activities are carried out;
– a general criterion for assessing the quality of educational activities of general educational institutions, regarding the benevolence, courtesy and competence of pedagogical workers;
– a general criterion for assessing the quality of educational activities of general educational institutions, regarding satisfaction with the quality of the educational activities of general educational institutions.
Evaluating complex digital resources
Squires (1999) discussed the gap between the HCI (Human Computer Interaction) and educational computing communities in their very different approaches to evaluating educational software. This paper revisits that issue in the context of evaluating digital resources, focusing on two approaches to evaluation: an HCI and an educational perspective. Squires and Preece's HCI evaluation model is a predictive model, helping teachers decide whether or not to use educational software, whilst our own concern is with evaluating the use of learning technologies. It is suggested that the different approaches of the two communities relate in part to the different focus that each takes: in HCI the focus is typically on development and hence usability, whilst in education the concern is with learner and teacher use.
Evaluation of usage patterns for web-based educational systems using web mining
Virtual courses often separate teacher and student physically from one another, resulting in less direct feedback. The evaluation of virtual courses and other computer-supported educational systems is therefore of major importance in order to monitor student progress, guarantee the quality of the course and enhance the learning experience for the student. We present a technique for the usage evaluation of Web-based educational systems, focussing on behavioural analysis, which is based on Web mining technologies. Sequential patterns are extracted from Web access logs and compared to expected behaviour.
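To make the idea concrete, the following Python sketch illustrates the kind of analysis the abstract describes; it is not the authors' implementation, and the log format, field names and expected navigation path are invented for illustration. It groups access-log entries into per-student page sequences, counts simple length-two sequential patterns, and compares them with an expected path.

```python
from collections import Counter, defaultdict

# Hypothetical access-log records: (student_id, timestamp, page).
# In practice these would be parsed from the web server's access logs.
log = [
    ("s1", 1, "/intro"), ("s1", 2, "/lesson1"), ("s1", 3, "/quiz1"),
    ("s2", 1, "/intro"), ("s2", 2, "/quiz1"),  ("s2", 3, "/lesson1"),
]

# Group accesses into one time-ordered page sequence per student.
sessions = defaultdict(list)
for student, _ts, page in sorted(log, key=lambda r: (r[0], r[1])):
    sessions[student].append(page)

# Count contiguous page pairs (length-two sequential patterns) across students.
patterns = Counter()
for pages in sessions.values():
    for pair in zip(pages, pages[1:]):
        patterns[pair] += 1

# Compare observed transitions against the course designer's expected path.
expected = [("/intro", "/lesson1"), ("/lesson1", "/quiz1")]
for step in expected:
    print(step, "observed", patterns.get(step, 0), "times")
```

Real systems would extract longer patterns with a dedicated sequential-pattern-mining algorithm, but the same log-to-sequence-to-comparison pipeline applies.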
Monitoring in educational development projects: the development of a monitoring system
Monitoring in education is usually focused on the monitoring of educational systems at different levels. Monitoring of educational projects has only recently received explicit attention. The paper first discusses the concepts of educational monitoring and evaluation. After that, the experience of developing a monitoring system in an educational development project is described as a case. These experiences, in combination with the literature on project monitoring in other contexts, provide a rich source of ideas, lessons learned, and problems to avoid in designing project monitoring.
Methods for Evaluating Educational Programs – Does Writing Center Participation Affect Student Achievement?
This paper evaluates the effectiveness of the introduction of a Writing Center at a university. The center's purpose is to provide subject-specific courses that aim to improve students' scientific writing abilities. In order to deal with presumed self-perception biases of students in feedback surveys, we use different quantitative evaluation methods and compare the results to corresponding qualitative student surveys. Based on this evaluation, we present and discuss the validity of the approaches to evaluating educational programs. Although almost all students reported the writing courses to be helpful, we find no significant effect of course participation on students' grades. We attribute the difference in results between quantitative methods and qualitative surveys to the inappropriateness of student course evaluations for assessing the effectiveness of educational measures. Keywords: performance evaluation; educational programs; student evaluation; empirical methods.
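As a rough illustration of one such quantitative approach (the abstract does not specify the authors' exact methods, and the grade data below are invented), a minimal Python sketch compares the mean grades of course participants and non-participants using a Welch t-statistic.

```python
import math
import random
from statistics import mean, stdev

random.seed(0)

# Hypothetical final grades on a German-style 1.0-5.0 scale (lower is better);
# in an actual study these would come from university records.
participants = [round(random.gauss(2.3, 0.6), 1) for _ in range(80)]
non_participants = [round(random.gauss(2.4, 0.6), 1) for _ in range(120)]

# Welch's two-sample t-statistic for the difference in mean grades.
m1, m2 = mean(participants), mean(non_participants)
s1, s2 = stdev(participants), stdev(non_participants)
n1, n2 = len(participants), len(non_participants)
standard_error = math.sqrt(s1**2 / n1 + s2**2 / n2)
t_stat = (m1 - m2) / standard_error
print(f"mean grade difference = {m1 - m2:+.2f}, Welch t = {t_stat:.2f}")
```

A naive comparison like this ignores selection into the courses; regression with controls or matching, as hinted at by the paper's use of several quantitative methods, would be needed for a credible effect estimate.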
Educational Technology Topic Guide
This guide aims to contribute to what we know about the relationship between educational technology (edtech) and educational outcomes by addressing the following overarching question: What is the evidence that the use of edtech, by teachers or students, impacts teaching and learning practices, or learning outcomes? It also offers recommendations to help advisors strengthen the design, implementation and evaluation of programmes that use edtech.
We define edtech as the use of digital or electronic technologies and materials to support teaching and learning. Recognising that technology alone does not enhance learning, evaluations must also consider how programmes are designed and implemented, how teachers are supported, how communities are developed and how outcomes are measured (see http://tel.ac.uk/about-3/, 2014).
Effective edtech programmes are characterised by:
a clear and specific curriculum focus
the use of relevant curriculum materials
a focus on teacher development and pedagogy
evaluation mechanisms that go beyond outputs.
These findings come from a wide range of technology use including:
interactive radio instruction (IRI)
classroom audio or video resources accessed via teachers’ mobile phones
student tablets and eReaders
computer-assisted learning (CAL) to supplement classroom teaching.
However, there are also examples of large-scale investment in edtech – particularly computers for student use – that produce limited educational outcomes. We need to know more about:
how to support teachers to develop appropriate, relevant practices using edtech
how such practices are enacted in schools, and what factors contribute to or mitigate against successful outcomes.
Recommendations:
1. Edtech programmes should focus on enabling educational change, not delivering technology. In doing so, programmes should provide adequate support for teachers and aim to capture changes in teaching practice and learning outcomes in evaluation.
2. Advisors should support proposals that further develop successful practices or that address gaps in evidence and understanding.
3. Advisors should discourage proposals that have an emphasis on technology over education, weak programmatic support or poor evaluation.
4. In design and evaluation, value-for-money metrics and cost-effectiveness analyses should be carried out.
