
    THE FIRST SEMESTER SUMMATIVE TEST ON READING COMPREHENSION FOR THE THIRD YEAR STUDENTS AT SLTP PGRI 01 KARANGPLOSO IN 2003

    The purpose of this study is to investigate the quality of the summative test on reading comprehension for the third-year students. The results can be used by English teachers to improve the quality of the summative test. The study addresses the question of whether the third-year students' summative test is valid, reliable, and practical, and has a moderate level of difficulty. The study is descriptive qualitative because it describes the quality of the 2003 summative test of SLTP PGRI 01 Karangploso in terms of its validity, reliability, practicality, and level of difficulty. The object of this study is the first-semester summative test on reading comprehension for the third-year students at SLTP PGRI 01 Karangploso in 2003. The data were obtained from the third-year students' first-semester scores. Because the data are confidential, the English teacher at SLTP PGRI 01 Karangploso gave the writer only very limited data, that is, one class. The writer analysed the data by taking the results of the students' tests as the document from which to draw the samples. She then grouped the students' scores into upper and lower groups. Finally, she analysed the summative test's validity, reliability, practicality, level of difficulty, and discrimination power. Based on the analysis, the test is considered valid, reliable, and practical, but some items are rejected and some should be revised because they are too difficult. In the analysis of the level of difficulty, the writer found that 6 items should be revised and 8 items are rejected; in the analysis of discrimination power, 4 items should be revised and 7 items are rejected. From this analysis, the writer concluded that only five of the 18 test items are appropriate for use by the English teacher

    Towards Security Goals in Summative E-Assessment Security

    The general security goals of a computer system are known to include confidentiality, integrity and availability (C-I-A), which protect critical assets from potential threats. The C-I-A security goals are well-researched areas; however, they may be insufficient to address all the needs of summative e-assessment. In this paper, we do not discard the fundamental C-I-A security goals; rather, we define security goals that are specific to summative e-assessment security

    Analysing assessment practice in higher education: how useful is the summative/formative divide as a tool?

    A view of assessment as 'naturally' divided into the categories of formative and summative has become a taken-for-granted way of thinking about, talking about and organising assessment in universities, at least in the UK, where the division is inscribed in national, institutional and departmental policy and guidance (e.g. Quality Assurance Agency, http://www.qaa.ac.uk). In these documents, summative and formative assessment tend to be understood as serving separate purposes, with summative assessment understood as summing up the level of performance and formative assessment as feeding into future learning. We question the utility of the division for better understanding assessment practices, on the basis of an empirical study undertaken in a higher education institution in the UK. The aim of the Assessment Environments & Cultures project is to gain a better understanding of how academics assess and why they assess in the ways that they do. Interview and observational data have been collected from academics working in three subject areas: Design, Business and Applied Sciences. Initial analysis has focussed on the discourses in use and the subject positions taken up by academics when they talk about and undertake assessment. Analysis of our data suggests that, whilst academics used the categories of formative and summative to talk about their assessment practices, the distinction between assessment purposes may be 'messier' than the separate categories imply. Various examples from the project will be introduced to illustrate this point. This raises a number of questions about researching assessment practices that will be raised for discussion at the roundtable. For example: might it be useful to understand formative and summative assessment as occupying a shared and contested space rather than as distinct categories

    The use of computer-based assessments in a field biology module

    Formative computer-based assessments (CBAs) for self-instruction were introduced into a Year-2 field biology module. These CBAs were provided in ‘tutorial’ mode where each question had context-related diagnostic feedback and tutorial pages, and a self-test mode where the same CBA returned only a score. The summative assessments remained unchanged and consisted of an unseen CBA and written reports of field investigations. When compared with the previous three year-cohorts, the mean score for the summative CBA increased after the introduction of formative CBAs, whereas mean scores for written reports did not change. It is suggested that the increase in summative CBA mean score reflects the effectiveness of the formative CBAs in widening the students’ knowledge base. Evaluation of all assessments using an Assessment Experience Questionnaire indicated that they satisfied the ‘11 conditions under which assessment supports student learning’. Additionally, evidence is presented that the formative CBAs enhanced self-regulated student learning

    ‘Summative’ and ‘Formative’: Confused by the assessment terms?

    The terms ‘formative’ and ‘summative’ when linked to assessment can cause confusion. Should these terms be dropped? Should we move on from them? This paper argues that it is the common shortening of the full and meaningful terms, ‘assessment for formative purposes’ and ‘assessment for summative purposes’ that contributes to a confusion over assessments, information and methods, particularly for pre-service teachers and those with less teaching experience. By being well-informed about both purpose and assessment activity, teachers will have greater clarity in understanding, communication and practice regarding these important and useful concepts

    Prospects for summative evaluation of CAL in higher education

    Many developers and evaluators feel an external demand on them for summative evaluation of courseware. Problems soon emerge. One is that the CAL may not be used at all by students if it is not made compulsory. If one measures learning gains, how does one know whether one is measuring the effect of the CAL or of the motivation in that situation? Such issues are symptoms of the basic theoretical problem with summative evaluation, which is that CAL does not cause learning like turning on a tap, any more than a book does. Instead, it is one rather small factor in a complex situation. It is of course possible to do highly controlled experiments: for example, to motivate the subjects in a standardised way. This should lead to measurements that are repeatable by other similar experiments. However, they will be measurements that have little power to predict the outcome when the CAL is used in real courses. Hence the simple view of summative evaluation must be abandoned. Yet it is possible to gather useful information by studying how a piece of CAL is used in a real course and what the outcomes were. Although this does not guarantee the same outcomes for another purchaser, it is obviously useful to know that the CAL has been used successfully one or more times, and how it was used on those occasions. Such studies can also serve a different, 'integrative' rather than summative, function by pointing out failings of the CAL software and suggesting how to remedy them

    How convincing is alternative assessment for use in higher education?

    The current preference for alternative assessment has been partly stimulated by recent evidence on learning which points to the importance of students' active engagement in the learning process. While alternative assessment may well fulfil the aims of formative assessment, its value in the summative assessment required in higher education is more problematic. If alternative assessment devices are to be used for summative purposes, the validity of alternative assessment has to be considered. The paper argues that task specification and marking consistency in alternative assessment can make comparability of performance difficult to effect, thereby leaving alternative assessment a less than convincing form for use in higher education