THE FIRST SEMESTER SUMMATIVE TEST ON READING COMPREHENSION FOR THE THIRD YEAR STUDENTS AT SLTP PGRI 01 KARANGPLOSO IN 2003
The purpose of this study is to investigate the quality of the summative test on reading comprehension for the third year students. English teachers can use the results of the study to improve the quality of the summative test. The study was conducted to answer whether the summative test for the third year students is valid, reliable, practical, and of moderate difficulty. The study is descriptive qualitative because it describes the quality of the 2003 summative test of SLTP PGRI 01 Karangploso in terms of its validity, reliability, practicality, and level of difficulty. The object of the study is the first semester summative test on reading comprehension for the third year students at SLTP PGRI 01 Karangploso in 2003. The data were obtained from the third year students' first semester scores. Because the data are confidential, the English teacher at SLTP PGRI 01 Karangploso gave the writer only very limited data, from a single class. The writer analyzed the data by taking the students' test results as the documents from which to draw the samples, then grouped the students' scores into upper and lower groups, and finally analyzed the summative test's validity, reliability, practicality, level of difficulty, and discrimination power. Based on the analysis, the test is considered valid, reliable, and practical, but some items are rejected and some should be revised because they are too difficult. The analysis of the level of difficulty found that 6 items should be revised and 8 items rejected, and the analysis of discrimination power found that 4 items should be revised and 7 items rejected. From this analysis the writer concluded that only five of the 18 test items are appropriate for use by the English teacher.
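The item-analysis steps described above (a difficulty index computed from all students, a discrimination index computed from the upper and lower score groups) can be sketched with classical test-theory formulas. This is a minimal illustration: the binary responses, group sizes, and interpretive cut-offs below are made-up assumptions, not the study's actual data or criteria.

```python
# Classical test-theory item analysis: difficulty and discrimination.
# Responses are binary (1 = correct, 0 = wrong); cut-off conventions
# (e.g. P near 0.3-0.7 = moderate, D >= 0.4 = acceptable) are common
# rules of thumb, assumed here for illustration.

def item_difficulty(responses):
    """Proportion of students who answered the item correctly (0..1)."""
    return sum(responses) / len(responses)

def discrimination_power(upper, lower):
    """Difference in proportion correct between upper and lower groups."""
    return item_difficulty(upper) - item_difficulty(lower)

# Hypothetical responses to one item from 5 upper- and 5 lower-group students
upper = [1, 1, 1, 0, 1]
lower = [1, 0, 0, 0, 0]

p = item_difficulty(upper + lower)      # 0.50 -> moderate difficulty
d = discrimination_power(upper, lower)  # 0.8 - 0.2 = 0.6 -> acceptable

print(f"difficulty P = {p:.2f}, discrimination D = {d:.2f}")
```

An item with a very low P (too difficult) or a low D (fails to separate strong from weak students) would be flagged for revision or rejection, which is the kind of screening the abstract reports.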
The Learning Grid and E-Assessment using Latent Semantic Analysis
E-assessment is an important component of e-learning and e-qualification. Formative and summative assessment serve different purposes, and both types of evaluation are critical to the pedagogical process. While students are studying, practicing, working, or revising, formative assessment provides direction, focus, and guidance. Summative assessment provides the means to evaluate a learner's achievement and communicate that achievement to interested parties. Latent Semantic Analysis (LSA) is a statistical method for inferring meaning from a text. Applications based on LSA exist that provide both summative and formative assessment of a learner's work. However, the huge computational needs are a major problem with this promising technique. This paper explains how LSA works, describes the breadth of existing applications using LSA, explains how LSA is particularly suited to e-assessment, and proposes research to exploit the potential computational power of the Grid to overcome one of LSA's drawbacks.
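As a rough illustration of how LSA works, the sketch below builds a small term-document matrix, truncates its singular value decomposition, and compares documents by cosine similarity in the reduced latent space. The toy corpus, the rank k, and the use of raw term counts (real LSA systems typically apply log-entropy or tf-idf weighting) are illustrative assumptions, not the paper's implementation.

```python
# Toy Latent Semantic Analysis: term-document matrix -> truncated SVD
# -> document similarity in the latent space.
import numpy as np

docs = [
    "formative assessment guides student learning",
    "summative assessment measures student achievement",
    "the grid offers large computational power",
]
vocab = sorted({w for d in docs for w in d.split()})

# Rows = terms, columns = documents, entries = raw term counts.
A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                    # keep only the k largest singular values
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # document coordinates in latent space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The two assessment documents share vocabulary, so they land close together;
# the grid-computing document does not.
print(cosine(doc_vecs[0], doc_vecs[1]), cosine(doc_vecs[0], doc_vecs[2]))
```

In an e-assessment setting, a student answer would be folded into the same space and scored by its similarity to model answers or pre-graded essays; the SVD over a realistic corpus is the computationally heavy step that motivates the paper's interest in the Grid.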
Towards Security Goals in Summative E-Assessment Security
The general security goals of a computer system are known to include confidentiality, integrity and availability (C-I-A), which protect critical assets from potential threats. The C-I-A security goals are well researched areas; however, they may be insufficient to address all the needs of summative e-assessment. In this paper, we do not discard the fundamental C-I-A security goals; rather, we define security goals that are specific to summative e-assessment security.
Analysing assessment practice in higher education: how useful is the summative/formative divide as a tool?
A view of assessment as 'naturally' divided into the categories of formative and summative has become a taken-for-granted way of thinking about, talking about and organising assessment in universities, at least in the UK, where the division is inscribed in national, institutional and departmental policy and guidance (e.g. Quality Assurance Agency, http://www.qaa.ac.uk). In these documents summative and formative assessment tend to be understood as serving separate purposes, with summative assessment understood as summing up the level of performance and formative assessment as feeding into future learning. We question the utility of the division as a means of better understanding assessment practices, on the basis of an empirical study undertaken in a higher education institution in the UK. The aim of the Assessment Environments & Cultures project is to gain a better understanding of how academics assess and why they assess in the ways that they do. Interview and observational data have been collected from academics working in three subject areas: Design, Business and Applied Sciences. Initial analysis has focussed on the discourses in use and the subject positions taken up by academics when they talk about and undertake assessment. Analysis of our data suggests that, whilst academics used the categories of formative and summative to talk about their assessment practices, the distinction between assessment purposes may be 'messier' than the separate categories imply. Various examples from the project will be introduced to illustrate this point. This raises a number of questions about researching assessment practices that will be put forward for discussion at the roundtable. For example: might it be useful to understand formative and summative assessment as occupying a shared and contested space rather than as distinct categories?
The use of computer-based assessments in a field biology module
Formative computer-based assessments (CBAs) for self-instruction were introduced into a Year-2 field biology module. These CBAs were provided in "tutorial" mode, where each question had context-related diagnostic feedback and tutorial pages, and in a self-test mode, where the same CBA returned only a score. The summative assessments remained unchanged and consisted of an unseen CBA and written reports of field investigations. When compared with the previous three year cohorts, the mean score for the summative CBA increased after the introduction of formative CBAs, whereas mean scores for written reports did not change. It is suggested that the increase in summative CBA mean score reflects the effectiveness of the formative CBAs in widening the students' knowledge base. Evaluation of all assessments using an Assessment Experience Questionnaire indicated that they satisfied the "11 conditions under which assessment supports student learning". Additionally, evidence is presented that the formative CBAs enhanced self-regulated student learning.
"Summative" and "Formative": Confused by the assessment terms?
The terms "formative" and "summative", when linked to assessment, can cause confusion. Should these terms be dropped? Should we move on from them? This paper argues that it is the common shortening of the full and meaningful terms, "assessment for formative purposes" and "assessment for summative purposes", that contributes to confusion over assessments, information and methods, particularly for pre-service teachers and those with less teaching experience. By being well informed about both purpose and assessment activity, teachers will have greater clarity in understanding, communication and practice regarding these important and useful concepts.
Prospects for summative evaluation of CAL in higher education
Many developers and evaluators feel an external demand for summative evaluation of courseware. Problems soon emerge. One is that the CAL may not be used at all by students if it is not made compulsory. If one measures learning gains, how does one know whether one is measuring the effect of the CAL or of the motivation in that situation? Such issues are symptoms of the basic theoretical problem with summative evaluation, which is that CAL does not cause learning like turning on a tap, any more than a book does. Instead, it is one rather small factor in a complex situation. It is of course possible to do highly controlled experiments, for example by motivating the subjects in a standardized way. This should lead to measurements that are repeatable by other similar experiments. However, they will be measurements that have little power to predict the outcome when the CAL is used in real courses. Hence the simple view of summative evaluation must be abandoned. Yet it is possible to gather useful information by studying how a piece of CAL is used in a real course and what the outcomes were. Although this does not guarantee the same outcomes for another purchaser, it is obviously useful to know that the CAL has been used successfully one or more times, and how it was used on those occasions. Such studies can also serve a different "integrative" rather than summative function by pointing out failings of the CAL software and suggesting how to remedy them.
A 2-Question Summative Score Correlates with the Maslach Burnout Inventory
Introduction: There is a high prevalence of burnout among emergency medicine (EM) residents. The Maslach Burnout Inventory - Human Services Survey (MBI-HSS) is a widely used tool to measure burnout. The objective of this study was to compare the MBI-HSS and a two-question tool to determine burnout in the EM resident population. Methods: Based on data from the 2017 National Emergency Medicine Resident Wellness Survey study, we determined the correlation between two single-item questions and their respective MBI subscales and the full MBI-HSS. We then compared a 2-Question Summative Score to the full MBI-HSS with respect to primary, more restrictive, and more inclusive definitions of burnout previously reported in the literature. Results: Of 1,522 residents who completed the survey, 37.0% reported "I feel burned out from my work," and 47.1% reported "I have become more callous toward people since I took this job" once a week or more (each item >3 on a scale of 0-6). A 2-Question Summative Score totaling >3 correlated most closely with the primary definition of burnout (Spearman's rho 0.65 [95% confidence interval 0.62-0.68]). Using the summative score, 77.7% of residents were identified as burned out, compared to 76.1% using the full MBI-HSS, with a sensitivity and specificity of 93.6% and 73.0%, respectively. Conclusion: An abbreviated 2-Question Summative Score correlates well with the full MBI-HSS tool in assessing EM resident physician burnout and could be considered a rapid screening tool to identify at-risk residents experiencing burnout.
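The comparison reported above rests on standard sensitivity and specificity calculations of the quick screen against the full MBI-HSS as the reference standard. A minimal sketch of that calculation, using made-up labels rather than the study's data:

```python
# Sensitivity/specificity of a screening result against a reference
# standard. The two label lists below are fabricated illustrations,
# not the study's 1,522-resident dataset.

def sensitivity_specificity(screen, reference):
    """screen/reference: lists of booleans (True = classified as burned out)."""
    tp = sum(s and r for s, r in zip(screen, reference))          # true positives
    tn = sum(not s and not r for s, r in zip(screen, reference))  # true negatives
    fn = sum(not s and r for s, r in zip(screen, reference))      # false negatives
    fp = sum(s and not r for s, r in zip(screen, reference))      # false positives
    return tp / (tp + fn), tn / (tn + fp)

reference = [True, True, True, False, False]   # e.g. full MBI-HSS result
screen    = [True, True, False, False, True]   # e.g. 2-Question Score > 3

sens, spec = sensitivity_specificity(screen, reference)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```

The study's reported 93.6% sensitivity and 73.0% specificity come from exactly this kind of 2x2 cross-classification, just over the full resident sample.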
How convincing is alternative assessment for use in higher education?
The current preference for alternative assessment has been partly stimulated by recent evidence on learning which points to the importance of students' active engagement in the learning process. While alternative assessment may well fulfil the aims of formative assessment, its value in the summative assessment required in higher education is more problematic. If alternative assessment devices are to be used for summative purposes, the validity of alternative assessment has to be considered. The paper argues that task specification and marking consistency in alternative assessment can make comparability of performance difficult to effect, thereby leaving alternative assessment a less than convincing form for use in higher education.