Student centred legal language study
The article introduces parts of a self-study programme for LLB (Europe) German students, which includes the use of satellite TV and CALL. The whole self-study programme was tested for two years at Nottingham Trent University. This paper focuses on the rationale of the study programme, its pedagogical objectives and theoretical considerations within the context of language learning, as well as the students' evaluation. The evaluation shows that, overall, the package was seen as a positive learning experience. CALL can be a solution to the problem of limited materials for languages for specific purposes. Mixed media can be used in language teaching for specific purposes without having to be combined into multimedia computer-based programmes. CALL can also be a solution to the problems caused by reduced contact time.
Towards robust computerised marking of free-text responses
This paper describes and exemplifies an application of AutoMark, a software system developed in pursuit of robust computerised marking of free-text answers to open-ended questions. AutoMark employs the techniques of Information Extraction to provide computerised marking of short free-text responses. The system incorporates a number of processing modules specifically aimed at providing robust marking in the face of errors in spelling, typing, syntax, and semantics. AutoMark looks for specific content within free-text answers, the content being specified in the form of a number of mark scheme templates. Each template represents one form of a valid (or a specifically invalid) answer. Student answers are first parsed, and then intelligently matched against each mark scheme template, and a mark for each answer is computed. The representation of the templates is such that they can be robustly mapped to multiple variations in the input text.
The current paper describes AutoMark for the first time, and presents the results of a brief quantitative and qualitative study of the performance of the system in marking a range of free-text responses in one of the most demanding domains: statutory national curriculum assessment of science for pupils at age 11. This particular domain has been chosen to help identify the strengths and weaknesses of the current system in marking responses where errors in spelling, syntax, and semantics are at their most frequent. Four items of varying degrees of open-endedness were selected from the 1999 tests.
These items are drawn from the real world of so-called "high stakes" testing experienced by cohorts of over half a million pupils in England each year since 1995 at ages 11 and 14. A quantitative and qualitative study of the performance of the system is provided, together with a discussion of the potential for further development in reducing these errors. The aim of this exploration is to reveal some of the issues which need to be addressed if computerised marking is to play any kind of reliable role in the future development of such test regimes.
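AutoMark's internal template language is not reproduced in the abstract, but the core idea of matching a normalised student answer against mark-scheme templates can be sketched as follows. Everything in this sketch (the template format, the example science question, the pattern syntax) is illustrative and hypothetical, not AutoMark's actual representation:

```python
import re

# Hypothetical mark-scheme templates: each pairs a mark with regex patterns
# covering alternative phrasings of a valid (or specifically invalid) answer.
# Illustrative only -- the real AutoMark template language is far richer.
TEMPLATES = [
    # Valid answer: the water evaporates / turns into vapour (1 mark)
    (1, [r"\bevaporat\w*", r"\b(turn\w*|chang\w*)\s+(in)?to\s+(water\s+)?vapou?r\b"]),
    # Specifically invalid answer: the water "disappears" (0 marks)
    (0, [r"\bdisappear\w*"]),
]

def normalise(answer: str) -> str:
    """Lower-case and collapse whitespace; a crude stand-in for the robust
    spelling/syntax handling a real system would need."""
    return re.sub(r"\s+", " ", answer.lower()).strip()

def mark(answer: str) -> int:
    """Return the mark of the first template the answer matches, else 0."""
    text = normalise(answer)
    for score, patterns in TEMPLATES:
        if any(re.search(p, text) for p in patterns):
            return score
    return 0

print(mark("The water EVAPORATES into the air"))  # valid template -> 1
print(mark("the water disappears"))               # invalid template -> 0
print(mark("no idea"))                            # no match -> 0
```

The ordering of templates matters here: a specifically invalid answer is caught only if no valid template fires first, which is one reason robust mapping of templates onto noisy input text is the hard part of the problem.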
Service-oriented flexible and interoperable assessment: towards a standardised e-assessment system
Free-text answer assessment has been a field of interest for the last 50 years, and several assessment tools underpinned by different techniques have been developed. In most cases, the complexity of the underlying techniques has caused those tools to be designed and developed as stand-alone tools. The rationales for using computers to assist learning assessment are mainly to save time and cost, as well as to reduce staff workload. However, utilising free-text answer assessment tools separately from the learning environment may increase the staff workload and the complexity of the assessment process. Therefore, free-text answer scorers need a flexible design so that they can be integrated within e-assessment system architectures, taking advantage of software-as-a-service architecture. Moreover, a flexible and interoperable e-assessment architecture has to be utilised in order to facilitate this integration. This paper discusses the importance of flexible and interoperable e-assessment, proposes a service-oriented flexible and interoperable architecture for future e-assessment systems, and shows how such an architecture can foster the e-assessment process in general and free-text answer assessment in particular.
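The service-oriented idea above can be made concrete with a small interface sketch: a stand-alone scorer hides behind a stable contract that any compliant e-assessment system could call remotely. All names here (ScoringRequest, FreeTextScorer, KeywordScorer) are hypothetical illustrations, not from the paper or any e-learning standard:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hypothetical data contract between an e-assessment system and a scorer
# exposed as a service (e.g. over HTTP). Names are illustrative only.
@dataclass
class ScoringRequest:
    question_id: str
    student_answer: str

@dataclass
class ScoringResponse:
    question_id: str
    score: float
    max_score: float

class FreeTextScorer(ABC):
    """Interface a stand-alone scorer implements so any compliant
    assessment front-end can use it without knowing its internals."""
    @abstractmethod
    def score(self, request: ScoringRequest) -> ScoringResponse: ...

class KeywordScorer(FreeTextScorer):
    """Trivial reference implementation: one point per expected keyword."""
    def __init__(self, keywords_by_question: dict[str, set[str]]):
        self.keywords = keywords_by_question

    def score(self, request: ScoringRequest) -> ScoringResponse:
        expected = self.keywords[request.question_id]
        found = {w for w in request.student_answer.lower().split() if w in expected}
        return ScoringResponse(request.question_id, float(len(found)), float(len(expected)))

scorer = KeywordScorer({"q1": {"photosynthesis", "chlorophyll"}})
resp = scorer.score(ScoringRequest("q1", "Plants use chlorophyll for photosynthesis"))
print(resp.score, "/", resp.max_score)  # 2.0 / 2.0
```

The point of the interface is that KeywordScorer could be swapped for a far more complex free-text engine without the calling learning environment changing at all, which is exactly the decoupling a software-as-a-service design aims for.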
Surveying Persons with Disabilities: A Source Guide (Version 1)
As a collaborator with the Cornell Rehabilitation Research and Training Center on Disability Demographics and Statistics, Mathematica Policy Research, Inc. has been working on a project that identifies the strengths and limitations in existing disability data collection in both content and data collection methodology. The intended outcomes of this project include expanding and synthesizing knowledge of best practices and the extent existing data use those practices, informing the development of data enhancement options, and contributing to a more informed use of existing data. In an effort to provide the public with an up-to-date and easily accessible source of research on the methodological issues associated with surveying persons with disabilities, MPR has prepared a Source Guide of material related to this topic. The Source Guide contains 150 abstracts, summaries, and references, followed by a Subject Index, which cross-references the sources from the Reference List under various subjects. The Source Guide is viewed as a "living document," and will be periodically updated.
Applying science of learning in education: Infusing psychological science into the curriculum
The field of specialization known as the science of learning is not, in fact, one field. Science of learning is a term that serves as an umbrella for many lines of research, theory, and application. A term with an even wider reach is Learning Sciences (Sawyer, 2006). The present book represents a sliver, albeit a substantial one, of the scholarship on the science of learning and its application in educational settings (Science of Instruction, Mayer, 2011). Although much, but not all, of what is presented in this book is focused on learning in college and university settings, teachers of all academic levels may find the recommendations made by chapter authors of service. The overarching theme of this book is the interplay between the science of learning, the science of instruction, and the science of assessment (Mayer, 2011). The science of learning is a systematic and empirical approach to understanding how people learn. More formally, Mayer (2011) defined the science of learning as the "scientific study of how people learn" (p. 3). The science of instruction (Mayer, 2011), informed in part by the science of learning, is also on display throughout the book. Mayer defined the science of instruction as the "scientific study of how to help people learn" (p. 3). Finally, the assessment of student learning (e.g., learning, remembering, transferring knowledge) during and after instruction helps us determine the effectiveness of our instructional methods. Mayer defined the science of assessment as the "scientific study of how to determine what people know" (p. 3). Most of the research and applications presented in this book are completed within a science of learning framework. Researchers first conducted research to understand how people learn in certain controlled contexts (i.e., in the laboratory) and then they, or others, began to consider how these understandings could be applied in educational settings.
Work on the cognitive load theory of learning, which is discussed in depth in several chapters of this book (e.g., Chew; Lee and Kalyuga; Mayer; Renkl), provides an excellent example that documents how the science of learning has led to valuable work on the science of instruction. Most of the work described in this book is based on theory and research in cognitive psychology. We might have selected other topics (and, thus, other authors) that have their research base in behavior analysis, computational modeling and computer science, neuroscience, etc. We made the selections we did because the work of our authors ties together nicely and seemed to us to have direct applicability in academic settings.
Reviews
Brian Clegg, Mining The Internet – Information Gathering and Research on the Net, Kogan Page: London, 1999. ISBN: 0-7494-3025-7. Paperback, 147 pages, £9.99
Scholarly insight Spring 2018: a Data wrangler perspective
In the movie classic Back to the Future, a young Michael J. Fox is able to explore the past in a time machine developed by the slightly bizarre but exquisite Dr Brown. Unexpectedly, small interventions during Fox's adventures changed the course of history a little. In this fourth Scholarly Insight Report we have explored two innovative approaches to learning from OU data of the past, which hopefully in the future will make a large difference in how we support our students and design and implement our teaching and learning practices. In Chapter 1, we provide an in-depth analysis of 50 thousand comments expressed by students through the Student Experience on a Module (SEAM) questionnaire. By analysing over 2.5 million words using big data approaches, our Scholarly insights indicate that not all student voices are heard. Furthermore, our big data analysis indicates useful potential insights for exploring how student voices change over time, and for which particular modules emergent themes might arise.
In Chapter 2 we provide our second innovative approach: a proof-of-concept of qualification pathway analysis using graph approaches. By exploring existing data for one qualification (i.e., Psychology), we show that students make a range of pathway choices during their qualification, some of which are more successful than others. As highlighted in our previous Scholarly Insight Reports, getting data from a qualification perspective within the OU is a difficult and challenging process, and the proof-of-concept provided in Chapter 2 might provide a way forward to better understand and support the complex choices our students make.
In Chapter 3, we provide a slightly more practically oriented and perhaps down-to-earth approach focussing on the lessons learned with Analytics4Action. Over the last four years nearly a hundred modules have made more active use of data and insights into module presentation to support their students. In Chapter 3 several good practices are described by the LTI/TEL learning design team, as well as three innovative case studies which we hope will inspire you to try something new as well.
Working organically in various Faculty sub-group meetings and LTI Units, and in a Google Doc with various key stakeholders in the Faculties, we hope that our Scholarly insights can help to inform our staff, but also spark some ideas on how to further improve our module designs and qualification pathways. Of course we are keen to hear what other topics require Scholarly insight. We hope that you see some potential in the two innovative approaches, and perhaps you might want to try some new ideas in your module. While a time machine has not really been invented yet, with the increasingly rich and fine-grained data about our students and our learning practices we are getting closer to understanding what really drives our students.
A scoring rubric for automatic short answer grading system
During the past decades, research on automatic grading has become an interesting issue. These studies focus on how machines can help humans assess students' learning outcomes. Automatic grading enables teachers to assess students' answers more objectively, consistently, and quickly. The essay model has two different types, i.e. the long essay and the short answer. Most previous research has developed automatic essay grading (AEG) rather than automatic short answer grading (ASAG). This study aims to assess the similarity of short answers to questions and reference answers in Indonesian without any semantic language tool. The approach uses pre-processing steps consisting of case folding, tokenization, stemming, and stopword removal. The proposed approach is a scoring rubric obtained by measuring sentence similarity using string-based similarity methods together with a keyword matching process. The dataset used in this study consists of 7 questions, 34 alternative reference answers, and 224 student answers. The experimental results show that the proposed approach achieves Pearson correlation values between 0.65419 and 0.66383, with Mean Absolute Error (MAE) values between 0.94994 and 1.24295. The proposed approach also raises the correlation value and decreases the error value for each method.
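The abstract does not spell out which string-based similarity measures or rubric weights were used, but the overall pipeline (case folding, tokenisation, stopword removal, similarity against reference answers, plus keyword matching) can be sketched roughly. In this sketch the tiny stopword list, the difflib similarity measure, and the equal weighting of the two scores are all assumptions; stemming is omitted for brevity:

```python
import difflib
import string

# Tiny illustrative Indonesian stopword list (a real system would use a full one).
STOPWORDS = {"yang", "dan", "di", "ke", "dari", "adalah", "oleh"}

def preprocess(text: str) -> list[str]:
    """Case folding, punctuation stripping, tokenisation, stopword removal.
    (The paper also applies stemming, which is omitted here.)"""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return [t for t in text.split() if t not in STOPWORDS]

def string_similarity(answer: str, reference: str) -> float:
    """One possible string-based similarity: difflib's matching-ratio
    over the pre-processed texts (0.0 to 1.0)."""
    a, b = " ".join(preprocess(answer)), " ".join(preprocess(reference))
    return difflib.SequenceMatcher(None, a, b).ratio()

def keyword_score(answer: str, keywords: set[str]) -> float:
    """Keyword matching: fraction of rubric keywords found in the answer."""
    tokens = set(preprocess(answer))
    return len(tokens & keywords) / len(keywords) if keywords else 0.0

def grade(answer: str, references: list[str], keywords: set[str],
          max_mark: float = 5.0) -> float:
    """Scoring rubric: best similarity across alternative reference answers,
    combined 50/50 with keyword coverage (the weighting is an assumption)."""
    best_sim = max(string_similarity(answer, ref) for ref in references)
    return max_mark * (0.5 * best_sim + 0.5 * keyword_score(answer, keywords))

score = grade(
    "Fotosintesis adalah proses pembuatan makanan oleh tumbuhan",
    references=["Fotosintesis adalah proses tumbuhan membuat makanan"],
    keywords={"fotosintesis", "tumbuhan", "makanan"},
)
print(round(score, 2))
```

Taking the best match over the 34 alternative reference answers, as in the dataset described above, is what lets a rubric like this tolerate several valid phrasings of the same short answer.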