The Learning Grid and E-Assessment using Latent Semantic Analysis
E-assessment is an important component of e-learning and e-qualification. Formative and summative assessment serve different purposes, and both types of evaluation are critical to the pedagogical process. While students are studying, practicing, working, or revising, formative assessment provides direction, focus, and guidance. Summative assessment provides the means to evaluate a learner's achievement and communicate that achievement to interested parties. Latent Semantic Analysis (LSA) is a statistical method for inferring meaning from a text. Applications based on LSA exist that provide both summative and formative assessment of a learner's work. However, the huge computational needs are a major problem with this promising technique. This paper explains how LSA works, describes the breadth of existing applications using LSA, explains how LSA is particularly suited to e-assessment, and proposes research to exploit the potential computational power of the Grid to overcome one of LSA's drawbacks.
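The core LSA pipeline mentioned here can be sketched in a few lines: build a term-document matrix, reduce it with a truncated SVD, and score a student answer by its cosine similarity to a model answer in the latent space. The toy documents, the labels `model`/`good`/`poor`, and the choice of k = 2 dimensions are assumptions for this illustration, not part of any system described above.

```python
# Minimal LSA-style scoring sketch (illustrative only): term-document
# matrix -> truncated SVD -> cosine similarity against a model answer.
import numpy as np

docs = {
    "model": "the cat sat on the mat",            # tutor's model answer
    "good":  "the cat sat on a mat",              # close student answer
    "poor":  "stock prices fell sharply today",   # off-topic answer
}
vocab = sorted({w for text in docs.values() for w in text.split()})
# Rows = terms, columns = documents, entries = raw term counts.
A = np.array([[text.split().count(w) for text in docs.values()]
              for w in vocab], dtype=float)

# Truncated SVD: keep only the top k latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T            # one row per document

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

names = list(docs)
model = doc_vecs[names.index("model")]
score_good = cosine(model, doc_vecs[names.index("good")])
score_poor = cosine(model, doc_vecs[names.index("poor")])
print(f"good: {score_good:.2f}  poor: {score_poor:.2f}")
```

A real deployment would use a large training corpus, weighting schemes such as log-entropy, and hundreds of latent dimensions; the expensive SVD on such corpora is exactly the computational burden the Grid proposal targets.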
Smart labs and social practice: social tools for pervasive laboratory workspaces: a position paper
The emergence of pervasive and ubiquitous computing stimulates a view of future work environments where sharing of information, data and knowledge is easy and commonplace, particularly in highly interactive settings. Much of the work in this area focuses on tool development to support activities such as data collection, data recording and sharing, and so on. We are interested in this kind of technical development, which is both challenging and essential for science communities. But we are also interested in a broader interpretation of knowledge sharing and the human/social side of tools we develop to support this. We are keen to know more about how groups of different kinds of scientists can make their work understandable and shareable with each other in a multidisciplinary setting. This is a complex task because boundaries and barriers can emerge between disciplines engendered by differences in discourses and practices, which may not easily translate into other discipline areas. In the worst case, there may be some hostility between disciplines, or at least doubt and scepticism. Nevertheless, sharing approaches to research, research expertise, data and methods across disciplines can be a very fruitful exercise, and encouragement to engage in this activity is particularly pertinent in the digital era. Issues of privacy and security are also key aspects – knowing when and how to release data or information to other groups is crucial to providing a safe environment for people to work, and there are several sensitivities to be explored here.
In this paper we describe an evolving situation that captures many of these issues, which we aim to track longitudinally.
ProCAss: An intelligent assessment for computer programming corpus
Electronic assessment (e-assessment) is becoming more popular in educational tools, especially in e-learning environments. It offers several advantages: it reduces the staff needed for assessment tasks, automated marking is not prone to human error, and it gives instant feedback to students. However, existing e-assessment work has concentrated on assessing essays and has seen little implementation for assessing computer programs in the computer science domain [Haley et al., 2003]. This paper briefly describes the techniques that have been used to implement e-assessment for essays, such as Project Essay Grade (PEG), E-rater, Intelligent Essay Assessor (IEA), and Latent Semantic Analysis (LSA). The discussion concentrates mainly on using LSA to build e-assessment for a computer programming corpus. One great advantage of LSA over the others is its ability to make both absolute and relative comparisons: a set of documents can be compared to each other, or to an answer schema. Another reason is that a computer program is written in a subset of English-like words, somewhat similar to an essay [Haley et al., 2003]. This paper proposes a system design integrated with the LSA method to assess students' programming assignments. The ability of the LSA algorithm to grade a computer program corpus is then evaluated. The grading process is not limited to particular programming languages, but applies to any programming language.
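The "English-like words" view of programs implies a preprocessing step before any LSA machinery: tokenise the source into identifier-like words and compare token profiles. The snippets, the `tokens`/`cosine` helpers, and the raw (un-reduced) cosine are illustrative assumptions here, not the ProCAss design itself.

```python
# Illustrative sketch: treat program source as a bag of English-like
# tokens and compare a student answer to an answer schema by cosine
# similarity of raw token counts (no SVD reduction in this toy version).
import re
from collections import Counter
from math import sqrt

def tokens(src: str) -> Counter:
    # Identifier/keyword "words"; language-agnostic, as the paper's
    # goal of supporting any programming language suggests.
    return Counter(re.findall(r"[A-Za-z_]\w*", src))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

schema = "def add(a, b): return a + b"       # hypothetical answer schema
answer = "def add(x, y): return x + y"       # hypothetical student answer
sim = cosine(tokens(schema), tokens(answer))
print(round(sim, 2))
```

Because the two snippets share only `def`, `add`, and `return`, the raw similarity is modest; an LSA step over a corpus of solutions would be needed to recognise that `x`/`y` and `a`/`b` play the same roles.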
Towards a personalized adaptive assessment model based on ontologies, context, and collaborative filtering
The assessment phase plays a very important role in students' teaching-learning process, since it validates the knowledge they have acquired and reveals their weaknesses and/or strengths. However, the selection of questions by the teacher or the learning platform does not respond to students' needs, limitations, and/or characteristics. In this context, incorporating mechanisms that better capture the user's characteristics for later use during question selection brings benefits such as better measurement of knowledge, increased student interest, and better detection of weaknesses for recommending new resources, among others. With the goal of selecting questions that better respond to students' needs, this article characterizes the most relevant techniques and models for question selection. It also proposes a personalized adaptive assessment ontological model, supported by Artificial Intelligence techniques, that incorporates relevant cognitive and contextual information about the student in order to better select and rank questions during the virtual assessment process.
A multi-agent model for assessment and diagnosis of failures in teaching-learning processes
Virtual learning environments currently lack effective mechanisms for early detection and diagnosis of learning failures. Integrating such elements into virtual learning environments could improve learning, since the diagnosis supplied by the system can be used to design an action plan that strengthens the course topics. The goal of this article is to present the design and development of a multi-agent model for assessment and diagnosis of failures, which seeks to discover learning weaknesses through the virtual assessment process. Additionally, the model aims to offer feedback and recommend new educational content adapted to learners' profiles. Based on the proposed model, a prototype was implemented and validated through a case study. The results show that students felt accompanied during the assessment process and received real-time feedback that identified weaknesses and allowed educational resources to be recommended in order to strengthen the learning process.
Applying latent semantic analysis to computer assisted assessment in the Computer Science domain: a framework, a tool, and an evaluation
This dissertation argues that automated assessment systems can be useful for both students and educators provided that the results correspond well with human markers. Thus, evaluating such a system is crucial. I present an evaluation framework and show how and why it can be useful for both producers and consumers of automated assessment systems. The framework is a refinement of a research taxonomy that came out of the effort to analyse the literature review of systems based on Latent Semantic Analysis (LSA), a statistical natural language processing technique that has been used for automated assessment of essays. The evaluation framework can help developers publish their results in a format that is comprehensive, relatively compact, and useful to other researchers.
The thesis claims that, in order to see a complete picture of an automated assessment system, certain pieces must be emphasised. It presents the framework as a jigsaw puzzle whose pieces join together to form the whole picture.
The dissertation uses the framework to compare the accuracy of human markers and EMMA, the LSA-based assessment system I wrote as part of this dissertation. EMMA marks short, free text answers in the domain of computer science. I conducted a study of five human markers and then used the results as a benchmark against which to evaluate EMMA. An integral part of the evaluation was the success metric. The standard inter-rater reliability statistic was not useful; I located a new statistic and applied it to the domain of computer assisted assessment for the first time, as far as I know.
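The abstract does not name either the standard statistic that proved unhelpful or the replacement that was adopted. As an illustration of the kind of inter-rater agreement measure such an evaluation starts from, Cohen's kappa (a common standard choice, assumed here purely for illustration) on hypothetical marks looks like this:

```python
# Cohen's kappa: observed agreement between two markers, corrected for
# the agreement expected by chance from each marker's score distribution.
from collections import Counter

def cohen_kappa(r1, r2):
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)  # chance
    return (po - pe) / (1 - pe)

# Two hypothetical markers grading the same five answers on a 0-2 scale.
marker_a = [2, 1, 0, 2, 1]
marker_b = [2, 1, 1, 2, 0]
kappa = cohen_kappa(marker_a, marker_b)
print(round(kappa, 2))
```

One known limitation, plausibly relevant here, is that kappa treats all disagreements as equal, which suits categorical labels poorly when marks are ordinal or continuous partial credit.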
Although EMMA exceeds human markers on a few questions, overall it does not achieve the same level of agreement with humans as humans do with each other. The last chapter maps out a plan for further research to improve EMMA.