Investigating Different Types of Assessment in Massive Open Online Courses
The current technological era has largely influenced the development of learning environments, creating new opportunities for teaching, learning and assessment. The emergence of Massive Open Online Courses (MOOCs) in particular has attracted the attention of higher education institutions and course designers. MOOCs can offer thousands of students the opportunity to learn from anywhere, at their own convenience. Assessment is a component of the learning environment that drives student learning. However, only a small proportion of the existing literature on assessment investigates its use for the enhancement of educational growth, as most of the literature is concerned with how to use assessment for grading and ranking (Rowntree, 1987). Assessment plays a double role in learning: it motivates students to study in order to undertake it, and it provides the feedback on their performance that students need to track their learning progress (Rowntree, 1987).
Research on MOOCs is currently growing, focusing on aspects such as the “questionable course quality, high dropout rate, unavailable course credits, complex copyright, limited hardware and ineffective assessments” (Chen, 2014). Assessment in MOOCs has mostly been investigated from a perspective that examines how the grading load can be reduced by adopting automated techniques, what each technique aims to achieve, and which new approaches might be able to assess higher-level cognition. In short, researchers are currently testing tools that automatically score essays and give learners effective feedback (see Balfour, 2013). However, the learners’ voice and standpoint on the different assessment types in the MOOC context remain inconclusive in the current literature, and more research is needed.
This study explores learners’ views on assessment types in Massive Open Online Courses: whether any of these types has an impact on their enrolment in and completion of a course, and in what respects each type of assessment is effective in supporting their learning experience. Auto-assessment, peer-assessment and self-assessment are the types under investigation, as they are frequently used in MOOCs and are therefore the most commonly discussed in the literature (see Balfour, 2013; Suen, 2013; Wilkowski et al., 2014). The study draws upon the literature on assessment in general and on assessment in MOOCs in particular. The concept of online communities, i.e. the communities of learners that form in MOOCs, is also discussed in detail.
Online ethnographic approaches are employed to explore the issue in question, using online interviewing and observation methods. Thematic analysis is carried out on a sample of 12 MOOC participants from online interviews and 13 posts from online observations. The outcome of this qualitative study reveals that even though participants identify benefits in peer assessment, there is a preference for automated assessment, since it is an already familiar, clear type of assessment for them. Moreover, self-assessment is not popular among participants. Learners’ comments also reveal that clear guidance for assessment helps them to carry out peer assessment more effectively. Some learners consider that combining assessment types may also have a positive effect on students’ learning, as each type serves a different purpose.
An Automated Grading and Feedback System for a Computer Literacy Course
Computer Science departments typically offer a computer literacy course that targets a general lay audience. At Appalachian State University, this course is CS1410 - Introduction to Computer Applications. Computer literacy courses have students work with various desktop and web-based software applications, including standard office applications. CS1410 strives to have students use well-known applications in new and challenging ways, as well as exposing them to some unfamiliar applications. These courses can draw large enrollments, which makes efficient and consistent grading difficult. This thesis describes the development and successful deployment of the Automated Grading And Feedback (AGAF) system for CS1410. Specifically, a suite of automated grading tools targeting the different types of CS1410 assignments has been built. The AGAF system tools have been used on actual CS1410 submissions and the resulting grades were verified. AGAF tools exist for Microsoft Office assignments that require students to upload a submission file. Another AGAF tool accepts a student “online text submission” in which the text encodes the URL of a Survey Monkey survey and a blog. Other CS1410 assignments require students to upload an image file. AGAF can process images in multiple ways, including decoding a QR two-dimensional barcode and identifying an expected image pattern.
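The “online text submission” check described above amounts to pulling the expected URLs out of a free-text submission. A minimal sketch of that kind of parser is shown below; the function name and matching rules are illustrative assumptions, not taken from the thesis:

```python
import re

def extract_submission_urls(text):
    """Pull a SurveyMonkey survey URL and a blog URL out of a free-text
    submission. Illustrative only: a real grading tool would likely add
    validation (does the URL resolve? is the survey non-empty?)."""
    urls = re.findall(r"https?://[^\s<>\"]+", text)
    survey = next((u for u in urls if "surveymonkey.com" in u), None)
    blog = next((u for u in urls if "surveymonkey.com" not in u), None)
    return survey, blog

submission = ("My survey: https://www.surveymonkey.com/r/ABC123 "
              "and my blog: https://example.blogspot.com/")
print(extract_submission_urls(submission))
```

Separating extraction from validation keeps the tool easy to test: the grader can first confirm both URLs are present, then fetch them to award credit.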
Supporting mediated peer-evaluation to grade answers to open-ended questions
We present an approach to semi-automatic grading of answers given by students to open-ended questions (open answers). We use both peer evaluation and teacher evaluation. A learner is modeled by her Knowledge and by the quality of her assessments (Judgment). The data generated by the peer and teacher evaluations, and by the learner models, is represented by a Bayesian network in which the grades of the answers and the elements of the learner models are variables with values in a probability distribution. The initial state of the network is determined by the peer-assessment data; each teacher’s grading of an answer then triggers evidence propagation in the network. The framework is implemented in a web-based system. We also present an experimental activity, set up to verify the effectiveness of the approach in terms of correctness of system grading, amount of teacher’s work required, and correlation of system outputs with teachers’ grades and students’ final exam grades.
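A heavily simplified, discrete sketch of the idea follows (it is not the paper’s actual Bayesian-network implementation; the names, grade levels, and weighting scheme are all illustrative): peer votes, weighted by each peer’s estimated Judgment, define a distribution over an answer’s grade, and a teacher’s grade, once available, is used as evidence to re-estimate peer Judgment:

```python
GRADES = ["low", "medium", "high"]

def grade_posterior(peer_votes, judgment):
    """Naive-Bayes-style posterior over an answer's grade.
    peer_votes: {peer: voted grade}; judgment: {peer: reliability in (0, 1]}.
    A reliable peer's vote concentrates mass on the voted grade; an
    unreliable peer's vote spreads mass over the alternatives."""
    scores = {g: 1.0 for g in GRADES}                  # uniform prior
    for peer, vote in peer_votes.items():
        w = judgment[peer]
        for g in GRADES:
            scores[g] *= w if g == vote else (1 - w) / (len(GRADES) - 1)
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}

def update_judgment(peer_votes, teacher_grade, judgment, lr=0.2):
    """Crude stand-in for evidence propagation: nudge each peer's
    reliability toward 1 if their vote matched the teacher's grade,
    toward 0 otherwise."""
    new = {}
    for peer, j in judgment.items():
        target = 1.0 if peer_votes[peer] == teacher_grade else 0.0
        new[peer] = j + lr * (target - j)
    return new

votes = {"ann": "high", "bob": "high", "cei": "low"}
judgment = {"ann": 0.8, "bob": 0.6, "cei": 0.5}
print(grade_posterior(votes, judgment))
```

In the paper’s full model, both the grades and the learner-model elements are network variables and propagation is exact; the sketch only conveys how peer reliability and teacher evidence interact.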
D2.1 Analysis of existing MOOC platforms and services
The main objective of this task is to analyze the features and services of the MOOC platforms used in ECO and, secondly, of other commonly used MOOC platforms. The task takes into account the functionality required by the different pilots from two viewpoints: technological and pedagogical. Firstly, to meet this objective, a state-of-the-art review was performed, covering mainly research papers and annotated scientific literature. Secondly, we elaborated a Competitive Analysis Checklist for MOOC platforms; an approach based on technological and pedagogical items is suggested to define specific dimensions for this task, and the Checklist will be a useful tool for evaluating MOOC platforms. Thirdly, five of the ECO platforms were evaluated by using the authoring and delivery environment to check for the availability of features that are essential for the implementation of the pedagogical model as described in D2.1; it became clear that these platforms are not very suitable for the pedagogical model. Finally, a Guide for the Effective Creation of MOOCs has been drawn up to assist course designers in comparing the functionality, features, and pedagogical and instructional advantages of the platforms, so they can choose the most suitable one for their areas of interest and needs. Part of the work carried out has been funded with support from the European Commission, under the ICT Policy Support Programme, as part of the Competitiveness and Innovation Framework Programme (CIP) in the ECO project under grant agreement n° 21127
Online Interactive Tool for Learning Logic
This dissertation presents the design and implementation of an online platform for solving logic exercises, aimed at complementing theoretical classes for students of logic-related courses at the University of Nova Lisbon. The platform is integrated with a Learning Management System (LMS) using the LTI protocol, allowing instructors to grade students’ work.
We provide an overview of related literature and detailed explanations of each component of the platform, including the design of the logic exercises and their integration with the LMS. Additionally, we discuss the challenges and difficulties faced during the development process.
The main contributions of this work are the platform itself, a guide on integrating an external tool with LTI, and the implementation of the tool with the LTI learning platform. Our results and evaluations show that the platform is effective for enhancing online learning experiences and improving assessment methods.
In conclusion, this dissertation provides a valuable resource for educational institutions
seeking to improve their online learning offerings and assessment practices.
Ten Good Reasons to Adopt an Automated Formative Assessment Model for Learning and Teaching Mathematics and Scientific Disciplines
This paper analyzes an educational model for automated formative assessment, developed at the Department of Mathematics of the University of Turin, for learning and teaching Mathematics and scientific disciplines. The model is provided through an automated grading system which, empowered by the engine of an advanced computing environment, allows the creation of algorithmic variables and open mathematical answers, recognized in all their equivalent forms. The adoption of automated formative assessment brings many advantages to learning. Easily available assignments, immediate feedback, adaptivity, and the chance to learn from mistakes turn assessment into a fundamental enhancement in education; the intrinsic "rigidity" of technology can also have positive effects on students' path to knowledge. Automated assessment also brings innovation into teaching: time saved in grading can be used to improve materials and activities; teachers easily obtain information about students' learning, although they need to change their approach and attend training; and sharing and collaboration among teachers are facilitated. Results obtained from the application of automated formative assessment in several class experiences are discussed, and data about the satisfaction and criticisms that emerged are presented.
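One common way a grader can accept an open mathematical answer “in all its equivalent forms” is to compare the submitted expression against the reference answer at random sample points. The sketch below illustrates only that generic technique; the Turin system relies on a computer-algebra engine, and all names here are assumptions:

```python
import random

def equivalent(f, g, n=200, lo=-5.0, hi=5.0, tol=1e-9):
    """Probabilistic check that two answer expressions define the same
    function, by comparing values at random points. Points where either
    expression is undefined are skipped; a relative tolerance absorbs
    floating-point rounding between algebraically equal forms."""
    rng = random.Random(0)                     # fixed seed: reproducible
    for _ in range(n):
        x = rng.uniform(lo, hi)
        try:
            fx, gx = f(x), g(x)
        except (ValueError, ZeroDivisionError):
            continue                           # point outside the domain
        if abs(fx - gx) > tol * max(1.0, abs(fx)):
            return False
    return True

# (x+1)^2 and x^2+2x+1 are accepted as the same answer:
print(equivalent(lambda x: (x + 1) ** 2,
                 lambda x: x * x + 2 * x + 1))   # True
print(equivalent(lambda x: (x + 1) ** 2,
                 lambda x: x * x + 1))           # False
```

Random-point testing can in principle miss a disagreement, so a production grader would pair it with symbolic simplification; it is shown here because it makes the “equivalent forms” idea concrete in a few lines.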
Validation of Non-formal MOOC-based Learning: An Analysis of Assessment and Recognition Practices in Europe (OpenCred)
This report presents the outcomes of research, conducted between May 2014 and November 2015, into emerging practices in assessment, credentialisation and recognition in Massive Open Online Courses (MOOCs). Following extensive research on MOOCs in European Member States, it provides a snapshot of how European Higher Education Institutions (HEIs) recognise (or not) non-formal learning (particularly MOOC-based), and how some employers recognise open badges and MOOC certificates for continuing professional development. We analyse the relationship between forms of assessment used and credentials awarded, from badges for self-assessment to ECTS credits for on-site examinations, and consider the implications for recognition. Case studies provide deeper insights into existing practices. The report introduces a model which guides MOOC conveners in positioning and shaping their offers, and also helps institutions and employers to make recognition decisions. It concludes with a set of recommendations to European HEIs and policy makers to enable wider recognition of open learning in higher education and at the workplace.
On the Quality of Crowdsourced Programming Assignments
Crowdsourcing has been used in computer science education to alleviate the teachers’ workload in creating course content, and as a learning and revision method for students through its use in educational systems. Tools that utilize crowdsourcing can act as a great way for students to further familiarize themselves with the course concepts, all while creating new content for their peers and future course iterations.
In this study, student-created programming assignments from the second week of an introductory Java programming course are examined alongside the peer reviews these assignments received. The quality of the assignments and the peer reviews is inspected, for example, through comparing the peer reviews with expert reviews using inter-rater reliability. The purpose of this study is to inspect what kinds of programming assignments novice students create, and whether the same novice students can act as reliable reviewers.
While it is not possible to draw definite conclusions from the results of this study, owing to limitations concerning the usability of the tool, the results seem to indicate that novice students are able to recognise differences in programming assignment quality, especially with sufficient guidance and well-thought-out instructions.
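The inter-rater reliability comparison mentioned above is typically computed with a statistic such as Cohen’s kappa, which corrects raw agreement for agreement expected by chance. The following is a generic implementation of that formula, not the study’s actual analysis code:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items.
    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the chance agreement implied by each rater's
    marginal label frequencies."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

peers   = ["good", "good", "poor", "good", "poor", "good"]
experts = ["good", "good", "poor", "poor", "poor", "good"]
print(round(cohens_kappa(peers, experts), 3))   # 0.667
```

Values near 1 indicate the peer reviews track the expert reviews well; values near 0 indicate agreement no better than chance, which is the distinction the study’s reliability analysis turns on.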
MOOCs: Expectations and Reality
This comprehensive study of MOOCs from the perspective of institutions of higher education includes an investigation of definitions and characteristics of MOOCs, their origins, institutional goals for developing and delivering MOOCs, how MOOC data is being used, a review of MOOC resource requirements and costs, and a compilation of ideas from 83 interviewees about MOOCs and the future of higher education. We identify six major goals for MOOC initiatives and assess the evidence regarding whether these goals are being met, or are likely to be in the future.