Investigating Different Types of Assessment in Massive Open Online Courses
The current technological era has strongly shaped the development of learning environments, creating new opportunities for teaching, learning and assessment. The emergence of Massive Open Online Courses (MOOCs), in particular, has attracted the attention of higher education institutions and course designers. MOOCs can give thousands of students the opportunity to learn from anywhere, at their own convenience. Assessment is a component of the learning environment that drives student learning. However, only a small proportion of the existing literature on assessment investigates its use to support educational growth; most is concerned with using assessment for grading and ranking (Rowntree, 1987). Assessment plays a double role in learning: it motivates students to study, and it provides the feedback on performance that lets students track their learning progress (Rowntree, 1987).
Research on MOOCs is currently growing, focusing on aspects such as the “questionable course quality, high dropout rate, unavailable course credits, complex copyright, limited hardware and ineffective assessments” (Chen, 2014). Assessment in MOOCs has mostly been investigated in terms of how the grading load can be reduced through automated techniques, what each technique aims to achieve, and which new approaches might be able to assess higher-level cognition. In short, researchers are currently testing tools that automatically score essays and give learners effective feedback (see Balfour, 2013). However, the current literature is inconclusive about learners’ own views on the different assessment types used in MOOCs, and more research is needed.
This study explores learners’ views on assessment types in Massive Open Online Courses: whether any of these types affects their enrolment in and completion of a course, and in what respects each type is effective in supporting their learning experience. Auto-assessment, peer-assessment and self-assessment are the types under investigation, as they are frequently used in MOOCs and are therefore the most commonly discussed in the literature (see Balfour, 2013; Suen, 2013; Wilkowski et al., 2014). The study draws on the literature on assessment in general and on assessment in MOOCs in particular. The concept of online communities, i.e. the learners who come together in MOOCs, is also discussed in detail.
Online ethnographic approaches, namely online interviewing and observation, are employed to explore these questions. Thematic analysis is carried out on online interviews with a sample of 12 MOOC participants and on 13 posts gathered through online observation. The outcomes of this qualitative study reveal that even though participants identify benefits in peer assessment, they prefer automated assessment because it is a familiar, clear type of assessment. Moreover, self-assessment is unpopular among participants. Learners’ comments also reveal that clear guidance helps them carry out peer assessment more effectively. Some learners also consider that combining assessment types may have a positive effect on learning, as each type serves a different purpose.
How Do Digital Learning Processes Meet The Ever-Changing Needs Of The Policing Profession? Enablers And Barriers In Its Application
This is a developmental paper discussing professional development through digital learning in the context of the education and training of public servants. The fieldwork was undertaken with policing organisations, specifically territorial forces in England and Wales and their national body for professional development. This empirical study explores the current application of digital learning in the police service across England and Wales and considers how digital learning processes meet the ever-changing needs of the policing profession, in order to identify enablers of and barriers to its application. The paper contributes new insights into digital learning in the context of public service organisations. This research raises awareness of the challenges the police face in managing the design and implementation of digital learning to meet the changing demands of the policing service. The findings can be applied in other workplace contexts where organisations require professionals to upskill through digital learning.
MOOC Educators: Who They Are And How They Learn
This study set out to answer the following research questions: who teaches in Massive Open Online Courses (MOOCs) and how do these different educators learn to teach?
To answer these questions, it utilised Tynjälä’s theoretical model of Integrative Pedagogy, which brings together different elements of professional expertise. A ‘multiple case study’ was conducted, focusing on teaching activities and who is involved in them, as well as on educators’ ‘processes of knowledge building’ and the forms of knowledge they integrate. The data comprised 28 interviews with professionals with teaching responsibilities in seven MOOCs on the subjects of History and Politics on the FutureLearn platform. The seven cases were analysed using different strategies (theoretical propositions, ground-up data, and rival explanations).
The analysis showed that the role of ‘educator’ is filled not only by those with the titles used by the FutureLearn platform, but also by other professionals who take pedagogical decisions. MOOC teaching activities are diverse, differ from face-to-face teaching, and are difficult for a single individual to carry out. Educators in different courses and universities used diverse models of work practice, each with advantages and disadvantages. MOOC educators learned to teach effectively when they had a shared goal, worked in transparent ways and involved interdisciplinary teams in a timely manner.
These findings can help institutions and platforms to design better Continuing Professional Development programmes and, ultimately, more effective MOOC learning journeys. Drawing on this evidence, the original contribution to knowledge of this thesis is a new conceptualisation of who MOOC educators are, developed by uncovering the roles of the professionals who carry out teaching on these courses, the wide variety of teaching activities involved, and the ways people learn to work towards these.
Citizen science as interdisciplinary working
Citizen science is a growing trend that involves the public in different types of collaboration with scientists. Its growth has consequences for data collection, data analysis and the way in which science is carried out. It also has a potential impact on what, and how, citizen scientists learn about science when engaged in such activities. The purpose of this research is to explore the practices adopted by participants in citizen science projects, and in particular the influence on learning in those projects which rely on technology to support collaboration.
The growth of citizen science projects is occurring at the same time as a growth of interest in informal learning and both are supported by technology enhanced learning.
To make best use of the rapidly growing area of citizen science for the development of learning, it needs to be studied as a newly developing interdisciplinary area. This entails unravelling the mechanisms by which interdisciplinary collaboration takes place in these settings, and identifying the conditions which encourage or thwart learning.
Interdisciplinary research in technology-enhanced learning: Strategies for effective working
This study proposes strategies for effective working in teams where more than one discipline or field is involved, addressing one of the major challenges in the field: the development of interdisciplinary skills and knowledge. Data from ten in-depth interviews provide insights into: (a) how interdisciplinary teams in technology-enhanced learning collaborate, (b) the challenges and obstacles they face, and (c) strategies for effective working. Awareness of the challenges of interdisciplinary work, and training to address them, are emphasised as means to promote effective working in interdisciplinary teams.
Accessible learning, accessible analytics: a virtual evidence café
Learner accessibility is often thought of in terms of physical infrastructure or, in the case of online learning, guidelines for web design. Learning analytics offer a new set of possibilities for identifying and removing barriers to accessibility in learning environments. This is not simply a matter of designing analytics tools to be more accessible, for example by catering for learners who need extra time to respond, reducing cognitive load, or choosing an appropriate colour palette. When it comes to increasing access to learning opportunities for people with disabilities, solutions must be developed within the field of learning analytics itself. This workshop is a step towards developing those solutions. It will take the form of an evidence café, a structured event in which participants are split into groups to discuss technical and pedagogic approaches to accessibility, the barriers faced by disabled students and educators, and the associated challenges faced by those who design and research learning analytics. The intended outcomes of this workshop are to raise awareness of accessible learning and accessible analytics, and to build a community of researchers to lead future development in the area of accessible analytics.
Guidance on how Learning at Scale can be made more accessible
While learning at scale has the potential to widen access to education, the accessibility of courses offered on Massive Open Online Course (MOOC) platforms has not been researched in depth. This paper begins to fill that gap. Data were gathered using the participatory ‘Evidence Café’ method, and thematic analysis identified the characteristics of accessible courses on these platforms. These characteristics include elements of both technology and pedagogy. By capturing and analysing expert insights, the paper provides guidance on how online courses can be made more accessible. The findings suggest that course production teams need to work collaboratively with providers to address accessibility issues and to involve learners in design, testing and evaluation. Well-designed, tutor-supported activities that follow web accessibility and usability guidelines are needed, as well as educator training on accessibility.
Predictive learning analytics in online education: A deeper understanding through explaining algorithmic errors
Existing Predictive Learning Analytics (PLA) systems that utilise machine learning models have been shown to improve both teacher practice and student outcomes. The accuracy of these systems, and the errors they make, can negatively influence their adoption, yet little effort has been made to investigate the errors made by the underlying models. This study focused on the errors of models predicting which students are at risk of not submitting their assignments. We analysed two groups of errors made when the model was confident about its prediction: (a) students predicted to submit their assignment who did not (False Negatives; FN), and (b) students predicted not to submit their assignment who did (False Positives; FP). Following the principles of thematic analysis, we analysed interview data from 27 students whose predictions were FN or FP errors. Findings revealed the significance of unexpected events during studies that affect students' behaviour and cannot be foreseen or accounted for in PLA, such as changes in family and work responsibilities, unexpected health issues and computer problems. The interview data helped identify new data sources, such as study loan application information, that could be integrated into predictions to mitigate some of the errors. Other sources, e.g. capturing student knowledge at the start of the course, would require changes in the learning design of courses. Our insights showcase the importance of complementing AI-based systems with human intelligence: in our case, both the interviewed students who provided insights and the potential users of these systems, e.g. teachers, who are aware of contextual factors invisible to ML algorithms. We discuss the implications for improving predictions and learning design, and for training teachers to use PLA in their practice.
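The two error groups analysed above can be expressed as a short illustrative function (hypothetical code, not drawn from the study; here the positive class is "at risk of not submitting", so a confident "will submit" prediction for a student who does not submit is a False Negative):

```python
def error_group(predicted_submit: bool, actually_submitted: bool) -> str:
    """Categorise a confident prediction against the actual outcome.

    Positive class = 'at risk of not submitting the assignment'.
    """
    if predicted_submit and not actually_submitted:
        return "FN"  # predicted to submit, but did not
    if not predicted_submit and actually_submitted:
        return "FP"  # predicted not to submit, but did
    return "correct"  # prediction matched the outcome

# Example: a student the model expected to submit, who did not.
print(error_group(True, False))  # → FN
```

This is only a sketch of the error taxonomy; the study's actual models operate on richer behavioural data and confidence thresholds.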
How HE educators learn to teach Massive Open Online Courses: A case study
People working within HE institutions need to learn new forms of teaching and learning practice in order to transform the ways they work. This study explores the types of knowledge gained by those working in HE when they teach massive open online courses (MOOCs). Data were gathered through a case study involving interviews with six people with teaching roles on one MOOC. Data analysis used Tynjälä’s model of integrative pedagogy to identify the different types of theoretical, practical, sociocultural and self-regulative knowledge needed to teach in a MOOC. The analysis shows that individuals did not engage in formal training (theoretical knowledge); they learned through experience, by (re-)running the MOOC, and from learners’ feedback (practical knowledge). They also reflected on their own learning experience, on their contact with different cultures, and on engaging with ideas from other MOOCs and people (self-regulative knowledge). They worked collaboratively, sharing expertise, but sometimes found communication with colleagues difficult (sociocultural knowledge). When they faced challenges, they integrated theoretical, practical and self-regulative knowledge to solve problems (mediating processes).