Dropout Model Evaluation in MOOCs
The field of learning analytics needs to adopt a more rigorous approach for
predictive model evaluation that matches the complex practice of
model-building. In this work, we present a procedure to statistically test
hypotheses about model performance which goes beyond the state-of-the-practice
in the community to analyze both algorithms and feature extraction methods from
raw data. We apply this method to a series of algorithms and feature sets
derived from a large sample of Massive Open Online Courses (MOOCs). While a
complete comparison of all potential modeling approaches is beyond the scope of
this paper, we show that this approach reveals a large gap in dropout
prediction performance between forum-, assignment-, and clickstream-based
feature extraction methods, where the latter is significantly better than the
former two, which are in turn indistinguishable from one another. This work has
methodological implications for evaluating predictive or AI-based models of
student success, and practical implications for the design and targeting of
at-risk student models and interventions.
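The abstract above describes a procedure that goes beyond simple significance testing, but the basic idea of statistically comparing feature extraction methods can be illustrated with a paired test on matched per-course scores. The sketch below is a minimal illustration, not the authors' actual procedure; the AUC values are hypothetical.

```python
import math

def paired_t_test(scores_a, scores_b):
    """Two-sided paired t-test on matched per-course scores.

    Returns the t statistic and degrees of freedom; compare |t|
    against a t-distribution table to obtain a p-value.
    """
    assert len(scores_a) == len(scores_b)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample variance of the paired differences (n - 1 denominator).
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var / n)
    return mean / se, n - 1

# Hypothetical per-course AUCs for two feature extraction methods.
clickstream_auc = [0.78, 0.81, 0.76, 0.80, 0.79, 0.82]
forum_auc       = [0.66, 0.70, 0.64, 0.69, 0.67, 0.71]

t, df = paired_t_test(clickstream_auc, forum_auc)
```

Pairing by course controls for course-level difficulty, which is why a paired rather than an independent-samples test is the natural baseline when the same courses are modeled under each feature set.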
A Brief Survey of Deep Learning Approaches for Learning Analytics on MOOCs
Massive Open Online Course (MOOC) systems have become
prevalent in recent years and have drawn increased attention, owing in part
to the impact of the coronavirus pandemic. However, dropout rates in MOOCs
are known to be higher than in conventional offline courses. Researchers
have applied a wide range of methods to explore the reasons
behind learner attrition or disengagement, so that timely interventions can be made.
The recent success of neural networks has revolutionised many Learning
Analytics (LA) tasks. More recently, the associated deep learning
techniques are increasingly deployed to address the dropout prediction
problem. This survey gives a timely and succinct overview of deep learning
techniques for MOOCs’ learning analytics. We mainly analyse
trends in feature processing and in model design for dropout prediction.
Moreover, we present recent incremental improvements over
existing deep learning techniques and the commonly used public data
sets. Finally, the paper proposes three future research
directions in the field: knowledge graphs with learning analytics,
comprehensive social network analysis, and composite behavioural analysis.
Open Issues, Research Challenges, and Survey on Education Sector in India and Exploring Machine Learning Algorithm to Mitigate These Challenges
Education is a core sector of any nation, but addressing problems in educational institutions, particularly in higher education, is a challenging task. The growth of education and technology has led to a number of research challenges that have attracted significant attention, as well as a notable increase in the amount of data available in academic databases. Higher education institutions today are concerned with outcome-based education and with techniques to assess a student's knowledge level or capacity for learning. In general, there are more contributors in the academic field than there are authors. Research in this field aims to determine the algorithms and features that are most important for predicting student outcomes. This survey can help educational institutions assess themselves and identify gaps that must be filled in order to fulfil their purpose and vision. Machine Learning (ML) approaches have been explored to address these issues as higher education systems have grown in size.
Effects of Automated Interventions in Programming Assignments: Evidence from a Field Experiment
A typical problem in MOOCs is that course conductors lack the opportunity
to individually support students in overcoming their problems and
misconceptions. This paper presents the results of automatically intervening
with struggling students during programming exercises and offering peer feedback and
tailored bonus exercises. To improve learning success, we do not want to
abolish instructionally desired trial and error but reduce extensive struggle
and demotivation. Therefore, we developed adaptive automatic just-in-time
interventions to encourage students to ask for help if they require
considerably more than average working time to solve an exercise. Additionally,
we offered students bonus exercises tailored for their individual weaknesses.
The approach was evaluated within a live course with over 5,000 active students
via a survey and metrics gathered alongside. Results show that we can increase
requests for help by up to 66% and reduce the dwell time before students take
action. Learnings from the experiments can further be used to pinpoint course
material that should be improved and to tailor content to the audience.
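The paper describes its trigger only at a high level: students are encouraged to ask for help when they need considerably more than the average working time for an exercise. One plausible way to operationalise such a rule is sketched below; the threshold k, the peer times, and the function name are illustrative assumptions, not the authors' implementation.

```python
import math

def should_intervene(working_time, peer_times, k=1.5):
    """Flag a student for a just-in-time intervention when their
    working time on an exercise exceeds the peer mean by more than
    k standard deviations. k = 1.5 is an illustrative choice."""
    n = len(peer_times)
    mean = sum(peer_times) / n
    # Population standard deviation of the peers' working times.
    std = math.sqrt(sum((t - mean) ** 2 for t in peer_times) / n)
    return working_time > mean + k * std

# Hypothetical peer working times on one exercise, in minutes.
peers = [12, 15, 10, 14, 13, 11, 16, 12]
```

A relative threshold like this adapts automatically to exercises of different difficulty, since the cutoff is recomputed from each exercise's own distribution of working times.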