Capturing "attrition intensifying" structural traits from didactic interaction sequences of MOOC learners
This work is an attempt to discover hidden structural configurations in
learning activity sequences of students in Massive Open Online Courses (MOOCs).
Leveraging combined representations of video clickstream interactions and forum
activities, we seek to fundamentally understand traits that are predictive of
decreasing engagement over time. Grounded in the interdisciplinary field of
network science, we follow a graph based approach to successfully extract
indicators of active and passive MOOC participation that reflect persistence
and regularity in the overall interaction footprint. Using these rich
educational semantics, we focus on the problem of predicting student attrition,
one of the major highlights of MOOC literature in the recent years. Our results
indicate an improvement over a baseline ngram based approach in capturing
"attrition intensifying" features from the learning activities that MOOC
learners engage in. Implications for some compelling future research are
discussed.

Comment: "Shared Task" submission for the EMNLP 2014 Workshop on Modeling
Large Scale Social Interaction in Massively Open Online Courses
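The n-gram baseline this abstract compares against can be sketched as counting
short subsequences of a learner's ordered activity events. This is a minimal
illustration, not the authors' implementation; the event labels ("video",
"forum_post", "quiz") are hypothetical placeholders for whatever vocabulary the
clickstream and forum logs actually define.

```python
from collections import Counter

def ngram_features(activity_seq, n=2):
    """Count n-grams over one learner's ordered activity sequence.

    `activity_seq` is a list of event labels; the labels used below
    are illustrative, not taken from the paper's data.
    """
    return Counter(tuple(activity_seq[i:i + n])
                   for i in range(len(activity_seq) - n + 1))

# Example: a short hypothetical session trace.
seq = ["video", "video", "forum_post", "video", "quiz"]
features = ngram_features(seq)
print(features)
```

Each learner's counter would then be vectorized into a feature row for the
attrition classifier; the graph-based approach the abstract proposes replaces
these flat counts with structural indicators extracted from an interaction graph.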
Massive Open Online Courses Temporal Profiling for Dropout Prediction
Massive Open Online Courses (MOOCs) are attracting the attention of people
all over the world. Regardless of the platform, the numbers of registrants for
online courses are impressive but, at the same time, completion rates are
disappointing. Understanding the mechanisms of dropping out based on the
learner profile arises as a crucial task in MOOCs, since it will allow
intervening at the right moment in order to assist the learner in completing
the course. In this paper, the dropout behaviour of learners in a MOOC is
thoroughly studied by first extracting features that describe the behavior of
learners within the course and then by comparing three classifiers (Logistic
Regression, Random Forest and AdaBoost) in two tasks: predicting which users
will have dropped out by a certain week and predicting which users will drop
out on a specific week. The former proved to be considerably easier, with
all three classifiers performing equally well. However, the accuracy for the
second task is lower, and Logistic Regression tends to perform slightly better
than the other two algorithms. We found that features that reflect an active
attitude of the user towards the MOOC, such as submitting their assignment,
posting on the Forum and filling their Profile, are strong indicators of
persistence.

Comment: 8 pages, ICTAI1
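The classifier comparison described above can be sketched with scikit-learn.
This is a hedged illustration on synthetic data, not the paper's experiment:
the three feature columns (assignments submitted, forum posts, profile filled)
mirror the "active attitude" indicators the abstract names, but the data,
label-generating rule, and scores are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical per-learner features: assignment submissions, forum posts,
# and a 0/1 flag for having filled in the profile.
X = np.column_stack([rng.poisson(3, n),
                     rng.poisson(1, n),
                     rng.integers(0, 2, n)])
# Synthetic dropout label: less active learners drop out more often.
p_dropout = 1.0 / (1.0 + np.exp(X @ np.array([0.6, 0.8, 1.0]) - 3.0))
y = (rng.random(n) < p_dropout).astype(int)

# The three classifiers the abstract compares, with default settings;
# the paper's actual hyperparameters are not specified here.
results = {}
for name, clf in [("LogisticRegression", LogisticRegression(max_iter=1000)),
                  ("RandomForest", RandomForestClassifier(random_state=0)),
                  ("AdaBoost", AdaBoostClassifier(random_state=0))]:
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {results[name]:.2f}")
```

The same pipeline serves both tasks in the abstract; only the label definition
changes (dropped out *by* week k versus dropping out *in* week k).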
Dropout Model Evaluation in MOOCs
The field of learning analytics needs to adopt a more rigorous approach for
predictive model evaluation that matches the complex practice of
model-building. In this work, we present a procedure to statistically test
hypotheses about model performance that goes beyond the state of the practice
in the community, analyzing both algorithms and feature extraction methods from
raw data. We apply this method to a series of algorithms and feature sets
derived from a large sample of Massive Open Online Courses (MOOCs). While a
complete comparison of all potential modeling approaches is beyond the scope of
this paper, we show that this approach reveals a large gap in dropout
prediction performance between forum-, assignment-, and clickstream-based
feature extraction methods: clickstream-based features perform significantly
better than forum- and assignment-based features, which are in turn
indistinguishable from one another. This work has
methodological implications for evaluating predictive or AI-based models of
student success, and practical implications for the design and targeting of
at-risk student models and interventions.
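The core idea above, statistically testing whether one modeling approach
outperforms another across many courses rather than eyeballing average scores,
can be sketched with a paired nonparametric test. The per-course AUC values
below are invented for illustration and are not the paper's results; the
authors' actual procedure is more elaborate than a single Wilcoxon test.

```python
from scipy.stats import wilcoxon

# Hypothetical per-course AUCs for two feature extraction methods,
# paired by course (one entry per MOOC). Numbers are illustrative only.
auc_clickstream = [0.85, 0.82, 0.88, 0.80, 0.84, 0.87, 0.83, 0.86]
auc_forum       = [0.75, 0.71, 0.76, 0.67, 0.70, 0.72, 0.67, 0.69]

# Paired Wilcoxon signed-rank test: does the clickstream-based method
# outperform the forum-based one consistently across courses?
stat, p_value = wilcoxon(auc_clickstream, auc_forum)
print(f"Wilcoxon statistic={stat}, p={p_value:.4f}")
```

A small p-value here would support the kind of claim the abstract makes: the
gap between feature extraction methods is systematic across courses, not an
artifact of one favorable dataset.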