Users’ trust in open learner models

Abstract

This thesis investigates learner trust in an open learner model (OLM). Issues of trust become more important in an OLM because the model is available for learners to inspect, which may heighten their awareness of how the system evaluates their knowledge and updates the model. It is important to provide learners with a trustworthy environment because it encourages them to continue using the system. In this thesis we investigate learner trust from two main perspectives: the system as a whole and the features of the OLM. From the perspective of the system as a whole, we investigate the extent to which learners trust and accept the OLM on first use, the extent to which they continue using it optionally after their initial use, and the extent to which they trust and accept it after long-term use. From the perspective of OLM features, we investigate learner trust based on the most common features: (i) the complexity of the model presentation; (ii) the level of learner control over the model; and (iii) the facility to view peer models and to release one's own model to peers. Learners appear to have different levels of trust in the OLM: they trust the system more in the short term, and their trust also differs across the different views of the model presentation and the different levels of learner control. In terms of peer models, a named peer model is trusted more than an anonymous one. Based on these findings, a set of requirements is established to help OLM designers build more trustworthy OLMs.