Model-based learning uses models to generate educational resources and to adapt learning paths to learners and their context. Many domain models are published on the Web as linked data, providing a collective knowledge base that can be reused in the educational domain. However, for these models to be usable in an educational context, it must be possible to predict the learning context in which they can be used. A typical indicator of the usability of learning objects is their difficulty. Predicting the difficulty of an assessment item depends both on the construct, i.e., what is assessed, and on the form of the item. In this paper, we present several experiments that support predicting the difficulty of an assessment item generated from a linked data source, as well as the difficulty of the underlying item construct. We analyze the results of a test carried out with choice items (such as multiple-choice questions), together with a Web mining approach, to provide indicators of whether a piece of factual knowledge is common knowledge or expert knowledge in a particular population. Our objective is to annotate the semantic models and thus increase their reusability in an educational context.
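As a hypothetical illustration (not the paper's actual pipeline), the generation of a choice item from a linked data source can be sketched as follows: given subject-predicate-object triples such as those returned by a SPARQL endpoint, the correct answer is the object of a key triple, and distractors are drawn from objects of other triples sharing the same predicate. The triple data and function names below are invented for this sketch.

```python
import random

# Toy "linked data" facts: (subject, predicate, object) triples,
# standing in for triples retrieved from a real SPARQL endpoint.
TRIPLES = [
    ("Paris", "capitalOf", "France"),
    ("Berlin", "capitalOf", "Germany"),
    ("Madrid", "capitalOf", "Spain"),
    ("Rome", "capitalOf", "Italy"),
]

def generate_choice_item(key_triple, triples, n_distractors=3, seed=0):
    """Build a multiple-choice item from a key triple.

    The stem asks for the object of the key triple; distractors are
    objects of other triples with the same predicate (a common
    distractor-selection heuristic for factual items).
    """
    subject, predicate, answer = key_triple
    pool = [o for s, p, o in triples if p == predicate and o != answer]
    rng = random.Random(seed)
    distractors = rng.sample(pool, min(n_distractors, len(pool)))
    options = distractors + [answer]
    rng.shuffle(options)
    return {
        "stem": f"{subject} is the {predicate} of which entity?",
        "options": options,
        "answer": answer,
    }

item = generate_choice_item(TRIPLES[0], TRIPLES)
```

In this sketch, item difficulty could then be estimated from properties of the key triple and its distractors; the paper's approach additionally uses test results and Web mining to decide whether the assessed fact is common or expert knowledge.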