
    Supporting Usability and Reusability Based on eLearning Standards

    The IMS QTI and other related specifications have been developed to support the creation of reusable and pedagogically neutral assessment scenarios and content, as stated by the IMS Global Learning Consortium. In this paper we discuss how current specifications both constrain the design of assessment scenarios and limit content reusability, and we suggest some solutions to overcome these limitations. The paper is based on our experience developing and testing QAed, an IMS QTI Lite compliant assessment authoring tool. QAed supports teacher-centred design, an aspect that is often neglected when building such tools. We also discuss how to reconcile standards support with user-centred design in eLearning applications, and provide some recommendations for the design of their user interfaces.

    Interactivity within IMS Learning Design and Question and Test Interoperability

    We examine the integration of IMS Question and Test Interoperability (QTI) and IMS Learning Design (LD) in E-learning implementations from both pedagogical and technological points of view. We propose the use of interactivity as a parameter for evaluating the quality of assessment and E-learning, and assess various cases of individual and group study for their interactivity, ease of coding, flexibility, and reusability. We conclude that, for individual study, presenting assessments using IMS QTI provides flexibility and reusability within an IMS LD Unit of Learning (UOL). For group study, however, the use of QTI items may involve coding difficulties if group members must wait for feedback until all students have attempted a question, and QTI items may not be usable at all if the QTI services are implemented within a service-oriented architecture.

    Interoperability with CAA: does it work in practice?

    IMS has been promising question and test interoperability (QTI) for a number of years. Reported advantages of interoperability include the avoidance of "lock-in" to one proprietary system, the ability to integrate systems from different vendors, and the facilitation of an exchange of questions and tests between institutions. The QTI specification, while not yet an international standard for the exchange of questions, tests and results, now appears stable enough for vendors to have developed systems which implement such an exchange in a fairly sophisticated way. The costs to software companies of implementing QTI "compliance" in their existing CAA systems, however, are high, and allowing users to move their data to other systems may not seem to make commercial sense either. As awareness of the advantages of interoperability increases within education, software companies are realising that adding QTI import and export facilities to their products can be a selling point. A handful of vendors have signed up to the concept of interoperability and have taken part in the IMS QTI Working Group. Others state that their virtual learning environments or CAA systems are "conformant" with IMS QTI, but do these assertions stand up when the packages are tested together? The CETIS Assessment Special Interest Group has been monitoring developments in this area for over a year and has carried out an analysis of tools which exploit the QTI specifications. This paper describes to what extent the tools genuinely interoperate and examines the likely benefits for users and future prospects for CAA interoperability.
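    The kind of conformance probing described above can be sketched as a round-trip check: import an item, re-export it, and verify that nothing essential was lost. This is an illustrative sketch only, not the CETIS SIG's actual test harness; the element names follow the IMS QTI 1.2 XML binding (`questestinterop`, `item`), and the sample item itself is invented.

```python
import xml.etree.ElementTree as ET

# Invented sample item using QTI 1.2 element names.
SAMPLE = """<questestinterop>
  <item ident="q001" title="Capital of France">
    <presentation>
      <material><mattext>What is the capital of France?</mattext></material>
      <response_lid ident="RL1">
        <render_choice>
          <response_label ident="A"><material><mattext>Paris</mattext></material></response_label>
          <response_label ident="B"><material><mattext>Lyon</mattext></material></response_label>
        </render_choice>
      </response_lid>
    </presentation>
  </item>
</questestinterop>"""

def item_idents(qti_xml: str) -> list:
    """Return the ident attribute of every <item> in a QTI 1.2 document."""
    root = ET.fromstring(qti_xml)
    return [item.get("ident") for item in root.iter("item")]

def round_trips(qti_xml: str) -> bool:
    """Export (serialize) then re-import, and check no item idents were lost."""
    root = ET.fromstring(qti_xml)
    exported = ET.tostring(root, encoding="unicode")
    return item_idents(exported) == item_idents(qti_xml)
```

    A real interoperability test would go further, comparing rendered presentation and response processing across two different vendors' tools rather than a single serializer.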

    Design Standardized Web-Components for e-Learning

    In this paper we describe a flexible approach to designing an LMS with a QTI Ready component based on the eLearning standards AICC and IMS QTI. This system and its component support a dynamic learning and assessment process, and the QTI Ready component can provide these facilities to other real-world virtual learning management systems.

    What's in a name? - a new hierarchy for question types

    One of these is the terminology that is used to identify question types. As computer assisted assessment develops and extends, new assessment systems are introduced. It is a competitive sector and, for the commercial companies involved, a measure of uniqueness is advantageous. All too often this results in an undue emphasis on finding ways of naming question types so as to produce the largest number. Close scrutiny reveals that many of these types are derived from the same basic structure with different formatting. The clear-cut naming of the initial question types during the first few years of computer assisted assessment worked well, but advances in the technology and innovative approaches to assessment are making this convention difficult to sustain. The work of the IMS QTI group (IMS QTI project 2002) is very valuable, and the issue of question types is partly addressed by it. A new structure and naming convention for question types that can be implemented by all interested parties is needed urgently. There are two aspects to this:
    1. A naming convention that would interest those involved in IMS QTI standards and build on the work already undertaken (the technical sector).
    2. A naming convention for the authors, users, academics and researchers interested in what question types are available (the non-technical sector).
    The advantages of such a hierarchy would include:
    • progress in interoperability
    • progress in the use of item banking
    • a stronger focus on the aims of assessment
    • greater awareness of the true question types available
    This paper proposes such a hierarchy, developed from a non-technical viewpoint but with a sound structure, as a basis for discussion and development and to motivate interest in research in this area.
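    The hierarchy argued for above can be pictured as a two-level table: many vendor-specific names resolving to a few structural base types. The base types and aliases below are illustrative inventions, not the paper's proposed list.

```python
# Hypothetical two-level question-type hierarchy: vendor names as aliases
# of a small set of base structures. All entries are examples, not a standard.
BASE_TYPES = {
    "choice":  ["multiple choice", "true/false", "yes/no", "likert"],
    "order":   ["ranking", "sequencing"],
    "match":   ["matching", "pairing", "drag-and-drop match"],
    "text":    ["fill-in-the-blank", "short answer", "essay"],
    "numeric": ["numerical entry", "calculated"],
}

# Invert the table so a vendor's name can be resolved to its base type.
ALIAS_TO_BASE = {alias: base
                 for base, aliases in BASE_TYPES.items()
                 for alias in aliases}

def base_type(vendor_name: str) -> str:
    """Resolve a vendor question-type name to its structural base type."""
    return ALIAS_TO_BASE.get(vendor_name.lower(), "unknown")
```

    Such a table makes the paper's point concrete: a "true/false" item and a "likert" item differ only in formatting, so for item banking and interoperability they can share one base structure.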

    An evaluation of pedagogically informed parameterised questions for self assessment

    Self-assessment is a crucial component of learning. Learners can learn by asking themselves questions and attempting to answer them. However, creating effective questions is time-consuming because it may require considerable resources and the skill of critical thinking. Questions need careful construction to accurately represent the intended learning outcome and the subject matter involved. Very few systems currently available generate questions automatically, and these are confined to specific domains. This paper presents a system for automatically generating questions from a competency framework, based on a sound pedagogical and technological approach. It makes it possible to guide learners in developing questions for themselves, and provides authoring templates that speed the creation of new questions for self-assessment. This novel design and implementation involves an ontological database that represents the intended learning outcome to be assessed across a number of dimensions, including level of cognitive ability and subject matter. The system generates a list of all the questions that are possible from a given learning outcome, which may then be used to test for understanding, and so could determine the degree to which learners actually acquire the desired knowledge. The way in which the system has been designed and evaluated is discussed, along with its educational benefits.
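    The generation step described above (all possible questions from one learning outcome, across dimensions such as cognitive level and subject matter) amounts to expanding a template over a cross-product of dimension values. This is a minimal sketch under that reading; the template, verbs, and topics are invented and stand in for the paper's ontological database.

```python
# Illustrative parameterised-question generation: a template with slots is
# expanded over every combination of dimension values. The dimensions here
# (cognitive-level verbs x topics) are invented examples.
from itertools import product

TEMPLATE = "Can you {verb} {topic}?"
DIMENSIONS = {
    "verb":  ["define", "explain", "apply"],     # cognitive levels
    "topic": ["Ohm's law", "Kirchhoff's laws"],  # subject matter
}

def generate_questions(template: str, dims: dict) -> list:
    """Expand the template over the cross-product of all dimension values."""
    keys = list(dims)
    return [template.format(**dict(zip(keys, combo)))
            for combo in product(*dims.values())]
```

    With three cognitive levels and two topics, a single outcome yields six candidate questions; an author then keeps the pedagogically sensible ones.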

    Model-Driven Analysis towards Interoperability of Assessments in LMS

    In this article we focus on the interoperability of two aspects of LMS systems: test question types and assessments as such. The proposed strategy is based on MDA, especially on Platform Independent Models (PIM). At the higher level of abstraction (PIM) it is possible to identify commonalities and differences between the architectures of various systems, and these form the basis of a common, generalized model of assessments. In a three-step methodology we add the specifics of the PIM models of candidate systems with different architectures: Moodle, OLAT and Claroline. The correctness of the final common model (General PIM) is demonstrated in an implemented system for exchanging tests between the existing systems.
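    Exchange through a common generalized model can be sketched as two lookups: each system's question-type vocabulary maps into a shared PIM-level vocabulary, so a test converts source → General PIM → target. The vocabularies below are invented examples, not the paper's actual models.

```python
# Hypothetical conversion through a common model (General PIM). Each
# system-specific mapping table is illustrative, not taken from the paper.
GENERAL_PIM = {"multichoice", "truefalse", "shortanswer"}

TO_PIM = {
    "moodle": {"multichoice": "multichoice", "truefalse": "truefalse",
               "shortanswer": "shortanswer"},
    "olat":   {"SingleChoice": "multichoice", "TrueFalse": "truefalse",
               "FIB": "shortanswer"},
}
# Invert each table so PIM types can be rendered back into native types.
FROM_PIM = {sys: {pim: native for native, pim in table.items()}
            for sys, table in TO_PIM.items()}

def convert(qtype: str, source: str, target: str) -> str:
    """Translate a question type from one system to another via the PIM."""
    pim = TO_PIM[source][qtype]
    assert pim in GENERAL_PIM  # the common model is the pivot vocabulary
    return FROM_PIM[target][pim]
```

    The design point this illustrates is the one MDA buys you: n systems need n mappings to the common model rather than n·(n-1) pairwise converters.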

    Management of Assessment Resources in a Federated Repository of Educational Resources

    Proceedings of: Fifth European Conference on Technology Enhanced Learning, Sustaining TEL: From Innovation to Learning and Practice (EC-TEL 2010), Barcelona, 28 September-1 October 2010. This article tries to shed some light on the management of assessment resources in a repository of educational resources from an outcome-based perspective. The approach to this problem is based on the ICOPER Reference Model, a model for capturing e-learning data, services and processes with an interoperability focus. To demonstrate this proposal, a prototype has been implemented. This article also describes the design and development of this prototype, which accesses a repository of educational resources (the Open ICOPER Content Space - OICS), the main features of the prototype, the development environment and the evaluation that is being performed. This work was partially funded by the Best Practice Network ICOPER (Grant No. ECP-2007-EDU-417007), the Learn3 project, "Plan Nacional de I+D+I" TIN2008-05163/TSI, and the eMadrid network, S2009/TIC-1650, "Investigación y Desarrollo de tecnologías para el e-learning en la Comunidad de Madrid".

    Interoperability Between ELearning Systems

    Online assessments are an integral part of eLearning systems that enhance both distance and continuing education. Although over two hundred and fifty eLearning applications exist, most educational institutions are locked in to a particular vendor, primarily due to the lack of test-question sharing features. This paper highlights the evolution of eLearning systems while detailing the two most prominent objective test question standards, namely QML and QTI. An analysis conducted among software houses involved in the development of eLearning systems confirms that most applications use proprietary formats, and clearly shows a lack of import and export options, among other shortcomings.
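    An importer that handles both standards named above first has to tell them apart. A minimal sketch, assuming the commonly used root elements (`questestinterop` for QTI 1.x, `QML` for Questionmark's QML); a real importer would also check XML namespaces and versions.

```python
# Illustrative format sniffing for exported question files. The root-element
# names are the conventional ones for QTI 1.x and QML; treat them as
# assumptions rather than a complete conformance check.
import xml.etree.ElementTree as ET

def detect_format(xml_text: str) -> str:
    """Classify a question export as QTI, QML, or proprietary/unknown."""
    root = ET.fromstring(xml_text)
    tag = root.tag.split("}")[-1]  # strip any XML namespace prefix
    if tag == "questestinterop":
        return "QTI"
    if tag.upper() == "QML":
        return "QML"
    return "proprietary/unknown"
```

    Anything falling into the third branch is exactly the vendor lock-in case the paper's survey found to be the norm.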