30 research outputs found

    A multidimensional evaluation framework for personal learning environments

    Evaluating highly dynamic and heterogeneous Personal Learning Environments (PLEs) is extremely challenging. The components of a PLE are selected and configured by individual users based on their personal preferences, needs, and goals. Moreover, such systems usually evolve over time in response to contextual opportunities and constraints. Because these dynamic systems have no predefined configurations or user interfaces, traditional evaluation methods often fall short or are even inappropriate. A host of factors influence the extent to which a PLE successfully supports a learner in achieving specific learning outcomes. We categorize these factors along four major dimensions: technological, organizational, psycho-pedagogical, and social. Each dimension is informed by relevant theoretical models (e.g., the Information System Success Model, Community of Practice, self-regulated learning) and subsumes a set of metrics that can be assessed with a range of approaches. Among other factors, usability and user experience play an indispensable role in the acceptance and diffusion of innovative technologies such as PLEs. Traditional quantitative and qualitative methods such as questionnaires and interviews should be deployed alongside emergent ones such as learning analytics (e.g., context-aware metadata) and narrative-based methods. Triangulating empirical findings across multi-perspective (end users, developers, and researchers) and mixed-method (qualitative, quantitative) data sources is crucial for maximizing the validity of the evaluation. The framework uses a cyclic process that integrates findings across cases through cross-case analysis in order to gain deeper insight into how and why PLEs work.
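    To make the framework's structure more concrete, the following minimal Python sketch shows one way the dimension-metric-method hierarchy described above could be represented as a data structure. The class names, example metrics, and assessment methods listed here are illustrative assumptions, not taken from the paper itself.

    # Illustrative sketch: organizing evaluation dimensions, metrics, and
    # assessment methods as nested records. Metric names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Metric:
        name: str
        methods: list[str] = field(default_factory=list)  # e.g. questionnaire, interview, learning analytics

    @dataclass
    class Dimension:
        name: str
        informing_model: str
        metrics: list[Metric] = field(default_factory=list)

    # The four dimensions named in the abstract, each with a placeholder metric.
    framework = [
        Dimension("technological", "Information System Success Model",
                  [Metric("usability", ["questionnaire", "think-aloud"])]),
        Dimension("organizational", "organizational context (assumed label)",
                  [Metric("integration effort", ["interview"])]),
        Dimension("psycho-pedagogical", "self-regulated learning",
                  [Metric("goal setting", ["learning analytics", "narrative"])]),
        Dimension("social", "Community of Practice",
                  [Metric("participation", ["learning analytics", "interview"])]),
    ]

    for dim in framework:
        print(dim.name, "->", [m.name for m in dim.metrics])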

    Worked Examples and Tutored Problem Solving: Redundant or Synergistic Forms of Support?

    The current research investigates a combination of two instructional approaches: tutored problem solving and worked examples. Tutored problem solving with automated tutors has proven to be an effective instructional method. Worked-out examples have been shown to be an effective complement to untutored problem solving, but it is largely unknown whether they are an effective complement to tutored problem solving. Further, while computer-based learning environments offer the possibility of adaptively transitioning from examples to problems while tailoring to the individual learner, the effectiveness of such machine-adapted example fading is largely unstudied. To address these research questions, one lab experiment and one classroom experiment were conducted. Both studies compared a standard Cognitive Tutor with two example-enhanced Cognitive Tutors, in which worked-out examples were faded either on a fixed schedule or adaptively. Results indicate that adaptive fading of worked-out examples leads to higher transfer performance on delayed post-tests than the other two methods.
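    The abstract's notion of machine-adapted example fading, in which worked-out steps are converted into to-be-solved steps as the individual learner demonstrates mastery, can be illustrated with a short Python sketch. The mastery threshold and the toy skill-update rule below are assumptions for illustration only and do not reproduce the Cognitive Tutor's actual knowledge-tracing model.

    # Illustrative sketch of adaptive example fading: each step is shown as a
    # worked-out example until the estimated mastery passes a threshold, after
    # which it is presented as a problem-solving step. All values are hypothetical.
    MASTERY_THRESHOLD = 0.85

    def update_skill(estimate: float, correct: bool) -> float:
        """Toy mastery-estimate update (not a real knowledge-tracing model)."""
        return min(1.0, estimate + 0.15) if correct else max(0.0, estimate - 0.10)

    def next_step_format(estimate: float) -> str:
        """Fade the worked-out step once the mastery estimate reaches the threshold."""
        return "problem_step" if estimate >= MASTERY_THRESHOLD else "worked_step"

    # Example: a learner answering embedded prompts on successive steps.
    estimate = 0.4
    for correct in [True, True, False, True, True]:
        estimate = update_skill(estimate, correct)
        print(next_step_format(estimate), round(estimate, 2))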