Models for Learning (Mod4L) Final Report: Representing Learning Designs
The Mod4L Models of Practice project is part of the JISC-funded Design for Learning Programme. It ran from 1 May to 31 December 2006. The philosophy underlying the project was that a general split is evident in the e-learning community between the development of e-learning tools, services and standards on the one hand, and research into how teachers can use these most effectively on the other, and that this split is impeding uptake of new tools and methods by teachers. To help overcome this barrier and bridge the gap, there is a perceived need for practitioner-focused resources that describe a range of learning designs and offer guidance on how these may be chosen and applied, how they can support effective practice in design for learning, and how they can support the development of effective tools, standards and systems with a learning design capability (see, for example, Griffiths and Blat 2005; JISC 2006). Practice models, it was suggested, were such a resource.
The aim of the project was to develop a range of practice models that could be used by practitioners in real-life contexts and have a high impact on improving teaching and learning practice.
We worked with two definitions of practice models. Practice models are:
1. generic approaches to the structuring and orchestration of learning activities. They express elements of pedagogic principle and allow practitioners to make informed choices (JISC 2006)
However effective a learning design may be, though, it can only be shared with others through a representation. The issue of representing learning designs is therefore central to the concept of sharing and reuse at the heart of JISC’s Design for Learning programme. Practice models should thus be both representations of effective practice, and effective representations of practice. Hence we arrived at the project working definition of practice models as:
2. common, but decontextualised, learning designs that are represented in a way that is usable by practitioners (teachers, managers, etc.) (Mod4L working definition, Falconer & Littlejohn 2006).
A learning design is defined as the outcome of the process of designing, planning and orchestrating learning activities as part of a learning session or programme (JISC 2006).
Practice models have many potential uses: they describe a range of learning designs that are found to be effective, and offer guidance on their use; they support sharing, reuse and adaptation of learning designs by teachers, and also the development of tools, standards and systems for planning, editing and running the designs.
The project took a practitioner-centred approach, working in close collaboration with a focus group of 12 teachers recruited across a range of disciplines and from both FE and HE. Focus group members are listed in Appendix 1. Information was gathered from the focus group through two face-to-face workshops, and through their contributions to discussions on the project wiki. This was supplemented by an activity at a JISC pedagogy experts meeting in October 2006, and by part of a workshop at ALT-C in September 2006. The project interim report of August 2006 contained the outcomes of the first workshop (Falconer and Littlejohn, 2006).
The current report refines the discussion of issues of representing learning designs for sharing and reuse evidenced in the interim report and highlights problems with the concept of practice models (section 2), characterises the requirements teachers have of effective representations (section 3), evaluates a number of types of representation against these requirements (section 4), explores the more technically focused role of sequencing representations and controlled vocabularies (sections 5 & 6), documents some generic learning designs (section 8.2) and suggests ways forward for bridging the gap between teachers and developers (section 2.6).
All quotations are taken from the Mod4L wiki unless otherwise stated.
An agent framework for learning systems
Personalized learning systems must allow learners to choose their learning goals and learning process. This paper describes a way of providing agent support that can assist learners in doing this. The paper then proposes a framework of software agents made up of two parts. One part comprises customizing agents that assist learners in selecting learning materials to satisfy learning objectives and in setting up a learning plan. The other comprises managing agents that help learners follow a study programme, progress through that material, and dynamically change the process as needed. The paper describes a representation of the learning process that can be used by such agents, and illustrates it with a small prototype.
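The two-part agent division described in the abstract can be pictured with a minimal Python sketch. The class and method names (`CustomizingAgent.build_plan`, `ManagingAgent.next_material`) and the set-based matching of objectives to materials are illustrative assumptions, not the paper's actual design:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Material:
    name: str
    objectives: frozenset  # learning objectives this material addresses


class CustomizingAgent:
    """Assists the learner in selecting materials that satisfy the
    chosen learning objectives, assembling them into a plan."""

    def build_plan(self, goals, catalogue):
        plan, covered = [], set()
        for m in catalogue:
            new = m.objectives & (goals - covered)
            if new:  # keep only materials that add objective coverage
                plan.append(m)
                covered |= new
        return plan


class ManagingAgent:
    """Helps the learner follow the plan, and re-plans dynamically
    if the goals change part-way through."""

    def __init__(self, customizer, goals, catalogue):
        self.customizer, self.catalogue = customizer, catalogue
        self.plan = customizer.build_plan(goals, catalogue)
        self.done = []

    def next_material(self):
        return self.plan[len(self.done)] if len(self.done) < len(self.plan) else None

    def complete(self, material):
        self.done.append(material)

    def change_goals(self, new_goals):
        # rebuild the remaining plan against the new goals,
        # keeping completed materials as a fixed prefix
        remaining = [m for m in self.catalogue if m not in self.done]
        self.plan = self.done + self.customizer.build_plan(new_goals, remaining)
```

For example, given a catalogue of three materials and the goals `{"o1", "o2"}`, the customizing agent would pick the first materials that together cover both objectives, and the managing agent would serve them one at a time.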
Learners Thrive When Using Multifaceted Open Social Learner Models
This article explores open social learner modeling (OSLM), a social extension of open learner modeling (OLM). A specific implementation of this approach is presented, by which learners' self-direction and self-determination in a social e-learning context could potentially be promoted. Unlike previous work, the proposed approach, multifaceted OSLM, lets the system seamlessly and adaptively embed visualization of both a learner's own model and other learning peers' models into different parts of the learning content, for multiple axes of context, at any time during the learning process. It also demonstrates the advantages of visualizing both learners' performance and their contribution to a learning community. An experimental study shows that, contrary to previous research, the richness and complexity of this new approach positively affected the learning experience in terms of perceived effectiveness, efficiency, and satisfaction. This article is part of a special issue on social media for learning.
Automated Application Permissions Setting
Operating systems of computing devices include permission management features to grant software applications (apps) access to various hardware and software components. Permissions may be configured using permission sets that each specify different levels of access. A user can specify the level of access to an app by selecting a permission set. A conventional permission set either grants or restricts access to a component. Techniques are described that provide selective access to a component by automatically inferring fine-grained permissions from various user-specific and other available signals.
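One way to picture such inference is a simple signal-scoring sketch. The signal names, weights, and thresholds below are invented for illustration and do not come from any real operating system API or from the described techniques themselves:

```python
# Hypothetical weights for observed signals (assumptions for illustration).
SIGNAL_WEIGHTS = {
    "user_granted_similar_app": 0.5,   # user granted this to a comparable app
    "feature_requires_component": 0.4, # a requested feature needs the component
    "background_request": -0.6,        # access requested while app not in use
}


def infer_permission(signals):
    """Map observed signals to a fine-grained permission level,
    rather than an all-or-nothing grant."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)
    if score >= 0.7:
        return "always"
    if score >= 0.3:
        return "while_in_use"
    return "deny"
```

A real system would learn such weights from user behaviour rather than hard-code them; the point of the sketch is only that combining multiple signals can yield a graded level of access instead of a binary grant.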
Learners' self-assessment and metacognition when using an open learner model with drill down
Metacognition is ‘thinking on thinking’. It is important to educational practice for learners and teachers, and in activities such as formative assessment and self-directed learning. The ability to perform metacognition is not innate and requires fostering; self-assessment contributes to this. The literature suggests proven practices for promoting metacognitive opportunities, and ongoing enquiry into how technology best supports these. This thesis considers an open learner model (OLM) with a drill-down approach as a method to investigate support for metacognition and self-assessment.
Measuring aspects of metacognition without unduly influencing it is challenging: direct measures (e.g. learners ‘thinking aloud’) could distort, disrupt, encourage, or otherwise affect metacognition. The thesis develops methods to evaluate aspects of metacognition without directly affecting it, relevant to future learning-analytics research and OLM design. It proposes a technology specification and implementation for supporting metacognition research, and highlights the relevance of using a drill-down approach.
Using measures that correspond to post-hoc learner accounts, this thesis identifies a baseline of student activity that is consistent with important regulation-of-cognition tasks and with students’ specific interest in problems. Whilst this does not always influence self-assessment accuracy, students’ indications of their self-assessment ability can be used as a proxy measure to identify those who will improve. Evidence supports claims that OLMs remain relevant in metacognition research.
Coding classroom dialogue: Methodological considerations for researchers
Systematic analysis or coding of classroom dialogue is useful for assessing the role of high-quality interaction in supporting learning. However, although coding is an immensely complex and cognitively demanding activity that has taxed researchers over decades, the methodological challenges are often not discussed or problematised in empirical reports. Accordingly, this paper aims to help researchers make sense of the challenges, strengths and practical applications of using systematic coding schemes for analysing classroom dialogue. It presents an in-depth analysis of the pros and cons of contrasting approaches and the key methodological considerations, including scope, grain size, reliability and validity. It goes on to provide a worked example, illustrating how one team tackled the challenges in adapting for a new research objective an earlier coding scheme developed for use across diverse contexts. Two original, theory-informed analytic tools created to study the relationship between dialogic teaching and student learning in English primary schools are shared and made available for others’ use or adaptation. The paper offers practical guidance for developing or adapting coding schemes for different research purposes. It highlights the need for further precision and critical attention to the ways in which scholars are investigating dialogic practices intended to support learning.
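The reliability of a coding scheme is commonly checked with a chance-corrected agreement statistic between independent coders. As a hedged illustration (not drawn from the paper itself), Cohen's kappa for two coders' labels over the same sequence of dialogue turns can be computed as:

```python
from collections import Counter


def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' category labels
    for the same sequence of items (e.g. dialogue turns).
    Assumes the two coders do not agree perfectly by chance alone
    (expected agreement < 1), otherwise the denominator is zero."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

For instance, two coders labelling four turns as question (Q) or explanation (E) with one disagreement yields an observed agreement of 0.75 but a kappa of only 0.5, showing why raw percentage agreement overstates reliability.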
A framework for the pedagogical evaluation of eLearning environments
In 1999 the authors proposed a pedagogical framework for the evaluation of VLEs that was grounded in both educational and organisational theory (Britain and Liber, 1999). The report was driven by the lack of work in the field at the time examining how VLEs could enhance teaching and learning. In 1999 many institutions were evaluating VLEs with a view to making their first step into using Internet-based ICT in their teaching, and so the report was written to help educators understand how the design of systems could facilitate or constrain their pedagogical use in different contexts.
By 2003, elearning had matured considerably. ICT developments to support teaching and learning were no longer predominantly confined to isolated projects within academic departments and learning technology support units, but instead formed a core part of institutional strategy and policy. Widespread uptake of VLEs within HEIs had been supplemented by work to join up institutional administrative systems with VLEs to form Managed Learning Environments (MLEs). At a national level, e-learning had become the subject of a variety of government sponsored strategic initiatives in support of the programme of widening participation in HE and FE and promoting e-learning as an approach to improving the quality of education provision and empowering learners.
This report updates the earlier JISC report entitled 'A Framework for the Pedagogical Evaluation of Virtual Learning Environments' (1999). That report can be found online at: http://www.jisc.ac.uk/uploaded_documents/jtap-041.doc
The structure of the report is as follows:
- Chapter one provides an overview of the current context of e-learning
- Chapter two presents the revised framework which elaborates and extends the model
- Chapter three presents a review of a selection of systems against the framework
QCBA: Postoptimization of Quantitative Attributes in Classifiers based on Association Rules
The need to prediscretize numeric attributes before they can be used in association rule learning is a source of inefficiencies in the resulting classifier. This paper describes several new rule tuning steps aiming to recover information lost in the discretization of numeric (quantitative) attributes, and a new rule pruning strategy, which further reduces the size of the classification models. We demonstrate the effectiveness of the proposed methods on postoptimization of models generated by three state-of-the-art association rule classification algorithms: Classification Based on Associations (Liu, 1998), Interpretable Decision Sets (Lakkaraju et al., 2016), and Scalable Bayesian Rule Lists (Yang, 2017). Benchmarks on 22 datasets from the UCI repository show that the postoptimized models are consistently smaller, typically by about 50%, and have better classification performance on most datasets.
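As a simplified sketch of the kind of information such tuning steps can recover (an illustration only, not the published QCBA procedure), one boundary-refinement idea is to trim a discretized numeric interval back to the tightest range that still covers the target-class instances the original interval matched:

```python
def refine_interval(values, labels, lo, hi, target):
    """Trim a discretized interval [lo, hi] on one numeric attribute
    to the tightest range still covering every target-class instance
    the original interval covered. The coarse bin edges produced by
    prediscretization often extend well past the actual data, so this
    recovers precision lost in discretization (simplified sketch)."""
    covered = [v for v, y in zip(values, labels) if lo <= v <= hi and y == target]
    if not covered:
        return lo, hi  # nothing to refine against; keep the original bounds
    return min(covered), max(covered)
```

Given attribute values `[1.0, 2.5, 4.0, 6.0]` with labels `["a", "a", "b", "a"]`, refining the coarse interval `[0.0, 5.0]` for class `"a"` tightens it to `[1.0, 2.5]`, excluding the misclassified region around 4.0 without losing any covered positive instance.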