
    Designing for change: mash-up personal learning environments

    Institutions for formal education and most workplaces are today equipped with at least some tools that bring people and content artefacts together in learning activities, supporting them in constructing and processing information and knowledge. For almost half a century, science and practice have discussed models for bringing personalisation to these environments through digital means. The construction and maintenance of learning environments are a crucial part of the learning process and of the desired learning outcomes, and theories should take this into account. Instruction as the predominant paradigm has to step down: the learning environment is an (if not 'the') important outcome of a learning process, not just a stage on which to perform a 'learning play'. For these reasons, we consider instructional design theories to be flawed. In this article we first clarify key concepts and assumptions for personalised learning environments. We then summarise our critique of contemporary models for personalised adaptive learning. Subsequently, we propose our alternative: the concept of a mash-up personal learning environment that provides adaptation mechanisms for learning environment construction and maintenance. The web application mash-up solution allows learners to reuse existing (web-based) tools and services. Our alternative, LISL, is a design language model for creating, managing, maintaining, and learning about learning environment design; it is complemented by a proof of concept, the MUPPLE platform. We demonstrate this approach with a prototypical implementation and what we believe is a comprehensible example. Finally, we round off the article with a discussion of possible extensions of this new model and of open problems.
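    As a concrete illustration of the mash-up idea, the sketch below composes existing web-based tools into a learner-maintained environment. It is a minimal, hypothetical illustration in Python; the Tool and LearningEnvironment names are stand-ins of our own, not the actual LISL design language or MUPPLE API.

```python
# Hypothetical sketch of a mash-up personal learning environment:
# the learner, not an instructor, assembles and maintains the tool set.
from dataclasses import dataclass, field


@dataclass
class Tool:
    """An existing web-based tool or service reused in the environment."""
    name: str
    url: str


@dataclass
class LearningEnvironment:
    """A learner-constructed collection of tools for one learning activity."""
    activity: str
    tools: list[Tool] = field(default_factory=list)

    def add(self, tool: Tool) -> None:
        # Construction and maintenance are learner actions here,
        # not instructor-prescribed steps.
        self.tools.append(tool)


env = LearningEnvironment("literature review")
env.add(Tool("shared notes pad", "https://example.org/pad"))
env.add(Tool("reference manager", "https://example.org/refs"))
print([t.name for t in env.tools])
```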

    Sorting Through and Sorting Out: The State of Content Sharing in the E-Learning

    On 22-24 September 2002, a group of 22 education and information technology specialists gathered on the campus of the University of California at Irvine (UCI) for a symposium on the state of educational "content sharing" (see participant list). The meeting was sponsored by the William and Flora Hewlett Foundation Education Program and the UCI Distance Learning Center. This paper summarizes the themes that emerged from that gathering. Most papers can be characterized as collaborative, but this one is particularly deserving of that adjective: the presentation here is an attempt to synthesize the ideas of all the participants, expressed in numerous conversational and written exchanges before, during, and after the meeting. While every effort has been made to present the range of views, surely not all participants would agree with the emphases and interpretations herein. This report includes a hyperlinked bibliography and footnotes for additional web-based material on e-learning topics. Links are provided for the reader's convenience only, and represent neither an endorsement nor a guarantee of the accuracy of the content of the associated sites. Comments and questions about this document are welcomed, however, and should be directed to the author or the meeting sponsors.

    Conference Reports


    Computer‐based learning in psychology using interactive laboratories

    Traditional approaches to computer‐based learning often focus on the delivery of information. Such applications usually provide large stores of information that can be accessed in a wide variety of ways; typical access facilities include Boolean search engines and hypermedia (non‐linear) browsing. These approaches often centre on human‐computer dialogues that are relatively low in interaction. The interactive‐laboratory approach, by contrast, aims to limit the quantity of information presented and instead provide a highly interactive learning environment. In the field of psychology, users can interactively design and deliver a broad range of psychological experiments. This paper details the approach and describes how it can be used to teach psychology within a university environment. How its effectiveness as a learning tool can be evaluated is also discussed.
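    To make the contrast with information-delivery systems concrete, the following sketch shows what "designing and delivering" an experiment might look like in code. It is a hedged illustration only: the serial-recall task, its parameters, and the simulated responses are our assumptions, not the paper's actual software.

```python
# Hypothetical interactive-laboratory sketch: the learner parameterises
# an experiment and runs it, rather than browsing stored information.
import random


def run_trial(list_length: int, rng: random.Random) -> bool:
    """Simulate one serial-recall trial; recall gets harder as lists grow."""
    p_correct = max(0.1, 1.0 - 0.1 * list_length)
    return rng.random() < p_correct


def run_experiment(list_lengths: list[int], trials: int, seed: int = 0) -> dict[int, float]:
    """Deliver the learner-designed experiment; return accuracy per condition."""
    rng = random.Random(seed)
    return {
        n: sum(run_trial(n, rng) for _ in range(trials)) / trials
        for n in list_lengths
    }


# A learner might vary list length across four conditions, 100 trials each.
print(run_experiment([3, 5, 7, 9], trials=100))
```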

    Iowa Public Television’s Planning Targets 2011-2014

    Agency Performance Plan, Iowa Public Television.

    Generating Levels That Teach Mechanics

    The automatic generation of game tutorials is a challenging AI problem. While it is possible to generate annotations and instructions that explain to the player how the game is played, this paper focuses on generating a gameplay experience that introduces the player to a game mechanic. It evolves small levels for the Mario AI Framework that can only be beaten by an agent that knows how to perform specific actions in the game. It uses variations of a perfect A* agent that are limited in various ways, such as not being able to jump high or see enemies, to test how failing to perform certain actions can stop the player from beating the level. Comment: 8 pages, 7 figures, PCG Workshop at FDG 2018, 9th International Workshop on Procedural Content Generation (PCG 2018).
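    The evolutionary setup described in the abstract can be sketched compactly: keep a candidate level only if the unrestricted agent beats it while a mechanically limited agent does not. The Python below is a toy stand-in under stated assumptions (gap widths as the level encoding, jump height as the limitation), not the actual Mario AI Framework agents.

```python
# Toy sketch of evolving levels that teach a mechanic: a level "teaches"
# jumping if the full agent beats it but a jump-limited agent cannot.
import random


def can_beat(level: list[int], max_jump: int) -> bool:
    """Toy playability test: a level is a list of gap widths, and the
    agent clears it only if every gap fits within its jump range."""
    return all(gap <= max_jump for gap in level)


def fitness(level: list[int]) -> int:
    """Reward levels the perfect agent beats but the limited agent cannot,
    i.e. levels that force the player to use the full jump mechanic."""
    return int(can_beat(level, max_jump=4) and not can_beat(level, max_jump=2))


def evolve(generations: int = 200, seed: int = 0) -> list[int]:
    """Simple hill climber: mutate gap widths, keep non-worse children."""
    rng = random.Random(seed)
    level = [rng.randint(1, 3) for _ in range(8)]
    for _ in range(generations):
        child = [max(1, g + rng.choice((-1, 0, 1))) for g in level]
        if fitness(child) >= fitness(level):
            level = child
    return level


best = evolve()
print(best, fitness(best))
```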

    Quality in MOOCs: Surveying the Terrain

    The purpose of this review is to identify quality measures and to highlight some of the tensions surrounding notions of quality, as well as the need for new ways of thinking about and approaching quality in MOOCs. It draws on the literature on both MOOCs and quality in education more generally in order to provide a framework for thinking about quality and the different variables and questions that must be considered when conceptualising quality in MOOCs. The review adopts a relativist approach, positioning quality as a measure for a specific purpose. The review draws upon Biggs's (1993) 3P model to explore notions and dimensions of quality in relation to MOOCs (presage, process, and product variables), which correspond to an input–environment–output model. The review brings together literature examining how quality should be interpreted and assessed in MOOCs at a more general and theoretical level, as well as empirical research studies that explore how these ideas about quality can be operationalised, including the measures and instruments that can be employed. What emerges from the literature are the complexities involved in interpreting and measuring quality in MOOCs and the importance of both context and perspective to discussions of quality.
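    The 3P framing maps naturally onto a simple structure: each quality measure attaches to one stage of the input–environment–output pipeline. The grouping below is a hypothetical illustration; the example measures are common ones in the MOOC literature, not a list taken from this review.

```python
# Biggs's 3P model as a grouping of illustrative quality measures.
# The measures named here are hypothetical examples, not the review's own list.
QUALITY_MEASURES: dict[str, list[str]] = {
    "presage (input)": ["instructor expertise", "platform design", "learner prior knowledge"],
    "process (environment)": ["learner engagement", "peer interaction", "feedback quality"],
    "product (output)": ["completion rate", "learning gains", "learner satisfaction"],
}

for stage, measures in QUALITY_MEASURES.items():
    print(f"{stage}: {', '.join(measures)}")
```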
