
    The Link vs. the Event: Activating and Deactivating Elements in Time-Based Hypermedia

    Activation and deactivation of media items play a fundamental role in the presentation of multimedia and time-based hypermedia. Activation and deactivation information thus has to be captured in an underlying document format. In this paper we show that a number of aspects of activation and deactivation information can be captured using both link structures and events in time-based hypermedia. In particular, we discuss how activation and deactivation can be specified, how they can be initiated, and the potential (synchronization) relationships between the elements involved. We first introduce the notions of time-based scheduling and event-based scheduling and then present a short summary of linking. We discuss the similarities between event-based scheduling and linking, describe a number of aspects of activation and deactivation that can be specified within a document, and then discuss how activation and deactivation information can be recorded in link structures and events.
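
    The contrast the abstract draws between time-based and event-based activation can be illustrated with a minimal Python sketch. All names here (MediaItem, Presentation, the "intro_video.end" event) are invented for illustration; they are not taken from the paper or from any hypermedia format.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class MediaItem:
        name: str
        active: bool = False

    @dataclass
    class Presentation:
        """Sketch of time-based vs. event-based (de)activation scheduling."""
        items: dict = field(default_factory=dict)
        schedule: list = field(default_factory=list)   # (time, item, activate)
        handlers: dict = field(default_factory=dict)   # event -> [(item, activate)]

        def add_item(self, name):
            self.items[name] = MediaItem(name)

        def at(self, t, name, activate=True):
            """Time-based scheduling: (de)activate at a fixed clock time."""
            self.schedule.append((t, name, activate))

        def on(self, event, name, activate=True):
            """Event-based scheduling: (de)activate when an event fires."""
            self.handlers.setdefault(event, []).append((name, activate))

        def tick(self, t):
            """Apply all time-based entries due at or before time t."""
            due = [s for s in self.schedule if s[0] <= t]
            self.schedule = [s for s in self.schedule if s[0] > t]
            for _, name, activate in due:
                self.items[name].active = activate

        def fire(self, event):
            """Apply event-based entries, e.g. a media-end or link event."""
            for name, activate in self.handlers.get(event, []):
                self.items[name].active = activate

    p = Presentation()
    p.add_item("intro_video")
    p.add_item("caption")
    p.at(0, "intro_video")              # time-based: starts at t=0
    p.at(10, "intro_video", False)      # ...and stops at t=10
    p.on("intro_video.end", "caption")  # event-based: caption follows the video

    p.tick(0)
    p.tick(10)
    p.fire("intro_video.end")
    ```

    A link traversal can be modelled the same way as any other event: the link anchor fires an event whose handlers deactivate the source context and activate the destination, which is exactly the similarity between linking and event-based scheduling that the paper explores.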

    User-centred design of flexible hypermedia for a mobile guide: Reflections on the hyperaudio experience

    A user-centred design approach involves end-users from the very beginning. Considering users at the early stages compels designers to think in terms of utility and usability and helps base the system on what is actually needed. This paper discusses the case of HyperAudio, a context-sensitive, adaptive and mobile guide to museums developed in the late 90s. User requirements were collected via a survey to understand visitors’ profiles and visit styles in natural science museums. The knowledge acquired supported the specification of system requirements, helping to define the user model, data structure and adaptive behaviour of the system. User requirements guided the design decisions on what could be implemented using simple adaptable triggers and what instead needed more sophisticated adaptive techniques, a fundamental choice when all the computation must be done on a PDA. Graphical and interactive environments for developing and testing complex adaptive systems are discussed as a further step towards an iterative design that treats user interaction as central. The paper discusses how such an environment allows designers and developers to experiment with different system behaviours and to test them widely under realistic conditions by simulating the actual context evolving over time. The understanding gained in HyperAudio is then considered in the perspective of the developments that followed that first experience: our findings still seem valid despite the time that has passed.

    MOT meets AHA!

    MOT (My Online Teacher) is a web-based authoring environment for adaptive hypermedia systems (AHS). MOT is now being further developed according to the LAOS five-layer adaptation model for adaptive hypermedia and adaptive web material, comprising a domain, goal, user, adaptation and presentation model. The adaptation itself follows the LAG three-layer granularity structure, featuring direct adaptation techniques and rules, an adaptation language and adaptation strategies. In this paper we briefly describe the theoretical basis of MOT, i.e., LAOS and LAG, and then give some information about the current state of MOT. The purpose of this paper is to show how we plan the joint design and development of MOT and the well-known system AHA! (Adaptive Hypermedia Architecture), developed at the Technical University of Eindhoven since 1996. We aim especially at integration with AHA! 2.0. Although AHA! 2.0 represents progress when compared to the previous versions, many adaptive features that are described by the LAOS and LAG models and that are being implemented in MOT are not yet (directly) available, so AHA! can benefit from MOT. On the other hand, AHA! offers a running platform for the adaptation engine, which can benefit MOT in return.
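
    The "direct adaptation techniques and rules" at the bottom of the LAG structure are, in essence, condition-action rules evaluated against a user model. The following sketch is purely illustrative: the attribute name, threshold and fragment ids are invented, not taken from MOT or AHA!.

    ```python
    # Hypothetical condition-action adaptation rules, in the spirit of
    # adaptive hypermedia engines such as AHA! (all names are illustrative).
    def select_fragments(user_model, fragments):
        """Return the content fragments whose conditions hold for this user."""
        return [f for f in fragments if f["condition"](user_model)]

    user_model = {"knowledge.xml_basics": 60}  # 0-100 knowledge estimate

    fragments = [
        {"id": "xml_intro",
         "condition": lambda u: u.get("knowledge.xml_basics", 0) < 50},
        {"id": "xml_advanced",
         "condition": lambda u: u.get("knowledge.xml_basics", 0) >= 50},
    ]

    shown = select_fragments(user_model, fragments)
    ```

    An adaptation language and adaptation strategies, the two upper LAG layers, would then group and parameterize many such rules rather than writing each one by hand.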

    Leveraging video annotations in video-based e-learning

    The e-learning community has been producing and using video content for a long time, and in recent years the advent of MOOCs has relied heavily on video recordings of teacher courses. Video annotations are information pieces that can be anchored in the temporality of the video so as to sustain various processes ranging from active reading to rich media editing. In this position paper we study how video annotations can be used in an e-learning context - especially MOOCs - from the triple point of view of pedagogical processes, the functionalities of current technical platforms, and current challenges. Our analysis is that there is still plenty of room for leveraging video annotations in MOOCs beyond simple active reading, namely live annotation, performance annotation and annotation for assignment, and that new developments are needed to accompany this evolution. Comment: 7th International Conference on Computer Supported Education (CSEDU), Barcelona, Spain (2014).
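
    "Anchored in the temporality of the video" amounts to attaching each annotation to a time interval and querying which annotations are live at the current playback position. A minimal sketch, with invented example data (not from the paper):

    ```python
    from dataclasses import dataclass

    @dataclass
    class VideoAnnotation:
        """An information piece anchored to a temporal interval of a video."""
        begin: float   # seconds from the start of the video
        end: float
        body: str

    def annotations_at(annotations, t):
        """Annotations whose anchor interval contains playback time t."""
        return [a for a in annotations if a.begin <= t < a.end]

    notes = [
        VideoAnnotation(0.0, 30.0, "Course outline"),
        VideoAnnotation(30.0, 95.0, "Definition of hypermedia"),
        VideoAnnotation(60.0, 95.0, "Student question (live annotation)"),
    ]

    active = annotations_at(notes, 70.0)
    ```

    A player would re-run this query as the playback clock advances, overlaying the active annotations; richer uses such as annotation for assignment add metadata (author, type, grading state) to the same time-anchored structure.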

    Hierarchical visualization in a simulation-based educational multimedia web system

    This is an electronic version of the paper presented at the 4th International Conference on Enterprise Information Systems, held in Ciudad Real in 2002. This paper presents a system that generates web documents (courses, presentations or articles) enriched with interactive simulations and other hypermedia elements. Simulations are described using an object-oriented continuous simulation language called OOCSMP. This language is complemented by two higher language layers (SODA-1L and SODA-2L): SODA-1L describes pages or slides, while SODA-2L builds courses, articles or presentations. A compiler (C-OOL) has been programmed to generate Java applets for the simulation models and HTML pages for the document pages. The paper focuses on some new capabilities added to OOCSMP to handle different levels of graphic detail of the system being simulated. Different views are shown as cascaded windows, whose multimedia elements can be arranged and synchronized with the simulation execution. The new capabilities have been tested by extending a previously developed course on electronics.

    Processing Structured Hypermedia : A Matter of Style

    With the introduction of the World Wide Web in the early nineties, hypermedia has become the uniform interface to the wide variety of information sources available over the Internet. The full potential of the Web, however, can only be realized by building on the strengths of its underlying research fields. This book describes the areas of hypertext, multimedia, electronic publishing and the World Wide Web and points out fundamental similarities and differences in approaches towards the processing of information. It gives an overview of the dominant models and tools developed in these fields and describes the key interrelationships and mutual incompatibilities. In addition to a formal specification of a selection of these models, the book discusses the impact of the models described on the software architectures that have been developed for processing hypermedia documents. Two example hypermedia architectures are described in more detail: the DejaVu object-oriented hypermedia framework, developed at the VU, and CWI's Berlage environment for time-based hypermedia document transformations.

    Improving media fragment integration in emerging web formats

    The media components integrated into multimedia presentations are typically entire files. At times the media component desired for integration, either as a navigation destination or as part of a coordinated presentation, is a part of a file, or what we call a fragment. Basic media fragment integration has long been implemented in hypermedia systems, but not to the degree envisioned by hypermedia research. The current emergence of several XML-based formats is beginning to extend the possibilities for media fragment integration on a large scale. This paper presents a set of requirements for media fragment integration, describes how current standards meet some of these requirements, and proposes extensions to these standards to meet the remaining requirements.
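
    One way this line of work later standardized is the W3C Media Fragments URI syntax, where a temporal fragment is addressed as `#t=start,end`. The sketch below parses only that temporal dimension for plain seconds values; a full implementation would also handle `hh:mm:ss` clock values and the spatial, track and id dimensions. The example URLs are made up.

    ```python
    from urllib.parse import urlparse

    def parse_temporal_fragment(uri):
        """Parse the temporal ('t') dimension of a Media Fragments URI.

        Handles plain seconds in normal play time, e.g. '#t=10,20',
        '#t=10' (open end) and '#t=,20' (start at 0), with an optional
        'npt:' prefix. Returns (start, end) in seconds, end=None if open,
        or None if the URI has no temporal fragment.
        """
        frag = urlparse(uri).fragment
        for part in frag.split("&"):
            if part.startswith("t="):
                value = part[2:]
                if value.startswith("npt:"):
                    value = value[4:]
                start_s, _, end_s = value.partition(",")
                start = float(start_s) if start_s else 0.0
                end = float(end_s) if end_s else None
                return start, end
        return None

    clip = parse_temporal_fragment("http://example.com/video.ogv#t=10,20")
    ```

    Because the fragment identifier never reaches the server, a client resolves it locally: it fetches the media (or a byte range of it) and seeks to the decoded interval, which is what makes fragments usable both as navigation destinations and as parts of a coordinated presentation.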

    A Hierarchical Petri Net Model for SMIL Documents
