165 research outputs found

    Mobile and web tools for participative learning

    Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Mestre em Engenharia Informática (Master in Computer Engineering). The combination of different media formats has been a crucial aspect of teaching and learning processes. Recent developments in multimedia technologies over the Internet and on mobile devices can improve the communication between professors and students and allow students to study anywhere and at any time, letting each student progress at his or her own pace. The use of these new platforms and the growth of multimedia sharing in educational environments enable more participative learning and make the study of interfaces a relevant aspect of existing multimedia learning systems. The work done in this dissertation explores interfaces and tools for participative learning, using multimedia educational systems over broadband Internet and mobile devices. In this work, a Web-based learning system was developed that stores, transmits, searches and shares the contents of courses captured in video, together with an extension to support Tablet PCs. The Web system, developed as part of the VideoStore project, explores video interfaces and video annotations, which encourage participative work. The use of Tablet PCs, through the mEmLearn project, aims to encourage participative work by allowing students to augment the course materials and to share them with other students or instructors.
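
    The abstract gives no implementation details, but a minimal sketch of the kind of time-anchored, shareable video annotation it describes might look like the following Python model (class and field names are hypothetical assumptions, not taken from the VideoStore or mEmLearn projects):

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class VideoAnnotation:
            """A note anchored to a time interval of a recorded lecture."""
            video_id: str   # recorded course video the note refers to
            author: str     # student or instructor who wrote the note
            start_s: float  # start of the annotated interval, in seconds
            end_s: float    # end of the annotated interval, in seconds
            text: str       # annotation body shared with the class

        @dataclass
        class CourseVideo:
            """A captured lecture and the annotations contributed by its viewers."""
            video_id: str
            title: str
            annotations: List[VideoAnnotation] = field(default_factory=list)

            def annotations_at(self, t: float) -> List[VideoAnnotation]:
                """Return the shared notes that overlap playback time t."""
                return [a for a in self.annotations if a.start_s <= t <= a.end_s]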

    Bi & tri dimensional scene description and composition in the MPEG-4 standard

    MPEG-4 is a new ISO/IEC standard being developed by MPEG (Moving Picture Experts Group). The standard is to be released in November 1998, and version 1 will become an International Standard in January 1999. The MPEG-4 standard addresses the new demands that arise in a world in which more and more audio-visual material is exchanged in digital form. MPEG-4 addresses the coding of objects of various types: not only traditional video and audio frames, but also natural video and audio objects as well as textures, text, 2- and 3-dimensional graphic primitives, and synthetic music and sound effects. When MPEG-4 is used to reconstruct an audio-visual scene at a terminal, it is hence no longer sufficient to encode the raw audio-visual data and transmit it, as MPEG-2 does in order to synchronize video and audio. In MPEG-4, all objects are multiplexed together at the encoder and transported to the terminal. Once de-multiplexed, these objects are composed at the terminal to construct and present to the end user a meaningful audio-visual scene. The placement of these elementary audio-visual objects in space and time is described in the scene description of a scene, while the act of putting these objects together in the same representation space is the composition of audio-visual objects. My research was concerned with the scene description and composition of the audio-visual objects that are defined in an audio-visual scene. Scene descriptions are coded independently from the streams related to primitive audio-visual objects. The set of parameters belonging to the scene description is differentiated from the parameters that are used to improve the coding efficiency of an object. While the independent coding of different objects may achieve a higher compression rate, it also brings the ability to manipulate content at the terminal: the scene description parameters can be modified without having to decode the primitive audio-visual objects themselves. This approach allows the development of a syntax that describes the spatio-temporal relationships of audio-visual scene objects. The behaviours of objects and their responses to user inputs can thus also be represented in the scene description, allowing richer audio-visual content to be delivered as an MPEG-4 stream.
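
    As a rough illustration of the separation described above, the sketch below models a scene description as placement and timing parameters that reference, but do not contain, the coded object streams. The structure is a simplified assumption for illustration only; it does not follow the actual scene-description syntax of the standard.

        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class ObjectStream:
            """A primitive audio-visual object, coded and transported on its own."""
            stream_id: int
            media_type: str  # e.g. "video", "audio", "text", "graphics"

        @dataclass
        class SceneNode:
            """Scene-description entry: where and when a referenced object appears."""
            stream_id: int                 # reference to the coded object stream
            position: Tuple[float, float]  # placement in the scene
            start_time: float              # when the object enters the scene (s)
            duration: float                # how long it remains in the scene (s)

        def move_object(scene: List[SceneNode], stream_id: int,
                        new_position: Tuple[float, float]) -> None:
            """Manipulate content at the terminal: reposition one object by editing
            only its scene-description parameters, without re-decoding the
            primitive audio-visual object itself."""
            for node in scene:
                if node.stream_id == stream_id:
                    node.position = new_position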

    Challenges and solutions in H.265/HEVC for integrating consumer electronics in professional video systems


    Best Practices for Cataloging Streaming Media Using RDA and MARC21

    This document is intended to assist catalogers in creating records for streaming media according to instructions within Resource Description and Access (RDA), the successor to the Anglo-American Cataloguing Rules (AACR2). Like the original Best Practices for Cataloging Streaming Media, made available in 2008, it covers both streaming video and audio, including resources that are born digital as well as those created from an existing resource in another format, such as a video issued on DVD or videocassette. Its main focus is on resources that are “streaming” over the Internet in real time, rather than resources that are not (e.g., video on CD-ROM or DVD-ROM, MP3 files on compact disc). In addition, it includes some examples of online video and audio files that can be downloaded in their entirety to one’s local computer.
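
    As a concrete illustration, the Python dictionary below lists the MARC21 fields that commonly distinguish an RDA record for a streaming video (content, media and carrier types, plus the access URL). The record content and subfield values are an illustrative assumption, not an excerpt from the best-practices document.

        # Illustrative streaming-video record, keyed by MARC21 tag and indicators.
        streaming_video_record = {
            "245 10": "$a Example lecture series / $c produced by Example University.",
            "264 _1": "$a [Place of publication] : $b Example Publisher, $c 2015.",
            "336   ": "$a two-dimensional moving image $b tdi $2 rdacontent",  # content type
            "337   ": "$a computer $b c $2 rdamedia",                          # media type
            "338   ": "$a online resource $b cr $2 rdacarrier",                # carrier type
            "856 40": "$u https://example.org/stream/lecture-series",          # access URL
        }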


    Controlling Virtual Humans Using PDAs

    The new breed of Personal Digital Assistants (PDAs) and mobile phones has enough computing power to display 3D graphics. These new mobile devices (handhelds) also offer interesting communication and interaction possibilities. In this paper we explore the potential applications of 3D virtual humans inside mobile devices and the use of such handhelds as control interfaces to drive the virtual humans and navigate through their virtual environments.
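
    The abstract does not specify a wire protocol, but a minimal sketch of the kind of control message a handheld client might send to the machine animating the virtual human could look as follows (the message layout, host and port are hypothetical assumptions, not values from the paper):

        import json
        import socket

        def send_command(action: str, params: dict,
                         host: str = "192.168.0.10", port: int = 5000) -> None:
            """Send one control command from the handheld to the renderer.
            Host, port and the JSON layout are illustrative assumptions."""
            message = json.dumps({"action": action, "params": params}).encode("utf-8")
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                sock.sendto(message, (host, port))

        # Drive the virtual human and navigate its environment from the handheld:
        send_command("play_animation", {"name": "walk", "speed": 1.0})
        send_command("move_camera", {"dx": 0.0, "dy": 0.5, "dz": -1.0})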