
    The Recording of Interactive Media Streams Using a Common Framework

    The development of real-time transport protocols for the Internet has been a focus of research for several years. Meanwhile, the Real-Time Transport Protocol (RTP) has become a well-accepted standard that is widely deployed for the transmission of video and audio streams. The RTP specification, combined with a companion RTP profile, covers common aspects of the real-time transmission of video and audio in various encodings. This has enabled the development of RTP recorders that record and play back video and audio streams regardless of the specific media encoding. Interactive media streams with real-time characteristics are now rapidly gaining importance; examples are the data streams of shared whiteboards, remote Java animations and distributed VRML worlds. In this paper we present a generalized recording service that enables the recording and playback of this new class of media. By analogy with video and audio streams, we have defined an RTP profile covering common aspects of the interactive media class. We discuss design principles for this recording service and describe the key mechanisms that allow random access to recorded interactive media streams, independent of the specific media type and encoding.
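    The media-independent recording described here is possible because every RTP packet carries the same fixed header regardless of payload encoding. As a rough illustration (not the paper's own code), the 12-byte fixed header defined in RFC 3550 can be parsed like this:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550), encoding-independent."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,
        "sequence": seq,       # for loss detection and reordering
        "timestamp": ts,       # media clock, used to pace playback
        "ssrc": ssrc,          # identifies the sending source
    }

# A generic recorder can store packets verbatim and later replay them paced
# by timestamp differences, without understanding the payload encoding.
pkt = struct.pack("!BBHII", 0x80, 96, 1000, 160000, 0xDEADBEEF) + b"payload"
hdr = parse_rtp_header(pkt)
```

The sequence number and timestamp are exactly the fields a media-agnostic recorder needs for ordering and random access.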

    A General Framework and Communication Protocol for the Real-Time Transmission of Interactive Media

    In this paper we present a general framework for the real-time transmission of interactive media, i.e. media involving user interaction. Examples of interactive media are shared whiteboards, Java animations and VRML worlds. By identifying and supporting the common aspects of this media class, the framework allows the development of generic services for network sessions involving the transmission of interactive media; examples are mechanisms for late join and session recording. The proposed framework is based on the Real-Time Transport Protocol (RTP), which is widely used in the Internet for the real-time transmission of audio and video. Building on the experience gained with RTP for audio and video, our work consists of three parts: the definition of a protocol profile, the instantiation of this profile for specific media, and the development of generic services. The profile captures those aspects that are common to the class of interactive media. A single medium must instantiate this profile by providing media-specific information in the form of a payload type definition. Based on the profile, generic services can be developed for all interactive media. In this paper we focus on the description of the profile for the real-time transmission of interactive media. We then present the main ideas behind a generic recording service. Finally, we show how multi-user VRML and distributed interactive Java animations can instantiate the profile.
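    The profile/payload-type split described in this abstract can be sketched as a small registry: a generic recorder stores timestamped events without interpreting them, and each medium instantiates the profile by supplying its own decoder. All class and method names below are hypothetical illustrations, not the paper's actual implementation:

```python
import bisect

class InteractiveMediaRecorder:
    """Sketch of a generic recorder for interactive media (names invented)."""

    def __init__(self):
        self._decoders = {}   # payload type -> media-specific decoder
        self._events = []     # (timestamp, payload_type, data), kept time-ordered

    def register(self, payload_type, decoder):
        # A medium "instantiates the profile" by supplying its decoder.
        self._decoders[payload_type] = decoder

    def record(self, timestamp, payload_type, data):
        # Recording is media-agnostic: raw event bytes plus a timestamp.
        bisect.insort(self._events, (timestamp, payload_type, data))

    def replay_from(self, start):
        # Random access: seek by timestamp, then decode media-specifically.
        i = bisect.bisect_left(self._events, (start,))
        for ts, pt, data in self._events[i:]:
            yield ts, self._decoders[pt](data)

rec = InteractiveMediaRecorder()
rec.register(96, bytes.decode)          # a trivial "payload type definition"
rec.record(2.0, 96, b"redo stroke")
rec.record(1.0, 96, b"draw stroke")
```

The point of the sketch is that `record` and the seek in `replay_from` never inspect the payload; only the registered decoder is media-specific.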

    Teleteaching over Low-Bandwidth Network Channels

    Teleteaching has become an important application of the Internet and the MBone. Unfortunately, the cost of the hardware necessary to participate in remote lectures is still prohibitively high, and the degree of distribution of the implemented scenarios is very low. In the Interactive Home Learning project we plan to provide methods to participate in a teleteaching lecture live from a PC at home via a low-bandwidth connection (e.g. ISDN). This paper summarises technical aspects of this learning scenario and presents our approach, the fully Java-based Reflection and Scaling Tool jrst, which meets the requirements of an application-layer multicast routing daemon with a highly restrictive broadcasting policy and a dynamic tunnelling mechanism.
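    The reflector idea behind such a tool can be reduced to a membership-plus-forwarding rule: multicast session traffic is relayed over unicast tunnels to low-bandwidth clients that cannot join the MBone directly. The sketch below is only a guess at the general shape; it is not jrst's actual interface, and the endpoint names are invented:

```python
class Reflector:
    """Minimal application-layer reflector sketch (hypothetical API)."""

    def __init__(self):
        self._subscribers = set()   # tunnelled low-bandwidth unicast endpoints

    def subscribe(self, endpoint):
        # A client opens a unicast tunnel to the reflector.
        self._subscribers.add(endpoint)

    def unsubscribe(self, endpoint):
        self._subscribers.discard(endpoint)

    def forward(self, sender, packet):
        # Reflect a session packet to every tunnelled endpoint except the
        # one it came from, emulating multicast group membership.
        return [(ep, packet) for ep in sorted(self._subscribers) if ep != sender]

r = Reflector()
r.subscribe("isdn-client-1")
r.subscribe("isdn-client-2")
r.subscribe("campus-gw")
out = r.forward("campus-gw", b"rtp packet")
```

A restrictive broadcasting policy, as the abstract mentions, would additionally filter or thin the stream per subscriber before forwarding.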

    Lehrszenarien-übergreifende Erzeugung und Verwendung Multimedialer Teachware

    Creating multimedia teaching documents is a laborious process. In traditional teaching, different documents covering the same material are often produced for different teaching scenarios (lecture, follow-up study by the student, etc.). Moreover, the lecturer's explanations during a lecture implicitly produce documents that can also be put to good use in other teaching scenarios. This article describes how the documents produced in lectures are recorded and made available on a server. With the presented approach, the recordings can be flexibly combined with additional multimedia content into a computer-based training unit. This makes it possible to use documents effectively in both synchronous and asynchronous teaching scenarios.

    Video Conference as a tool for Higher Education

    The book describes the activities of the consortium member institutions in the framework of the TEMPUS IV Joint Project ViCES - Video Conferencing Educational Services (144650-TEMPUS-2008-IT-JPGR). The TEMPUS Project ViCES (2009-2012) was launched in 2009 to provide the basis for the development of a distance learning environment based on video conferencing systems and to develop a blended learning course methodology. This publication collects the conclusions of the project and reports the main outcomes, together with the approach followed by the different partners towards the achievement of the project's goal. The book includes several contributions focussed on specific topics related to videoconferencing services, namely how to enable such services in educational contexts so that the installation and deployment of videoconferencing systems can be conceived as an integral part of virtual open campuses.

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area and to approach the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute these experiences.
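    One foundational mediasync primitive is inter-stream (lip) synchronization, where each stream's RTP timestamps are mapped onto a shared wallclock via RTCP sender reports, which pair an NTP time with an RTP timestamp (RFC 3550). A minimal sketch, ignoring 32-bit timestamp wraparound; the parameter names are illustrative:

```python
def rtp_to_wallclock(rtp_ts, sr_rtp_ts, sr_ntp_secs, clock_rate):
    """Map an RTP timestamp to wallclock seconds using the most recent
    RTCP sender report (SR) for that stream."""
    return sr_ntp_secs + (rtp_ts - sr_rtp_ts) / clock_rate

def playout_offset(audio_wall, video_wall):
    """Positive result: delay video by this much to align it with audio."""
    return audio_wall - video_wall

# 8 kHz audio clock: 4000 ticks after the SR is 0.5 s later in wallclock time.
t_audio = rtp_to_wallclock(164000, 160000, 1000.0, 8000)
# 90 kHz video clock: 3000 ticks after its SR is 1/30 s later.
t_video = rtp_to_wallclock(93000, 90000, 1000.0, 90000)
offset = playout_offset(t_audio, t_video)
```

Each stream uses its own media clock rate, so only the shared NTC/NTP wallclock makes the two comparable.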

    Assessing the quality of audio and video components in desktop multimedia conferencing

    This thesis seeks to address the HCI (Human-Computer Interaction) research problem of how to establish the level of audio and video quality that end users require to successfully perform tasks via networked desktop videoconferencing. There are currently no established HCI methods of assessing the perceived quality of audio and video delivered in desktop videoconferencing. The transport of real-time speech and video information across new digital networks causes degradations, problems and issues that are novel and different to those common in the traditional telecommunications areas (telephone and television). Traditional assessment methods involve the use of very short test samples, are traditionally conducted outside a task-based environment, and focus on whether a degradation is noticed or not. But these methods cannot help establish what audio-visual quality is required by users to perform tasks successfully, with the minimum of user cost, in interactive conferencing environments. This thesis addresses this research gap by investigating and developing a battery of assessment methods for networked videoconferencing, suitable for use in both field trials and laboratory-based studies. The development and use of these new methods helps identify the most critical variables (and levels of these variables) that affect perceived quality, and means by which network designers and HCI practitioners can address these problems are suggested. The output of the thesis therefore contributes both methodological (i.e. new rating scales and data-gathering methods) and substantive (i.e. explicit knowledge about quality requirements for certain tasks) knowledge to the HCI and networking research communities on the subjective quality requirements of real-time interaction in networked videoconferencing environments.
    Exploratory research is carried out through an interleaved series of field trials and controlled studies, advancing substantive and methodological knowledge in an incremental fashion. Initial studies use the ITU-recommended assessment methods, but these are found to be unsuitable for assessing networked speech and video quality for a number of reasons. Later studies therefore investigate and establish a novel polar rating scale, which can be used both as a static rating scale and as a dynamic continuous slider. These and further developments of the methods in future lab-based and real conferencing environments will enable subjective quality requirements and guidelines for different videoconferencing tasks to be established.
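    Whatever rating scale is used, subjective ratings gathered in such studies are typically summarized as a mean opinion score with a confidence interval. A generic sketch follows; the z-value and normal approximation are conventional defaults, not the thesis's own polar-scale method:

```python
import math

def mos_with_ci(ratings, z=1.96):
    """Mean opinion score and a normal-approximation confidence interval
    (z=1.96 gives roughly a 95% interval for larger samples)."""
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)                          # half-width of the CI
    return mean, (mean - half, mean + half)

# Eight hypothetical ratings on a 1-5 scale.
mean, (lo, hi) = mos_with_ci([4, 3, 5, 4, 4, 3, 5, 4])
```

For small panels a t-distribution quantile would be more defensible than the fixed z; the structure of the computation is the same.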

    Telepresence and Transgenic Art
