
    Extensions to the SMIL multimedia language

    The goal of this work has been to extend the Synchronized Multimedia Integration Language (SMIL) to study the capabilities and possibilities of declarative multimedia languages for the World Wide Web. The work has involved the design and implementation of several extensions to SMIL.

    A novel approach to including 3D audio in SMIL was designed and implemented. This involved extending the SMIL 2D spatial model with an extra dimension to support a 3D space; new audio elements and a listening point are positioned in that space. The extension was designed to be modular so that it can be used in conjunction with other XML languages, such as XHTML and the Scalable Vector Graphics (SVG) language.

    Web forms are one of the key features of the Web, as they offer a way to send user data to a server. A similar feature is therefore desirable in SMIL, which currently lacks forms. The XForms language, thanks to its modular approach, was used to add this feature to SMIL, and an evaluation of this integration was carried out as part of the work.

    Furthermore, the SMIL player was designed to play out dynamic SMIL documents, which can be modified at run time with the result immediately reflected in the presentation. Dynamic SMIL enables the execution of scripts to modify the presentation; XML Events and ECMAScript were chosen to provide the scripting functionality. In addition, generic methods to extend SMIL were studied based on the previous extensions, including ways to attach new input and output capabilities to SMIL.

    To experiment with the extensions, a SMIL player was developed. The current version can play out SMIL 2.0 Basic profile documents together with a few additional SMIL modules, such as the event timing, basic animation, and brush media modules, and it includes all of the above-mentioned extensions. The SMIL player has been designed to work within an XML browser called X-Smiles, which is intended for various embedded devices, such as mobile phones, Personal Digital Assistants (PDAs), and digital television set-top boxes. Currently, the browser supports XHTML, SMIL, and XForms, which are developed by the research group, as well as other XML languages developed by third-party open-source projects. The SMIL player can also be run as a standalone player without the browser; the standalone player is portable and has been run on a desktop PC, a PDA, and a digital television set-top box. The core of the SMIL player is platform-independent; only the media renderers require platform-dependent implementation.
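
    As a purely illustrative sketch of the 3D-audio extension described above, the fragment below extends a SMIL layout with a depth axis, positions audio elements in the resulting 3D space, and places a listening point in the document head. The namespace, element, and attribute names (a3d, listeningPoint, depth) are assumptions made for this example; the abstract does not give the actual vocabulary defined in the work.

        <!-- Hypothetical 3D-audio extension of SMIL; names are illustrative only -->
        <smil xmlns="http://www.w3.org/2001/SMIL20/Language"
              xmlns:a3d="http://example.org/2003/smil-3d-audio">
          <head>
            <layout>
              <!-- The 2D region model extended with an assumed depth coordinate -->
              <region id="scene" width="640" height="480" a3d:depth="400"/>
            </layout>
            <!-- Assumed listening point positioned inside the 3D space -->
            <a3d:listeningPoint x="320" y="240" z="0"/>
          </head>
          <body>
            <par>
              <!-- Audio sources positioned in 3D; x and y reuse the 2D model, z is the new axis -->
              <audio src="birds.wav" region="scene" a3d:x="100" a3d:y="50" a3d:z="200"/>
              <audio src="stream.wav" region="scene" a3d:x="500" a3d:y="400" a3d:z="-150"/>
            </par>
          </body>
        </smil>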

    Modelling Reactive Multimedia: Design and Authoring

    Multimedia document authoring is a multifaceted activity, and authoring tools tend to concentrate on a restricted set of the activities involved in the creation of a multimedia artifact. In particular, a distinction may be drawn between the design and the implementation of a multimedia artifact. This paper presents a comparison of three different authoring paradigms, based on the common case study of a simple interactive animation. We present details of its implementation using three different authoring tools, MCF, Fran, and SMIL 2.0, and we discuss the conclusions that may be drawn from our comparison of the three approaches.
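
    To make the case study concrete, the following is a minimal SMIL 2.0 sketch of an interactive animation of the kind compared in the paper: clicking one image triggers the motion of another. It uses only standard SMIL 2.0 layout, event timing, and BasicAnimation constructs; the file names are placeholders, and it is not the paper's actual example.

        <smil xmlns="http://www.w3.org/2001/SMIL20/Language">
          <head>
            <layout>
              <root-layout width="400" height="300"/>
              <region id="stage" left="0" top="0" width="400" height="260"/>
              <region id="controls" left="0" top="260" width="400" height="40"/>
            </layout>
          </head>
          <body>
            <par dur="indefinite">
              <img id="start" src="start-button.png" region="controls"/>
              <img id="ball" src="ball.png" region="stage">
                <!-- The motion begins when the user activates (clicks) the start image -->
                <animateMotion begin="start.activateEvent" dur="2s"
                               from="0,0" to="300,0" fill="freeze"/>
              </img>
            </par>
          </body>
        </smil>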

    SMIL State: an architecture and implementation for adaptive time-based web applications

    In this paper we examine adaptive time-based web applications (or presentations). These are interactive presentations in which time provides the major structuring paradigm, dictating which parts of the application are presented, and which also require interactivity and other dynamic adaptation. We investigate the technologies currently available for creating such presentations and their shortcomings, and suggest a mechanism for addressing them. This mechanism, SMIL State, can be used to add user-defined state to declarative time-based languages such as SMIL or SVG animation, thereby enabling the author to create control flows that are difficult to realize within the temporal containment model of the host languages. In addition, SMIL State can be used as a bridging mechanism between languages, enabling easy integration of external components into the web application. Finally, SMIL State enables richer expressions for content control. This paper defines SMIL State in terms of an introductory example, followed by a detailed specification of the State model. Next, the implementation of this model is discussed. We conclude with a set of potential use cases, including dynamic content adaptation and delayed insertion of custom content such as advertisements.
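
    The fragment below sketches the SMIL State idea in the spirit of the SMIL 3.0 smilState module: a state variable declared in the head, an expr guard that controls whether content is played, and a setvalue element that updates the state at run time. The data model and XPath expressions are invented for this example, and the exact syntax should be treated as illustrative rather than normative.

        <smil xmlns="http://www.w3.org/ns/SMIL">
          <head>
            <state>
              <data>
                <adsEnabled>true</adsEnabled>
                <sectionsWatched>0</sectionsWatched>
              </data>
            </state>
          </head>
          <body>
            <seq>
              <video src="section1.mp4"/>
              <!-- Count the section just watched -->
              <setvalue ref="/data/sectionsWatched" value="/data/sectionsWatched + 1"/>
              <!-- The advertisement is only played while the state allows it -->
              <video src="advertisement.mp4" expr="/data/adsEnabled = 'true'"/>
              <video src="section2.mp4"/>
            </seq>
          </body>
        </smil>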

    Requirements for an Adaptive Multimedia Presentation System with Contextual Supplemental Support Media

    Investigations into the requirements for a practical adaptive multimedia presentation system have led the writers to propose the use of a video segmentation process that provides contextual supplementary updates produced by users. Supplements consisting of tailored segments are dynamically inserted into previously stored material in response to questions from users. A proposal for the use of this technique is presented in the context of personalisation within a Virtual Learning Environment. During the investigation, a brief survey of advanced adaptive approaches revealed that adaptation may be enhanced by manually generated metadata, or by automated or semi-automated use of metadata through stored context-dependent ontology hierarchies that describe the semantics of the learning domain. The use of neural networks or fuzzy-logic filtering is a technique for future investigation. A prototype demonstrator is under construction.
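
    Although the paper is about requirements rather than an implementation, the dynamic-insertion behaviour it proposes can be approximated declaratively with standard SMIL 2.0 exclusive timing, as sketched below: a supplemental segment, begun by a user event, pauses the stored lecture and lets it resume afterwards. The region names, media files, and help-button trigger are assumptions made for illustration, not elements of the proposed system.

        <par>
          <img id="helpButton" src="ask-question.png" region="controls"/>
          <excl dur="indefinite">
            <priorityClass peers="pause">
              <video id="lecture" src="stored-lecture.mp4" region="main" begin="0s"/>
              <!-- A tailored supplement interrupts and pauses the lecture when the learner asks for help -->
              <video id="supplement" src="supplement-segment.mp4" region="main"
                     begin="helpButton.activateEvent"/>
            </priorityClass>
          </excl>
        </par>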

    Optimized mobile thin clients through a MPEG-4 BiFS semantic remote display framework

    According to the thin-client computing principle, the user interface is physically separated from the application logic. In practice, only a viewer component is executed on the client device, rendering the display updates received from the distant application server and capturing the user interaction. Existing remote display frameworks are not optimized to encode the complex scenes of modern applications, which are composed of objects with very diverse graphical characteristics. To tackle this challenge, we propose to transfer to the client, in addition to the binary-encoded objects, semantic information about the characteristics of each object. Through this semantic knowledge, the client can react autonomously to user input and does not have to wait for the display update from the server. Because this reduces interaction latency and mitigates the bursty remote display traffic pattern, the presented framework is of particular interest in a wireless context, where bandwidth is limited and expensive. In this paper, we describe a generic architecture for a semantic remote display framework. Furthermore, we have developed a prototype using the MPEG-4 Binary Format for Scenes (BiFS) to convey the semantic information to the client. We experimentally compare the bandwidth consumption of MPEG-4 BiFS with existing, non-semantic remote display frameworks. In a text-editing scenario, we realize an average reduction of 23% in the data peaks observed in remote display protocol traffic.
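
    The following is a deliberately simplified, hypothetical sketch (not actual BiFS or XMT syntax) of the core idea: each encoded display object is accompanied by semantic hints that tell the thin client how it may handle interaction locally, for example caret movement or scrolling in a text area, without a server round trip. All element and attribute names here are invented for illustration; the prototype itself conveys this information in binary MPEG-4 BiFS.

        <sceneUpdate>
          <!-- Binary-encoded display object plus assumed semantic annotations -->
          <object id="editorPane" codec="png" region="0,0,800,560">
            <semantics type="text-area" scrollable="true" editable="true"/>
          </object>
          <object id="cursor" codec="vector" region="12,40,2,16">
            <semantics type="caret" follows="editorPane"/>
          </object>
        </sceneUpdate>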

    Automatic Multimedia Creation Enriched with Dynamic Conceptual Data

    There is a growing gap between multimedia production and context-centric multimedia services. The main problem is the under-exploitation of the content creation design. The idea is to support dynamic content generation adapted to the user or display profile. Our work is an implementation of a web platform for the automatic generation of multimedia presentations based on the SMIL (Synchronized Multimedia Integration Language) standard. The system is able to produce rich media with dynamic multimedia content retrieved automatically from different content databases matching the semantic context. For this purpose, we extend the standard interpretation of SMIL tags in order to accomplish a semantic translation of multimedia objects into database queries. This allows services to take advantage of the production process to create customized content enhanced with real-time information fed from databases. The described system has been successfully deployed to create advanced context-centric weather forecasts.
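
    As an illustration of the extended interpretation of SMIL tags described above, the fragment below replaces fixed media sources with semantic queries that the platform could resolve against content databases at generation time. The sem namespace, the query attribute, and its syntax are assumptions made for this sketch, not the system's actual vocabulary.

        <par xmlns:sem="http://example.org/semantic-query">
          <!-- Each media item is selected by a semantic query instead of a fixed src -->
          <img region="map" dur="10s"
               sem:query="weather-map(area='local', date='today')"/>
          <video region="main" dur="10s"
                 sem:query="forecast-clip(area='local', language='en')"/>
          <text region="ticker" dur="10s"
                sem:query="current-temperature(area='local', unit='C')"/>
        </par>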

    The Use of SMIL: Multimedia Research Currently Applied on a Global Scale

    This paper describes the current use of the multimedia standard SMIL. SMIL features that relate to active areas of multimedia research are discussed, and the current implementation of SMIL in existing browsers is described. Examples from the Web of SMIL applications representing different types of multimedia are presented. Together, these discussions provide an overview of how SMIL currently addresses the needs of multimedia distributed on the Web.

    Face modeling and animation language for MPEG-4 XMT framework

    This paper proposes FML, an XML-based face modeling and animation language. FML provides a structured content description method for multimedia presentations based on face animation. The language can be used as direct input to compatible players, or be compiled within the MPEG-4 XMT framework to create MPEG-4 presentations. The language allows parallel and sequential action description, decision-making and dynamic event-based scenarios, model configuration, and behavioral template definition. Facial actions include talking, expressions, head movements, and low-level MPEG-4 FAPs. The ShowFace and iFACE animation frameworks are also reviewed as example FML-based animation systems.
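
    The document below is an FML-style sketch reconstructed from the capabilities listed in the abstract: sequential and parallel actions, talking, expressions, head movement, and event-driven decisions. The element and attribute names are assumptions for illustration and are not quoted from the FML specification.

        <fml>
          <story>
            <seq>
              <talk>Hello, and welcome.</talk>
              <par>
                <!-- Smile while turning the head, in parallel -->
                <expression type="smile" value="0.8" duration="2000"/>
                <head-movement axis="yaw" value="15" duration="2000"/>
              </par>
              <!-- Event-based branching on the user's answer -->
              <decision event="user-answer">
                <option value="yes"><talk>Great, let us begin.</talk></option>
                <option value="no"><expression type="sad" value="0.5" duration="1500"/></option>
              </decision>
            </seq>
          </story>
        </fml>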

    Implementing adaptability in the standard reference model for intelligent multimedia presentation systems

    This paper discusses the implementation of adaptability in environments that are based on the Standard Reference Model (SRM) for Intelligent Multimedia Presentation Systems. This adaptability is explored in the context of style sheets, which are represented in formats such as DSSSL. The use of existing public standards and tools for this implementation of style-sheet-based adaptability is described. The Berlage environment, which integrates these standards and tools into a complete storage-to-presentation hypermedia environment, is presented, and the integration of the SRM into the Berlage environment is introduced. This integration illustrates the issues involved in implementing adaptability in the model.

    Adding state to declarative languages to enable web applications

    On the web, media tend to be encoded in declarative formats, which facilitate accessibility, reuse, and transformation. Web applications, on the other hand, are created with more procedural technology and do not enjoy these benefits. In this thesis we examine how this can be fixed. We examine a small part of the problem space, adaptive time-based applications, and investigate how we can extend existing declarative languages to fa…