9 research outputs found

    Towards a multimedia formatting vocabulary

    Time-based, media-centric Web presentations can be described declaratively in the XML world through the development of languages such as SMIL. It is difficult, however, to fully integrate them into a complete document transformation processing chain. In order to achieve the desired processing of data-driven, time-based, media-centric presentations, the text-flow-based formatting vocabularies used by style languages such as XSL, CSS, and DSSSL need to be extended. The paper presents a selection of use cases which are used to derive a list of requirements for a multimedia style and transformation formatting vocabulary. The boundaries of applicability of existing text-based formatting models for media-centric transformations are analyzed. The paper then discusses the advantages and disadvantages of a fully-fledged time-based multimedia formatting model. Finally, the discussion is illustrated by describing the key properties of the example multimedia formatting vocabulary currently implemented in the back-end of our Cuypers multimedia transformation engine.

    Toward Specifying Multimedia Requirements Using a New Time Petri Net Model

    In this paper, we define a model dedicated to the specification of multimedia applications, called Pre-emptive Time Petri Nets with synchronizing transitions (STPTPN), as an extension of T-time Petri nets where time is associated with transitions. The model is proposed with the general purpose of covering a wide range of multimedia requirements. Resource requirement issues are thus discussed in this paper and addressed in the model: resources are modelled as special places using a new mechanism called the “pre-emptor hyperarc”, which lets a transition be “resource strongly-enabled”, “resource-violated”, or “resource-violating”. Moreover, two additional mechanisms are considered: a time suspension mechanism uses inhibitor arcs associated with stopwatches, and synchronization mechanisms allow the simultaneous firing of a set of transitions (called a Rendezvous) according to different schemes. Compared to other existing models, our model is provided with an adapted semantics, designed to represent clearly and accurately time requirements, as well as the complex resource pre-emption mechanisms that are observed in multimedia systems.
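    The resource-enabling notions mentioned in the abstract can be loosely illustrated with a toy sketch. The classes, names, and enabling rules below are hypothetical illustrations, not the paper's formal STPTPN semantics:

```python
# Illustrative sketch only: a toy T-time Petri net step, loosely inspired by the
# abstract's notions of resource places and resource-strong enabling.

class Transition:
    def __init__(self, name, inputs, resources, interval):
        self.name = name
        self.inputs = inputs        # ordinary input places (tokens consumed on firing)
        self.resources = resources  # resource places (must hold a token to fire)
        self.interval = interval    # (earliest, latest) firing time

def enabled(marking, t):
    """A transition is enabled when all ordinary input places hold a token."""
    return all(marking.get(p, 0) > 0 for p in t.inputs)

def resource_strongly_enabled(marking, t):
    """Enabled AND every required resource place currently holds a token."""
    return enabled(marking, t) and all(marking.get(r, 0) > 0 for r in t.resources)

marking = {"ready": 1, "decoder": 1}
play = Transition("play", inputs=["ready"], resources=["decoder"], interval=(0, 5))

assert resource_strongly_enabled(marking, play)
marking["decoder"] = 0  # resource pre-empted by another stream
assert enabled(marking, play) and not resource_strongly_enabled(marking, play)
```

    In the paper's terms, the last state would roughly correspond to a transition that is enabled but deprived of its resource; the toy check above only captures the token-availability part of that idea.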

    Extensions to the SMIL multimedia language

    The goal of this work has been to extend the Synchronized Multimedia Integration Language (SMIL) to study the capabilities and possibilities of declarative multimedia languages for the World Wide Web (Web). The work has involved the design and implementation of several extensions to SMIL. A novel approach to include 3D audio in SMIL was designed and implemented. This involved extending the SMIL 2D spatial model with an extra dimension to support a 3D space. New audio elements and a listening point were positioned in the 3D space. The extension was designed to be modular so that it could be used in conjunction with other XML languages, such as XHTML and the Scalable Vector Graphics (SVG) language. Web forms are one of the key features of the Web, as they offer a way to send user data to a server. A similar feature is therefore desirable in SMIL, which currently lacks forms. The XForms language, due to its modular approach, was used to add this feature to SMIL. An evaluation of this integration was carried out as part of this work. Furthermore, the SMIL player was designed to play out dynamic SMIL documents, which can be modified at run-time, with the result immediately reflected in the presentation. Dynamic SMIL enables the execution of scripts to modify the presentation. XML Events and ECMAScript were chosen to provide the scripting functionality. In addition, generic methods to extend SMIL were studied based on the previous extensions. These methods include ways to attach new input and output capabilities to SMIL. To experiment with the extensions, a SMIL player was developed. The current version can play out SMIL 2.0 Basic profile documents with a few additional SMIL modules, such as the event timing, basic animation, and brush media modules. The player includes all of the above-mentioned extensions. The SMIL player has been designed to work within an XML browser called X-Smiles.
    X-Smiles is intended for various embedded devices, such as mobile phones, Personal Digital Assistants (PDA), and digital television set-top boxes. Currently, the browser supports XHTML, SMIL, and XForms, which have been developed by the research group. The browser also supports other XML languages developed by third-party open-source projects. The SMIL player can also be run as a standalone player without the browser. The standalone player is portable and has been run on a desktop PC, a PDA, and a digital television set-top box. The core of the SMIL player is platform-independent; only the media renderers require a platform-dependent implementation.
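    The 3D audio extension described above positions audio sources and a listening point in 3D space, so each source's volume can be derived from its distance to the listener. A minimal sketch of that idea follows; the inverse-distance attenuation used here is an assumption for illustration, not the thesis's actual rendering model:

```python
# Sketch: distance-based gain for audio sources positioned in 3D space,
# relative to a movable listening point. The attenuation formula is illustrative.
import math

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def gain(source_pos, listener_pos, ref_dist=1.0):
    """Inverse-distance attenuation, clamped to 1.0 inside the reference distance."""
    d = distance(source_pos, listener_pos)
    return 1.0 if d <= ref_dist else ref_dist / d

listener = (0.0, 0.0, 0.0)
near = (1.0, 0.0, 0.0)
far = (4.0, 0.0, 3.0)        # distance 5 from the listener
print(gain(near, listener))  # 1.0
print(gain(far, listener))   # 0.2
```

    Moving the listening point at run-time would simply re-evaluate the gain of each source, which matches the declarative spirit of positioning elements rather than scripting volume changes.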

    A component framework for personalized multimedia applications

    Neither industrial solutions nor research approaches have so far offered practicable support for the dynamic creation of personalized multimedia presentations. The software-engineering approach of the MM4U framework ("MultiMedia For You") provides, for the first time, generic and at the same time practicable support for this dynamic creation process. The goal of the MM4U framework is to offer application developers comprehensive, application-independent support for creating personalized multimedia content and thereby to considerably ease the development of such applications. To reach the goal of a software framework that generically supports the development of personalized multimedia applications, the question arises of suitable software-engineering support for developing such a framework. Even since the introduction of object-oriented frameworks, their development has remained laborious and difficult. To reduce the development risks, suitable process models and development methods have been devised. With component technology, so-called component frameworks have emerged as well. In contrast to object-oriented frameworks, however, a suitable process model for component frameworks is currently missing. To improve the development process of component frameworks, a novel approach called ProMoCF ("Process Model for Component Frameworks") has been developed: a lightweight process model and development methodology for component frameworks. The process model was created in mutual benefit with the development of the MM4U framework. The MM4U framework is not a reinvention of multimedia content adaptation; rather, it aims at unifying and embedding existing research approaches and solutions in the field of multimedia personalization.
    With such a framework at hand, application developers can for the first time realize the dynamic creation of their personalized multimedia content efficiently and easily.

    Annotierte interaktive nichtlineare Videos - Software Suite, Download- und Cache-Management

    Modern Web technology makes the dream of fully interactive and enriched video come true. Nowadays it is possible to organize videos in a non-linear way, playing in a sequence unknown in advance. Furthermore, additional information can be added to the video, ranging from short descriptions to animated images and further videos. This calls for an easy-to-use and efficient authoring tool which is capable of managing the single media objects as well as clearly arranging the links between the parts. Tools of this kind are rare and mostly do not provide the full range of needed functions. While providing an interactive experience to the viewer in the Web player, parallel plot sequences and additional information lead to an increased download volume. This may cause pauses during playback while elements that are displayed with the video still have to be downloaded. A good quality of experience for these videos, with short waiting times and playback without interruptions, is desired. This work presents the SIVA Suite for creating the previously described annotated interactive non-linear videos. We propose a video model for interactivity, non-linearity, and annotations, which is implemented in an XML format, an authoring tool, and a player. Video is the main medium, whereby different scenes are linked into a scene graph. Time-controlled additional content called annotations, like text, images, audio files, or videos, is added to the scenes. The user is able to navigate in the scene graph by selecting a button on a button panel. Furthermore, other navigational elements like a table of contents or a keyword search are provided. Besides the SIVA Suite, this thesis presents algorithms and strategies for download and cache management to provide a good quality of experience while watching the annotated interactive non-linear videos. To this end, we implemented a standard-independent player framework.
    Integrated into a simulation environment, the framework allows evaluating algorithms and strategies for the calculation of start-up times and for the selection of elements to pre-fetch into and delete from the cache. Their interaction during the playback of non-linear video contents can be analyzed. The algorithms and strategies can be used to minimize interruptions in the video flow after user interactions. Our extensive evaluation showed that our techniques result in faster start-up times and fewer interruptions in the video flow than those of other players. Knowledge of the structure of an interactive non-linear video can be used to minimize the start-up time at the beginning of a video while minimizing the increase in the overall download volume.
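    The pre-fetch problem described above, deciding which successor scenes of a non-linear video to download ahead of time, can be sketched with a simple greedy heuristic. The function name, the scene data, and the probability-per-byte ranking are illustrative assumptions, not the SIVA Suite's actual algorithm:

```python
# Sketch of one pre-fetch strategy for non-linear video: among the possible
# successor scenes of the current scene, greedily pre-fetch those with the
# best (selection probability / size) ratio until the cache budget is spent.

def choose_prefetch(successors, cache_budget):
    """successors: list of (scene_id, size_bytes, probability); returns scene ids."""
    ranked = sorted(successors, key=lambda s: s[2] / s[1], reverse=True)
    chosen, used = [], 0
    for scene_id, size, _prob in ranked:
        if used + size <= cache_budget:
            chosen.append(scene_id)
            used += size
    return chosen

succ = [("intro", 40, 0.1), ("fight", 30, 0.6), ("talk", 20, 0.3)]
print(choose_prefetch(succ, 55))  # ['fight', 'talk']
```

    A real player would additionally react to user interactions, evicting pre-fetched scenes that became unreachable in the scene graph, which is the cache-management side the thesis evaluates.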

    Synchronization modeling and its application for SMIL2.0 presentations

    A novel synchronization model, namely the Extended Real-Time Synchronization Model (E-RTSM), for modeling SMIL2.0 temporal behaviors is proposed in this paper. E-RTSM deals with event-based/non-deterministic synchronization as well as schedule-based synchronization in SMIL2.0. The conversion of the temporal relationships of a SMIL2.0 document to E-RTSM is presented. Moreover, the design of an E-RTSM-based data-retrieving engine for SMIL2.0 presentations is also proposed. The data-retrieving engine estimates the worst-case playback time of each object at the parsing stage and applies an error compensation mechanism at run-time to adjust the estimated playback time as well as the schedule of the fetching requests for data retrieval. Performance measurements from a real implementation of the E-RTSM-based data-retrieving engine for SMIL2.0 presentations have demonstrated the efficiency of the proposed technique. (C) 2006 Elsevier Inc. All rights reserved.
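    The two-stage idea in the abstract, a pessimistic parse-time estimate followed by run-time error compensation, can be sketched as follows; the formulas and names are illustrative assumptions, not E-RTSM itself:

```python
# Sketch: estimate a worst-case fetch time per media object at parse time from
# its size and a pessimistic bandwidth; at run time, rescale the remaining
# estimates by the observed prediction error.

def parse_time_estimate(size_bytes, worst_case_bw):
    """Worst-case download time assuming the most pessimistic bandwidth."""
    return size_bytes / worst_case_bw

def compensate(estimates, predicted, measured):
    """Scale the remaining estimates by the observed error factor."""
    factor = measured / predicted
    return [e * factor for e in estimates]

objs = [1_000_000, 500_000]                            # object sizes in bytes
est = [parse_time_estimate(s, 250_000) for s in objs]  # [4.0, 2.0] seconds
# the first object actually finished in 3.0 s instead of the predicted 4.0 s:
print(compensate(est[1:], predicted=4.0, measured=3.0))  # [1.5]
```

    Rescheduling the fetching requests would then use these corrected estimates to decide when each remaining object must start downloading to be ready at its playback time.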
