
    Declarative Integration of Interactive 3D Graphics into the World-Wide Web: Principles, Current Approaches, and Research Agenda

    With the advent of WebGL, plugin-free hardware-accelerated interactive 3D graphics has finally arrived in all major Web browsers. WebGL is an imperative solution that is tied to the functionality of rasterization APIs; consequently, its use requires a deeper understanding of the rasterization pipeline. In contrast stands a declarative approach with an abstract description of the 3D scene. We strongly believe that such an approach is more suitable for the integration of 3D into HTML5 and related Web technologies, as those concepts are well known by millions of Web developers and therefore crucial for the fast adoption of 3D on the Web. Hence, in this paper we explore options for new declarative ways of incorporating 3D graphics directly into HTML to enable its use on any Web page. We present the declarative 3D principles that guide the work of the Declarative 3D for the Web Architecture W3C Community Group and describe the current state of the fundamentals of this initiative. Finally, we draw up an agenda for the next development stages of Declarative 3D for the Web.
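
    To make the contrast with imperative WebGL concrete, the sketch below shows how a 3D scene can be described declaratively as part of an HTML page, in the style of X3DOM, one of the approaches discussed by the Declarative 3D community; exact element and attribute names vary between proposals, so this is illustrative rather than normative.

        <!-- Minimal declarative 3D sketch in the X3DOM style: the scene graph is
             part of the HTML document tree, so the page author writes no
             rasterization-pipeline code. -->
        <html>
          <body>
            <x3d width="400px" height="300px">
              <scene>
                <shape>
                  <appearance>
                    <material diffuseColor="0.6 0.2 0.2"></material>
                  </appearance>
                  <box></box>
                </shape>
              </scene>
            </x3d>
            <!-- Scene nodes are ordinary DOM elements, so CSS, DOM events and
                 scripts can address them like any other part of the page. -->
          </body>
        </html>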

    Analysis of Visualisation and Interaction Tools

    This document provides an in-depth analysis of the visualization and interaction tools employed in the context of Virtual Museums. This analysis is required to identify and design the tools and the different components that will be part of the Common Implementation Framework (CIF). The CIF will be the base of the web-based services and tools to support the development of Virtual Museums, with particular attention to online Virtual Museums. The main goal is to provide stakeholders and developers with a useful platform to support and help them in the development of their projects, regardless of the nature of the project itself. The design of the CIF is based on an analysis of the typical workflow of the V-MUST partners and their perceived limitations of current technologies. This document is also based on the results of the V-MUST technical questionnaire (presented in Deliverable 4.1). Based on these two sources of information, we have selected some important tools (mainly visualization tools) and services, and we elaborate initial guidelines and ideas for the design and development of the CIF, which shall provide a technological foundation for the V-MUST Platform, together with the V-MUST repository/repositories and the additional services defined in WP4. Two state-of-the-art reports, one about user interface design and another about visualization technologies, are also provided in this document.

    Web-based multimodal graphs for visually impaired people

    This paper describes the development and evaluation of Web-based multimodal graphs designed for visually impaired and blind people. The information in the graphs is conveyed through haptic and audio channels. The motivation for this work is to address problems faced by visually impaired people in accessing graphical information on the Internet, particularly the common types of graphs for data visualization. In our work, line graphs, bar charts and pie charts are accessible through a force feedback device, the Logitech WingMan Force Feedback Mouse. Pre-recorded sound files are used to present graph contents to users. In order to test the usability of the developed Web graphs, an evaluation was conducted with bar charts as the experimental platform. The results showed that the participants could successfully use the haptic and audio features to extract information from the Web graphs.
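
    As a rough illustration of the audio channel only (the prototype couples a browser-based graph with the force-feedback mouse, which markup alone cannot capture), a Web graph could expose its pre-recorded descriptions roughly as follows; the file names and structure are hypothetical and not the paper's implementation.

        <!-- Hypothetical sketch: each bar of a chart is paired with a
             pre-recorded audio description, making its content reachable
             through the audio channel. -->
        <figure>
          <figcaption>Monthly rainfall (bar chart)</figcaption>
          <ul>
            <li>January: 40 mm <audio controls src="audio/january.wav"></audio></li>
            <li>February: 55 mm <audio controls src="audio/february.wav"></audio></li>
            <li>March: 30 mm <audio controls src="audio/march.wav"></audio></li>
          </ul>
        </figure>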

    Can GUI implementation markup languages be used for modelling?

    The current diversity of available devices and form factors increases the need for model-based techniques to support adapting applications from one device to another. Most work on user interface modelling is built around declarative markup languages. Markup languages play a relevant role not only in the modelling of user interfaces but also in their implementation. However, the languages used by each community (modellers/developers) have, to a great extent, evolved separately. This means that the step from concrete model to final interface becomes needlessly complicated, requiring either compilers or interpreters to bridge this gap. In this paper we compare a modelling language (UsiXML) with several markup implementation languages and analyse whether it is feasible to use the implementation languages as modelling languages.
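
    The gap the paper examines can be pictured with a small fragment: an abstract model states what the interface contains, while an implementation markup language fixes concrete widgets and behaviour. The model fragment below is illustrative only and does not reproduce literal UsiXML syntax.

        <!-- Illustrative abstract-model fragment (hypothetical element names,
             not literal UsiXML): describes what the interface offers. -->
        <abstractUI>
          <inputComponent id="userName" label="Name"/>
          <triggerComponent id="submit" label="Send"/>
        </abstractUI>

        <!-- The same interface in an implementation markup language (HTML):
             concrete widgets, layout and behaviour are now fixed. -->
        <form action="/send" method="post">
          <label for="userName">Name</label>
          <input type="text" id="userName" name="userName"/>
          <button type="submit">Send</button>
        </form>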

    X3DOM as Carrier of the Virtual Heritage


    A graphics software architecture for high-end interactive TV terminals

    This thesis proposes a graphics architecture for next-generation digital television receivers. The starting assumption is that in the future, a number of multimedia terminals will have access, through a number of networks, to a variety of content and services. One example of such a device is a media station capable of integrating different kinds of multimedia objects, such as 2D/3D graphics and video, reacting to user interaction, and supporting the temporal dimension of applications. Some of the services intended for these devices include, for example, games and enhanced information over broadcast video. First, this thesis provides an overview of the digital television environment, focusing on the limitations of current receivers and hinting at future directions. In addition, it compares different solutions from regional standardisation bodies such as DVB, CableLabs, and ARIB, and proposes the adoption of the most relevant initiative, GEM by DVB. Unfortunately, the GEM software middleware only considers the Java language as an authoring format, meaning that the declarative environment and advanced functionalities (e.g., 3D graphics support) remain to be standardised. Because in the future different user groups will have different demands with regard to television, this thesis identifies two major extensions to the GEM standard. First, it proposes a declarative environment for GEM that takes into account W3C standardisation efforts. This environment is divided into two configurations: one capable of rendering limited interactive applications such as information services, and another intended for more demanding applications, for example a distance-learning portal that synchronises videos of lecturers and slides. Second, this thesis proposes to extend the procedural environment of GEM with 3D graphics support. The potential services of this new profile, High-End Interactive, include games and commercials. Then, based on the requirements the proposed profiles should meet, this thesis defines a graphics architecture model composed of five layers. The hardware abstraction layer is in charge of rendering the final graphics output. The graphical context is a cross-platform abstraction of the rendering region and provides graphics primitives (e.g., rectangles and images). The graphical environment provides the means to control different graphical contexts. The GUI toolkit is a set of ready-made user interface widgets and layout schemes. Finally, high-level languages are easy-to-use tools for developing simple services. The thesis concludes with a report of my experience implementing a digital television receiver based on the proposals described. In addition to testing the application of the proposed graphics architecture to the design and implementation of a next-generation digital television receiver, the implementation permits the analysis of the requirements of such receivers and of the services they can provide.
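
    The kind of document the proposed declarative environment would handle in its more demanding configuration (a lecture video synchronised with slides) can be sketched in SMIL-style markup; the region names, timings and file names below are hypothetical.

        <!-- Hypothetical SMIL-style document for the distance-learning example:
             a lecturer video plays in one region while slides advance in another. -->
        <smil>
          <head>
            <layout>
              <root-layout width="720" height="576"/>
              <region id="video" left="0" top="0" width="360" height="288"/>
              <region id="slides" left="360" top="0" width="360" height="288"/>
            </layout>
          </head>
          <body>
            <par>
              <video src="lecture.mpg" region="video"/>
              <seq>
                <img src="slide1.png" region="slides" dur="60s"/>
                <img src="slide2.png" region="slides" dur="90s"/>
              </seq>
            </par>
          </body>
        </smil>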

    FlexiXML: a portable user interface rendering engine for UsiXML

    A considerable amount of effort in software development is dedicated to the user interaction layer. Given the complexity inherent in the development of this layer, it is important to be able to analyse the concepts and ideas being used in the development of a given user interface, and this analysis should be performed as early as possible. Model-based user interface development provides a solution to this problem by giving developers tools that enable both modelling of, and reasoning about, user interfaces at different levels of abstraction. Of particular interest here is the possibility of animating the models to generate actual user interfaces. This paper describes FlexiXML, a tool that performs the rendering and animation of user interfaces described in the UsiXML modelling language.

    Extensions to the SMIL multimedia language

    The goal of this work has been to extend the Synchronized Multimedia Integration Language (SMIL) in order to study the capabilities and possibilities of declarative multimedia languages for the World Wide Web. The work has involved the design and implementation of several extensions to SMIL. A novel approach to including 3D audio in SMIL was designed and implemented; it extends the SMIL 2D spatial model with an extra dimension to support a 3D space, in which new audio elements and a listening point are positioned. The extension was designed to be modular, so that it can be used in conjunction with other XML languages, such as XHTML and Scalable Vector Graphics (SVG). Web forms are one of the key features of the Web, as they offer a way to send user data to a server; a similar feature is therefore desirable in SMIL, which currently lacks forms. The XForms language, thanks to its modular approach, was used to add this feature to SMIL, and an evaluation of this integration was carried out as part of this work. Furthermore, the SMIL player was designed to play out dynamic SMIL documents, which can be modified at run time with the result immediately reflected in the presentation; dynamic SMIL enables the execution of scripts to modify the presentation, and XML Events and ECMAScript were chosen to provide the scripting functionality. In addition, generic methods to extend SMIL were studied based on the previous extensions, including ways to attach new input and output capabilities to SMIL. To experiment with the extensions, a SMIL player was developed. The current version can play out SMIL 2.0 Basic profile documents with a few additional SMIL modules, such as the event timing, basic animation, and brush media modules, and it includes all of the above-mentioned extensions. The SMIL player has been designed to work within an XML browser called X-Smiles, which is intended for various embedded devices, such as mobile phones, Personal Digital Assistants (PDAs), and digital television set-top boxes. Currently, the browser supports XHTML, SMIL, and XForms, which have been developed by the research group, as well as other XML languages developed by third-party open-source projects. The SMIL player can also be run as a standalone player without the browser; the standalone player is portable and has been run on a desktop PC, a PDA, and a digital television set-top box. The core of the SMIL player is platform-independent; only the media renderers require platform-dependent implementation.
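
    The flavour of the 3D-audio extension can be conveyed with a short sketch in which audio sources and a listening point are positioned in a space that adds a depth coordinate to SMIL's 2D layout; the element and attribute names (audio3D, listeningPoint, z) are assumptions for illustration, not the thesis's actual vocabulary.

        <!-- Hypothetical sketch of 3D audio in SMIL: sources and a listening
             point are placed in a 3D space extending the 2D spatial model. -->
        <smil>
          <head>
            <layout>
              <root-layout width="640" height="480"/>
              <!-- assumed extension: a listening point with a depth coordinate -->
              <listeningPoint left="320" top="240" z="0"/>
            </layout>
          </head>
          <body>
            <par>
              <audio3D src="birds.wav" left="100" top="100" z="-50"/>
              <audio3D src="stream.wav" left="500" top="300" z="80"/>
            </par>
          </body>
        </smil>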