7 research outputs found

    Declarative Integration of Interactive 3D Graphics into the World-Wide Web: Principles, Current Approaches, and Research Agenda

    With the advent of WebGL, plugin-free, hardware-accelerated interactive 3D graphics has finally arrived in all major Web browsers. WebGL is an imperative solution tied to the functionality of rasterization APIs; consequently, its use requires a deeper understanding of the rasterization pipeline. In contrast stands a declarative approach with an abstract description of the 3D scene. We strongly believe that such an approach is more suitable for the integration of 3D into HTML5 and related Web technologies, as those concepts are well known by millions of Web developers and are therefore crucial for the fast adoption of 3D on the Web. Hence, in this paper we explore options for new declarative ways of incorporating 3D graphics directly into HTML to enable its use on any Web page. We present declarative 3D principles that guide the work of the Declarative 3D for the Web Architecture W3C Community Group and describe the current state of the fundamentals of this initiative. Finally, we draw up an agenda for the next development stages of Declarative 3D for the Web.
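To make the contrast concrete, here is a minimal sketch (purely illustrative — the element tree and call names are assumptions, not X3DOM's or any W3C draft's actual API) of how a declarative scene description can be compiled down to the ordered, imperative calls a rasterization API expects:

```python
# Hypothetical sketch: a declarative scene description (an X3DOM-style
# element tree) compiled into the ordered, imperative commands a
# WebGL-like rasterization API would require. Names are illustrative.

def compile_scene(node, transform=(0.0, 0.0, 0.0), calls=None):
    """Walk the declarative tree and emit imperative draw commands."""
    if calls is None:
        calls = []
    # Accumulate translations down the tree, as a scene graph would.
    tx, ty, tz = transform
    dx, dy, dz = node.get("translation", (0.0, 0.0, 0.0))
    transform = (tx + dx, ty + dy, tz + dz)
    if node["tag"] == "shape":
        calls.append(("bind_geometry", node["geometry"]))
        calls.append(("set_uniform_translation", transform))
        calls.append(("draw",))
    for child in node.get("children", []):
        compile_scene(child, transform, calls)
    return calls

scene = {
    "tag": "transform", "translation": (1.0, 0.0, 0.0),
    "children": [
        {"tag": "shape", "geometry": "box"},
        {"tag": "transform", "translation": (0.0, 2.0, 0.0),
         "children": [{"tag": "shape", "geometry": "sphere"}]},
    ],
}

calls = compile_scene(scene)
```

The point of the sketch is that an author manipulates only the declarative tree, while the imperative pipeline details stay behind the compiler — the division of labour the abstract argues for.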

    Adaptivity of 3D web content in web-based virtual museums: a quality of service and quality of experience perspective

    The 3D Web emerged as an agglomeration of technologies that brought the third dimension to the World Wide Web. Its forms span from systems with limited 3D capabilities to complete and complex Web-based virtual worlds. The advent of the 3D Web gave museums great opportunities by providing an innovative medium to disseminate collections' information and associated interpretations in the form of digital artefacts and virtual reconstructions, leading to a revolutionary new approach to cultural heritage curation, preservation and dissemination that reaches a wider audience. This audience consumes 3D Web material on a myriad of devices (mobile devices, tablets and personal computers) and network regimes (WiFi, 4G, 3G, etc.). Choreographing and presenting 3D Web components across all these heterogeneous platforms and network regimes presents a significant challenge yet to be overcome: achieving a good user Quality of Experience (QoE) across all of them. Different levels of media fidelity may be appropriate, so servers hosting those media types need to adapt to the capabilities of a wide range of networks and devices. To achieve this, the research contributes the design and implementation of Hannibal, an adaptive QoS- and QoE-aware engine that allows Web-based virtual museums to deliver the best possible user experience across those platforms. To ensure effective adaptivity of 3D content, this research furthers the understanding of the 3D Web in terms of Quality of Service (QoS), through empirical investigations of how 3D Web components perform and where their bottlenecks lie, and in terms of QoE, by studying the subjective perception of fidelity of 3D digital heritage artefacts. The results of these experiments led to the design and implementation of Hannibal.
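A toy sketch of the kind of QoS/QoE trade-off such an adaptive engine must make — this is not Hannibal's actual algorithm; the level names, sizes and latency budgets are invented for illustration:

```python
# Illustrative sketch: choose the highest level of detail (LOD) of a 3D
# artefact whose estimated download time fits a per-device latency budget.
# All names, sizes and thresholds are assumptions, not Hannibal's values.

LODS = [  # (name, size in MB), highest fidelity first
    ("high", 40.0),
    ("medium", 12.0),
    ("low", 3.0),
]

BUDGET_SECONDS = {"desktop": 20.0, "tablet": 10.0, "mobile": 5.0}

def pick_lod(device: str, bandwidth_mbps: float) -> str:
    """Return the best fidelity level that downloads within budget."""
    budget = BUDGET_SECONDS[device]
    for name, size_mb in LODS:
        seconds = size_mb * 8.0 / bandwidth_mbps  # MB -> megabits
        if seconds <= budget:
            return name
    return LODS[-1][0]  # fall back to the lowest fidelity
```

A real engine would fold QoE measurements (perceived fidelity of heritage artefacts) into the choice rather than relying on transfer time alone, but the shape of the decision is the same.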

    Multi-View Web Interfaces in Augmented Reality

    The emergence of augmented reality (AR) is reshaping how people observe and interact with their physical world and digital content. Virtual instructions provided by see-through AR can greatly enhance the efficiency and accuracy of physical tasks, but the cost of content authoring in previous research calls for more use of legacy information in AR. Web information is a great source of a wide range of legacy and instructional resources, yet the current web browsing experience in AR headsets has not exploited the advantage of a 3D immersive space mixing real and virtual environments. Instead of creating new AR content or transforming legacy resources, this research investigates how to better present web interfaces in AR headsets, especially in the context of physical task instruction. A new approach, multi-view AR web interfaces, is proposed, which separates web components into multiple panels that can be freely arranged in the user's surrounding 3D space. The separation and arrangement allow a more flexible combination of web content from multiple sources and with other AR applications in the user's field of view. This thesis presents a remote, self-guided elicitation user study with 15 participants that derives layout arrangement preferences for the proposed multi-view interfaces. The study uses a VR system developed to simulate three scenarios of performing real-world tasks instructed by multi-view AR web content involving different types of media. The study analyzes how users arrange such web interfaces, and the system also simulates various physical environments and general AR applications to investigate their impact on the virtual interface arrangement. Based on participant survey responses and interface arrangement data, the study identifies patterns in interface layout, grouping relationships between interfaces, physical environment constraints, and relationships between web interfaces and general applications. Five implementation strategies are then suggested based on these design preference findings.
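The separation-and-arrangement idea can be sketched as a simple slot-assignment policy. Everything here is hypothetical: the slot names and preference table merely echo the kind of layout patterns such a study might report, not its actual findings:

```python
# Hypothetical sketch: split a page's components into separate panels and
# assign each a slot in the user's surrounding 3D space. Slot names and
# preferences are invented for illustration.

PREFERRED_SLOT = {
    "instructions": "front-centre",
    "video": "front-left",
    "diagram": "front-right",
    "navigation": "periphery",
}

def arrange(components):
    """Map each web component to a 3D slot; overflow goes to the periphery."""
    layout, used = {}, set()
    for comp in components:
        slot = PREFERRED_SLOT.get(comp, "periphery")
        if slot in used and slot != "periphery":
            slot = "periphery"  # at most one component per primary slot
        layout[comp] = slot
        used.add(slot)
    return layout
```

A real implementation would also weigh the physical environment constraints and co-present AR applications that the study identifies, rather than using a fixed preference table.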

    Architectures for ubiquitous 3D on heterogeneous computing platforms

    Today, a wide scope for 3D graphics applications exists, including domains such as scientific visualization, 3D-enabled web pages, and entertainment. At the same time, the devices and platforms that run and display the applications are more heterogeneous than ever. Display environments range from mobile devices to desktop systems and ultimately to distributed displays that facilitate collaborative interaction. While the capability of the client devices may vary considerably, the visualization experiences running on them should be consistent. The field of application should dictate how and on what devices users access the application, not the technical requirements to realize the 3D output. The goal of this thesis is to examine the diverse challenges involved in providing consistent and scalable visualization experiences to heterogeneous computing platforms and display setups. While we could not address the myriad of possible use cases, we developed a comprehensive set of rendering architectures in the major domains of scientific and medical visualization, web-based 3D applications, and movie virtual production. To provide the required service quality, performance, and scalability for different client devices and displays, our architectures focus on the efficient utilization and combination of the available client, server, and network resources. We present innovative solutions that incorporate methods for hybrid and distributed rendering as well as means to manage data sets and stream rendering results. We establish the browser as a promising platform for accessible and portable visualization services. We collaborated with experts from the medical field and the movie industry to evaluate the usability of our technology in real-world scenarios. The presented architectures achieve a wide coverage of display and rendering setups and at the same time share major components and concepts. 
Thus, they build a strong foundation for a unified system that supports a variety of use cases.
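One recurring decision in such architectures — where to render — can be caricatured in a few lines. The thresholds and the three-way client/server/hybrid split are assumptions for illustration, not the thesis's actual policy:

```python
# Illustrative sketch: decide per client whether to render locally,
# remotely on the server, or in a hybrid split, based on reported GPU
# capability and network latency. All thresholds are assumptions.

def choose_renderer(gpu_score: float, latency_ms: float,
                    scene_triangles: int) -> str:
    """Pick a rendering strategy for one client and one scene."""
    if gpu_score * 1_000_000 >= scene_triangles:
        return "client"   # the device can rasterize the full scene itself
    if latency_ms <= 50:
        return "server"   # stream rendered frames; latency is tolerable
    return "hybrid"       # server renders heavy parts, client composites
```

The appeal of a hybrid path is precisely what the abstract argues: the application domain, not the client hardware, should decide where users access the visualization.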

    Across Space and Time. Papers from the 41st Conference on Computer Applications and Quantitative Methods in Archaeology, Perth, 25-28 March 2013

    This volume presents a selection of the best papers presented at the forty-first annual Conference on Computer Applications and Quantitative Methods in Archaeology. The theme of the conference was "Across Space and Time", and the papers explore a multitude of topics related to that concept, including databases, the semantic Web, geographical information systems, data collection and management, and more.

    Dual reality framework: enabling technologies for monitoring and controlling cyber-physical environments

    Within the scope of this dissertation, the issues of monitoring and controlling Cyber-Physical Environments (CPE) are investigated. In this context, the concept and implementation of a Dual Reality (DR) framework are presented, consisting of two components: the Dual Reality Management Dashboard (DURMAD) for the interactive three-dimensional visualization of instrumented environments, and the Event Broadcasting Service (EBS), a modular communication infrastructure. DURMAD is based on the DR concept and thus visually represents the current status of the environment in a 3D model. It also includes further analysis and presentation tools that provide various forms of decision support for managers of these environments. Specially developed filter mechanisms for the EBS allow preprocessing of information before events are sent or after they are received. By means of open structures, external applications can be connected to the DR framework; this is demonstrated with digital object memories, semantic descriptions and process models. Based on a formalization of Dual Reality, the term Advanced Dual Reality (DR++) is defined, which also covers the impact of simulations in DR applications. By integrating the DR framework into the Innovative Retail Laboratory, the potential of the developed concepts is demonstrated through an exemplary implementation in the retail domain.
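The EBS-style filter hooks described above can be sketched as a small event bus. The API shape is an assumption, not the framework's actual interface: filters may rewrite an event or drop it by returning None, both before sending and after receiving:

```python
# Hypothetical sketch of an event bus with EBS-style filter hooks:
# send filters preprocess events before broadcast, receive filters
# preprocess them per subscriber. Returning None drops the event.

class EventBus:
    def __init__(self):
        self.subscribers = []      # list of (callback, receive_filters)
        self.send_filters = []

    def subscribe(self, callback, receive_filters=()):
        self.subscribers.append((callback, list(receive_filters)))

    def publish(self, event: dict):
        for f in self.send_filters:        # preprocess before sending
            event = f(event)
            if event is None:
                return                      # a filter dropped the event
        for callback, filters in self.subscribers:
            delivered = event
            for f in filters:              # preprocess after receiving
                delivered = f(delivered)
                if delivered is None:
                    break
            if delivered is not None:
                callback(delivered)

bus = EventBus()
bus.send_filters.append(lambda e: e if e.get("type") else None)
received = []
bus.subscribe(received.append, receive_filters=[
    lambda e: e if e["type"] == "temperature" else None,
])
bus.publish({"type": "temperature", "value": 21.5})
bus.publish({"type": "door", "value": "open"})  # filtered out on receive
```

In a monitoring dashboard such as DURMAD, per-subscriber receive filters keep each 3D view subscribed only to the sensor events it actually renders.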

    Across Space and Time. Papers from the 41st Conference on Computer Applications and Quantitative Methods in Archaeology, Perth, 25-28 March 2013

    The present volume includes 50 selected peer-reviewed papers presented at the 41st Computer Applications and Quantitative Methods in Archaeology "Across Space and Time" (CAA2013) conference, held in Perth (Western Australia) in March 2013 at the University Club of Western Australia and hosted by the recently established CAA Australia National Chapter. It also includes a paper presented at the 40th Computer Applications and Quantitative Methods in Archaeology (CAA2012) conference held in Southampton.