35 research outputs found

    Analysis of Visualisation and Interaction Tools

    This document provides an in-depth analysis of the visualization and interaction tools employed in the context of Virtual Museums. This analysis is required to identify and design the tools and the different components that will be part of the Common Implementation Framework (CIF). The CIF will be the basis of the web-based services and tools that support the development of Virtual Museums, with particular attention to online Virtual Museums. The main goal is to provide stakeholders and developers with a useful platform that supports and helps them in the development of their projects, whatever the nature of the project itself. The design of the CIF is based on an analysis of the typical workflow of the V-MUST partners and their perceived limitations of current technologies. The document also draws on the results of the V-MUST technical questionnaire (presented in Deliverable 4.1). Based on these two sources of information, we have selected some important tools (mainly visualization tools) and services, and we elaborate initial guidelines and ideas for the design and development of the CIF, which shall provide a technological foundation for the V-MUST Platform, together with the V-MUST repository/repositories and the additional services defined in WP4. Two state-of-the-art reports, one on user interface design and one on visualization technologies, are also provided in this document.

    VICE: an interface designed for complex engineering software: an application of virtual reality

    Concurrent engineering has been taking place within the manufacturing industry for many years, whereas the construction industry has until recently continued using the 'over the wall' approach, where each task is completed before the next begins. For real concurrent engineering in construction to take place, there needs to be true collaborative working between client representatives, construction professionals, suppliers and subcontractors. The aim of this study was to design, develop and test a new style of user interface which promotes a more intuitive form of interaction than the standard desktop-metaphor-based interface. This new interface has been designed as an alternative to the default interface of the INTEGRA system and must also promote enhanced user collaboration. By choosing alternative metaphors that are more obvious to the user, it is postulated that such an interface can be developed. Specific objectives were set that would allow the project aim to be fulfilled: to gain a better understanding of the requirements of successful concurrent engineering, particularly at the conceptual design phase; to complete a thorough review of current interfaces, including any guidelines on how to create a "good user interface"; to experience many of the collaboration systems available today so that an informed choice of application could be made; to learn the relevant skills required to design, produce and implement the interface of choice; and to perform a user evaluation of the finished user interface to improve overall usability and further streamline concurrent conceptual design. The user interface developed used a virtual reality environment to create the metaphor of an office building. Project members could then coexist and interact within the building, promoting collaboration, while at the same time having access to the remaining INTEGRA tools. The user evaluation showed that the Virtual Integrated Collaborative Environment (VICE) user interface was a successful addition to the INTEGRA system. The system was evaluated by a substantial number of different users, which supports this finding. The user evaluation also provided positive results from two different demographics, concluding that the system was easy and intuitive to use, with the necessary functionality. Using metaphor-based user interfaces is not a new concept; it has become standard practice for most software developers. There are arguments for and against these types of user interfaces. Some advanced users will argue that having such an interface limits their ability to make full use of the applications. However, the majority of users do not fall within this bracket, and for them metaphor-based user interfaces are very useful. This is again evident from the user evaluation. (EThOS - Electronic Theses Online Service, United Kingdom)

    Interactive mixed reality rendering in a distributed ray tracing framework

    The recent availability of interactive ray tracing opened the way for new applications and for improving existing ones in terms of quality. Since today's CPUs are still too slow for this purpose, the necessary computing power is obtained by connecting a number of machines and using distributed algorithms. Mixed reality rendering - the task of convincingly combining real and virtual parts into a new composite scene - needs a powerful rendering method to obtain photorealistic results. The ray tracing algorithm provides an excellent basis for photorealistic rendering as well as advantages over other methods, so it is worth exploring its abilities for interactive mixed reality rendering. This thesis shows the applicability of interactive ray tracing for mixed reality (MR) and augmented reality (AR) applications on the basis of the OpenRT framework. Two extensions to the OpenRT system are introduced and serve as basic building blocks: streaming video textures and in-shader AR view compositing. Streaming video textures allow the real world to be included in interactive applications in the form of imagery. The AR view compositing mechanism is needed to fully exploit the advantages of modular shading in a ray tracer. A number of example applications from the entire spectrum of the Milgram Reality-Virtuality continuum illustrate the practical implications. An implementation of a classic AR scenario, inserting a virtual object into live video, shows how a differential rendering method can be used in combination with a custom-built real-time light probe device that captures the incident light and includes it in the rendering process to achieve convincing shading and shadows. Another field of mixed reality rendering is the insertion of real actors into a virtual scene in real time. Two methods - video billboards and a live 3D visual hull reconstruction - are discussed. The implementation of live mixed reality systems relies on a number of technologies besides rendering, and a comprehensive understanding of the related methods and hardware is necessary. Large parts of this thesis therefore deal with the discussion of technical implementations and design alternatives. A final summary discusses the benefits and drawbacks of interactive ray tracing for mixed reality rendering.
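
    The classic AR scenario above relies on differential rendering: the live camera frame is combined with two synthetic renderings of the modelled local scene, one with and one without the virtual objects, so that shadows and other lighting changes caused by the virtual objects are transferred onto the photograph. The following minimal sketch illustrates that idea in Python/NumPy under stated assumptions (array shapes, the per-pixel mask and the function name are hypothetical); it is not the in-shader OpenRT compositing described in the thesis.

        # Minimal sketch of differential rendering for AR compositing (hypothetical
        # NumPy illustration, not the thesis's in-shader OpenRT implementation).
        # Assumptions: all inputs are float arrays in [0, 1] with the same shape,
        # and `mask` is 1.0 wherever a virtual object directly covers the pixel.
        import numpy as np

        def differential_composite(camera, local_with_virtual, local_without_virtual, mask):
            # Add the change in radiance caused by the virtual objects to the photo;
            # this carries shadows and colour bleeding onto the real scene.
            composite = camera + (local_with_virtual - local_without_virtual)
            # Where a virtual object itself is visible, take the rendering directly.
            composite = mask * local_with_virtual + (1.0 - mask) * composite
            return np.clip(composite, 0.0, 1.0)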

    Three-dimensional interactive maps: theory and practice


    Design and implementation of distributed interactive virtual environment.

    Chan Ming-fei. Thesis (M.Phil.), Chinese University of Hong Kong, 1999. Includes bibliographical references (leaves 63-66). Contents:
    Abstract; Acknowledgments
    Chapter 1 Introduction: 1.1 Challenging Issues; 1.2 Previous Work; 1.3 Organization of the Thesis
    Chapter 2 Distributed Virtual Environment: 2.1 Possible Architectures; 2.2 Representations of Clients as Avatars; 2.3 Dynamic Membership
    Chapter 3 Bandwidth and Computation Reduction Techniques: 3.1 Network Communication; 3.2 Dead Reckoning; 3.3 Message Aggregation (3.3.1 Network-Based Aggregation, 3.3.2 Organization-Based Aggregations, 3.3.3 Grid-Based Aggregations); 3.4 Relevance Filtering (3.4.1 Entity-Based Filtering, 3.4.2 Grid-Based Filtering); 3.5 Quiescent Entities; 3.6 Spatial Partitioning (3.6.1 Necessity of Spatial Partitioning, 3.6.2 Binary Space Partitioning Tree, 3.6.3 BSP Tree Construction)
    Chapter 4 Partitioning Algorithm: 4.1 Problem Formulation; 4.2 Exhaustive Partition (EP) Algorithm; 4.3 Partitioning Algorithm (4.3.1 Recursive Bisection Partition (RBP) Algorithm, 4.3.2 Layering Partitioning (LP) Algorithm, 4.3.3 Communication Refinement Partitioning (CRP) Algorithm); 4.4 Parallel Approach; 4.5 Further Observation
    Chapter 5 Experiments: 5.1 Experiment 1: Small Virtual World; 5.2 Experiment 2: Large Virtual World; 5.3 Experiment 3: Moving of Avatars; 5.4 Experiment 4: Dynamic Joining and Leaving; 5.5 Experiment 5: Parallel Approach
    Chapter 6 Implementation Considerations: 6.1 Different Environments; 6.2 Platform; 6.3 Lessons Learned
    Chapter 7 Conclusion
    Appendix A Simplex Method
    Bibliography
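
    Among the bandwidth reduction techniques listed in Chapter 3 above, dead reckoning is the most self-contained; the sketch below illustrates the general idea under stated assumptions (constant-velocity extrapolation, 2D positions, and a hypothetical error threshold), not the thesis's own formulation.

        # Minimal sketch of dead reckoning for a distributed virtual environment
        # (generic illustration; names and the threshold value are assumptions).
        # Each client extrapolates a remote avatar from its last reported state,
        # and the owner sends an update only when the prediction drifts too far.
        from dataclasses import dataclass

        @dataclass
        class AvatarState:
            x: float
            y: float
            vx: float
            vy: float
            t: float  # timestamp of the last transmitted update

        def extrapolate(state, now):
            # Constant-velocity prediction of the avatar's position at time `now`.
            dt = now - state.t
            return state.x + state.vx * dt, state.y + state.vy * dt

        def needs_update(state, true_x, true_y, now, threshold=0.5):
            # Owner-side check: transmit a fresh state only when the dead-reckoned
            # prediction deviates from the true position by more than `threshold`.
            px, py = extrapolate(state, now)
            return (true_x - px) ** 2 + (true_y - py) ** 2 > threshold ** 2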

    Automated 3D model generation for urban environments [online]

    In this thesis, we present a fast approach to the automated generation of textured 3D city models with both high detail at ground level and complete coverage for a bird's-eye view. A ground-based facade model is acquired by driving a vehicle equipped with two 2D laser scanners and a digital camera under normal traffic conditions on public roads. One scanner is mounted horizontally and is used to determine, via scan matching, the approximate component of relative motion along the movement of the acquisition vehicle; the obtained relative motion estimates are concatenated to form an initial path. Assuming that features such as buildings are visible from both the ground-based and the airborne view, this initial path is globally corrected by Monte-Carlo Localization techniques using an aerial photograph or a Digital Surface Model as a global map. The second scanner is mounted vertically and is used to capture the 3D shape of the building facades. Applying a series of automated processing steps, a texture-mapped 3D facade model is reconstructed from the vertical laser scans and the camera images. In order to obtain an airborne model containing the roof and terrain shape complementary to the facade model, a Digital Surface Model is created from airborne laser scans, then triangulated, and finally texture-mapped with aerial imagery. Finally, the facade model and the airborne model are fused into a single model usable for both walk-throughs and fly-throughs. The developed algorithms are evaluated on a large data set acquired in downtown Berkeley, and the results are shown and discussed.
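
    The initial path described above is obtained by chaining the scan-matching increments; the sketch below shows this pose composition step in Python under stated assumptions (the (dx, dy, dtheta) increment format and the function names are hypothetical, not the thesis's actual code), together with a comment on why a global Monte-Carlo Localization correction is still needed.

        # Minimal sketch of concatenating relative motion estimates from horizontal
        # scan matching into an initial vehicle path (generic 2D pose composition;
        # the increment format and names are assumptions, not the thesis's code).
        import math

        def compose(pose, increment):
            # Compose a global pose (x, y, heading) with a relative motion
            # (dx, dy, dtheta) expressed in the vehicle frame.
            x, y, th = pose
            dx, dy, dth = increment
            return (x + dx * math.cos(th) - dy * math.sin(th),
                    y + dx * math.sin(th) + dy * math.cos(th),
                    th + dth)

        def integrate_path(increments, start=(0.0, 0.0, 0.0)):
            # Chain successive increments; small errors accumulate along the way,
            # which is why the path is later corrected globally (e.g. by
            # Monte-Carlo Localization against an aerial photograph or DSM).
            path = [start]
            for inc in increments:
                path.append(compose(path[-1], inc))
            return path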

    Achieving efficient real-time virtual reality architectural visualisation

    Master's thesis: Master of Arts (Architecture).