
    Management and Visualisation of Non-linear History of Polygonal 3D Models

    The research presented in this thesis concerns the problems of maintenance and revision control of large-scale three-dimensional (3D) models over the Internet. As the models grow in size and the authoring tools grow in complexity, standard approaches to collaborative asset development become impractical. The prevalent paradigm of sharing files on a file system poses serious risks, not least to the consistency and concurrency of multi-user 3D editing. Although modifications might be tracked manually using naming conventions or automatically in a version control system (VCS), understanding the provenance of a large 3D dataset is hard because revision metadata is not associated with the underlying scene structures. Some tools and protocols enable seamless synchronisation of file and directory changes in remote locations. However, the existing web-based technologies are not yet fully exploiting the modern design patterns for access to and management of alternative shared resources online. Therefore, four distinct but highly interconnected conceptual tools are explored. The first is the organisation of 3D assets within recent document-oriented No Structured Query Language (NoSQL) databases. These "schemaless" databases, unlike their relational counterparts, do not represent data in rigid table structures. Instead, they rely on polymorphic documents composed of key-value pairs that are much better suited to the diverse nature of 3D assets. Hence, a domain-specific non-linear revision control system, 3D Repo, is built around a NoSQL database to enable asynchronous editing similar to traditional VCSs. The second concept is that of visual 3D differencing and merging. The accompanying 3D Diff tool supports interactive conflict resolution at the level of scene graph nodes, which are de facto the delta changes stored in the repository. The third is the utilisation of HyperText Transfer Protocol (HTTP) for the purposes of 3D data management. The XML3DRepo daemon application exposes the contents of the repository and the version control logic in a Representational State Transfer (REST) style of architecture. At the same time, it manifests the effects of various 3D encoding strategies on file sizes and download times in modern web browsers. The fourth and final concept is the reverse-engineering of an editing history. Even if the models are being version controlled, the extracted provenance is limited to additions, deletions and modifications. The 3D Timeline tool therefore infers a plausible history of common modelling operations such as duplications, transformations, etc. Given a collection of 3D models, it estimates a part-based correspondence and visualises it in a temporal flow. The prototype tools developed as part of the research were evaluated in pilot user studies that suggest they are usable by the end users and well suited to their respective tasks. Together, the results constitute a novel framework that demonstrates the feasibility of a domain-specific 3D version control system.
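
    To make the document-oriented storage concrete, the following minimal Python sketch shows how a scene-graph node and a revision delta could be represented as schemaless key-value documents of the kind a NoSQL database stores natively. The field names and structure are illustrative assumptions for this summary, not the actual 3D Repo schema.

        import uuid
        from datetime import datetime, timezone

        def make_mesh_node(name, vertices, faces, parent=None):
            """One scene-graph node as a schemaless key-value document.

            Field names are illustrative, not the actual 3D Repo schema.
            """
            return {
                "_id": uuid.uuid4().hex,        # unique per revision of this node
                "shared_id": uuid.uuid4().hex,  # stable identity across revisions
                "type": "mesh",
                "name": name,
                "parent": parent,               # scene-graph edge to the parent node
                "vertices": vertices,
                "faces": faces,
            }

        def make_revision(author, branch, added, modified=(), deleted=()):
            """A revision document whose delta is simply lists of node ids."""
            return {
                "_id": uuid.uuid4().hex,
                "author": author,
                "branch": branch,  # non-linear history: parallel branches, later merged
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "added": list(added),
                "modified": list(modified),
                "deleted": list(deleted),
            }

        if __name__ == "__main__":
            cube = make_mesh_node("cube",
                                  vertices=[[0, 0, 0], [1, 0, 0], [1, 1, 0]],
                                  faces=[[0, 1, 2]])
            rev = make_revision("alice", branch="master", added=[cube["_id"]])
            print(rev["branch"], "revision adds", len(rev["added"]), "node(s)")

    Because each node keeps a stable identity separate from its per-revision id, deltas can be expressed and merged at the granularity of individual scene-graph nodes, which is the level at which the abstract describes 3D Diff resolving conflicts.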

    TOWARDS EFFECTIVE DISPLAYS FOR VIRTUAL AND AUGMENTED REALITY

    Virtual and augmented reality (VR and AR) are becoming increasingly accessible and useful. This dissertation focuses on several aspects of designing effective displays for VR and AR. Compared to conventional desktop displays, VR and AR displays can better engage the human peripheral vision. This provides an opportunity for more information to be perceived. To fully leverage the human visual system, we need to take into account how it perceives things differently in the periphery than in the fovea. By investigating the relationship between perception time and eccentricity, we deduce a scaling function that allows content in the far periphery to be perceived as efficiently as in the central vision. AR overlays additional information on the real environment. This is useful in a number of fields, including surgery, where time-critical information is key. We present our medical AR system that visualizes the occluded catheter in the external ventricular drainage (EVD) procedure. We develop an accurate and efficient catheter tracking method that requires minimal changes to the existing medical equipment. The AR display projects a virtual image of the catheter overlaid on the occluded real catheter to depict its real-time position. Our system can make the risky EVD procedure much safer. Existing VR and AR displays support a limited number of focal distances, leading to the vergence-accommodation conflict. Holographic displays can address this issue. In this dissertation, we explore the design and development of the nanophotonic phased array (NPA) as a special class of holographic displays. NPAs have the advantage of being compact and supporting very high refresh rates. However, the use of the thermo-optic effect for phase modulation renders them susceptible to the thermal proximity effect. We study how the proximity effect impacts the images formed on NPAs. We then propose several novel algorithms to compensate for the thermal proximity effect on NPAs and compare their effectiveness and computational efficiency. Computer-generated holography (CGH) has traditionally focused on 2D images and on 3D images in the form of meshes and point clouds. However, volumetric data can also benefit from CGH. One of the challenges in the use of volumetric data sources in CGH is the computational complexity needed to calculate the holograms of volumetric data. We propose a new method that achieves a significant speedup compared to existing holographic volume rendering methods.
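
    As an illustration of the first contribution, the sketch below scales content size with eccentricity so that peripheral items remain as quickly perceivable as foveal ones. The linear form and the constants are assumptions chosen for clarity; the dissertation derives its own scaling function from perception-time measurements.

        # Illustrative only: a cortical-magnification-style scaling of the form
        # size(E) = s0 * (1 + E / E2). The dissertation derives its scaling function
        # from perception-time measurements; the linear form and constants below are
        # assumptions made for this sketch.
        E2_DEG = 2.3          # assumed "doubling" eccentricity, in degrees
        BASE_SIZE_DEG = 0.5   # assumed angular size of an item at the fovea, in degrees

        def scaled_size(eccentricity_deg: float) -> float:
            """Angular size to which a peripheral item should be enlarged."""
            return BASE_SIZE_DEG * (1.0 + eccentricity_deg / E2_DEG)

        if __name__ == "__main__":
            for ecc in (0, 10, 30, 60):
                print(f"eccentricity {ecc:>2} deg -> size {scaled_size(ecc):.2f} deg")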

    An Information-Theoretic Framework for Consistency Maintenance in Distributed Interactive Applications

    Distributed Interactive Applications (DIAs) enable geographically dispersed users to interact with each other in a virtual environment. A key factor in the success of a DIA is the maintenance of a consistent view of the shared virtual world for all the participants. However, maintaining consistent states in DIAs is difficult under real networks. State changes communicated by messages over such networks suffer latency, leading to inconsistency across the application. Predictive Contract Mechanisms (PCMs) combat this problem by reducing the number of messages transmitted in return for perceptually tolerable inconsistency. This thesis examines the operation of PCMs using concepts and methods derived from information theory. This information-theoretic perspective results in a novel information model of PCMs that quantifies and analyzes the efficiency of such methods in communicating the reduced state information, and a new adaptive multiple-model-based framework for improving consistency in DIAs. The first part of this thesis introduces information measurements of user behavior in DIAs and formalizes the information model for PCM operation. In presenting the information model, the statistical dependence in the entity state, which makes it possible to use extrapolation models to predict future user behavior, is evaluated. The efficiency of a PCM in exploiting such predictability to reduce the amount of network resources required to maintain consistency is also investigated. It is demonstrated that, from the information theory perspective, PCMs can be interpreted as a form of information reduction and compression. The second part of this thesis proposes an Information-Based Dynamic Extrapolation Model for dynamically selecting between extrapolation algorithms based on information evaluation and inferred network conditions. This model adapts PCM configurations to both user behavior and network conditions, and makes the most information-efficient use of the available network resources. In doing so, it improves PCM performance and consistency in DIAs.
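
    The classic example of a PCM is dead reckoning: a sender transmits a new state "contract" only when the receivers' shared extrapolation would drift beyond an error threshold, trading bandwidth for bounded inconsistency. The Python sketch below illustrates that mechanism for a one-dimensional position with first-order extrapolation; the threshold, motion model and signal are illustrative and do not reproduce the thesis's information measures or its adaptive model selection.

        from dataclasses import dataclass

        @dataclass
        class Contract:
            """The last state transmitted to remote peers (position and velocity)."""
            t: float
            pos: float
            vel: float

            def extrapolate(self, t: float) -> float:
                # First-order (dead-reckoning) prediction shared by every receiver.
                return self.pos + self.vel * (t - self.t)

        def simulate(true_path, threshold=0.5, dt=0.1, steps=100):
            """Send an update only when the shared prediction drifts past the threshold."""
            contract = Contract(t=0.0, pos=true_path(0.0), vel=0.0)
            sent = 0
            for step in range(1, steps + 1):
                t = step * dt
                true_pos = true_path(t)
                if abs(true_pos - contract.extrapolate(t)) > threshold:
                    vel = (true_pos - contract.pos) / (t - contract.t)
                    contract = Contract(t=t, pos=true_pos, vel=vel)
                    sent += 1
            return sent

        if __name__ == "__main__":
            # An accelerating entity, x(t) = t^2: far fewer updates than 100 raw samples.
            print("updates sent:", simulate(lambda t: t * t))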

    Interactive Technologies for the Public Sphere Toward a Theory of Critical Creative Technology

    Digital media cultural practices continue to address the social, cultural and aesthetic contexts of the global information economy, perhaps better called an ecology, by inventing new methods and genres that encourage interactive engagement, collaboration, exploration and learning. The theoretical framework for critical creative technology evolved from the confluence of the arts, human-computer interaction, and critical theories of technology. Molding this nascent theoretical framework from these seemingly disparate disciplines was a reflexive process in which the influence of each component on the others spiraled into the theory and practice illustrated through the Constructed Narratives project. Research that evolves from an arts perspective encourages experimental processes of making as a method for defining research principles. The traditional reductionist approach to research requires that all confounding variables be eliminated or silenced using statistical methods. However, that noise in the data, those confounding variables, provides the rich context, media, and processes by which creative practices thrive. As research in the arts gains recognition for its contributions of new knowledge, the traditional reductive practice in search of general principles will be respectfully joined by methodologies for defining living principles that celebrate and build from the confounding variables, the data noise. The movement to develop research methodologies from the noisy edges of human interaction has been explored in the research and practices of ludic design and ambiguity (Gaver, 2003); the affective gap (Sengers et al., 2005b; 2006); embodied interaction (Dourish, 2001); the felt life (McCarthy & Wright, 2004); and reflective HCI (Dourish et al., 2004). The theory of critical creative technology examines the relationships between critical theories of technology, society and aesthetics, information technologies, and contemporary practices in interaction design and creative digital media. The theory of critical creative technology is aligned with theories and practices in social navigation (Dourish, 1999) and community-based interactive systems (Stathis, 1999) in the development of smart appliances and network systems that support people in engaging in social activities, promoting communication and enhancing the potential for learning in a community-based environment. The theory of critical creative technology amends these community-based and collaborative design theories by emphasizing methods to facilitate face-to-face dialogical interaction when the exchange of ideas, observations, dreams, concerns, and celebrations may be silenced by societal norms about how to engage others in public spaces. The Constructed Narratives project is an experiment in the design of a critical creative technology that emphasizes the collaborative construction of new knowledge about one's lived world through computer-supported collaborative play (CSCP). To construct is to creatively invent one's world by engaging in creative decision-making, problem solving and acts of negotiation. The metaphor of construction is used to demonstrate how a simple artefact - a building block - can provide an interactive platform to support discourse between collaborating participants.
The technical goal of this project was to develop a software and hardware platform for the design of critical creative technology applications, one that can process a dynamic flow of logistical and profile data from multiple users and use it in applications that facilitate dialogue between people in a real-time, playful interactive experience.
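
    As a rough illustration of that technical goal, the following Python sketch shows one way a shared session could fold a dynamic flow of per-user block events and profile data into a common state; all names and fields are hypothetical and are not taken from the Constructed Narratives implementation.

        from dataclasses import dataclass, field
        from typing import Dict

        @dataclass
        class BlockEvent:
            """One tangible-interface action: a participant places or removes a block."""
            user: str
            block_id: int
            action: str                                  # "place" or "remove"
            profile: Dict[str, str] = field(default_factory=dict)

        class Session:
            """Shared state folded together from the dynamic flow of per-user events."""

            def __init__(self):
                self.blocks = {}     # block_id -> user who placed it
                self.profiles = {}   # user -> latest profile data

            def apply(self, ev: BlockEvent):
                self.profiles.setdefault(ev.user, {}).update(ev.profile)
                if ev.action == "place":
                    self.blocks[ev.block_id] = ev.user
                elif ev.action == "remove":
                    self.blocks.pop(ev.block_id, None)

        if __name__ == "__main__":
            s = Session()
            s.apply(BlockEvent("ana", 1, "place", {"language": "pt"}))
            s.apply(BlockEvent("ben", 2, "place", {"language": "en"}))
            print(len(s.blocks), "blocks placed by", len(s.profiles), "participants")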

    Gaze-Contingent Computer Graphics

    Contemporary digital displays feature multi-million pixels at ever-increasing refresh rates. Reality, on the other hand, provides us with a view of the world that is continuous in space and time. The discrepancy between viewing the physical world and its sampled depiction on digital displays gives rise to perceptual quality degradations. By measuring or estimating where we look, gaze-contingent algorithms aim at exploiting the way we visually perceive in order to remedy visible artifacts. This dissertation presents a variety of novel gaze-contingent algorithms and the respective perceptual studies. Chapters 4 and 5 present methods to boost the perceived visual quality of conventional video footage when viewed on commodity monitors or projectors. In Chapter 6, a novel head-mounted display with real-time gaze tracking is described. The device enables a large variety of applications in the context of Virtual Reality and Augmented Reality. Using the gaze-tracking VR headset, a novel gaze-contingent rendering method is described in Chapter 7. The gaze-aware approach greatly reduces the computational effort for shading virtual worlds: shading quality is analyzed and adjusted per pixel in real time on the basis of a perceptual model. The described methods and studies show that gaze-contingent algorithms are able to improve the quality of displayed images and videos, or to reduce the computational effort for image generation while the display quality perceived by the user does not change.
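
    The gaze-contingent rendering idea of Chapter 7 can be illustrated with a small Python sketch that picks a coarser shading rate the further a pixel lies from the tracked gaze point. The eccentricity bands, rates and pixel-density constant below are placeholder assumptions; the dissertation instead evaluates a perceptual model per pixel in real time.

        import math

        # Illustrative eccentricity bands: (max eccentricity in degrees, shade every Nth pixel).
        BANDS = [(5.0, 1), (15.0, 2), (float("inf"), 4)]

        def eccentricity_deg(px, py, gaze, pixels_per_degree):
            """Angular distance of a pixel from the tracked gaze point."""
            gx, gy = gaze
            return math.hypot(px - gx, py - gy) / pixels_per_degree

        def shading_rate(px, py, gaze, pixels_per_degree=30.0):
            """How coarsely this pixel may be shaded, given the current gaze."""
            ecc = eccentricity_deg(px, py, gaze, pixels_per_degree)
            for max_ecc, rate in BANDS:
                if ecc <= max_ecc:
                    return rate

        if __name__ == "__main__":
            gaze = (640, 360)   # gaze position in pixels on a 1280x720 frame
            for p in [(640, 360), (900, 360), (1270, 700)]:
                print(p, "-> shade every", shading_rate(*p, gaze), "pixel(s)")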

    Sixth Biennial Report: August 2001 - May 2003


    Book of short Abstracts of the 11th International Symposium on Digital Earth

    The booklet is a collection of the accepted short abstracts of the ISDE11 Symposium.