    Large object segmentation with region priority rendering

    The Address Recalculation Pipeline is a hardware architecture designed to reduce the end-to-end latency suffered by immersive head-mounted display virtual reality systems. A demand-driven rendering technique known as priority rendering was devised for use in conjunction with the address recalculation pipeline. Using this technique, different sections of a scene can be updated at different rates, resulting in reductions to the rendering load. Further reductions can potentially be achieved by segmenting large objects. However, in doing so a tearing problem surfaces, which has to be overcome before large object segmentation can be used effectively in priority rendering. This paper demonstrates a way of organizing virtual world objects for priority rendering, as well as a method to hide scene tearing artefacts due to object segmentation.
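    A minimal sketch of how such an organization could look, assuming each object carries a priority score that maps to an update interval and that large objects are pre-split into segments. The names and thresholds below are illustrative rather than the paper's implementation, and the simple "redraw all segments together" rule is only one possible way to keep seams between segments from tearing.

    // Illustrative sketch: objects are bucketed into update-rate groups by a
    // priority score, and all segments of a split object are redrawn in the
    // same update so that no seam between segments is left half-updated.
    #include <cstdint>
    #include <string>
    #include <vector>

    struct Segment { /* geometry for one piece of a large object */ };

    struct SceneObject {
        std::string name;
        double priority = 0.0;          // hypothetical score, e.g. angular size * motion
        std::vector<Segment> segments;  // more than one entry if the object was segmented
    };

    // Hypothetical buckets: high-priority objects every frame, medium every
    // 2nd frame, low every 4th frame.
    int updateInterval(double priority) {
        if (priority > 0.75) return 1;
        if (priority > 0.25) return 2;
        return 4;
    }

    void renderSegment(const Segment&) { /* submit to the rendering back end */ }

    void renderScene(const std::vector<SceneObject>& scene, std::uint64_t frame) {
        for (const auto& obj : scene) {
            if (frame % updateInterval(obj.priority) != 0) continue;  // reuse previous image
            // Key point of this sketch: segments of one object always update together.
            for (const auto& seg : obj.segments) renderSegment(seg);
        }
    }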

    The ARP Virtual Reality System in Addressing Security Threats and Disaster Scenarios

    Nations, corporations and political organizations around the world today are forced to deal with an increasing number of security threats. As a result, various organizations must find ways to adequately equip and prepare themselves to handle numerous dangerous and life-threatening circumstances. Virtual reality is an extremely important technology that can be used across a variety of different fields and for a number of diverse applications, ranging from simulation training to visualization tools, in order to prepare for and manage disaster situations. Head-mounted display (HMD) virtual reality systems attempt to visually immerse the user in a virtual environment. However, it is well recognized that latency, the delay in responding to a user's head movement, is a major shortcoming that plagues immersive HMD virtual reality systems. Excessive latency destroys the illusion of reality that such systems attempt to present to the user. A hardware architecture known as the address recalculation pipeline (ARP) and a computer graphics rendering technique called priority rendering were designed to reduce the end-to-end latency suffered by immersive HMD virtual reality systems. This paper discusses the benefits of using the ARP virtual reality system in addressing security threats and disaster situations.

    Low-Latency Rendering With Dataflow Architectures

    Recent years have seen a resurgence of virtual reality (VR), sparked by the repurposing of low-cost COTS components. VR aims to generate stimuli that appear to come from a source other than the interface through which they are delivered. The synthetic stimuli replace real-world stimuli and transport the user to another, perhaps imaginary, “place.” To do this, we must overcome many challenges, often related to matching the synthetic stimuli to the expectations and behavior of the real world. One way in which the stimuli can fail is their latency: the time between a user's action and the computer's response. We constructed a novel VR renderer that optimized latency above all else. Our prototype allowed us to explore how latency affects human–computer interaction. We had to completely reconsider the interaction between time, space, and synchronization on displays and in the traditional graphics pipeline. Using a specialized architecture, dataflow computing, we combined consumer, industrial, and prototype components to create an integrated 1:1 room-scale VR system with a latency of under 3 ms. While this was prototype hardware, the considerations in achieving this performance inform the design of future VR pipelines, and our human-factors studies have provided new and sometimes surprising contributions to the body of knowledge on latency in HCI.
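    The latency being minimized here can be stated very concretely. Below is a minimal sketch, assuming a tracking sample carries a capture timestamp and that a callback fires when the pixels derived from it are emitted by the display; the type and function names are assumptions for illustration only.

    // Sketch of the latency definition used above: motion-to-photon latency is
    // the time from a tracked user action to the display's response.
    #include <chrono>
    #include <cstdio>

    using Clock = std::chrono::steady_clock;

    struct TrackedSample {
        Clock::time_point captured;   // when the head pose was sampled
        // ... pose data ...
    };

    // Hypothetical callback: invoked when pixels derived from `sample` scan out.
    void onScanout(const TrackedSample& sample) {
        auto latency = Clock::now() - sample.captured;
        std::printf("motion-to-photon latency: %.2f ms\n",
                    std::chrono::duration<double, std::milli>(latency).count());
    }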

    Low Latency Displays for Augmented Reality

    The primary goal for Augmented Reality (AR) is bringing the real and virtual together into a common space. Maintaining this illusion, however, requires preserving spatially and temporally consistent registration despite changes in user or object pose. The greatest source of registration error is latency, the delay between when something moves and when the display changes in response, which breaks temporal consistency. Furthermore, the real world varies greatly in brightness, ranging from bright sunlight to deep shadows. Thus, a compelling AR system must also support High Dynamic Range (HDR) to keep its virtual objects' appearance both spatially and temporally consistent with the real world. This dissertation presents new methods, implementations, results (both visual and performance), and future steps for low-latency displays, primarily in the context of Optical See-Through Augmented Reality (OST-AR) Head-Mounted Displays, focusing on temporal consistency in registration, HDR color support, and spatial and temporal consistency in brightness:
    1. For registration temporal consistency, the primary insight is breaking the conventional display paradigm, in which computers render imagery frame by frame and then transmit it to the display for emission. Instead, the display must also contribute to rendering by performing a post-rendering, post-transmission warp of the computer-supplied imagery in the display hardware (sketched below). By compensating in the display for system latency using the latest tracking information, much of the latency can be short-circuited. Furthermore, the low-latency display must support ultra-high-frequency (multiple kHz) refreshing to minimize pose displacement between updates.
    2. For HDR color support, the primary insight is developing new display modulation techniques. DMDs, a type of ultra-high-frequency display, emit binary output, which requires modulation to produce multiple brightness levels. Conventional modulation breaks low-latency guarantees, and modulating bright LED illuminators at the frequencies needed for kHz-order HDR exceeds their capabilities. Thus the necessary variation in brightness must be synthesized directly.
    3. For spatial and temporal brightness consistency, the primary insight is integrating HDR light sensors into the display hardware: the same processes that compensate for latency and generate HDR output can also modify it in response to the spatially sensed brightness of the real world.
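    A heavily simplified sketch of the post-rendering warp from point 1, assuming small head rotations can be approximated by a 2D pixel shift of the already-rendered image. A real low-latency display performs this warp in hardware, per scanline and at multi-kHz rates; the names, field-of-view model, and sign conventions here are assumptions.

    // Simplified late-warp sketch: just before emission, shift the rendered
    // image to account for head rotation that happened after rendering.
    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Image {
        int width = 0, height = 0;
        std::vector<std::uint32_t> pixels;  // row-major RGBA
    };

    // Approximate small yaw/pitch deltas (radians) as a pixel shift, given the
    // display's horizontal/vertical field of view (radians). Sign conventions
    // depend on the tracker and display coordinate frames.
    Image lateWarp(const Image& rendered, double dYaw, double dPitch,
                   double fovX, double fovY) {
        const int dx = static_cast<int>(std::lround(dYaw  / fovX * rendered.width));
        const int dy = static_cast<int>(std::lround(dPitch / fovY * rendered.height));
        Image out = rendered;
        for (int y = 0; y < out.height; ++y) {
            for (int x = 0; x < out.width; ++x) {
                const int sx = x + dx, sy = y + dy;   // where to sample the source image
                const bool inside = sx >= 0 && sx < rendered.width &&
                                    sy >= 0 && sy < rendered.height;
                out.pixels[y * out.width + x] =
                    inside ? rendered.pixels[sy * rendered.width + sx] : 0;
            }
        }
        return out;
    }

    In practice the warp is driven by the most recent tracker reading available at scanout time, which is how the display short-circuits the latency accumulated earlier in the pipeline.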

    Effective Performance Analysis and Debugging

    Performance is once again a first-class concern. Developers can no longer wait for the next generation of processors to automatically optimize their software. Unfortunately, existing techniques for performance analysis and debugging cannot cope with complex modern hardware, concurrent software, or latency-sensitive software services. While processor speeds have remained constant, increasing transistor counts have allowed architects to increase processor complexity. This complexity often improves performance, but the benefits can be brittle; small changes to a program’s code, inputs, or execution environment can dramatically change performance, resulting in unpredictable performance in deployed software and complicating performance evaluation and debugging. Developers seeking to improve performance must resort to manual performance tuning for large performance gains. Software profilers are meant to guide developers to important code, but conventional profilers do not produce actionable information for concurrent applications. These profilers report where a program spends its time, not where optimizations will yield performance improvements. Furthermore, latency is a critical measure of performance for software services and interactive applications, but conventional profilers measure only throughput. Many performance issues appear only when a system is under high load, but generating this load in development is often impossible. Developers need to identify and mitigate scalability issues before deploying software, but existing tools offer developers little or no assistance. In this dissertation, I introduce an empirically driven approach to performance analysis and debugging. I present three systems for performance analysis and debugging. Stabilizer mitigates the performance variability that is inherent in modern processors, enabling both predictable performance in deployment and statistically sound performance evaluation. Coz conducts performance experiments using virtual speedups to create the effect of an optimization in a running application. This approach accurately predicts the effect of hypothetical optimizations, guiding developers to code where optimizations will have the largest effect. Amp allows developers to evaluate system scalability using load amplification to create the effect of high load in a testing environment. In combination, Amp and Coz allow developers to pinpoint code where manual optimizations will improve the scalability of their software.
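    The virtual-speedup idea behind Coz can be illustrated with a short sketch. This is not Coz's implementation or API, only an assumption-laden illustration of the principle: whenever the region selected for a hypothetical speedup runs, every other thread is later delayed by a fraction of that region's duration, so the program as a whole behaves as if the region had actually been made that much faster.

    // Conceptual virtual-speedup sketch (illustrative only, not Coz's code).
    #include <atomic>
    #include <chrono>
    #include <thread>

    std::atomic<long long> g_global_delay_ns{0};   // total delay injected so far
    thread_local long long t_local_delay_ns = 0;   // delay this thread has absorbed

    // Wrap the region selected for a hypothetical speedup of `speedup` (0..1).
    template <typename F>
    void virtuallySpedUp(double speedup, F&& region) {
        auto t0 = std::chrono::steady_clock::now();
        region();
        auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                      std::chrono::steady_clock::now() - t0).count();
        long long delay = static_cast<long long>(ns * speedup);
        g_global_delay_ns.fetch_add(delay);
        t_local_delay_ns += delay;   // the sped-up thread does not pause itself
    }

    // Called periodically by every thread: sleep for any delay not yet absorbed.
    void absorbDelay() {
        long long owed = g_global_delay_ns.load() - t_local_delay_ns;
        if (owed > 0) {
            std::this_thread::sleep_for(std::chrono::nanoseconds(owed));
            t_local_delay_ns += owed;
        }
    }

    Measuring how overall progress changes under different virtual speedups is what lets the profiler predict where a real optimization would have the largest effect.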

    Characteristics of models that impact transformation of BIMs to virtual environments to support facility management operations

    Building information models (BIMs) have been used by the Architectural/Engineering/Construction (AEC) industry with a focus on storing and exchanging digital information about building components. However, the untapped potential of BIMs in facility operations, and the experience of facility operators as they interact with digital building information, have not been widely understood. One of the underlying bottlenecks in the use of BIMs in the facility management (FM) phase is the lack of interactions with components to easily access information of interest, and the lack of ways to navigate models with full spatial understanding. Virtual environments (VEs), which represent physical spaces digitally in virtual worlds, enable interactions with virtual components to access information with spatial understanding. However, underlying challenges in the conversion of BIMs to VEs hinder a streamlined process. This paper provides a detailed analysis of building size, geometric complexity of discipline models, and level of geometric granularity as factors contributing to inefficient transformation of BIMs to VEs. The paper also reports findings on a set of computational approaches, such as polygon reduction and occlusion culling, used to overcome these challenges and improve data transfer when converting BIMs into VEs across a range of facility model types and sizes.
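    A sketch of how the two named approaches could be combined in a viewer, assuming each building component carries precomputed visibility data and a decimated copy of its mesh; none of the structures or thresholds below come from the paper.

    // Illustrative sketch: occlusion culling skips hidden components, and
    // polygon reduction swaps in a decimated mesh when a component covers
    // little of the screen.
    #include <cstddef>
    #include <vector>

    struct Mesh { std::size_t triangleCount = 0; /* vertex/index buffers */ };

    struct Component {
        Mesh fullMesh;
        Mesh reducedMesh;            // precomputed, lower-polygon version
        // Placeholder visibility data; a real viewer would recompute these per
        // frame from the camera (frustum test, occlusion query, projection).
        bool outsideFrustum = false;
        bool occluded = false;
        double screenCoverage = 1.0; // fraction of the screen covered
    };

    void draw(const Mesh&) { /* submit to the graphics API */ }

    void renderFacilityModel(const std::vector<Component>& components) {
        for (const auto& c : components) {
            if (c.outsideFrustum || c.occluded) continue;       // culled
            // Small on screen: the reduced mesh is visually indistinguishable.
            draw(c.screenCoverage < 0.01 ? c.reducedMesh : c.fullMesh);
        }
    }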