118 research outputs found

    Predictive Rendering


    Towards Predictive Rendering in Virtual Reality

    Get PDF
    Generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding goal in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, generating predictive imagery remains an unsolved problem for several reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms cannot produce radiometrically correct images. Third, current display devices must convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations pointed out. To solve some of the remaining problems, novel techniques are proposed, covering various steps in the predictive image generation process, from accurate scene modeling through efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying them to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming that problems remain to be solved before truly predictive image generation is achieved.
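    As a rough illustration of the kind of BTF compression this entails, the sketch below factors a BTF data matrix with a truncated SVD, a common approach in this family of techniques; the data layout and all names are hypothetical stand-ins, and the thesis's own compression scheme may differ.

```python
# Minimal sketch: PCA/SVD-based BTF compression (assumed layout:
# btf[texel, view*light*channel]; all names are hypothetical).
import numpy as np

def compress_btf(btf: np.ndarray, num_components: int = 16):
    """Factor the BTF matrix into per-texel weights and a shared basis."""
    mean = btf.mean(axis=0)                      # average reflectance over texels
    centered = btf - mean
    # Truncated SVD: btf ~ mean + weights @ basis
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    weights = u[:, :num_components] * s[:num_components]   # per-texel coefficients
    basis = vt[:num_components]                            # per-direction basis rows
    return mean, weights, basis

def reconstruct_texel(mean, weights, basis, texel: int) -> np.ndarray:
    """Runtime lookup: rebuild one texel's reflectance table on the fly."""
    return mean + weights[texel] @ basis

# Usage with random stand-in data: 1024 texels, 81 view x 81 light directions.
btf = np.random.rand(1024, 81 * 81).astype(np.float32)
mean, w, b = compress_btf(btf, num_components=16)
print(reconstruct_texel(mean, w, b, texel=0).shape)  # one per-direction table
```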

    Vision-Aided Autonomous Precision Weapon Terminal Guidance Using a Tightly-Coupled INS and Predictive Rendering Techniques

    Get PDF
    This thesis documents the development of the Vision-Aided Navigation using Statistical Predictive Rendering (VANSPR) algorithm, which seeks to enhance the endgame navigation solution beyond what is possible with inertial measurements alone. The eventual goal is a precision weapon that does not rely on GPS, functions autonomously, thrives in complex 3-D environments, and is impervious to jamming. The predictive rendering is performed by viewpoint manipulation of computer-generated images of target objects. A navigation solution is determined by an Unscented Kalman Filter (UKF), which corrects positional errors by comparing camera images with a collection of statistically significant virtual images. Results indicate that the test algorithm is a viable method of aiding an inertial-only navigation system to achieve the precision necessary for most tactical strikes. On 14 flight test runs, the average positional error was 166 feet at endgame, compared with an inertial-only error of 411 feet.
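    The flavor of that correction step can be sketched as follows: generate sigma points from the current state estimate, render a virtual image at each, and re-weight the points by image similarity. This is a schematic importance-weighting variant, not the VANSPR filter itself; render_view() and all parameters are hypothetical stand-ins.

```python
# Schematic sketch: score Unscented-transform sigma points against a
# camera frame via rendered virtual images (render_view() is assumed).
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Standard symmetric sigma-point set for an Unscented Kalman Filter."""
    n = mean.size
    sqrt_cov = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + c for c in sqrt_cov.T] + [mean - c for c in sqrt_cov.T]
    weights = np.r_[kappa / (n + kappa), np.full(2 * n, 0.5 / (n + kappa))]
    return np.array(pts), weights

def image_score(predicted, observed):
    """Negative sum-squared difference: higher means a better match."""
    return -np.sum((predicted - observed) ** 2)

def measurement_update(mean, cov, camera_frame, render_view):
    """Re-weight sigma points by how well their rendered views match the frame."""
    pts, w = sigma_points(mean, cov)
    scores = np.array([image_score(render_view(p), camera_frame) for p in pts])
    like = np.exp(scores - scores.max())          # shift for numerical stability
    post_w = w * like / np.sum(w * like)
    new_mean = post_w @ pts
    centered = pts - new_mean
    new_cov = (post_w * centered.T) @ centered    # weighted posterior covariance
    return new_mean, new_cov
```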

    Using Predictive Rendering as a Vision-Aided Technique for Autonomous Aerial Refueling

    Get PDF
    This research effort seeks to characterize a vision-aided approach for an Unmanned Aerial System (UAS) to autonomously determine relative position to another aircraft in a formation, specifically to address the autonomous aerial refueling problem. A system consisting of a monocular digital camera coupled with inertial sensors onboard the UAS is analyzed for feasibility of using this vision-aided approach. A three-dimensional rendering of the tanker aircraft is used to generate predicted images of the tanker as seen by the receiver aircraft. A rigorous error model is developed to model the relative dynamics between an INS-equipped receiver and the tanker aircraft. A thorough image processing analysis is performed to determine error observability between the predicted and true images using sum-squared difference and gradient techniques. To quantify the errors between the predicted and true images, an image update function is developed using perturbation techniques. Based on this residual measurement and the inertial/dynamics propagation, an Extended Kalman Filter (EKF) is used to predict the relative position and orientation of the tanker from the receiver aircraft. The EKF is simulated through various formation positions during typical aerial refueling operations. Various grades of inertial sensors are simulated during different trajectories to analyze the system's ability to accurately and robustly track the relative position between the two aircraft.
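    A minimal sketch of that perturbation idea, under assumed names (render_tanker() stands in for the three-dimensional rendering step): form a sum-squared-difference residual between predicted and observed images, and estimate its sensitivity to the relative state by finite differences.

```python
# Hedged sketch: SSD residual and its finite-difference sensitivity,
# ready to feed an EKF update (render_tanker() is a hypothetical stub).
import numpy as np

def ssd(a: np.ndarray, b: np.ndarray) -> float:
    """Sum-squared difference between two grayscale images."""
    return float(np.sum((a.astype(np.float64) - b) ** 2))

def perturbation_jacobian(state, observed, render_tanker, eps=1e-3):
    """Perturb each state element and re-render to approximate the gradient."""
    base = ssd(render_tanker(state), observed)
    jac = np.zeros_like(state, dtype=np.float64)
    for i in range(state.size):
        bumped = state.copy()
        bumped[i] += eps
        jac[i] = (ssd(render_tanker(bumped), observed) - base) / eps
    return base, jac  # residual and its sensitivity to the relative state
```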

    The Iray Light Transport Simulation and Rendering System

    Full text link
    While ray tracing has become increasingly common and path tracing is well understood by now, a major challenge lies in crafting an easy-to-use and efficient system implementing these technologies. Following a purely physically-based paradigm while still allowing for artistic workflows, the Iray light transport simulation and rendering system allows for rendering complex scenes at the push of a button and thus makes accurate light transport simulation widely available. In this document we discuss the challenges and implementation choices that follow from our primary design decisions, demonstrating that such a rendering system can be made a practical, scalable, and efficient real-world application that has been adopted by various companies across many fields and is in use by many industry professionals today.

    A Dual-Beam Method-of-Images 3D Searchlight BSSRDF

    Full text link
    We present a novel BSSRDF for rendering translucent materials. Angular effects lacking in previous BSSRDF models are incorporated by using a dual-beam formulation. We employ a Placzek's Lemma interpretation of the method of images and discard diffusion theory. Instead, we derive a plane-parallel transformation of the BSSRDF to form the associated BRDF and optimize the image configurations such that the BRDF is close to the known analytic solutions for the associated albedo problem. This ensures reciprocity and accurate colors, and provides an automatic level-of-detail transition for translucent objects that appear at various distances in an image. Despite optimizing the subsurface fluence in a plane-parallel setting, we find that this also leads to fairly accurate fluence distributions throughout the volume in the original 3D searchlight problem. Our method-of-images modifications can also improve the accuracy of previous BSSRDFs.
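    For orientation, the classical method-of-images baseline that this dual-beam model refines is the diffusion dipole of Jensen et al. (2001); a sketch of its diffuse reflectance profile, with the usual parameter conventions, follows. This is background only, not the paper's dual-beam formulation.

```python
# Background sketch: classical diffusion-dipole diffuse reflectance R_d(r).
# A encodes the internal Fresnel reflection boundary condition.
import math

def dipole_Rd(r, sigma_a, sigma_s_prime, A):
    """Diffuse reflectance of the classical dipole at radial distance r."""
    sigma_t_prime = sigma_a + sigma_s_prime          # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime      # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)  # effective transport coeff.
    z_r = 1.0 / sigma_t_prime                        # real source depth
    z_v = z_r * (1.0 + 4.0 * A / 3.0)                # mirrored image source depth
    d_r = math.sqrt(r * r + z_r * z_r)               # distance to real source
    d_v = math.sqrt(r * r + z_v * z_v)               # distance to image source
    def term(z, d):
        return z * (1.0 + sigma_tr * d) * math.exp(-sigma_tr * d) / d ** 3
    return alpha_prime / (4.0 * math.pi) * (term(z_r, d_r) + term(z_v, d_v))
```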

    Image Dependent Relative Formation Navigation for Autonomous Aerial Refueling

    Get PDF
    This research tests the feasibility, accuracy, and reliability of a predictive rendering and holistic comparison algorithm that uses an optical sensor to provide relative distance and position behind a lead or tanker aircraft. Using an accurate model of a tanker, an algorithm renders images for comparison with actual images collected by a camera installed on the receiver aircraft. Based on this comparison, the information used to create the rendered images provides the relative navigation solution required for autonomous air refueling. Given enough predicted images and processing time, this approach should reliably find an accurate solution. Building on previous work, this research aims to minimize the number of rendered images required to provide a real-time navigation solution with sufficient accuracy for an autopilot controller installed on future Unmanned Aircraft Systems.
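    Schematically, the render-and-compare search might look like the sketch below, with a coarse-to-fine grid that shrinks around the best match to keep the number of rendered images small. render_tanker() and the grid parameters are hypothetical stand-ins, not the thesis's implementation.

```python
# Illustrative sketch: brute-force pose scoring plus a coarse-to-fine
# refinement that limits how many images must be rendered per update.
import numpy as np

def best_relative_position(candidates, camera_frame, render_tanker):
    """Return the candidate pose whose rendered view best matches the frame."""
    def score(pose):
        diff = render_tanker(pose).astype(np.float64) - camera_frame
        return np.sum(diff * diff)               # sum-squared difference
    return min(candidates, key=score)

def coarse_to_fine(center, camera_frame, render_tanker, step=4.0, levels=3):
    """Shrink a local 3x3x3 search grid around the best match each level."""
    pose = np.asarray(center, dtype=np.float64)
    for _ in range(levels):
        offsets = [np.array([dx, dy, dz]) for dx in (-step, 0, step)
                                          for dy in (-step, 0, step)
                                          for dz in (-step, 0, step)]
        pose = best_relative_position([pose + o for o in offsets],
                                      camera_frame, render_tanker)
        step /= 2.0                              # halve the grid spacing
    return pose
```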

    Detecting Bias in Monte Carlo Renderers using Welch’s t-test

    Get PDF
    When checking the implementation of a new renderer, one usually compares its output to that of a reference implementation. However, such tests require a large number of samples to be reliable, and sometimes they are unable to reveal very subtle differences that are caused by bias but overshadowed by random noise. We propose using Welch’s t-test, a statistical test that reliably finds small bias even at low sample counts. Welch’s t-test is an established method in statistics to determine whether two sample sets have the same underlying mean, based on sample statistics. We adapt it to test whether two renderers converge to the same image, i.e., the same mean per pixel or pixel region. We also present two strategies for visualizing and analyzing the test’s results, assisting us in localizing especially problematic image regions and detecting biased implementations with high confidence at low sample counts, for both the reference and the tested implementation.
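    The per-pixel test itself is compact. A minimal sketch, assuming per-pixel sample means and variances have been accumulated from each renderer and using SciPy for the Student-t tail probability (the paper's exact aggregation and visualization strategies are not reproduced here):

```python
# Minimal sketch: per-pixel Welch's t-test between two renderers.
import numpy as np
from scipy import stats

def welch_bias_map(mean_a, var_a, n_a, mean_b, var_b, n_b, alpha=0.01):
    """Per-pixel Welch's t-test; returns p-values and a significance mask."""
    se2 = var_a / n_a + var_b / n_b                  # squared standard error
    t = (mean_a - mean_b) / np.sqrt(se2)
    # Welch-Satterthwaite effective degrees of freedom, per pixel
    df = se2 ** 2 / ((var_a / n_a) ** 2 / (n_a - 1)
                     + (var_b / n_b) ** 2 / (n_b - 1))
    p = 2.0 * stats.t.sf(np.abs(t), df)              # two-sided p-value
    return p, p < alpha                              # low p -> suspected bias
```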

    Collaborative rendering over peer-to-peer networks

    Get PDF
    Physically-based high-fidelity rendering pervades areas like engineering, architecture, archaeology and defence, amongst others. The computationally intensive algorithms required for such visualisation benefit greatly from added computational resources when exploiting parallelism. In scenarios where multiple users roam around the same virtual scene, and possibly interact with one another, complex visualisation of phenomena like global illumination is traditionally computed and duplicated at each and every client, or centralised and computed at a single very powerful server. In this paper, we introduce the concept of collaborative high-fidelity rendering over peer-to-peer networks, which aims to reduce redundant computation via collaboration in an environment where client machines are volatile and may join or leave the network at any time.
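    One illustrative way to assign shared rendering work, such as cached illumination regions, to volatile peers is consistent hashing, which keeps remapping small when peers join or leave. This is a generic sketch, not necessarily the paper's protocol; all names are hypothetical.

```python
# Illustrative sketch: consistent hashing of rendering work items to peers.
import bisect
import hashlib

def _h(key: str) -> int:
    """Stable integer hash for ring placement."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ConsistentRing:
    """Maps work items to volatile peers with minimal churn on join/leave."""
    def __init__(self, peers, replicas=64):
        # Each peer gets several virtual nodes to even out the distribution.
        self.ring = sorted((_h(f"{p}#{i}"), p) for p in peers
                           for i in range(replicas))
        self.keys = [k for k, _ in self.ring]

    def owner(self, work_item: str) -> str:
        idx = bisect.bisect(self.keys, _h(work_item)) % len(self.keys)
        return self.ring[idx][1]

ring = ConsistentRing(["peer-a", "peer-b", "peer-c"])
print(ring.owner("scene42/tile/17"))  # which peer computes or caches this item
```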