
    Tauray: A Scalable Real-Time Open-Source Path Tracer for Stereo and Light Field Displays

    Light field displays represent yet another step in continually increasing pixel counts. Rendering realistic real-time 3D content for them with ray tracing-based methods is a major challenge even with recent hardware acceleration features, as renderers have to scale to tens or hundreds of distinct viewpoints. To this end, we contribute an open-source, cross-platform real-time 3D renderer called Tauray. The primary focus of Tauray is on using photorealistic path tracing techniques to generate real-time content for multi-view displays, such as VR headsets and light field displays; this aspect is generally overlooked in existing renderers. Ray tracing hardware acceleration as well as multi-GPU rendering is supported. We compare Tauray to other open-source real-time path tracers, such as Lighthouse 2, and show that it can meet or significantly exceed their performance.
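The scaling challenge the abstract describes is easy to quantify: every viewpoint of a light field display is a full render. As a minimal, hypothetical sketch (not Tauray's actual API), the per-view camera origins a multi-view renderer must trace each frame can be laid out on a grid:

```python
# Hypothetical sketch: enumerate the camera origins for a grid of viewpoints.
# A light field display is modeled here as cols x rows eye positions sharing
# one look-at target; a stereo headset is simply the cols=2, rows=1 case.

def light_field_origins(center, spacing, cols, rows):
    """Return (x, y, z) camera origins on a cols x rows grid centered on
    `center`, with neighboring views separated by `spacing` (meters)."""
    cx, cy, cz = center
    origins = []
    for r in range(rows):
        for c in range(cols):
            x = cx + (c - (cols - 1) / 2.0) * spacing
            y = cy + (r - (rows - 1) / 2.0) * spacing
            origins.append((x, y, cz))
    return origins

# A 45-view horizontal-parallax display means 45 distinct renders per frame:
views = light_field_origins(center=(0.0, 1.6, 0.0), spacing=0.013, cols=45, rows=1)
```

The view count, not the per-view cost, is what dominates: the renderer's workload grows linearly with `cols * rows`, which is why multi-GPU support matters at tens to hundreds of viewpoints.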

    Contemporary Specular Denoising Algorithms in Real-Time Path Tracing

    Path tracing has become a standard technique in photorealistic rendering for movie production and other offline use cases. With the advent of hardware-accelerated ray tracing in recent years, path tracing has started to make its way into real-time rendering as well. Most new games released today incorporate some ray tracing-based effects, such as ray traced global illumination, reflections or shadows. However, real-time applications have a hard execution time constraint: typically, a single frame has to be rendered in under 1/60 of a second to achieve a consistent frame rate of 60 fps or more. This means that the number of rays that can be traced per frame is very limited, typically only a handful of rays per pixel, which results in very noisy images due to the stochastic nature of the path tracing algorithm. Denoising filters are used to reconstruct a clean image from a low-sample-count noisy input image. In this work, two contemporary denoising algorithms, Spatiotemporal Variance-Guided Filtering (SVGF) and Blockwise Multi-Order Feature Regression (BMFR), were reimplemented and evaluated in terms of visual quality and execution time using modern hardware with ray tracing support. The results in terms of visual quality for both algorithms were similar. The execution times for both algorithms were under 2 milliseconds at a resolution of 1280 × 720 in all tests, which is well within a reasonable budget for real-time rendering. However, in terms of objective quality, measured using Root Mean Squared Error (RMSE) and Structural Similarity Index Measure (SSIM), both algorithms performed worse than reported in the original papers. A likely cause for this gap is the difference in materials, which makes specular lighting more dominant in our test scenes. Based on the experiments, specular lighting causes major problems for both SVGF and BMFR. In particular, rough materials cause very high amounts of noise in the output, making it harder to denoise.
Temporal accumulation for specular lighting is also challenging, as the primary ray hit point does not carry information about the reflected surface. Several methods for computing believable specular reflections in real time have been published. However, most of them use a variety of hacks to reduce the amount of noise from specular reflections, trading physical accuracy for less noise. For example, specular reflections for surfaces with high roughness are typically not ray traced at all, and are instead approximated using techniques such as spherical harmonics or radiance caching. We conclude that a robust denoising algorithm capable of reconstructing specular illumination at production quality across surfaces of varying roughness has yet to be proposed. Based on the tests performed in this work and the state-of-the-art techniques currently used in production real-time rendering, further research into robust specular denoising algorithms would be beneficial.
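The temporal accumulation both SVGF and BMFR rely on before their spatial filters can be illustrated in a few lines. This is a minimal sketch of the shared idea, not the evaluated implementations: each pixel blends its new noisy sample into reprojected history with an exponential moving average, trading responsiveness (and potential ghosting) for variance reduction.

```python
# Minimal sketch of per-pixel temporal accumulation (not SVGF/BMFR itself).
# `history` is the accumulated RGB radiance, `sample` the new 1-spp estimate.

def accumulate(history, sample, alpha=0.2):
    """Exponential moving average; higher alpha = faster response to change
    but less noise suppression (alpha = 1.0 disables accumulation)."""
    return [h * (1.0 - alpha) + s * alpha for h, s in zip(history, sample)]

history = [0.0, 0.0, 0.0]             # start from empty (black) history
for _ in range(32):                   # 32 frames of a constant signal
    history = accumulate(history, [1.0, 0.5, 0.25])
# history converges toward the true radiance [1.0, 0.5, 0.25]
```

The specular failure mode described above follows directly: reprojection uses the primary hit's motion, but the reflected image moves differently, so the history being blended in is stale precisely where specular lighting dominates.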

    Adaptive Monte Carlo Localization in ROS

    The purpose of this work was to gain insight into the world of robot localization and to understand the characteristics of the algorithms widely used for this task. The algorithm chosen for inspection was Adaptive Monte Carlo Localization (AMCL), one of the most popular algorithms for robot localization. AMCL is a probabilistic algorithm that uses a particle filter to estimate the current location and orientation of the robot. The algorithm starts with an initial belief over the robot's pose, represented by particles distributed according to that belief. As the robot moves, the particles are propagated according to the robot's motion model. Data from the robot's odometry sensors and range finders are used to weight the particles by how likely the observed sensor readings are given each particle's pose. The particles are then resampled, with higher-weighted particles being more likely to survive. Eventually, the low-likelihood particles disappear and the algorithm converges towards a single cluster of high-likelihood particles; if localization is successful, this cluster lies near the true pose of the robot. The AMCL algorithm was tested using a randomly selected subset of the HouseExpo dataset, which contains 2D binary maps of indoor areas. The algorithm was tested in the specific case of global localization, where the initial particle filter estimate is a uniform distribution over the entire map. The tests were run using Gazebo for simulating the robot, ROS for communication and robot control, and MATLAB for running the AMCL algorithm and visualizing the particle filter. The localization process was repeated 10 times for each map, for a maximum of 100 particle filter updates per run. The runs were evaluated based on whether the AMCL algorithm converged near the true position of the robot and how many steps convergence took.
Overall, the AMCL algorithm performed quite well, converging near the true position of the robot in 878 out of 1000 test runs. For 53 of the 100 maps, the algorithm converged to the correct position in all 10 test runs. The number of rooms and the number of vertices per map do not appear to significantly affect whether the algorithm converges to the correct position. The test results indicate a slight increase in the average number of steps required until convergence as the number of rooms increases. Symmetric areas and a large number of similar-looking rooms seem to cause the most problems for the algorithm.
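The propagate, weight, resample loop described above can be sketched compactly. The following is a hypothetical 1D reduction (a robot on a line measuring its distance to a known wall), not the MATLAB/ROS implementation used in the tests; real AMCL works on a 2D pose with orientation and adapts the particle count with KLD sampling.

```python
# Hypothetical 1D Monte Carlo localization sketch: a robot at unknown x
# moves along a line and measures its range to a wall at a known position.
import math
import random

WALL = 10.0                          # known map feature: wall position

def expected_range(x):
    return WALL - x                  # ideal range-finder reading at pose x

def mcl_step(particles, motion, measurement, sensor_sigma=0.3):
    # 1. Propagate particles through the motion model (with process noise).
    particles = [p + motion + random.gauss(0.0, 0.05) for p in particles]
    # 2. Weight each particle by the likelihood of the sensor measurement.
    weights = [math.exp(-(expected_range(p) - measurement) ** 2
                        / (2.0 * sensor_sigma ** 2)) for p in particles]
    # 3. Resample: high-weight particles survive, low-weight ones vanish.
    return random.choices(particles, weights=weights, k=len(particles))

# Global localization: start with a uniform belief over the whole map.
random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
true_x = 2.0
for _ in range(20):                  # robot moves 0.2 units per step
    true_x += 0.2
    z = expected_range(true_x) + random.gauss(0.0, 0.1)
    particles = mcl_step(particles, 0.2, z)
estimate = sum(particles) / len(particles)   # cluster mean near the true pose
```

The 1D case also hints at the failure mode the tests found: in a symmetric map, several poses produce identical sensor readings, so multiple particle clusters receive equal weight and the filter cannot decide between them until the robot observes something asymmetric.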