
    Rendering Antialiased Shadows using Warped Variance Shadow Maps

    Shadows contribute significantly to the perceived realism of an image and provide an important depth cue. Rendering high-quality, antialiased shadows efficiently is a difficult problem. To antialias shadows, it is necessary to compute partial visibilities, but computing these visibilities using existing approaches is often too slow for interactive applications. Shadow maps are a widely used technique for real-time shadow rendering. One major drawback of shadow maps is aliasing, because the shadow map data cannot be filtered in the same way as colour textures. In this thesis, I present variance shadow maps (VSMs). Variance shadow maps use a linear representation of the depth distributions in the shadow map, which enables the use of standard linear texture filtering algorithms. Thus, VSMs can address the problem of shadow aliasing using the same highly tuned mechanisms that are available for colour images. Given the mean and variance of the depth distribution, Chebyshev's inequality provides an upper bound on the fraction of a shaded fragment that is occluded, and I show that this bound often provides a good approximation to the true partial occlusion. For more difficult cases, I show that warping the depth distribution can produce multiple bounds, some tighter than others. Based on this insight, I present layered variance shadow maps, a scalable generalization of variance shadow maps that partitions the depth distribution into multiple segments. This reduces or eliminates "light bleeding", an artifact that can appear when using the simpler version of variance shadow maps. Additionally, I demonstrate exponential variance shadow maps, which combine moments computed from two exponentially warped depth distributions. Using this approach, high-quality results are produced at a fraction of the storage cost of layered variance shadow maps. These algorithms are easy to implement on current graphics hardware and provide efficient, scalable solutions to the problem of shadow map aliasing.
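
    A minimal sketch of the Chebyshev step described above; the function name, the variance clamp, and the example values are illustrative assumptions rather than details taken from the thesis. Given the filtered first and second moments of the depth distribution, the one-tailed Chebyshev (Cantelli) inequality bounds the probability that a stored depth lies beyond the receiver's depth.

    def chebyshev_upper_bound(mean, mean_sq, receiver_depth, min_variance=1e-4):
        """One-tailed Chebyshev (Cantelli) bound on P(d >= receiver_depth),
        given the filtered moments E[d] and E[d^2] from the shadow map."""
        variance = max(mean_sq - mean * mean, min_variance)  # clamp for numerical robustness
        if receiver_depth <= mean:
            return 1.0  # the bound is only informative for receiver_depth > mean
        diff = receiver_depth - mean
        return variance / (variance + diff * diff)

    # Example: moments filtered from neighbouring shadow-map texels, fragment at depth 0.5.
    p_max = chebyshev_upper_bound(mean=0.45, mean_sq=0.21, receiver_depth=0.5)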

    Fourier Analysis of Correlated Monte Carlo Importance Sampling

    Fourier analysis is gaining popularity in image synthesis as a tool for the analysis of error in Monte Carlo (MC) integration. Still, existing tools are only able to analyze convergence under simplifying assumptions (such as randomized shifts) that are not applied in practice during rendering. We reformulate the expressions for bias and variance of sampling-based integrators to unify non-uniform sample distributions (importance sampling) as well as correlations between samples while respecting finite sampling domains. Our unified formulation hints at fundamental limitations of Fourier-based tools in performing variance analysis for MC integration. This non-trivial exercise also provides insight into the effects of importance sampling on the convergence rate of estimators through the introduction or removal of discontinuities. Specifically, we demonstrate that the convergence of multiple importance sampling (MIS) is determined by the strategy that converges slowest. We propose two simple and practical approaches to limit the impact of discontinuities on the convergence rate of estimators. The first involves mirroring the integrand to cancel out the effect of boundary discontinuities, followed by two novel mirror sampling techniques for MC estimation in the mirrored domain. The second improves direct illumination light sampling by smoothing out discontinuities within the domain at the cost of introducing a small amount of bias. Our approaches are simple, practical and can be easily incorporated in production renderers.
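
    To make the mirroring idea concrete, here is a minimal sketch under stated assumptions; it is not the paper's mirror sampling estimators, and the jittered sampler and function names are illustrative. The integrand on [0, 1] is reflected about x = 1 and integrated with a correlated (jittered) sampler on [0, 2], so the sampler's periodic extension no longer sees a discontinuity at the domain boundary.

    import numpy as np

    def jittered_samples(n, rng):
        """One uniformly jittered sample per stratum on [0, 1)."""
        return (np.arange(n) + rng.random(n)) / n

    def mirrored_estimate(f, n, rng):
        """Estimate the integral of f over [0, 1] using the mirrored domain [0, 2]."""
        u = 2.0 * jittered_samples(n, rng)   # correlated samples on [0, 2)
        x = np.where(u <= 1.0, u, 2.0 - u)   # reflect about x = 1 back into [0, 1]
        return np.mean(f(x))                 # mean of the reflected integrand over [0, 2]

    rng = np.random.default_rng(0)
    print(mirrored_estimate(lambda x: x * x, 64, rng))  # integral of x^2 over [0, 1] is 1/3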

    Motion Segmentation Aided Super Resolution Image Reconstruction

    This dissertation addresses Super Resolution (SR) Image Reconstruction with a focus on motion segmentation. The main thrust is Information Complexity guided Gaussian Mixture Models (GMMs) for Statistical Background Modeling. In developing our framework we also address two other topics: motion trajectory estimation for global and local scene change detection, and image reconstruction to obtain high resolution (HR) representations of the moving regions. Such a framework is used for dynamic scene understanding and for recognition of individuals and threats from image sequences recorded with either stationary or non-stationary camera systems. We introduce a new technique called Information Complexity guided Statistical Background Modeling, which employs GMMs that are optimal with respect to information complexity criteria. Moving objects are segmented out through background subtraction using the computed background model. This technique produces superior results to competing background modeling strategies. State-of-the-art SR Image Reconstruction studies combine the information from a set of only slightly different low resolution (LR) images of a static scene to construct an HR representation. The crucial challenge not handled in these studies is accumulating the corresponding information from highly displaced moving objects. To address this, a framework for SR Image Reconstruction of moving objects with such large displacements is developed. Our assumption is that the LR images differ from each other due to local motion of the objects and the global motion of the scene imposed by a non-stationary imaging system. In contrast to traditional SR approaches, we employ several steps: suppression of the global motion; motion segmentation, accompanied by background subtraction, to extract the moving objects; suppression of the local motion of the segmented regions; and super-resolving the accumulated information coming from the moving objects rather than the whole scene. This results in a reliable offline SR Image Reconstruction tool which handles several types of dynamic scene changes, compensates for the effects of the camera system, and provides data redundancy by removing the background. The framework proves superior to state-of-the-art algorithms, which make no significant effort toward dynamic scene representation for non-stationary camera systems.
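
    The following is a rough sketch of information-criterion-guided background modeling and subtraction: scikit-learn's BIC is used as a stand-in for the dissertation's information complexity criterion, and the threshold and helper names are hypothetical.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_background_model(samples, max_components=5, seed=0):
        """Fit a GMM to one pixel's intensity history, selecting the number of
        components by an information criterion (BIC as a stand-in here)."""
        X = np.asarray(samples, dtype=float).reshape(-1, 1)
        best, best_bic = None, np.inf
        for k in range(1, max_components + 1):
            gmm = GaussianMixture(n_components=k, random_state=seed).fit(X)
            bic = gmm.bic(X)
            if bic < best_bic:
                best, best_bic = gmm, bic
        return best

    def is_foreground(model, value, log_likelihood_threshold=-8.0):
        """Flag a pixel as foreground if it is unlikely under the background model."""
        return model.score_samples([[float(value)]])[0] < log_likelihood_threshold

    # Hypothetical intensity history of one pixel over a training sequence.
    rng = np.random.default_rng(0)
    history = np.concatenate([rng.normal(50, 3, 200), rng.normal(120, 5, 50)])
    background = fit_background_model(history)
    print(is_foreground(background, 52), is_foreground(background, 220))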