
    GENERATION OF FORESTS ON TERRAIN WITH DYNAMIC LIGHTING AND SHADOWING

    The purpose of this research project is to exhibit an efficient method of creating dynamic lighting and shadowing for the generation of forests on terrain. In this research project, I use textures containing bird's-eye-view images of trees to create a large-scale forest. Furthermore, by manipulating the transparency and color of the textures according to the algorithmic calculations of light and shadow on the terrain, I provide the functionality of dynamic lighting and shadowing. Finally, by analyzing the OpenGL pipeline, I design my code to allow efficient rendering of the forest.
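
    The abstract gives no code, so the following is only a minimal sketch of the idea of modulating a bird's-eye-view tree texture by a per-tree light/shadow term computed from the terrain. The function names (tree_tint, terrain_normal_at, in_shadow) and the shading constants are hypothetical, not the author's implementation.

        import numpy as np

        def tree_tint(position, sun_dir, terrain_normal_at, in_shadow,
                      ambient=0.35, shade_alpha=0.85):
            """Return an RGBA multiplier for one tree texture/billboard."""
            n = terrain_normal_at(position)                 # terrain normal under the tree
            diffuse = max(float(np.dot(n, -sun_dir)), 0.0)  # Lambertian light term
            lit = 0.0 if in_shadow(position, sun_dir) else diffuse
            intensity = ambient + (1.0 - ambient) * lit     # keep shadowed trees from going black
            alpha = 1.0 if lit > 0.0 else shade_alpha       # slightly fade fully shadowed trees
            return np.array([intensity, intensity, intensity, alpha])

        # Toy usage: flat terrain, everything with x < 0 considered shadowed.
        sun = np.array([0.3, -1.0, 0.2])
        sun = sun / np.linalg.norm(sun)
        tint = tree_tint(np.array([10.0, 0.0, 5.0]), sun,
                         terrain_normal_at=lambda p: np.array([0.0, 1.0, 0.0]),
                         in_shadow=lambda p, d: p[0] < 0.0)
        print(tint)  # multiplied into the tree texture's color/alpha at draw time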

    Joint Material and Illumination Estimation from Photo Sets in the Wild

    Faithful manipulation of shape, material, and illumination in 2D Internet images would greatly benefit from a reliable factorization of appearance into material (i.e., diffuse and specular) and illumination (i.e., environment maps). On the one hand, current methods that produce very high-fidelity results typically require controlled settings, expensive devices, or significant manual effort. On the other hand, methods that are automatic and work on 'in the wild' Internet images often extract only low-frequency lighting or diffuse materials. In this work, we propose to use a set of photographs to jointly estimate the non-diffuse materials and sharp lighting in an uncontrolled setting. Our key observation is that seeing multiple instances of the same material under different illumination (i.e., environment), and different materials under the same illumination, provides valuable constraints that can be exploited to yield a high-quality solution (i.e., specular materials and environment illumination) for all the observed materials and environments. Similar constraints also arise when observing multiple materials in a single environment, or a single material across multiple environments. The core of this approach is an optimization procedure that uses two neural networks, trained on synthetic images, to predict good gradients in parametric space given observations of reflected light. We evaluate our method on a range of synthetic and real examples to generate high-quality estimates, qualitatively compare our results against state-of-the-art alternatives via a user study, and demonstrate photo-consistent image manipulation that is otherwise very challenging to achieve.
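
    Purely as a rough, hypothetical sketch of the alternating optimization described above (shared material and illumination parameters updated from network-predicted descent directions), the loop below uses placeholder callables in place of the paper's trained networks and a trivial diffuse-only "renderer"; none of the names or the rendering model come from the paper.

        import numpy as np

        def optimize(observations, render, material_net, illum_net,
                     materials, illums, steps=200, lr=0.05):
            """observations: list of (material_id, illum_id, observed_pixels)."""
            for _ in range(steps):
                for m_id, e_id, obs in observations:
                    residual = render(materials[m_id], illums[e_id]) - obs
                    # The predictors supply descent directions in parameter space;
                    # sharing materials/illums across photos couples all observations.
                    materials[m_id] = materials[m_id] - lr * material_net(residual, materials[m_id])
                    illums[e_id] = illums[e_id] - lr * illum_net(residual, illums[e_id])
            return materials, illums

        # Toy usage: one material seen under one environment, elementwise "renderer",
        # and identity stand-ins for the gradient-predicting networks.
        true_m, true_e = np.array([0.8, 0.4, 0.2]), np.array([1.5, 1.0, 0.5])
        mats = {0: np.full(3, 0.5)}
        envs = {0: np.full(3, 1.0)}
        mats, envs = optimize([(0, 0, true_m * true_e)], lambda m, e: m * e,
                              lambda r, m: r, lambda r, e: r, mats, envs)
        print(mats[0] * envs[0], "target:", true_m * true_e)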

    Master of Science

    Virtual point lights (VPLs) provide an effective solution to global illumination computation by converting the indirect illumination into direct illumination from many virtual light sources. This approach results in a less noisy image compared to Monte Carlo methods. In addition, the number of VPLs to generate can be specified in advance; therefore, it can be adjusted depending on the scene, desired quality, time budget, and available computational power. In this thesis, we investigate a new technique that carefully places VPLs to improve the quality of global illumination rendered with VPLs. Our method consists of three passes. In the first pass, we randomly generate a large number of VPLs in the scene, starting from the camera, so that they are placed in positions that can contribute to the final rendered image. We then remove a considerable number of these VPLs using a Poisson disk sample elimination method to obtain a subset of VPLs that are uniformly distributed over the part of the scene that is indirectly visible to the camera. The second pass estimates the radiant intensity of these VPLs by performing light tracing from the original light sources in the scene and scattering the radiance of light rays at each hit point to the VPLs close to that point. The final pass renders the scene, shading all points visible to the camera using both the original light sources and the VPLs.
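
    As an illustration of this three-pass structure only, the toy sketch below seeds VPL candidates, thins them with a greedy crowded-point removal (a much-simplified stand-in for Poisson disk sample elimination), splats light-tracing hit energy onto nearby VPLs, and gathers their contributions at shading points. All helper names, the falloff, and the random toy data are hypothetical, not the thesis implementation.

        import numpy as np

        def thin_samples(points, keep):
            """Greedily drop the most crowded candidate until `keep` remain."""
            pts = list(points)
            while len(pts) > keep:
                arr = np.array(pts)
                d = np.linalg.norm(arr[:, None] - arr[None, :], axis=-1)
                np.fill_diagonal(d, np.inf)
                pts.pop(int(d.min(axis=1).argmin()))  # remove the tightest-packed point
            return np.array(pts)

        def distribute_radiance(vpl_pos, hit_points, hit_energy, radius=1.0):
            """Pass 2: splat light-tracing hit energy onto nearby VPLs."""
            intensity = np.zeros(len(vpl_pos))
            for p, e in zip(hit_points, hit_energy):
                near = np.linalg.norm(vpl_pos - p, axis=1) < radius
                if near.any():
                    intensity[near] += e / near.sum()
            return intensity

        def shade(shading_points, vpl_pos, vpl_intensity):
            """Pass 3: gather VPL contributions with an inverse-square falloff."""
            out = np.zeros(len(shading_points))
            for i, x in enumerate(shading_points):
                r2 = np.sum((vpl_pos - x) ** 2, axis=1) + 1e-4
                out[i] = np.sum(vpl_intensity / r2)
            return out

        rng = np.random.default_rng(1)
        candidates = rng.uniform(0, 10, size=(200, 3))  # pass 1: camera-seeded VPL candidates
        vpls = thin_samples(candidates, keep=40)         # simplified Poisson-disk-style thinning
        hits = rng.uniform(0, 10, size=(500, 3))         # light-tracing hit points
        intensity = distribute_radiance(vpls, hits, np.full(500, 1.0 / 500))
        print(shade(rng.uniform(0, 10, size=(5, 3)), vpls, intensity))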