
    GENERATION OF FORESTS ON TERRAIN WITH DYNAMIC LIGHTING AND SHADOWING

    The purpose of this research project is to exhibit an efficient method of creating dynamic lighting and shadowing for the generation of forests on terrain. In this project, I use textures containing bird's-eye-view images of trees to create a large-scale forest. Furthermore, by manipulating the transparency and color of the textures according to algorithmic calculations of light and shadow on the terrain, I provide dynamic lighting and shadowing. Finally, by analyzing the OpenGL pipeline, I design my code to allow efficient rendering of the forest.
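
    The abstract gives no implementation details, but the color/transparency modulation it describes could look like the following minimal numpy sketch; all names and blend constants here are hypothetical, not taken from the project:

```python
import numpy as np

def shade_tree_billboards(base_rgba, light_factors, shadow_strength=0.55):
    """Darken and fade tree-billboard tints by per-billboard light factors.

    base_rgba     : (N, 4) float array, RGBA tint of each billboard texture.
    light_factors : (N,) floats in [0, 1]; 0 = fully shadowed, 1 = fully lit
                    (as computed by some light/shadow pass over the terrain).
    """
    rgba = base_rgba.copy()
    # Scale RGB toward a darker tone as the light factor drops.
    darkening = shadow_strength + (1.0 - shadow_strength) * light_factors[:, None]
    rgba[:, :3] *= darkening
    # Slightly fade heavily shadowed billboards through alpha as well.
    rgba[:, 3] *= 0.7 + 0.3 * light_factors
    return rgba
```

    In a real OpenGL implementation this modulation would live in a fragment shader rather than on the CPU; the sketch only shows the arithmetic.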

    A survey of real-time crowd rendering

    In this survey we review, classify, and compare existing approaches for real-time crowd rendering. We first give an overview of character animation techniques, as they are tightly tied to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
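
    A common runtime LoD-selection criterion in this literature is projected screen-space size. The sketch below illustrates that criterion under a pinhole-camera model; the thresholds and representation names are hypothetical, not taken from the survey:

```python
import math

# Hypothetical thresholds: projected character height in pixels above which
# each representation is used, from most to least detailed.
LOD_LEVELS = [(200.0, "full_mesh"),
              (60.0, "low_poly"),
              (15.0, "impostor"),
              (0.0, "point_sample")]

def select_lod(char_height_m, distance_m, fov_y_rad, viewport_h_px):
    """Pick a crowd-character representation from its projected screen height."""
    # Pinhole projection: on-screen height of an object of height h at distance d.
    projected_px = viewport_h_px * char_height_m / (
        2.0 * distance_m * math.tan(fov_y_rad / 2.0))
    for min_px, representation in LOD_LEVELS:
        if projected_px >= min_px:
            return representation
    return LOD_LEVELS[-1][1]

# A 1.8 m character 40 m away, 60-degree vertical FOV, 1080-pixel viewport:
print(select_lod(1.8, 40.0, math.radians(60.0), 1080))  # "impostor"
```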

    Selective rendering for efficient ray traced stereoscopic images

    Depth-related visual effects are a key feature of many virtual environments. In stereo-based systems, the depth effect can be produced by delivering frames of disparate image pairs, while in monocular environments the viewer has to extract this depth information from a single image by examining details such as perspective and shadows. This paper investigates, via a number of psychophysical experiments, whether we can reduce computational effort and still achieve perceptually high-quality rendering for stereo imagery. We examined selectively rendering the image pairs by exploiting the fusing capability and depth perception underlying human stereo vision. In ray-tracing-based global illumination systems, a higher image resolution introduces more computation to the rendering process, since many more rays need to be traced. We first investigated whether we could exploit human binocular fusion to significantly reduce the resolution of one image of the stereo pair and yet retain high perceptual quality under stereo viewing conditions. Secondly, we evaluated subjects' performance on a specific visual task that required accurate depth perception. We found that subjects required far fewer rendered depth cues in the stereo viewing environment to perform the task well, and avoiding rendering these detailed cues saved significant computational time. In fact, it was possible to achieve better task performance in the stereo viewing condition at a combined rendering time for the image pair that was less than the time required for the single monocular image. The outcome of this study suggests that we can produce more efficient stereo images for depth-related visual tasks by selective rendering that exploits inherent features of human stereo vision.
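
    The saving the abstract refers to is easy to quantify: ray count scales with pixel count, so rendering one image of the pair at reduced resolution cuts the combined ray budget. A back-of-envelope sketch (illustrative arithmetic only, not the paper's code):

```python
def stereo_ray_budget(width, height, samples_per_pixel, low_res_scale=0.5):
    """Compare ray counts for a full-resolution stereo pair versus a pair
    with one image rendered at reduced resolution (both axes scaled)."""
    rays_full_eye = width * height * samples_per_pixel
    rays_low_eye = (int(width * low_res_scale) * int(height * low_res_scale)
                    * samples_per_pixel)
    full_pair = 2 * rays_full_eye
    mixed_pair = rays_full_eye + rays_low_eye
    return full_pair, mixed_pair, 1.0 - mixed_pair / full_pair

# Halving one eye's resolution in both axes traces a quarter of its rays,
# saving 37.5% of the pair's total: (6291456, 3932160, 0.375)
print(stereo_ray_budget(1024, 768, 4))
```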

    Approaching Visual Search in Photo-Realistic Scenes

    Visual search is extended from the domain of polygonal figures presented on a uniform background to scenes in which search is for a photo-realistic object in a dense, naturalistic background. Scene generation for these displays relies on a powerful solid modeling program to define the three-dimensional forms, surface properties, relative positions, and illumination of the objects, and a rendering program to produce an image. Search in the presented experiments is for a rock with specific properties among other, similar rocks, although the method described can be generalized to other situations. Using this technique we explore the effects of illumination and shadows in aiding search for a rock in front of and closer to the viewer than other rocks in the scene. For these scenes, shadows of two different contrast levels can significantly decrease reaction times for displays in which target rocks are similar to distractor rocks. However, when the target rock is itself easily distinguishable from distractors on the basis of form, the presence or absence of shadows has no discernible effect. To relate our findings to those for earlier polygonal displays, we simplified the non-shadow displays so that only boundary information remained. For these simpler displays, search slopes (the increase in reaction time as a function of the number of distractors) were significantly shallower, indicating that the more complex photo-realistic objects require more time to process during visual search. In contrast with several previous experiments involving polygonal figures, we found no evidence for an effect of illumination direction on search times.
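
    The "search slope" used here is simply the slope of a linear fit of reaction time against set size. A small sketch with hypothetical data (the paper's actual measurements are not reproduced):

```python
import numpy as np

def search_slope(set_sizes, mean_rts_ms):
    """Fit RT = slope * set_size + intercept; the slope (ms per item)
    indexes search efficiency and is compared across display conditions."""
    slope, intercept = np.polyfit(set_sizes, mean_rts_ms, deg=1)
    return slope, intercept

# Hypothetical mean reaction times at set sizes 4, 8, and 16:
print(search_slope([4, 8, 16], [620.0, 710.0, 905.0]))  # slope ~23.8 ms/item
```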

    Dynamic Illumination for Augmented Reality with Real-Time Interaction

    Current augmented and mixed reality systems suffer from a lack of correct illumination modeling, in which the virtual objects would be rendered under the same lighting conditions as the real environment. While the entertainment industry achieves astonishing results across multiple media forms, the procedure is mostly accomplished offline. In our approach, the illumination information extracted from the physical scene is used to interactively render the virtual objects, which results in more realistic output in real time. In this paper, we present a method that detects the physical illumination of a dynamic scene and then uses the extracted illumination to render the virtual objects added to the scene. The method has three steps that are assumed to work concurrently in real time. The first is the estimation of the direct illumination (incident light) from the physical scene, using computer vision techniques on a 360° live feed from a camera connected to the AR device. The second is the simulation of indirect illumination (light reflected from real-world surfaces onto the virtual objects), using region capture of 2D textures from the AR camera view. The third is rendering the virtual objects with proper lighting and shadowing characteristics, using a shader language in multiple passes. Finally, we tested our work under multiple lighting conditions to evaluate the accuracy of the results, based on whether the shadows cast by the virtual objects are consistent with the shadows cast by the real objects, at a reduced performance cost.
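
    The paper does not publish its estimation code, but the first step, locating the direct light in a 360° capture, can be approximated by finding the brightest region of the equirectangular frame. A crude numpy sketch under that assumption:

```python
import numpy as np

def dominant_light_direction(env_map):
    """Estimate a direct-light direction from an equirectangular capture.

    env_map : (H, W, 3) float RGB frame from the 360-degree live-feed camera.
    Returns a unit vector toward the brightest pixel (y is up).
    """
    luminance = env_map @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 weights
    row, col = np.unravel_index(np.argmax(luminance), luminance.shape)
    h, w = luminance.shape
    theta = np.pi * (row + 0.5) / h        # polar angle: 0 at the zenith
    phi = 2.0 * np.pi * (col + 0.5) / w    # azimuth around the camera
    return np.array([np.sin(theta) * np.cos(phi),
                     np.cos(theta),
                     np.sin(theta) * np.sin(phi)])
```

    A production system would average over the brightest cluster and track it across frames rather than trusting a single pixel, but the mapping from image coordinates to a light direction is the same.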

    Photo-Realistic Scenes with Cast Shadows Show No Above/Below Search Asymmetries for Illumination Direction

    Visual search is extended from the domain of polygonal figures presented on a uniform field to photo-realistic scenes containing target objects in dense, naturalistic backgrounds. The target in a trial is a computer-rendered rock protruding in depth from a "wall" of rocks of roughly similar size but different shapes. Subjects responded "present" when one rock appeared closer than the rest, owing to occlusions or cast shadows, and "absent" when all rocks appeared to be at the same depth. Results showed that cast shadows can significantly decrease reaction times compared to scenes with no cast shadows, in which the target was revealed only by occlusions of the rocks behind it. A control experiment showed that cast shadows can be utilized even in displays involving rocks of several achromatic surface colors (dark through light), in which the shadow cast by the target rock was not the darkest region in the scene. Finally, in contrast with reports of experiments by others involving polygonal figures, we found no evidence for an effect of illumination direction (above vs. below) on search times.
    Office of Naval Research (N00014-94-1-0597, N00014-95-1-0409)

    3D simulation of complex shading affecting PV systems taking benefit from the power of graphics cards developed for the video game industry

    Shading reduces the power output of a photovoltaic (PV) system. The design engineering of PV systems requires modeling and evaluating shading losses. Some PV systems are affected by complex shading scenes whose resulting PV energy losses are very difficult to evaluate with current modeling tools. Several specialized PV design and simulation packages include the ability to evaluate shading losses; they generally provide a Graphical User Interface (GUI) through which the user can draw a 3D shading scene and then evaluate its corresponding PV energy losses, but the complexity of the objects these tools can handle is relatively limited. We have created a software solution, 3DPV, which allows evaluating the energy losses induced by complex 3D scenes on PV generators. The 3D objects can be imported from specialized 3D modeling software or from a 3D object library. The shadows cast by this 3D scene on the PV generator are then evaluated directly on the Graphics Processing Unit (GPU). Thanks to the recent development of GPUs for the video game industry, the shadows can be evaluated at a very high spatial resolution, well beyond the PV cell level, in very short calculation times. A PV simulation model then translates the geometrical shading into PV energy output losses. 3DPV has been implemented using WebGL, which allows it to run directly in a Web browser without requiring any local installation. This also makes it possible to take full advantage of information already available on the Internet, such as 3D object libraries. This contribution describes, step by step, the method that allows 3DPV to evaluate the PV energy losses caused by complex shading. We then illustrate the methodology with several application cases encountered in the world of PV system design.
    Comment: 5 pages, 9 figures, conference proceedings, 29th European Photovoltaic Solar Energy Conference and Exhibition, Amsterdam, 2014
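
    The final translation step, from geometric shading to electrical loss, is nonlinear because a shaded cell limits the current of its entire series substring. The sketch below is a deliberately simplified stand-in for that step (an assumption for illustration, not 3DPV's actual PV model):

```python
def pv_loss_from_cell_shading(cell_shading, bypass_groups=3):
    """Estimate fractional power loss from per-cell shading fractions.

    cell_shading  : floats in [0, 1], shaded fraction of each cell
                    (as produced by a GPU shadow evaluation).
    bypass_groups : number of bypass-diode substrings in the module.

    Simplification: each substring's current is limited by its most
    shaded cell, and module power is the mean of substring outputs.
    """
    n = len(cell_shading)
    group = max(1, n // bypass_groups)
    outputs = []
    for i in range(0, n, group):
        worst = max(cell_shading[i:i + group])
        outputs.append(1.0 - worst)           # relative substring current
    return 1.0 - sum(outputs) / len(outputs)  # fractional power loss

# A single cell 80% shaded in a 60-cell module costs far more than 0.8/60:
shading = [0.0] * 60
shading[10] = 0.8
print(pv_loss_from_cell_shading(shading))  # ~0.267
```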