
    Generation and Rendering of Interactive Ground Vegetation for Real-Time Testing and Validation of Computer Vision Algorithms

    During the development of new algorithms for computer vision applications, testing and evaluation in real outdoor environments is time-consuming and often difficult to realize. The use of artificial testing environments is thus a flexible and cost-efficient alternative. As a result, the development of new techniques for simulating natural, dynamic environments is essential for real-time virtual reality applications, commonly known as Virtual Testbeds. Since the first basic use of Virtual Testbeds several years ago, the image quality of virtual environments has almost reached a level close to photorealism, even in real time, owing to new rendering approaches and the increasing processing power of current graphics hardware. Virtual Testbeds can therefore now be applied in application areas, such as computer vision, that strongly rely on realistic scene representations. The realistic rendering of natural outdoor scenes has become increasingly important in many application areas, but computer-simulated scenes often differ considerably from real-world environments, especially regarding interactive ground vegetation. In this article, we introduce a novel ground-vegetation rendering approach that is capable of generating large scenes with realistic appearance and excellent performance. Our approach features wind animation as well as object-to-grass interaction, and delivers realistic-looking grass and shrubs at all distances and from all viewing angles. This greatly improves immersion as well as acceptance, especially in virtual training applications. The rendered results also fulfill important requirements from the computer vision perspective, such as a plausible geometric representation of the vegetation and its consistency throughout the entire simulation. Feature detection and matching algorithms are applied to our approach in mobile-robot localization scenarios in natural outdoor environments. We show how the quality of computer vision algorithms is influenced by highly detailed, dynamic environments, as observed in unstructured, real-world outdoor scenes with wind and object-to-vegetation interaction.
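
    The abstract does not give implementation details, but the two dynamic effects it names, wind sway and object-to-grass interaction, are commonly realized as per-vertex displacements. Below is a minimal, hypothetical Python/NumPy sketch of that general idea; the function, its parameters, and the spherical-obstacle model are our own assumptions, not the authors' method.

        import numpy as np

        def displace_grass_vertex(pos, root, t, wind_dir, obstacle_pos,
                                  obstacle_radius, sway_amp=0.08, sway_freq=1.5):
            # Hypothetical sketch: displace one grass-blade vertex (y-up)
            # for wind sway and object interaction; not the paper's shader.
            height = pos[1] - root[1]          # bend more toward the blade tip
            # Cheap per-blade phase from the root position, so the field
            # does not sway in lockstep.
            phase = root[0] * 1.7 + root[2] * 2.3
            sway = sway_amp * height * np.sin(sway_freq * t + phase)
            displaced = pos + sway * wind_dir

            # Object-to-grass interaction: push the vertex radially away
            # from a spherical obstacle; tips yield more than roots.
            to_vertex = displaced - obstacle_pos
            dist = np.linalg.norm(to_vertex)
            if 1e-6 < dist < obstacle_radius:
                push = (obstacle_radius - dist) * height
                displaced += push * (to_vertex / dist)
            return displaced

    In a real renderer this computation would live in a vertex shader and run per frame; the NumPy form is only meant to make the displacement logic explicit.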

    Interactive Vegetation Rendering with Slicing and Blending

    Detailed and interactive 3D rendering of vegetation is one of the challenges of traditional polygon-oriented computer graphics, due to the large geometric complexity of even simple plants. In this paper we introduce a simplified image-based rendering approach based solely on alpha-blended textured polygons. The simplification is based on the limitations of human perception of complex geometry. Our approach renders dozens of detailed trees in real time with off-the-shelf hardware, while providing significantly improved image quality over existing real-time techniques. The method uses ordinary mesh-based rendering for the solid parts of a tree, its trunk and limbs. The sparse parts of a tree, its twigs and leaves, are instead represented with a set of slices, an image-based representation. A slice is a planar layer, represented with an ordinary alpha or color-keyed texture; a set of parallel slices is a slicing. Rendering from an arbitrary viewpoint in a 360-degree circle around the center of a tree is achieved by blending between the nearest two slicings. In our implementation, only 6 slicings with 5 slices each are sufficient to visualize a tree, for a moving or stationary observer, with perceptually similar quality to the original model.
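
    The view-dependent blending described above can be made concrete with a small amount of index arithmetic. The following Python sketch picks the two slicings nearest the camera azimuth and computes their blend weights, assuming the slicings are evenly spaced over 360 degrees; the function name and the even-spacing assumption are ours, not the paper's code.

        def select_slicings(view_azimuth_deg, num_slicings=6):
            # Illustrative sketch of blending between the two nearest
            # slicings; assumes evenly spaced slicings over 360 degrees.
            step = 360.0 / num_slicings          # 60 degrees for 6 slicings
            a = view_azimuth_deg % 360.0
            lower = int(a // step)               # slicing at or below the azimuth
            upper = (lower + 1) % num_slicings   # next slicing, wrapping at 360
            frac = (a - lower * step) / step     # position between the two
            # The nearer slicing receives the larger alpha-blend weight.
            return (lower, 1.0 - frac), (upper, frac)

        # A camera at 100 degrees falls between slicings 1 (60 deg) and
        # 2 (120 deg), and the nearer one is weighted more heavily:
        print(select_slicings(100.0))   # ((1, 0.333...), (2, 0.666...))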

    Reproducing the Chinese Ancient Heroic Figure “Xiang Yu” Using 3D Graphics Technology

    The ancient country of China has a long and elaborate history, with many fascinating stories and figures. For people who want to learn about Chinese history, it can often seem overwhelming. I plan to find an easy and interesting approach for people who are interested in learning about it. An effective way to communicate this could be enhanced visuals depicting the rich history of China. Out of this long history, I decided to reproduce an ancient Chinese heroic figure with 3D graphics technology. I will achieve this by using 3D modeling technology, together with nCloth, particles, fur, lighting, and rendering, to create an ancient Chinese hero and a historic environment. I will create 3D animation and use motion graphics to make the opening and closing pieces for the historical scenes. The resulting scene will depict a realistic historical story with two main characters: the hero and his wife.

    Virtual Rephotography: Novel View Prediction Error for 3D Reconstruction

    The ultimate goal of many image-based modeling systems is to render photo-realistic novel views of a scene without visible artifacts. Existing evaluation metrics and benchmarks focus mainly on the geometric accuracy of the reconstructed model, which is, however, a poor predictor of visual accuracy. Furthermore, using only geometric accuracy by itself does not allow evaluating systems that either lack a geometric scene representation or utilize coarse proxy geometry. Examples include light field or image-based rendering systems. We propose a unified evaluation approach based on novel view prediction error that is able to analyze the visual quality of any method that can render novel views from input images. One of the key advantages of this approach is that it does not require ground truth geometry. This dramatically simplifies the creation of test datasets and benchmarks. It also allows us to evaluate the quality of an unknown scene during the acquisition and reconstruction process, which is useful for acquisition planning. We evaluate our approach on a range of methods including standard geometry-plus-texture pipelines as well as image-based rendering techniques, compare it to existing geometry-based benchmarks, and demonstrate its utility for a range of use cases.
    Comment: 10 pages, 12 figures, paper was submitted to ACM Transactions on Graphics for review
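
    The evaluation idea reduces to a simple loop: hold out an input photo, render its view from the reconstruction, and score the difference with an image metric. The sketch below shows that scoring step in Python/NumPy with a plain per-pixel L1 error; the function names, the mask convention, and the choice of metric are illustrative assumptions (the paper evaluates proper image-comparison metrics), not the authors' code.

        import numpy as np

        def view_prediction_error(rendered, photo, mask=None):
            # Mean per-pixel error between a rendered novel view and the
            # held-out photograph (H x W x 3 arrays). Minimal sketch only.
            rendered = rendered.astype(np.float64)
            photo = photo.astype(np.float64)
            err = np.abs(rendered - photo).mean(axis=-1)   # L1 over channels
            if mask is not None:
                err = err[mask]      # ignore pixels the method cannot render
            return float(err.mean())

        # Usage (render_at is a hypothetical renderer for the reconstruction):
        # score = view_prediction_error(render_at(held_out_camera), held_out_image)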

    Joint Learning of Intrinsic Images and Semantic Segmentation

    Semantic segmentation of outdoor scenes is problematic when there are variations in imaging conditions. It is known that albedo (reflectance) is invariant to all kinds of illumination effects. Thus, using reflectance images for the semantic segmentation task can be favorable. Additionally, not only may segmentation benefit from reflectance, but segmentation may also be useful for reflectance computation. Therefore, in this paper, the tasks of semantic segmentation and intrinsic image decomposition are considered as a combined process by exploring their mutual relationship in a joint fashion. To that end, we propose a supervised end-to-end CNN architecture to jointly learn intrinsic image decomposition and semantic segmentation. We analyze the gains of addressing those two problems jointly. Moreover, new cascade CNN architectures for intrinsic-for-segmentation and segmentation-for-intrinsic are proposed as single tasks. Furthermore, a dataset of 35K synthetic images of natural environments is created with corresponding albedo and shading (intrinsics), as well as semantic labels (segmentation) assigned to each object/scene. The experiments show that joint learning of intrinsic image decomposition and semantic segmentation is beneficial for both tasks for natural scenes. Dataset and models are available at: https://ivi.fnwi.uva.nl/cv/intrinseg
    Comment: ECCV 2018
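
    One natural reading of the joint architecture is a shared encoder feeding one decoder head per task. The PyTorch sketch below is a minimal illustration of that pattern only; the layer sizes, the single shared encoder, and all names are our assumptions, not the architecture from the paper.

        import torch
        import torch.nn as nn

        class JointIntrinsicSegNet(nn.Module):
            # Hypothetical shared-encoder, multi-head CNN for joint
            # intrinsic decomposition and semantic segmentation.
            def __init__(self, num_classes=16):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                )
                def head(out_ch):   # one lightweight decoder per task
                    return nn.Sequential(
                        nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, out_ch, 1),
                    )
                self.albedo_head = head(3)          # reflectance image
                self.shading_head = head(1)         # shading image
                self.seg_head = head(num_classes)   # per-pixel class logits

            def forward(self, x):
                f = self.encoder(x)
                return self.albedo_head(f), self.shading_head(f), self.seg_head(f)

    Joint training would then sum the per-task losses (for example, reconstruction losses on albedo and shading plus cross-entropy on the segmentation logits) and backpropagate through the shared encoder, which is what lets each task inform the other.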