
    Visual saliency guided textured model simplification

    Mesh geometry can model both object shape and surface detail. When texture maps are involved, it is common to let the mesh geometry model the overall shape and the texture maps carry most of the detail, optimising the data size and complexity of an object. To support efficient rendering and transmission, model simplification can then be applied to reduce the modelling data. However, existing methods do not adequately consider how object features are jointly represented by the mesh geometry and the texture maps, and so have trouble identifying and preserving important features in the simplified objects. To address this, we propose a visual saliency detection method for simplifying textured 3D models. We jointly process the mesh geometry and the texture map to produce a unified saliency map that identifies visually important object features. Results show that our method offers better object rendering quality than existing methods.
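
    As a concrete illustration of the unified-saliency idea, here is a minimal Python/NumPy sketch of one way to blend a geometric term with a texture term into a single per-vertex saliency map and use it to bias edge-collapse costs. The function name, the curvature/gradient inputs, and the blending weight alpha are illustrative assumptions, not the authors' published method.

        import numpy as np

        def unified_saliency(curvatures, tex_gradient, uvs, alpha=0.5):
            """Blend a geometric term (per-vertex curvature magnitude) with
            a texture term (texture gradient magnitude sampled at each
            vertex's UV) into one per-vertex saliency map. The weight
            `alpha` is an illustrative assumption, not a published value."""
            # Geometric saliency: normalised absolute curvature.
            geo = np.abs(curvatures).astype(float)
            geo /= geo.max() + 1e-12

            # Texture saliency: gradient magnitude of the texture map,
            # sampled at each vertex's UV coordinate (nearest texel).
            h, w = tex_gradient.shape
            px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
            py = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
            tex = tex_gradient[py, px].astype(float)
            tex /= tex.max() + 1e-12

            return alpha * geo + (1.0 - alpha) * tex

        # A simplifier could then scale each edge-collapse cost so that
        # salient regions are collapsed last, e.g.:
        # cost(e) = quadric_error(e) * (1 + saliency[v1] + saliency[v2])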

    Study and Comparison of Surface Roughness Measurements

    This survey paper focuses on recent research aimed at optimising treatments of 3D meshes through the study of their surface features, more precisely their roughness and saliency. Applications such as watermarking and lossy compression can benefit from precise roughness detection, to better hide watermarks or quantise rough areas more coarsely without visibly altering the shape. Despite investigations into scale dependence that have led to multi-scale approaches, an accurate characterisation of roughness or surface patterns is still lacking, yet remains essential for those treatments. We think there is still room for investigations that could draw on the power of wavelet analysis or fractal models. Furthermore, only a few works are currently able to differentiate roughness from saliency, though this distinction is essential for faithfully simplifying or denoising a 3D mesh. Hence we have also investigated roughness quantification methods for analog (physical) surfaces in several domains of physics. Finally, we study some of the roughness parameters used in these fields and the additional information they provide, since we think adapting them to 3D meshes could be beneficial.
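
    To make the multi-scale idea concrete, here is a toy Python/NumPy sketch of one common scheme: estimating per-vertex roughness as the displacement produced by Laplacian smoothing at several scales. The scale set and the plain uniform averaging are illustrative assumptions, not a method from the surveyed papers.

        import numpy as np

        def multiscale_roughness(verts, neighbors, scales=(1, 2, 4)):
            """Toy multi-scale roughness: at each scale s, apply s rounds
            of uniform Laplacian averaging and measure how far each vertex
            moves. Fine bumps deviate strongly at small scales, while
            larger salient features persist across scales."""
            def smooth_once(v):
                out = np.empty_like(v)
                for i, nbrs in enumerate(neighbors):  # neighbors[i]: index list
                    out[i] = v[list(nbrs)].mean(axis=0)
                return out

            roughness = np.zeros(len(verts))
            for s in scales:
                smoothed = verts.copy()
                for _ in range(s):
                    smoothed = smooth_once(smoothed)
                # Accumulate the displacement magnitude at this scale.
                roughness += np.linalg.norm(verts - smoothed, axis=1)
            return roughness / len(scales)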

    Intelligent visual media processing: when graphics meets vision

    The computer graphics and computer vision communities have been working closely together in recent years, and a variety of algorithms and applications have been developed to analyze and manipulate the visual media around us. There are three major driving forces behind this phenomenon: i) the availability of big data from the Internet has created a demand for dealing with the ever-increasing, vast amount of resources; ii) powerful processing tools, such as deep neural networks, provide effective ways of learning how to deal with heterogeneous visual data; iii) new data capture devices, such as the Kinect, bridge the gap between algorithms for 2D image understanding and 3D model analysis. These driving forces have emerged only recently, and we believe that the computer graphics and computer vision communities are still at the beginning of their honeymoon phase. In this work we survey recent research on how computer vision techniques benefit computer graphics techniques and vice versa, covering analysis, manipulation, synthesis, and interaction. We also discuss open problems and suggest possible directions for further research.

    CompenNet++: End-to-end Full Projector Compensation

    Full projector compensation aims to modify a projector input image so that it compensates for both geometric and photometric disturbances of the projection surface. Traditional methods usually solve the two parts separately, although they are known to correlate with each other. In this paper, we propose the first end-to-end solution, named CompenNet++, to solve the two problems jointly. Our work non-trivially extends CompenNet, which was recently proposed for photometric compensation with promising performance. First, we propose a novel geometric correction subnet, designed with a cascaded coarse-to-fine structure to learn the sampling grid directly from photometric sampling images. Second, by concatenating the geometric correction subnet with CompenNet, CompenNet++ accomplishes full projector compensation and is end-to-end trainable. Third, after training, we significantly simplify both the geometric and photometric compensation parts, which largely improves the running-time efficiency. Moreover, we construct the first setup-independent full compensation benchmark to facilitate study of this topic. In thorough experiments, our method shows clear advantages over previous methods, offering promising compensation quality while being practically convenient.
    Comment: To appear in ICCV 2019. High-res supplementary material: https://www3.cs.stonybrook.edu/~hling/publication/CompenNet++_sup-high-res.pdf. Code: https://github.com/BingyaoHuang/CompenNet-plusplu
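
    To illustrate the sampling-grid idea behind the geometric correction step, here is a minimal PyTorch sketch in which a learnable grid of normalised (x, y) coordinates warps an image via grid_sample; the warped result would then feed a photometric network. This tiny module is an illustrative stand-in, not the CompenNet++ architecture, which predicts the grid with a cascaded coarse-to-fine subnet.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class GridWarp(nn.Module):
            """Learnable sampling grid: grid[0, y, x] holds the (x, y)
            location, in [-1, 1] normalised coordinates, from which the
            output pixel (x, y) samples the input image."""
            def __init__(self, height, width):
                super().__init__()
                # Start from the identity grid, i.e. "no warp".
                ys = torch.linspace(-1, 1, height)
                xs = torch.linspace(-1, 1, width)
                gy, gx = torch.meshgrid(ys, xs, indexing="ij")
                identity = torch.stack([gx, gy], dim=-1)          # (H, W, 2)
                self.grid = nn.Parameter(identity.unsqueeze(0))   # (1, H, W, 2)

            def forward(self, img):                               # (B, C, H, W)
                grid = self.grid.expand(img.shape[0], -1, -1, -1)
                return F.grid_sample(img, grid, mode="bilinear",
                                     padding_mode="border",
                                     align_corners=True)

        # Usage: warp a batch of camera-captured images before the
        # photometric compensation stage.
        # warp = GridWarp(256, 256)
        # corrected = warp(captured)  # captured: (B, 3, 256, 256)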

    A survey of real-time crowd rendering

    In this survey we review, classify and compare existing approaches to real-time crowd rendering. We first give an overview of character animation techniques, as they are closely tied to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware, and address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
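
    As a small worked example of runtime LoD selection, here is a Python/NumPy sketch that buckets agents by camera distance so each bucket can be drawn with one instanced call. The threshold distances and LoD labels are illustrative assumptions; production systems often use projected screen-space size instead of raw distance.

        import numpy as np

        def select_lod(cam_pos, agent_pos, thresholds=(10.0, 30.0, 80.0)):
            """Pick a level of detail per agent from camera distance:
            0 = full mesh, 1 = reduced mesh, 2 = impostor/billboard,
            3 = culled. Thresholds are illustrative."""
            d = np.linalg.norm(agent_pos - cam_pos, axis=1)  # (N,) distances
            return np.digitize(d, thresholds)                # bucket per agent

        # Group agents by LoD so each bucket can be rendered with a single
        # instanced draw call (pseudo-instancing of a shared mesh).
        cam = np.zeros(3)
        agents = np.random.rand(1000, 3) * 100.0
        lods = select_lod(cam, agents)
        buckets = {k: np.where(lods == k)[0] for k in range(4)}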