52 research outputs found

    Selecting texture resolution using a task-specific visibility metric

    In real-time rendering, the appearance of scenes is greatly affected by the quality and resolution of the textures used for image synthesis. At the same time, the size of textures determines the performance and the memory requirements of rendering. As a result, finding the optimal texture resolution is critical, but also a non-trivial task, since the visibility of texture imperfections depends on the underlying geometry, illumination, interactions between several texture maps, and viewing positions. Ideally, we would like to automate the task with a visibility metric that predicts the optimal texture resolution. To maximize the performance of such a metric, it should be trained for the given task. This, however, requires sufficient user data, which is often difficult to obtain. To address this problem, we develop a procedure for training an image visibility metric for a specific task while reducing the effort required to collect new data. The procedure involves generating a large dataset using an existing visibility metric and then refining that dataset with the help of an efficient perceptual experiment. The refined dataset is then used to retune the metric. In this way, we augment sparse perceptual data into a large number of per-pixel annotated visibility maps, which serve as the training data for application-specific visibility metrics. While our approach is general and can potentially be applied to different image distortions, we demonstrate an application in a game engine, where we optimize the resolution of various textures, such as albedo and normal maps.
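    The selection step this abstract describes can be pictured as a simple search over candidate resolutions, stopping at the smallest one whose predicted distortion stays invisible. The sketch below is only an illustration under assumed interfaces: `render_with_texture` and `predict_visibility_map` are hypothetical stand-ins for the game-engine renderer and the retuned metric, and the 0.5 detection threshold is an arbitrary example value, not a number from the paper.

```python
import numpy as np

def select_texture_resolution(render_with_texture, predict_visibility_map,
                              resolutions, p_detect=0.5):
    """Return the smallest texture resolution whose imperfections stay
    below the target detection probability everywhere in the frame."""
    reference = render_with_texture(max(resolutions))  # full-resolution render
    for res in sorted(resolutions):                    # coarse -> fine
        test = render_with_texture(res)
        # Per-pixel probability that an observer notices the difference.
        visibility = predict_visibility_map(test, reference)
        if np.max(visibility) < p_detect:
            return res                                 # first invisible size
    return max(resolutions)
```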

    Purkinje images: Conveying different content for different luminance adaptations in a single image

    Providing multiple meanings in a single piece of art has always been intriguing to both artists and observers. We present Purkinje images, which have different interpretations depending on the luminance adaptation of the observer. Finding such images is an optimization that minimizes the sum of the distance to one reference image in photopic conditions and the distance to another reference image in scotopic conditions. To model the shift of image perception between day and night vision, we decompose the input images into a Laplacian pyramid. Distances under different observation conditions in this representation are independent between pyramid levels and pixel positions and become matrix multiplications. The optimal pixel colour can be found by inverting a small, per-pixel linear system in real time on a GPU. Finally, two user studies analyze our results in terms of the recognition performance and fidelity with respect to the reference images.
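    The per-pixel solve the abstract mentions has a closed form once the photopic and scotopic distances are written as matrix multiplications. A minimal sketch, assuming hypothetical 3x3 matrices P and S that map a displayed colour to its perceived response under each adaptation state (the paper derives the actual operators per pyramid level from its vision model):

```python
import numpy as np

def optimal_pixel_colour(P, S, p_target, s_target):
    """Colour c minimizing ||P c - p_target||^2 + ||S c - s_target||^2,
    i.e. the sum of photopic and scotopic perceptual distances."""
    # Normal equations: (P^T P + S^T S) c = P^T p + S^T s
    A = P.T @ P + S.T @ S               # small 3x3 system, one per pixel
    b = P.T @ p_target + S.T @ s_target
    return np.linalg.solve(A, b)
```

    Since each pixel's system is independent, the same small solve can run in parallel for every pixel, which is what makes the GPU implementation real-time.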

    Rendering Pearlescent Appearance Based on Paint-Composition Modeling

    No full text

    Temporally Coherent Irradiance Caching for High Quality Animation Rendering

    No full text

    A visual perception approach for accurate segmentation of light profiles

    No full text
    In this paper we describe the first industrial prototype that automatically characterizes headlamp beam properties using computer vision. The European Commission for transportation provides strict regulations on headlamp orientation and on the luminous and geometrical properties of the beam. To test the headlamps, the test system has to be properly aligned with the vehicle so that the brightness and geometrical beam-profile measurements are reliable. The system we present is composed of two integral subsystems. The first consists of a fixed stereo vision system that automatically estimates, in real time and with very high accuracy, the longitudinal axis of the vehicle while it approaches the stereo rig. The outcome is used to accurately align the second subsystem with respect to the vehicle. This subsystem consists of a classic optical projection system equipped with a CCD camera, which automatically performs radiometric and geometric assessments of the beam projected by the headlamps. Experiments carried out on both subsystems show that the high accuracy achieved by our method makes the prototype compliant with current regulations. It is worth remarking that the technology employed is low cost, making our approach suitable for commercial headlight testers.
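    The abstract does not spell out the axis-estimation algorithm, but one common way to recover a vehicle's longitudinal axis from stereo data is to fit a 3D line to the triangulated positions of a feature tracked while the car approaches. The sketch below is only a hypothetical illustration of that idea, not the paper's method:

```python
import numpy as np

def fit_longitudinal_axis(points):
    """Fit a 3D line (centroid + unit direction) to an Nx3 array of
    tracked feature positions via PCA; the dominant direction of the
    trajectory approximates the vehicle's longitudinal axis."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal direction = first right singular vector of the centred cloud.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0] / np.linalg.norm(vt[0])
    return centroid, direction
```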

    Render2MPEG: A Perception-based Framework Towards Integrating Rendering and Video Compression

    No full text
    Currently, 3D animation rendering and video compression are completely independent processes, even when rendered frames are streamed on the fly within a client-server platform. In such a scenario, which may involve time-varying transmission bandwidths and different display characteristics at the client side, dynamically adjusting the rendering quality to these requirements can lead to a better use of server resources. In this work, we present a framework where the renderer and the MPEG codec are coupled through a straightforward interface that provides precise motion vectors from the rendering side to the codec and perceptual error thresholds for each pixel in the opposite direction. The perceptual error thresholds take into account bandwidth-dependent quantization errors resulting from the lossy compression as well as image content-dependent luminance and spatial contrast masking. The availability of the discrete cosine transform (DCT) coefficients at the codec side enables the use of advanced models of the human visual system (HVS) in the perceptual error threshold derivation without incurring any significant cost. These error thresholds are then used to control the rendering quality and keep it well aligned with the quality of the compressed stream. In our prototype system we use the lightcuts technique developed by Walter et al., which we enhance to handle dynamic image sequences, and an MPEG-2 implementation. Our results clearly demonstrate the advantages of coupling rendering with video compression, most notably faster rendering. Furthermore, temporally coherent rendering leads to a reduction of temporal artifacts.
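    The interface the abstract describes amounts to a per-frame loop in which information flows both ways between renderer and codec. A minimal sketch, assuming hypothetical `motion_vectors`, `estimate_thresholds`, `render_adaptive`, and `encode_frame` methods in place of the paper's enhanced lightcuts renderer and MPEG-2 codec:

```python
def render_and_compress(frames, renderer, codec):
    for frame_desc in frames:
        # The renderer knows the scene, so exact per-pixel motion
        # vectors are available for free and handed to the codec.
        motion = renderer.motion_vectors(frame_desc)
        # Codec-side HVS model: the per-pixel error a viewer cannot
        # see, given the quantization level and the luminance and
        # contrast masking of the current content.
        thresholds = codec.estimate_thresholds(frame_desc, motion)
        # Render only as accurately as the compressed stream can show,
        # e.g. by stopping refinement once below the threshold.
        image = renderer.render_adaptive(frame_desc, thresholds)
        codec.encode_frame(image, motion)
```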

    Light-Field Appearance Editing Based on Intrinsic Decomposition

    No full text