4 research outputs found

    Dynamic sampling rate: harnessing frame coherence in graphics applications for energy-efficient GPUs

    In real-time rendering, a 3D scene is modelled with meshes of triangles that the GPU projects to the screen. These triangles are discretized by sampling them at regular intervals in screen space to generate fragments, to which a shader program then applies texture and lighting effects. Realistic scenes require detailed geometric models, complex shaders, high-resolution displays and high refresh rates, all of which come at a great cost in compute time and energy. This cost is often dominated by the fragment shader, which runs for each sampled fragment. Conventional GPUs sample the triangles once per pixel; however, many screen regions contain little variation, produce identical fragments, and could be sampled at lower-than-pixel rates with no loss in quality. Additionally, because temporal frame coherence makes consecutive frames very similar, such variations usually persist from frame to frame. This work proposes Dynamic Sampling Rate (DSR), a novel hardware mechanism to reduce redundancy and improve energy efficiency in graphics applications. DSR analyzes the spatial frequencies of the scene once it has been rendered. It then leverages the temporal coherence of consecutive frames to decide, for each region of the screen, the lowest sampling rate to employ in the next frame that maintains image quality. We evaluate the performance of a state-of-the-art mobile GPU architecture extended with DSR for a wide variety of applications. Experimental results show that DSR removes most of the redundancy inherent in the color computations at fragment granularity, which brings average speedups of 1.68x and energy savings of 40%.
    This work has been supported by the CoCoUnit ERC Advanced Grant of the EU’s Horizon 2020 program (Grant No. 833057), the Spanish State Research Agency (MCIN/AEI) under Grant PID2020-113172RB-I00, the ICREA Academia program, and the Generalitat de Catalunya under Grant FI-DGR 2016. Funding was provided by Ministerio de Economía, Industria y Competitividad, Gobierno de España (Grant No. TIN2016-75344-R). Peer Reviewed. Postprint (published version).
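The abstract above does not specify how DSR measures spatial frequency or which sampling rates it supports, so the following is only a minimal sketch of the kind of per-region decision it describes: after a frame is rendered, estimate the detail in each screen tile and pick a sampling rate for that tile in the next frame. The tile size, the gradient-based detail heuristic, the threshold, and the two-rate set (full pixel rate vs. one fragment per 2x2 pixels) are all illustrative assumptions, not the paper's mechanism.

```python
import numpy as np

def choose_tile_sampling_rates(frame, tile=16, threshold=8.0):
    """Sketch of a DSR-style decision on a grayscale frame (2-D array).

    For each tile of the rendered frame, estimate high-frequency
    content and map low-detail tiles to a reduced sampling rate for
    the next frame. All parameters here are hypothetical."""
    h, w = frame.shape
    rates = {}
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            region = frame[y:y + tile, x:x + tile].astype(float)
            # Crude detail estimate: largest absolute difference between
            # neighbouring pixels, vertically and horizontally.
            gy = np.abs(np.diff(region, axis=0)).max() if region.shape[0] > 1 else 0.0
            gx = np.abs(np.diff(region, axis=1)).max() if region.shape[1] > 1 else 0.0
            # Low-variation tiles can be shaded at quarter rate (one
            # fragment per 2x2 pixels); detailed tiles keep pixel rate.
            rates[(y, x)] = 1.0 if max(gx, gy) > threshold else 0.25
    return rates
```

Because of temporal coherence, a rate chosen from frame N is usually still safe for frame N+1, which is what lets the analysis run after rendering rather than before it.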

    Photorealistic rendering: a survey on evaluation

    This article is a systematic collection of existing methods and techniques for evaluating rendering in the field of computer graphics. The motivation for this study was the difficulty of selecting appropriate methods for evaluating and validating the specific results reported by many researchers. This difficulty lies in the availability of numerous methods and the lack of robust discussion of them. To approach such problems, the features of well-known methods are critically reviewed to give researchers backgrounds on evaluating different styles in the photorealistic rendering part of computer graphics. There are many ways to evaluate a piece of research; this article uses a classification and systemization method. After reviewing the features of the different methods, their future is also discussed. Finally, some pointers are offered to the likely future issues in evaluating research on realistic rendering. It is expected that this analysis will help researchers overcome the difficulties of evaluation not only in research, but also in application.

    Content-Adaptive Non-Stationary Projector Resolution Enhancement

    For any projection system, one goal will surely be to maximize the quality of the projected imagery at a minimized hardware cost, which is a challenging engineering problem. Experience in applying different image filters and enhancements to projected video suggests quite clearly that the quality of an enhanced projected video is very much a function of the content of the video itself. That is, to first order, whether the video contains moving as opposed to still content plays an important role in the video quality, since the human visual system tolerates much more blur in moving imagery but at the same time is significantly sensitive to the flickering and aliasing caused by moving sharp textures. Furthermore, the spatial and statistical characteristics of text and non-text images are quite distinct. We therefore assert that the text-like, moving and background pixels of a given video stream should be enhanced differently, using class-dependent video enhancement filters, to achieve maximum visual quality. In this thesis, we present a novel text-dependent content enhancement scheme, a novel motion-dependent content enhancement scheme and a novel content-adaptive resolution enhancement scheme based on a text-like / non-text-like classification and a pixel-wise moving / non-moving classification, with the actual enhancement obtained via class-dependent Wiener deconvolution filtering. Given an input image, the text and motion detection methods are used to generate binary masks that indicate the location of the text and moving regions in the video stream. Enhanced images are then obtained by applying a plurality of class-dependent enhancement filters, with text-like regions sharpened more than the background and moving regions sharpened less than the background. Next, one or more of the resulting enhanced images are combined into a composite output image based on the corresponding feature masks. Finally, a higher-resolution projected video stream is produced by controlling one or more projectors to project the plurality of output frame streams in a rapidly overlapping manner. Experimental results on the test images and videos show that the proposed schemes all offer improved visual quality over projection without enhancement, as well as over a recent state-of-the-art enhancement method. In particular, the proposed content-adaptive resolution enhancement scheme increases the PSNR value by at least 18.2% and decreases the MSE value by at least 25%.
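The mask-based composition step described above (pick the strongly sharpened result for text pixels, the mildly sharpened one for moving pixels, the baseline elsewhere) and the PSNR metric used in the evaluation can be sketched as follows. The function and variable names are illustrative, not from the thesis; the actual Wiener deconvolution filters that produce the three enhanced images are not shown.

```python
import numpy as np

def composite_by_class(sharp, mild, base, text_mask, motion_mask):
    """Combine three class-dependent enhanced images into one output.

    All three images and both binary masks are assumed to be aligned
    arrays of the same shape; text takes priority over motion where
    the masks overlap. Names are hypothetical."""
    return np.where(text_mask, sharp, np.where(motion_mask, mild, base))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A higher PSNR (equivalently, a lower MSE) against the ground-truth high-resolution frame is what the reported 18.2% / 25% improvements are measured on.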

    Transparency and Anti-Aliasing Techniques for Real-Time Rendering

    No full text