    A perceptual model of motion quality for rendering with adaptive refresh-rate and resolution

    Limited GPU performance budgets and transmission bandwidths mean that real-time rendering often has to compromise on spatial resolution or temporal resolution (refresh rate). A common practice is to keep either the resolution or the refresh rate constant and dynamically control the other variable, but this strategy is suboptimal when the velocity of displayed content varies. To find the best trade-off between spatial resolution and refresh rate, we propose a perceptual visual model that predicts the quality of motion given an object's velocity and the predictability of its motion. The model considers two motion artifacts to establish an overall quality score: non-smooth (juddery) motion, and blur. Blur is modeled as a combined effect of eye motion, finite refresh rate, and display resolution. To fit the free parameters of the proposed visual model, we measured eye movement for predictable and unpredictable motion, and conducted psychophysical experiments to measure the quality of motion from 50 Hz to 165 Hz. We demonstrate the utility of the model with our on-the-fly motion-adaptive rendering algorithm, which adjusts the refresh rate of a G-Sync-capable monitor based on a given rendering budget and observed object motion. Our psychophysical validation experiments demonstrate that the proposed algorithm performs better than constant-refresh-rate solutions, showing that motion-adaptive rendering is an attractive technique for driving variable-refresh-rate displays.
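The budget-driven trade-off described above can be illustrated with a toy throughput model (an assumption for illustration, not the paper's perceptual model): treat the rendering budget as a fixed number of pixels per second and enumerate the resolution/refresh-rate pairs that fit within it.

```python
# Hypothetical sketch: trading spatial resolution against refresh rate
# under a fixed pixel-throughput budget. The function name and the
# pixels-per-second cost model are illustrative assumptions.

def feasible_modes(budget_pixels_per_sec, resolutions, refresh_rates):
    """Return (resolution, refresh_rate) pairs whose pixel throughput
    fits within the rendering budget."""
    modes = []
    for w, h in resolutions:
        for hz in refresh_rates:
            if w * h * hz <= budget_pixels_per_sec:
                modes.append(((w, h), hz))
    return modes

# Example: a budget sized for 1080p at 165 Hz cannot also drive 4K
# at any of the listed refresh rates.
budget = 1920 * 1080 * 165
modes = feasible_modes(budget, [(1920, 1080), (3840, 2160)], [50, 120, 165])
```

A perceptual model such as the one proposed in the paper would then pick, among the feasible modes, the one with the highest predicted motion quality for the observed object velocity.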

    Apparent sharpness of 3D video when one eye's view is more blurry.

    When the images presented to each eye differ in sharpness, the fused percept remains relatively sharp. Here, we measure this effect by showing stereoscopic videos that have been blurred for one eye, or both eyes, and psychophysically determining when they appear equally sharp. For a range of blur magnitudes, the fused percept always appeared significantly sharper than the blurrier view. From these data, we investigate to what extent discarding high spatial frequencies from just one eye's view reduces the bandwidth necessary to transmit perceptually sharp 3D content. We conclude that relatively high-resolution video transmission stands to benefit the most from this method.
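The potential bandwidth saving can be sketched with a back-of-the-envelope calculation (an illustrative assumption, not the paper's measured numbers): if one eye's view is low-pass filtered and transmitted at 1/s of the linear resolution, the stereo pair costs 1 + 1/s² full views instead of 2.

```python
# Toy bandwidth model: cost assumed proportional to pixel count,
# one view downsampled by a linear factor s, the other kept full-size.

def stereo_bandwidth_fraction(downsample_factor):
    """Fraction of full stereo bandwidth when one view is downsampled
    by the given linear factor."""
    return (1.0 + 1.0 / downsample_factor ** 2) / 2.0

# Halving one view's linear resolution keeps 62.5% of the bandwidth.
saving_at_half = stereo_bandwidth_fraction(2)
```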

    Temporal Properties of Liquid Crystal Displays: Implications for Vision Science Experiments

    Liquid crystal displays (LCDs) are currently replacing the previously dominant cathode ray tubes (CRTs) in most vision science applications. While the properties of CRT technology are widely known among vision scientists, the photometric and temporal properties of LCDs are unfamiliar to many practitioners. We provide the essential theory, present measurements to assess the temporal properties of different LCD panel types, and identify the main determinants of the photometric output. Our measurements demonstrate that the manufacturers' specifications are insufficient for proper display selection and control for most purposes. Furthermore, we show how several novel display technologies developed to improve fast transitions or the appearance of moving objects may be accompanied by side effects in some areas of vision research. Finally, we unveil a number of surprising technical deficiencies. The use of LCDs may cause problems in several areas of vision science. Aside from the well-known issue of motion blur, the main problems are the lack of reliable and precise onsets and offsets of displayed stimuli, several undesirable and uncontrolled components of the photometric output, and input lags that make LCDs problematic for real-time applications. As a result, LCDs require extensive individual measurements prior to applications in vision science.
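One of the individual measurements the abstract calls for is characterizing panel response time. A minimal sketch (the synthetic trace and function name are illustrative; real data would come from a photodiode sampled during a black-to-white transition):

```python
# Estimate a panel's 10-90% rise time from a sampled luminance trace.
# The trace below is synthetic; a real measurement would use a fast
# photodiode recording the transition.

def rise_time_10_90(samples, dt):
    """10-90% rise time of a monotone transition, in the units of dt."""
    lo, hi = min(samples), max(samples)
    t10 = next(i for i, s in enumerate(samples) if s >= lo + 0.10 * (hi - lo))
    t90 = next(i for i, s in enumerate(samples) if s >= lo + 0.90 * (hi - lo))
    return (t90 - t10) * dt

# Synthetic black-to-white transition sampled every 0.1 ms.
trace = [0, 0, 5, 20, 50, 80, 95, 100, 100, 100]
rt_ms = rise_time_10_90(trace, dt=0.1)
```

Note that a single rise time is itself a simplification: as the paper's measurements show, LCD transitions depend on the start and end gray levels, so a full characterization needs a matrix of gray-to-gray measurements.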

    Quality Assessment for CRT and LCD Color Reproduction Using a Blind Metric

    This paper deals with image quality assessment, a field that has captured the attention of several academic and industrial research teams. It plays an important role in various imaging applications, from acquisition to projection. A large number of objective image quality metrics have been developed during the last decade. These metrics are more or less correlated with end-user feedback and can be separated into three categories: 1) Full Reference (FR), evaluating the impairment in comparison to a reference image; 2) Reduced Reference (RR), using features extracted from an image to represent it and compare it with the distorted one; and 3) No Reference (NR), measuring known distortions such as blockiness and blurriness without the use of a reference. Unfortunately, the quality assessment community has not achieved a universal image quality model, and only empirical models established through psychophysical experimentation are generally used. In this paper, we focus on the third category to evaluate the quality of CRT (Cathode Ray Tube) and LCD (Liquid Crystal Display) color reproduction, using a blind metric based on modeling part of the human visual system's behavior. The objective results are validated by single-media and cross-media subjective tests. This allows studying the feasibility of simulating one display on a reference display.
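To make the NR category concrete, here is a toy blockiness measure in the spirit the abstract describes (not the paper's metric): compare luminance discontinuities at assumed 8x8 block boundaries against those inside blocks.

```python
# Toy no-reference blockiness score. Assumes 8x8 coding blocks; a
# ratio well above 1 suggests visible blocking artifacts.

def blockiness(img, block=8):
    """img: 2D list of luminance rows. Returns the ratio of mean
    horizontal gradient at block boundaries to the mean gradient
    inside blocks."""
    boundary, interior = [], []
    for row in img:
        for x in range(len(row) - 1):
            d = abs(row[x + 1] - row[x])
            (boundary if (x + 1) % block == 0 else interior).append(d)
    return (sum(boundary) / len(boundary)) / (sum(interior) / len(interior) + 1e-9)

# A hard step at the 8-pixel boundary scores high; a smooth ramp
# scores close to 1.
blocky = blockiness([[0] * 8 + [10] * 8] * 2)
smooth = blockiness([list(range(16))] * 2)
```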

    Visual perception of digital holograms on autostereoscopic displays

    In digital holography we often capture a 3D scene optically and reconstruct the perspectives numerically. The reconstructions are routinely in the form of a 2D image slice, an extended-focus image, or a depth map from a single perspective. These are fundamentally 2D (or at most 2.5D) representations, and for some scenes they are not certain to give the human viewer a clear perception of the 3D features encoded in the hologram (occlusions are not overcome, for example). As an intermediate measure towards a full-field optoelectronic display device, we propose to digitally process the holograms to allow them to be displayed on conventional autostereoscopic displays.

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study the rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases, and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that will allow performing the computation at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. One is targeted at the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly. This process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions in rendering budgets.
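The ray marching baseline mentioned above can be sketched as follows (a minimal illustration with a constant-density medium, assuming the standard Beer-Lambert absorption model rather than the thesis's specific implementation):

```python
# Ray marching through a participating medium: approximate the
# transmittance exp(-integral of density) along a ray segment by
# sampling the density at fixed step intervals (midpoint rule).

import math

def march_transmittance(density_at, t0, t1, steps):
    """Approximate Beer-Lambert transmittance along [t0, t1]."""
    dt = (t1 - t0) / steps
    optical_depth = 0.0
    for i in range(steps):
        t = t0 + (i + 0.5) * dt      # midpoint sample
        optical_depth += density_at(t) * dt
    return math.exp(-optical_depth)

# Uniform density 0.5 over a segment of length 2 gives exp(-1).
T = march_transmittance(lambda t: 0.5, 0.0, 2.0, steps=64)
```

The optimizations the thesis pursues reduce the cost of exactly this kind of loop, which in a full renderer runs once per pixel per frame with far more expensive density and scattering evaluations.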

    Spatio-Velocity CSF as a Function of Retinal Velocity Using Unstabilized Stimuli

    LCD televisions have LC response times and hold-type data cycles that contribute to the appearance of blur when objects are in motion on the screen. New algorithms based on studies of the human visual system's sensitivity to motion are being developed to compensate for these artifacts. This paper describes a series of experiments that incorporate eye tracking in the psychophysical determination of spatio-velocity contrast sensitivity, in order to build on the 2D spatio-velocity contrast sensitivity function (CSF) model first described by Kelly and later refined by Daly. We explore whether the velocity of the eye has an additional effect on sensitivity and whether the model can be used to predict sensitivity to more complex stimuli. A total of five experiments were performed in this research. The first four experiments utilized Gabor patterns with three different spatial and temporal frequencies and were used to investigate and/or populate the 2D spatio-velocity CSF. The fifth experiment utilized a disembodied edge and was used to validate the model. All experiments used a two-interval forced-choice (2IFC) method of constant stimuli guided by a QUEST routine to determine thresholds. The results showed that sensitivity to motion was determined by the retinal velocity produced by the Gabor patterns, regardless of the type of eye motion. Based on the results of these experiments, the parameters of the spatio-velocity CSF model were optimized for our experimental conditions.
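For reference, Kelly's closed-form spatio-velocity CSF, as commonly cited in the display literature, can be evaluated as follows. The constants here are the published defaults, not the re-fitted values this paper derives, so treat this as a sketch of the model family being refined:

```python
# Kelly's spatio-velocity CSF (published default constants; the paper
# re-fits such parameters to its own experimental conditions).

import math

def csf_kelly(rho, v, k1=6.1, k2=7.3, k3=45.9):
    """Contrast sensitivity at spatial frequency rho (cycles/deg)
    and retinal velocity v (deg/s)."""
    v = max(v, 0.1)                        # avoid log10(0) at rest
    k = k1 + k2 * abs(math.log10(v / 3.0)) ** 3
    rho_max = k3 / (v + 2.0)               # peak frequency drops with speed
    return k * v * (2 * math.pi * rho) ** 2 * math.exp(-4 * math.pi * rho / rho_max)
```

The key qualitative behavior, which motivates using retinal rather than screen velocity, is that sensitivity is band-pass in spatial frequency and the pass-band shifts toward lower frequencies as retinal velocity increases.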