67 research outputs found

    Perceptual Visibility Model for Temporal Contrast Changes in Periphery

    Modeling perception is critical for many applications and developments in computer graphics to optimize and evaluate content generation techniques. Most of the work to date has focused on central (foveal) vision. However, this is insufficient for novel wide-field-of-view display devices, such as virtual and augmented reality headsets. Furthermore, the perceptual models proposed for the fovea do not readily extend to the off-center, peripheral visual field, where human perception is drastically different. In this paper, we focus on modeling the temporal aspect of visual perception in the periphery. We present new psychophysical experiments that measure the sensitivity of human observers to different spatio-temporal stimuli across a wide field of view. We use the collected data to build a perceptual model for the visibility of temporal changes at different eccentricities in complex video content. Finally, we discuss, demonstrate, and evaluate several problems that can be addressed using our technique. First, we show how our model enables injecting new content into the periphery without distracting the viewer, and we discuss the link between the model and human attention. Second, we demonstrate how foveated rendering methods can be evaluated and optimized to limit the visibility of temporal aliasing.
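
    The fitted model is not reproduced here, but the core idea of an eccentricity-dependent temporal sensitivity threshold can be sketched as follows. The functional form, parameter names, and constants below are illustrative assumptions, not the model derived in the paper.

```python
import numpy as np

def temporal_visibility(contrast, temporal_freq_hz, eccentricity_deg,
                        s0=100.0, k_ecc=0.05, f_peak=8.0, bw=1.2):
    """Predict whether a temporal contrast modulation is visible.

    contrast         : Michelson contrast of the modulation
    temporal_freq_hz : modulation frequency in Hz
    eccentricity_deg : angular distance from the gaze point in degrees
    Sensitivity is modeled (hypothetically) as a log-parabola in temporal
    frequency whose peak decays exponentially with eccentricity.
    """
    peak_sensitivity = s0 * np.exp(-k_ecc * eccentricity_deg)
    log_ratio = np.log10(np.maximum(temporal_freq_hz, 1e-3) / f_peak)
    sensitivity = peak_sensitivity * 10.0 ** (-(log_ratio ** 2) / bw)
    threshold_contrast = 1.0 / np.maximum(sensitivity, 1e-6)
    return contrast > threshold_contrast

# Example: 10% contrast flicker at 4 Hz, 30 degrees into the periphery
print(temporal_visibility(0.10, 4.0, 30.0))
```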

    Noise-based Enhancement for Foveated Rendering

    Human visual sensitivity to spatial details declines towards the periphery. Novel image synthesis techniques, so-called foveated rendering, exploit this observation and reduce the spatial resolution of synthesized images for the periphery, avoiding the synthesis of high-spatial-frequency details that are costly to generate but not perceived by a viewer. However, contemporary techniques do not make a clear distinction between the range of spatial frequencies that must be reproduced and those that can be omitted. For a given eccentricity, there is a range of frequencies that are detectable but not resolvable. While the accurate reproduction of these frequencies is not required, an observer can detect their absence if completely omitted. We use this observation to improve the performance of existing foveated rendering techniques. We demonstrate that this specific range of frequencies can be efficiently replaced with procedural noise whose parameters are carefully tuned to image content and human perception. Consequently, these frequencies do not have to be synthesized during rendering, allowing more aggressive foveation, and they can be replaced by noise generated in a less expensive post-processing step, leading to improved performance of the rendering system. Our main contribution is a perceptually-inspired technique for deriving the parameters of the noise required for the enhancement and its calibration. The method operates on rendering output and runs at rates exceeding 200 FPS at 4K resolution, making it suitable for integration with real-time foveated rendering systems for VR and AR devices. We validate our results and compare them to the existing contrast enhancement technique in user experiments.
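
    As a rough illustration of the post-processing idea, the sketch below adds zero-mean high-pass noise whose amplitude grows with eccentricity. The noise band, the amplitude ramp, and all constants are assumptions for illustration rather than the paper's calibrated, content-aware parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_with_noise(image, gaze_xy, px_per_deg=40.0,
                       fovea_deg=5.0, max_amplitude=0.04, seed=0):
    """Add zero-mean high-pass noise whose amplitude grows with eccentricity.

    image   : float array (H, W) or (H, W, 3) in [0, 1], foveated render output
    gaze_xy : (x, y) gaze position in pixels
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    ecc_deg = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / px_per_deg

    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((h, w))
    noise -= uniform_filter(noise, size=5)  # crude high-pass: keep fine grain only

    # Amplitude ramps up linearly outside an (assumed) noise-free foveal region.
    amplitude = np.clip((ecc_deg - fovea_deg) / 30.0, 0.0, 1.0) * max_amplitude
    if image.ndim == 3:
        amplitude = amplitude[..., None]
        noise = noise[..., None]
    return np.clip(image + amplitude * noise, 0.0, 1.0)
```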

    Spec2Fab: A reducer-tuner model for translating specifications to 3D prints

    Multi-material 3D printing allows objects to be composed of complex, heterogeneous arrangements of materials. It is often more natural to define a functional goal than to define the material composition of an object. Translating these functional requirements to fabricable 3D prints is still an open research problem. Recently, several specific instances of this problem have been explored (e.g., appearance or elastic deformation), but they exist as isolated, monolithic algorithms. In this paper, we propose an abstraction mechanism that simplifies the design, development, implementation, and reuse of these algorithms. Our solution relies on two new data structures: a reducer tree that efficiently parameterizes the space of material assignments and a tuner network that describes the optimization process used to compute material arrangement. We provide an application programming interface for specifying the desired object and for defining parameters for the reducer tree and tuner network. We illustrate the utility of our framework by implementing several fabrication algorithms as well as demonstrating the manufactured results. United States. Defense Advanced Research Projects Agency (N66001-12-1-4242); National Science Foundation (U.S.) (CCF-1138967); National Science Foundation (U.S.) (IIS-1116296); Google (Firm) (Faculty Research Award)
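
    A schematic sketch of the reducer-tree and tuner idea is given below. The class and function names are invented for illustration and do not correspond to the actual Spec2Fab API; the "tuner" here is a toy random search rather than the optimization process described in the paper.

```python
import numpy as np
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ReducerNode:
    """Maps a small parameter vector to a material-index array of a given shape."""
    reduce: Callable[[np.ndarray, tuple], np.ndarray]
    n_params: int
    children: List["ReducerNode"] = field(default_factory=list)

def layered_reducer(n_layers: int, n_materials: int = 4) -> ReducerNode:
    """Leaf reducer: one material index per horizontal layer (illustrative)."""
    def reduce(params, shape):
        materials = np.clip(np.round(params), 0, n_materials - 1).astype(int)
        layer = (np.arange(shape[2]) * n_layers) // shape[2]
        return np.broadcast_to(materials[layer], shape).copy()
    return ReducerNode(reduce=reduce, n_params=n_layers)

def tune(node: ReducerNode, objective, shape, iters=200, seed=0):
    """Toy 'tuner': random search over the reducer's parameter space."""
    rng = np.random.default_rng(seed)
    best_p, best_val = None, np.inf
    for _ in range(iters):
        p = rng.uniform(0, 4, size=node.n_params)
        val = objective(node.reduce(p, shape))
        if val < best_val:
            best_p, best_val = p, val
    return best_p, best_val
```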

    Highlight microdisparity for improved gloss depiction

    Gradient-based 2D-to-3D Conversion for Soccer Videos

    Widespread adoption of 3D videos and technologies is hindered by the lack of high-quality 3D content. One promising solution to address this problem is to use automated 2D-to-3D conversion. However, current conversion methods, while general, produce low-quality results with artifacts that are not acceptable to many viewers. We address this problem by showing how to construct a high-quality, domain-specific conversion method for soccer videos. We propose a novel, data-driven method that generates stereoscopic frames by transferring depth information from similar frames in a database of 3D stereoscopic videos. Creating a database of 3D stereoscopic videos with accurate depth is, however, very difficult. One of the key findings in this paper is that computer-generated content in current sports computer games can be used to generate a high-quality 3D video reference database for 2D-to-3D conversion methods. Once we retrieve similar 3D video frames, our technique transfers depth gradients to the target frame while respecting object boundaries. It then computes depth maps from the gradients, and generates the output stereoscopic video. We implement our method and validate it by conducting user studies that evaluate depth perception and visual comfort of the converted 3D videos. We show that our method produces high-quality 3D videos that are almost indistinguishable from videos shot by stereo cameras. In addition, our method significantly outperforms the current state-of-the-art method. For example, up to 20% improvement in the perceived depth is achieved by our method, which translates to improving the mean opinion score from Good to Excellent. Qatar Computing Research Institute-CSAIL Partnership; National Science Foundation (U.S.) (Grant IIS-1111415)
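
    One step of such a pipeline, recovering a depth map from transferred gradients, can be posed as a Poisson-style least-squares problem. The solver below is a simplified illustration of that step only, with assumed boundary handling; it is not the paper's implementation, and the gradient transfer and stereo view synthesis that surround it are omitted.

```python
import numpy as np

def depth_from_gradients(gx, gy, iters=500):
    """Recover a depth map whose gradients approximate (gx, gy).

    Solves laplacian(d) = div(gx, gy) with Jacobi iterations and
    replicated (Neumann-style) boundaries.
    gx, gy : float arrays (H, W) of target horizontal/vertical depth gradients.
    """
    h, w = gx.shape
    div = np.zeros((h, w))
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]   # d(gx)/dx, backward difference
    div[1:, :] += gy[1:, :] - gy[:-1, :]   # d(gy)/dy, backward difference

    d = np.zeros((h, w))
    for _ in range(iters):
        p = np.pad(d, 1, mode="edge")
        neighbors = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        d = (neighbors - div) / 4.0
    return d - d.min()   # shift so the nearest point sits at depth zero
```

    The resulting depth map would then drive depth-image-based rendering to synthesize the second view of the stereo pair; that step is not shown here.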