
    A perceptual approach for stereoscopic rendering optimization

    The traditional way of stereoscopic rendering requires rendering the scene for the left and right eyes separately, which doubles the rendering complexity. In this study, we propose a perceptually based approach for accelerating stereoscopic rendering. This optimization approach is based on Binocular Suppression Theory, which claims that the overall percept of a stereo pair in a region is determined by the dominant image in the corresponding region. We investigate how the binocular suppression mechanism of the human visual system can be utilized for rendering optimization. Our aim is to identify the graphics rendering and modeling features that do not affect the overall quality of a stereo pair when simplified in one view. By combining the results of this investigation with the principles of visual attention, we infer that this optimization approach is feasible if the high-quality view has more intensity contrast. For this reason, we performed a subjective experiment in which various representative graphical methods were analyzed. The experimental results verified our hypothesis that a modification applied to a single view is not perceptible if it decreases the intensity contrast, and thus can be used for stereoscopic rendering. © 2009 Elsevier Ltd. All rights reserved.
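
    The view-selection rule the abstract implies can be sketched as follows: compute the intensity contrast of each eye's image and simplify the one with lower contrast, so the high-contrast (dominant) view drives the fused percept. The function names and the use of RMS contrast here are illustrative assumptions, not details from the paper.

```python
# Sketch: simplify the eye whose rendered image has LOWER intensity
# contrast, leaving the dominant (higher-contrast) view at full quality.

def rms_contrast(pixels):
    """Root-mean-square contrast of a grayscale image given as a flat
    list of intensities in [0, 1]."""
    n = len(pixels)
    mean = sum(pixels) / n
    return (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5

def view_to_simplify(left, right):
    """Return 'left' or 'right': the view safe to render at lower quality."""
    return "left" if rms_contrast(left) < rms_contrast(right) else "right"

# A flat, low-contrast left image vs. a high-contrast right image:
left_img = [0.5, 0.5, 0.52, 0.48]
right_img = [0.1, 0.9, 0.2, 0.8]
print(view_to_simplify(left_img, right_img))  # the left view can be simplified
```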

    A framework for applying the principles of depth perception to information visualization

    During the visualization of 3D content, using depth cues selectively to support the design goals and enabling a user to perceive the spatial relationships between objects are important concerns. In this work, we automate this process by proposing a framework that determines the important depth cues for the input scene and the rendering methods that provide these cues. While determining the importance of the cues, we consider the user's tasks and the scene's spatial layout. The importance of each depth cue is calculated using a fuzzy-logic-based decision system. Then, suitable rendering methods that provide the important cues are selected by performing a cost-profit analysis on the rendering costs of the methods and their contribution to depth perception. Possible cue conflicts are considered and handled in the system. We also provide formal experimental studies designed for several visualization tasks. A statistical analysis of the experiments verifies the success of our framework.
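
    A fuzzy-logic cue-importance rule of the general kind described above might look like the sketch below. The membership breakpoints, rule weights, and the choice of "depth range" as the input property are all invented for illustration; the paper's actual rule base is not reproduced here.

```python
# Toy fuzzy rule: map a crisp scene property (normalized depth range) to a
# cue-importance score via triangular membership functions and weighted
# rule firing. All constants are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def shadow_cue_importance(depth_range):
    """Fuzzy sets 'shallow' and 'deep' over a depth range in [0, 1]."""
    shallow = tri(depth_range, -0.5, 0.0, 0.5)
    deep = tri(depth_range, 0.5, 1.0, 1.5)
    # Rule weights: deep scenes benefit more from shadows than shallow ones.
    return 0.2 * shallow + 0.9 * deep

print(shadow_cue_importance(0.0))  # shallow scene: low importance
print(shadow_cue_importance(1.0))  # deep scene: high importance
```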

    Perceptual 3D rendering based on principles of analytical cubism

    Cubism, pioneered by Pablo Picasso and Georges Braque, was a breakthrough in art that influenced artists to abandon existing traditions. In this paper, we present a novel approach for cubist rendering of 3D synthetic environments. Rather than merely imitating cubist paintings, we apply the main principles of analytical cubism to 3D graphics rendering. In this respect, we develop a new cubist camera providing an extended view, and a perceptually based spatial imprecision technique that keeps the important regions of the scene within a certain area of the output. Additionally, several methods to provide a painterly style are applied. We demonstrate the effectiveness of our extended-view method by comparing the visible face counts in images rendered by the cubist camera model and by a traditional perspective camera. We also discuss the final results and report user tests in which participants judged our results to match analytical cubist paintings well, though not synthetic cubist paintings. © 2012 Elsevier Ltd. All rights reserved.

    Three-dimensional media for mobile devices

    This paper provides an overview of the core technologies enabling the delivery of 3-D media to next-generation mobile devices. Succeeding in the design of such a system requires profound knowledge of the human visual system and the visual cues that form the perception of depth, combined with an understanding of the user requirements for designing the mobile 3-D media experience. These aspects are addressed first and related to the critical parts of the generic system within a novel user-centered research framework. Next-generation mobile devices are characterized by their portable 3-D displays, as those are considered critical for enabling a genuine 3-D experience on mobiles. Quality of 3-D content is emphasized as the most important factor for the adoption of the new technology. Quality is characterized through the most typical 3-D-specific visual artifacts on portable 3-D displays and through subjective tests addressing the acceptance of, and satisfaction with, different 3-D video representation, coding, and transmission methods. An emphasis is put on 3-D video broadcast over digital video broadcasting-handheld (DVB-H) to illustrate the importance of joint source-channel optimization of 3-D video for its efficient compression and robust transmission over error-prone channels. The comparative results identify the best coding and transmission approaches and illuminate the interaction between video quality and depth perception, along with the influence of the context of media use. Finally, the paper speculates on the role and place of 3-D multimedia mobile devices in the future internet continuum, involving users in the co-creation and refining of rich 3-D media content.

    A unified graphics rendering pipeline for autostereoscopic rendering

    Autostereoscopic displays require rendering a scene from multiple viewpoints. The architecture of current-generation graphics processors is still grounded in the historic evolution of monoscopic rendering. In this paper, we present a novel programmable rendering pipeline that renders to multiple viewpoints in a single pass. Our approach leverages the computational and memory-fetch coherence of rendering to multiple viewpoints to achieve a significant speedup. We emulate the principles of our pipeline on current-generation GPUs and present a quantitative estimate of the benefits of our approach. We make a case for the new rendering pipeline by demonstrating its benefits for a range of applications, such as autostereoscopic rendering and shadow-map computation for a scene with multiple light sources. © 2007 IEEE
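
    The coherence argument can be illustrated schematically: fetch each primitive once and transform it for every viewpoint, rather than re-traversing the scene once per view. The sketch below is a loose, CPU-side illustration of that idea only; the simple horizontal camera offsets stand in for real per-view view-projection transforms, and none of the names come from the paper.

```python
# Sketch of the single-pass multi-view idea: one geometry traversal,
# N per-view transforms, instead of N full traversals.

def render_multiview(triangles, view_offsets):
    """triangles: list of 3-vertex tuples of (x, y, z).
    view_offsets: per-view horizontal camera offsets (a stand-in for real
    view-projection matrices). Returns one projected triangle list per
    view, produced from a single pass over the geometry."""
    outputs = [[] for _ in view_offsets]
    for tri in triangles:                      # geometry fetched once...
        for v, dx in enumerate(view_offsets):  # ...reused for every view
            projected = tuple((x - dx, y, z) for x, y, z in tri)
            outputs[v].append(projected)
    return outputs

# Two views (left/right eye) from one traversal of a single triangle:
views = render_multiview([((0, 0, 1), (1, 0, 1), (0, 1, 1))], [-0.03, 0.03])
```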

    Nonverbal communication interface for collaborative virtual environments

    Nonverbal communication is an important aspect of real-life face-to-face interaction and one of the most efficient ways to convey emotions; therefore, users should be provided with the means to replicate it in the virtual world. Because articulated embodiments are well suited to providing body communication in virtual environments, this paper first reviews some of the advantages and disadvantages of complex embodiments. After a brief introduction to nonverbal communication theories, we present our solution, taking into account the practical limitations of input devices and social-science aspects. We introduce our sample of actions and their implementation using our VLNET (Virtual Life Network) networked virtual environment and discuss the results of an informal evaluation experiment.

    A color-based face tracking algorithm for enhancing interaction with mobile devices

    A color-based face tracking algorithm is proposed as a human-computer interaction tool on mobile devices. The solution provides a natural means of interaction, enabling a motion parallax effect in applications. The algorithm considers the characteristics of mobile use: constrained computational resources and varying environmental conditions. The solution is based on color comparisons and works on images gathered from the front camera of a device. In addition to color comparisons, the coherency of the facial pixels is considered in the algorithm. Several applications are also demonstrated in this work, which use the face position to determine the viewpoint in a virtual scene or to browse large images. The accuracy of the system is tested under different environmental conditions, such as lighting and background, and its performance is measured on different types of mobile devices. According to these measurements, the system allows for accurate (7% RMS error) face tracking in real time (20-100 fps). © Springer-Verlag 2010
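
    The two ingredients named in the abstract, a per-pixel color test and a coherency check on the facial pixels, can be sketched as below. The thresholds, the scalar "skin likelihood" input, and the use of a largest-connected-blob centroid are illustrative assumptions, not the paper's actual method.

```python
# Sketch: (1) threshold a per-pixel color score, (2) keep only the largest
# 4-connected blob of passing pixels, and report its centroid as the face
# position. All constants are made up for illustration.

def skin_mask(image, lo=0.4, hi=0.8):
    """image: 2D grid of per-pixel skin-likelihood values in [0, 1].
    Returns a boolean mask of pixels passing the color test."""
    return [[lo <= v <= hi for v in row] for row in image]

def largest_blob_centroid(mask):
    """Flood-fill the mask; return the (row, col) centroid of the largest
    4-connected blob, enforcing the coherency of the facial pixels."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(blob) > len(best):
                    best = blob
    ys, xs = zip(*best)
    return (sum(ys) / len(best), sum(xs) / len(best))

# A 2x2 face-colored blob plus one stray pixel; the stray is rejected:
mask = skin_mask([[0.5, 0.5, 0.0], [0.5, 0.5, 0.0], [0.0, 0.0, 0.6]])
print(largest_blob_centroid(mask))  # centroid of the 2x2 blob
```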

    A framework for enhancing depth perception in computer graphics

    This paper introduces a solution for enhancing depth perception in a given 3D computer-generated scene. For this purpose, we propose a framework that decides on the suitable depth cues for a given scene and the rendering methods that provide these cues. First, the system calculates the importance of each depth cue using a fuzzy-logic-based algorithm that considers the target tasks in the application and the spatial layout of the scene. Then, a knapsack model is constructed to balance the rendering costs of the graphical methods that provide these cues against their contribution to depth perception. This cost-profit analysis step selects the proper rendering methods. In this work, we also present several objective and subjective experiments which show that our automated depth-enhancement system is statistically (p < 0.05) better than the other method-selection techniques tested. © 2010 ACM
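
    The cost-profit step maps naturally onto a 0/1 knapsack: each rendering method has a cost (frame-time overhead) and a profit (its contribution to the important depth cues), and the system picks the subset maximizing profit within a cost budget. A minimal dynamic-programming sketch; the method names, costs, and profits below are invented, not taken from the paper.

```python
# 0/1 knapsack over candidate rendering methods: maximize total
# contribution to depth perception subject to a rendering-cost budget.

def select_methods(methods, budget):
    """methods: list of (name, cost, profit) with integer costs.
    Returns (best_profit, chosen_names)."""
    best = {0: (0, [])}  # cost spent -> (profit, chosen methods)
    for name, cost, profit in methods:
        # Snapshot the states so each method is used at most once.
        for spent, (p, chosen) in sorted(best.items()):
            new_cost = spent + cost
            if new_cost <= budget:
                cand = (p + profit, chosen + [name])
                if new_cost not in best or cand[0] > best[new_cost][0]:
                    best[new_cost] = cand
    return max(best.values())

methods = [
    ("shadow_mapping",    4, 9),  # strong occlusion/contact cue, costly
    ("depth_of_field",    3, 5),
    ("fog",               1, 3),  # cheap aerial-perspective cue
    ("ambient_occlusion", 5, 8),
]
profit, chosen = select_methods(methods, budget=8)
print(profit, chosen)  # 17 ['shadow_mapping', 'depth_of_field', 'fog']
```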

    A decision theoretic approach to motion saliency in computer animations

    We describe a model to calculate the saliency of objects due to their motion. In a decision-theoretic fashion, perceptually significant objects in a scene are detected. The work is based on psychological studies and findings on motion perception. By considering motion cues and attributes, we define six motion states. For each object in a scene, an individual saliency value is calculated from its current motion state and the inhibition-of-return principle. Furthermore, a global saliency value is computed for each object, accounting for the objects' relationships with one another and the relative magnitudes of their saliency values. The position of the object with the highest attention value is predicted as a possible gaze point for each frame of the animation. We conducted several eye-tracking experiments to observe, in practice, the motion-attention principles from the psychology literature. We also performed final user studies to evaluate our model and its effectiveness. © 2011 Springer-Verlag
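
    Two of the ingredients named above, a per-state saliency weight and inhibition of return (recently attended objects are suppressed so gaze moves on), can be combined in a toy gaze predictor like the one below. The six state names, their weights, and the decay constants are placeholders, not the paper's values.

```python
# Toy gaze predictor: per-frame, attend the object with the highest
# inhibition-adjusted motion-state saliency. All constants are assumptions.

STATE_SALIENCY = {"appear": 1.0, "accelerate": 0.8, "turn": 0.6,
                  "move": 0.4, "decelerate": 0.3, "static": 0.1}

def predict_gaze(frames, ior_decay=0.5, ior_strength=0.9):
    """frames: list of dicts {object_id: motion_state}.
    Returns the predicted gaze target (object id) for each frame."""
    inhibition = {}  # object_id -> current suppression in [0, 1]
    gaze = []
    for frame in frames:
        scores = {oid: STATE_SALIENCY[s] * (1.0 - inhibition.get(oid, 0.0))
                  for oid, s in frame.items()}
        winner = max(scores, key=scores.get)
        gaze.append(winner)
        # Inhibition of return: suppress the attended object, let others recover.
        for oid in list(inhibition):
            inhibition[oid] *= ior_decay
        inhibition[winner] = ior_strength
    return gaze

# Object A accelerates, B moves steadily; after attending A once, gaze shifts:
frames = [{"A": "accelerate", "B": "move"}, {"A": "accelerate", "B": "move"}]
print(predict_gaze(frames))  # ['A', 'B']
```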

    A clustering-based method to estimate saliency in 3D animated meshes

    We present a model to determine the perceptually significant elements in animated 3D scenes using a motion-saliency method. Our model clusters vertices with similar motion-related behaviors. To find these similarities, for each frame of an animated mesh sequence, the vertices' motion properties are analyzed and clustered using a Gestalt approach. Each cluster is analyzed as a single unit, and representative vertices of each cluster are used to extract the motion-saliency values of each group. We evaluate our method with an eye-tracker-based user study in which we analyze observers' reactions to vertices with high and low saliency. The experimental results verify that our proposed model correctly detects the regions of interest in each frame of an animated mesh. © 2014 Elsevier Ltd
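
    The pipeline shape described above, group vertices with similar per-frame motion, then score each group once through a representative, can be sketched minimally as follows. The greedy single-link grouping, the distance threshold, and mean speed as the saliency proxy are all assumptions for illustration; the paper's Gestalt clustering is not reproduced here.

```python
# Sketch: cluster vertices by velocity similarity, then assign each
# cluster one saliency value via its members' mean speed.

def cluster_by_motion(velocities, threshold=0.5):
    """Greedy grouping of 2D velocity vectors: a vertex joins the first
    cluster whose representative (first member) is within `threshold`.
    Returns a list of clusters, each a list of vertex indices."""
    clusters = []
    for i, v in enumerate(velocities):
        for cluster in clusters:
            rep = velocities[cluster[0]]
            if (v[0] - rep[0]) ** 2 + (v[1] - rep[1]) ** 2 <= threshold ** 2:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

def cluster_saliency(velocities, clusters):
    """One saliency value per cluster: mean speed of its members."""
    return [sum((velocities[i][0] ** 2 + velocities[i][1] ** 2) ** 0.5
                for i in c) / len(c) for c in clusters]

# Two nearly static vertices and two fast ones form two clusters:
vel = [(0.0, 0.0), (0.1, 0.0), (2.0, 2.0), (2.1, 2.0)]
groups = cluster_by_motion(vel)
print(groups, cluster_saliency(vel, groups))
```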