    Objective Evaluation Criteria for Shooting Quality of Stereo Cameras over Short Distance

    Stereo cameras are the basic tools for acquiring stereoscopic image pairs and can deliver high image quality. However, inappropriate shooting conditions may cause discomfort when viewing stereo images, so it is necessary to establish perceptual criteria for evaluating the shooting quality of stereo cameras. This article proposes objective quality evaluation criteria based on the characteristics of parallel and toed-in camera configurations. Taking their different internal structures and basic shooting principles into account, it focuses on short-distance shooting conditions and establishes assessment criteria for both configurations. Experimental results show that the proposed criteria can predict the visual perception of stereoscopic images and effectively evaluate stereoscopic image quality.
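    The parallel and toed-in configurations discussed above produce disparity in characteristically different ways at short range. A minimal sketch of the two geometric models (the function names and example baseline/focal values are illustrative assumptions, not taken from the paper):

```python
def parallel_disparity(depth_m, baseline_m, focal_px):
    """Screen disparity (pixels) for a parallel rig: d = f * b / Z.

    Disparity never reaches zero and grows rapidly at short distances,
    which is one source of viewing discomfort in close-range shots.
    """
    return focal_px * baseline_m / depth_m


def toed_in_disparity(depth_m, baseline_m, focal_px, convergence_m):
    """First-order disparity model for a toed-in rig converged at
    `convergence_m`: zero at the convergence distance, crossed
    (negative) for nearer points, uncrossed for farther ones.
    """
    return focal_px * baseline_m * (1.0 / depth_m - 1.0 / convergence_m)


# Illustrative values: 65 mm baseline, 1000 px focal length, 0.5 m subject.
d_par = parallel_disparity(0.5, 0.065, 1000.0)          # large offset (130 px)
d_toe = toed_in_disparity(0.5, 0.065, 1000.0, 0.5)      # zero at convergence
```

The contrast motivates configuration-specific criteria: the same short-distance scene yields very different disparity behaviour under the two geometries.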

    Visual Comfort Assessment for Stereoscopic Image Retargeting

    In recent years, visual comfort assessment (VCA) for 3D/stereoscopic content has attracted extensive attention. However, much less work has been done on the perceptual evaluation of stereoscopic image retargeting. In this paper, we first build a Stereoscopic Image Retargeting Database (SIRD), which contains source images and retargeted images produced by four typical stereoscopic retargeting methods. We then conduct a subjective experiment to assess four aspects of visual distortion, i.e., visual comfort, image quality, depth quality, and overall quality. Furthermore, we propose a Visual Comfort Assessment metric for Stereoscopic Image Retargeting (VCA-SIR). Based on the characteristics of stereoscopic retargeted images, the proposed model introduces novel features such as disparity range, boundary disparity, and disparity intensity distribution into the assessment model. Experimental results demonstrate that VCA-SIR achieves high consistency with subjective perception.
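    Disparity statistics of the kind the abstract names can be extracted from a per-pixel disparity map. A simplified sketch (these three features are illustrative stand-ins, not the paper's exact VCA-SIR feature definitions):

```python
import numpy as np


def disparity_features(disparity_map):
    """Toy comfort-related statistics from a disparity map in pixels.

    Crossed (negative) disparity corresponds to content appearing in
    front of the screen plane, which is often less comfortable to fuse.
    """
    d = np.asarray(disparity_map, dtype=float).ravel()
    return {
        "disparity_range": float(d.max() - d.min()),
        "mean_abs_disparity": float(np.abs(d).mean()),
        "crossed_fraction": float((d < 0).mean()),  # share of "pop-out" pixels
    }


# A tiny 2x2 disparity map with one crossed-disparity pixel.
feats = disparity_features([[-4.0, 0.0], [2.0, 6.0]])
```

In a full metric such features would be pooled and fed to a learned regressor against subjective comfort scores.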

    Two independent mechanisms for motion-in-depth perception : evidence from individual differences

    Our forward-facing eyes give us the advantage of binocular visual information: using the tiny differences between right- and left-eye views to learn about depth and location in three dimensions. Our visual systems also contain specialized mechanisms to detect motion-in-depth from binocular vision, but the nature of these mechanisms remains controversial. Binocular motion-in-depth perception could theoretically be based on first detecting binocular disparity and then monitoring how it changes over time. The alternative is to monitor the motion in the right and left eye separately and then compare these motion signals. Here we used an individual differences approach to test whether the two sources of information are processed via dissociated mechanisms, and to measure the relative importance of those mechanisms. Our results suggest the existence of two distinct mechanisms, each contributing to the perception of motion-in-depth in most observers. Additionally, for the first time, we demonstrate the relative prevalence of the two mechanisms within a normal population. In general, visual systems appear to rely mostly on the mechanism sensitive to changing binocular disparity, but perception of motion-in-depth is augmented by the presence of a less sensitive mechanism that uses interocular velocity differences. Occasionally, we find observers with the opposite pattern of sensitivity. More generally, this work showcases the power of the individual differences approach in studying the functional organization of cognitive systems.
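    The two candidate mechanisms differ only in the order of operations: changing disparity (CD) differences across the eyes first and then over time, while interocular velocity difference (IOVD) differences over time first and then across the eyes. A minimal sketch on 1-D retinal positions (the variable names and sample trajectories are illustrative):

```python
def changing_disparity_signal(left_pos, right_pos):
    """CD: compute binocular disparity at each time step, then
    differentiate the disparity signal over time."""
    disparity = [l - r for l, r in zip(left_pos, right_pos)]
    return [b - a for a, b in zip(disparity, disparity[1:])]


def iovd_signal(left_pos, right_pos):
    """IOVD: compute each eye's retinal velocity first, then take the
    difference between the two velocity signals."""
    vl = [b - a for a, b in zip(left_pos, left_pos[1:])]
    vr = [b - a for a, b in zip(right_pos, right_pos[1:])]
    return [l - r for l, r in zip(vl, vr)]


# An object approaching the head: opposite horizontal motion in each eye.
left = [0.0, 1.0, 2.0]
right = [0.0, -1.0, -2.0]
cd = changing_disparity_signal(left, right)
iovd = iovd_signal(left, right)
```

For noiseless input the two orderings commute and yield the same signal; the paper's point is that the visual system nonetheless implements them as dissociable mechanisms with different sensitivities across observers.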

    On the Calibration of Active Binocular and RGBD Vision Systems for Dual-Arm Robots

    This paper describes a camera and hand-eye calibration methodology for integrating an active binocular robot head within a dual-arm robot. For this purpose, we derive the forward kinematic model of our active robot head and describe our methodology for calibrating and integrating it. This rigid calibration provides a closed-form hand-to-eye solution. We then present an approach for dynamically updating the cameras' extrinsic parameters for optimal 3D reconstruction, which is the foundation for robotic tasks such as grasping and manipulating rigid and deformable objects. Experimental results show that our robot head achieves an overall sub-millimetre accuracy (less than 0.3 millimetres) while recovering the 3D structure of a scene. In addition, we report a comparative study between current RGBD cameras and our active stereo head within two dual-arm robotic testbeds that demonstrates the accuracy and portability of our proposed methodology.
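    To see why calibrated extrinsics matter for sub-millimetre reconstruction, consider rectified stereo triangulation, where depth is Z = f·b / (x_l − x_r). A sketch with assumed rig parameters (not the paper's actual robot head values) showing how a half-pixel disparity error propagates into depth:

```python
def triangulate_parallel(baseline_m, focal_px, xl_px, xr_px):
    """Depth of a point from a rectified stereo pair: Z = f * b / d,
    where d = xl - xr is the disparity in pixels."""
    disparity = xl_px - xr_px
    return focal_px * baseline_m / disparity


# Assumed rig: 10 cm baseline, 2000 px focal length, target at 0.5 m,
# so the true disparity is 2000 * 0.1 / 0.5 = 400 px.
z_true = triangulate_parallel(0.1, 2000.0, 400.0, 0.0)    # exactly 0.5 m
z_noisy = triangulate_parallel(0.1, 2000.0, 400.5, 0.0)   # half-pixel error
error_mm = abs(z_noisy - z_true) * 1000.0                 # well under 1 mm
```

At short range the depth error scales as Z²/(f·b) per pixel of disparity error, which is why accurate extrinsics (and a wide enough baseline) are needed to stay under the 0.3 mm figure reported in the paper.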

    Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs

    The human visual system relies on both binocular stereo cues and monocular focusness cues to gain effective 3D perception. In computer vision, the two problems are traditionally solved in separate tracks. In this paper, we present a unified learning-based technique that simultaneously uses both types of cues for depth inference. Specifically, we use a pair of focal stacks as input to emulate human perception. We first construct a comprehensive focal stack training dataset synthesized by depth-guided light field rendering. We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended depth of field (EDoF) image from the focal stack, and a Stereo-Net to conduct stereo matching. We show how to integrate them into a unified BDfF-Net to obtain high-quality depth maps. Comprehensive experiments show that our approach outperforms the state-of-the-art in both accuracy and speed and effectively emulates the human visual system.
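    The focusness cue exploited here has a classical non-learned counterpart: score each slice of a focal stack with a sharpness measure and pick the slice that is most in focus. A minimal depth-from-focus sketch (the modified-Laplacian measure and the synthetic 8×8 stack are illustrative, not the paper's networks):

```python
import numpy as np


def focus_measure(img):
    """Sum-modified-Laplacian sharpness score: in-focus regions have
    strong local intensity variation, defocused regions are smooth."""
    lap_x = np.abs(2 * img[1:-1, 1:-1] - img[1:-1, :-2] - img[1:-1, 2:])
    lap_y = np.abs(2 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1])
    return float((lap_x + lap_y).sum())


def depth_from_focus(focal_stack, depths):
    """Return the focus depth of the sharpest slice in the stack."""
    scores = [focus_measure(s) for s in focal_stack]
    return depths[int(np.argmax(scores))]


# Synthetic stack: a defocused (flat) slice and an in-focus (textured) one.
sharp = (np.indices((8, 8)).sum(0) % 2).astype(float)  # checkerboard texture
blurry = np.full((8, 8), 0.5)
est = depth_from_focus([blurry, sharp], [0.3, 0.6])
```

Focus-Net can be viewed as a learned, per-pixel generalization of this winner-take-all rule, which BDfF-Net then fuses with the stereo cue from Stereo-Net.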

    Selective rendering for efficient ray traced stereoscopic images

    Depth-related visual effects are a key feature of many virtual environments. In stereo-based systems, the depth effect can be produced by delivering frames of disparate image pairs, while in monocular environments, the viewer has to extract this depth information from a single image by examining details such as perspective and shadows. This paper investigates, via a number of psychophysical experiments, whether we can reduce computational effort and still achieve perceptually high-quality rendering for stereo imagery. We examined selectively rendering the image pairs by exploiting the fusing capability and depth perception underlying human stereo vision. In ray-tracing-based global illumination systems, a higher image resolution introduces more computation to the rendering process since many more rays need to be traced. We first investigated whether we could utilise the human binocular fusing ability and significantly reduce the resolution of one image of the pair yet retain high perceptual quality under stereo viewing conditions. Secondly, we evaluated subjects' performance on a specific visual task that required accurate depth perception. We found that subjects required far fewer rendered depth cues in the stereo viewing environment to perform the task well, and avoiding rendering these detailed cues saved significant computational time. In fact, it was possible to achieve better task performance in the stereo viewing condition with a combined rendering time for the image pair lower than that required for the single monocular image. The outcome of this study suggests that we can produce more efficient stereo images for depth-related visual tasks by selective rendering and exploiting inherent features of human stereo vision.