
    Glasgow's Stereo Image Database of Garments

    To provide insight into cloth perception and manipulation with an active binocular robotic vision system, we have compiled and released a database of 80 stereo-pair colour images with corresponding horizontal and vertical disparity maps and mask annotations for 3D garment point-cloud rendering. The stereo-image garment database is part of research conducted under the EU-FP7 Clothes Perception and Manipulation (CloPeMa) project and belongs to a wider database collection released through CloPeMa (www.clopema.eu). The database is based on 16 different off-the-shelf garments, each imaged in five different pose configurations on the project's binocular robot head. A full copy of the database is made available for scientific research only at https://sites.google.com/site/ugstereodatabase/. Comment: 7 pages, 6 figures, image database
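    The database pairs each stereo image with a per-pixel horizontal disparity map, which is enough to triangulate a garment point cloud once the rig calibration is known. Below is a minimal sketch of that reconstruction, assuming a rectified pair; the file names, focal length, and baseline are illustrative placeholders, not values from the database itself.

```python
import numpy as np
import cv2  # OpenCV

# Hypothetical file names; the real database layout may differ.
left = cv2.imread("garment01_pose1_left.png")
disparity = np.load("garment01_pose1_hdisp.npy")          # horizontal disparity (px)
mask = cv2.imread("garment01_pose1_mask.png", cv2.IMREAD_GRAYSCALE) > 0

# Assumed rectified-camera parameters, not taken from the database.
f = 2000.0                 # focal length in pixels
baseline = 0.12            # camera separation in metres
cy, cx = left.shape[0] / 2.0, left.shape[1] / 2.0

ys, xs = np.nonzero(mask & (disparity > 0))
z = f * baseline / disparity[ys, xs]                      # depth from disparity
points = np.stack([(xs - cx) * z / f, (ys - cy) * z / f, z], axis=1)
colors = left[ys, xs]                                     # N x 3 points plus colour
```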

    Highlight microdisparity for improved gloss depiction


    A perceptual approach for stereoscopic rendering optimization

    The traditional approach to stereoscopic rendering requires rendering the scene separately for the left and right eyes, which doubles the rendering cost. In this study, we propose a perceptually based approach for accelerating stereoscopic rendering. This optimization approach is based on the Binocular Suppression Theory, which holds that the overall percept of a stereo pair in a region is determined by the dominant image in the corresponding region. We investigate how the binocular suppression mechanism of the human visual system can be exploited for rendering optimization. Our aim is to identify the graphics rendering and modelling features that do not affect the overall quality of a stereo pair when simplified in one view. Combining the results of this investigation with the principles of visual attention, we infer that this optimization approach is feasible if the high-quality view has the higher intensity contrast. We therefore performed a subjective experiment in which various representative graphical methods were analyzed. The experimental results verified our hypothesis that a modification applied to a single view is not perceptible if it decreases the intensity contrast, and can thus be used for stereoscopic rendering.
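    The practical decision the paper's rule implies is which of the two views may safely be simplified. A minimal sketch of that selection follows, assuming RMS contrast as the intensity-contrast measure; the function names and decision rule are illustrative, not taken from the paper.

```python
import numpy as np

def rms_contrast(gray_region: np.ndarray) -> float:
    """RMS contrast: standard deviation of intensities normalized to [0, 1]."""
    return float((gray_region.astype(np.float64) / 255.0).std())

def view_to_simplify(left_region: np.ndarray, right_region: np.ndarray) -> str:
    """Simplify the lower-contrast view: per binocular suppression,
    the higher-contrast view dominates the fused percept."""
    if rms_contrast(left_region) >= rms_contrast(right_region):
        return "right"   # right eye's image may be rendered at reduced quality
    return "left"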

    Efficient rendering for three-dimensional displays

    This thesis explores more efficient methods for visualizing point data sets on three-dimensional (3D) displays. Point data sets are used in many scientific applications, e.g. cosmological simulations. Visualizing these data sets in 3D is desirable because it can more readily reveal structure and unknown phenomena. However, cutting-edge scientific point data sets are very large, and producing/rendering even a single image is expensive. Furthermore, current literature suggests that the ideal number of views for 3D (multiview) displays can be in the hundreds, which compounds the cost. The accepted notion that many views are required for 3D displays is challenged through a novel human-factors trial. The results suggest that humans are surprisingly insensitive to the number of viewpoints with regard to task performance when occlusion in the scene is not a dominant factor. Existing stereoscopic rendering algorithms can have high set-up costs, which limits their use, and none is tuned for uncorrelated 3D point rendering. This thesis shows that it is possible to improve rendering speed for a low number of views by perspective reprojection. The novelty of the approach lies in delaying the reprojection and generation of the viewpoints until the fragment stage of the pipeline, and in streamlining the rendering pipeline for points only. Theoretical analysis suggests a fragment reprojection scheme will render at least 2.8 times faster than naïvely re-rendering the scene from multiple viewpoints. Building upon the fragment reprojection technique, further rendering performance is shown to be possible (at the cost of some rendering accuracy) by restricting the amount of reprojection required according to the stereoscopic resolution of the display. A significant benefit is that scene depth can be mapped arbitrarily to the perceived depth range of the display at no extra cost over a single region-mapping approach. Using an average case study (rendering 500k points for a 9-view High Definition 3D display), theoretical analysis suggests that this new approach is capable of twice the performance gain of reprojecting every single fragment, and quantitative measurements show the algorithm to be 5 times faster than a naïve rendering approach. Further detailed quantitative results, under varying scenarios, are provided and discussed.
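    The core of the fragment-stage reprojection can be sketched on the CPU: a fragment rendered once in a central view is shifted horizontally into each of the N views by a parallax that depends only on its depth. The parameter values and sign convention below are assumptions for illustration, not the thesis's actual pipeline.

```python
import numpy as np

def reproject_fragment(x, y, z, n_views=9, eye_spacing=0.06,
                       focal_px=1500.0, zero_parallax_depth=2.0):
    """Shift a central-view fragment (x, y in pixels, depth z in metres)
    into each view. Parallax is zero at the zero-parallax depth and
    grows with the view's lateral offset; all values are illustrative."""
    centre = (n_views - 1) / 2.0
    out = []
    for i in range(n_views):
        offset = (i - centre) * eye_spacing                  # lateral eye offset (m)
        dx = focal_px * offset * (1.0 / zero_parallax_depth - 1.0 / z)
        out.append((x + dx, y))
    return out
```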

    MegaParallax: Casual 360° Panoramas with Motion Parallax


    A 3D reconstruction from real-time stereoscopic images using GPU

    In this article we propose a new technique for obtaining a three-dimensional (3D) reconstruction from stereoscopic images captured by a stereoscopic system in real time. To parallelize the 3D reconstruction, we propose a method that uses a Graphics Processing Unit (GPU) and a disparity map computed with a block matching (BM) algorithm. The results show that the GPU accelerates image processing throughput, measured in frames per second (FPS), with respect to the same method running on a Central Processing Unit (CPU). This speed advantage makes our system suitable for practical applications such as aerial reconnaissance, cartography, robotic navigation and obstacle detection.
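    The disparity-then-reproject pipeline the article describes can be sketched with OpenCV's CPU block matcher; the paper's contribution is running this stage on the GPU, which is not shown here. The file names, focal length, and baseline in the Q matrix below are placeholders, not values from the paper.

```python
import numpy as np
import cv2

# Rectified grayscale stereo pair (hypothetical file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching disparity, as in the article (CPU version shown).
bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = bm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Q normally comes from cv2.stereoRectify; this one assumes a
# 1500 px focal length and a 0.1 m baseline, purely for illustration.
h, w = left.shape
Q = np.float32([[1, 0, 0, -w / 2.0],
                [0, 1, 0, -h / 2.0],
                [0, 0, 0, 1500.0],
                [0, 0, 1.0 / 0.1, 0]])
points_3d = cv2.reprojectImageTo3D(disparity, Q)               # H x W x 3 map
```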

    Synthetic content generation for auto-stereoscopic displays

    As auto-stereoscopic visualization emerges as one of the leading trends in display technology, new content-generation techniques for this kind of visualization are required. In this paper we present a study of multi-view synthetic content generation, examining several camera setups (planar, cylindrical and hyperbolic) and their configurations. We discuss the different effects obtained by varying the parameters of these setups. A user study was conducted to analyze visual perception, asking participants for their optimal visualization settings. To create the virtual content, a multi-view system has been integrated into a powerful game engine, which allows us to exploit the latest graphics hardware advances. This integration is detailed, and several demos and videos accompany this paper, showing a virtual world rendered for auto-stereoscopic displays and the same scenario in a two-view anaglyph representation that can be viewed on any conventional display. In all these demos, the studied parameters can be modified, making it easy to appreciate their effects in a virtual scene.
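    The planar and cylindrical setups the study compares differ only in where the virtual cameras sit and where they aim. A small sketch of both rigs follows, with every parameter (view count, spacing, radius, arc) chosen for illustration rather than taken from the paper.

```python
import numpy as np

def planar_rig(n_views=8, spacing=0.065):
    """Cameras on a line, all sharing one view direction (off-axis
    planar setup); spacing is the inter-camera distance in metres."""
    centre = (n_views - 1) / 2.0
    return [np.array([(i - centre) * spacing, 0.0, 0.0]) for i in range(n_views)]

def cylindrical_rig(n_views=8, radius=2.0, arc_deg=30.0):
    """Cameras on an arc of the given radius, each converging on the
    arc's centre of curvature (cylindrical setup)."""
    half = np.radians(arc_deg) / 2.0
    return [np.array([radius * np.sin(a), 0.0, radius * (1.0 - np.cos(a))])
            for a in np.linspace(-half, half, n_views)]
```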