45 research outputs found

    Recovering Stereo Pairs from Anaglyphs

    Get PDF
An anaglyph is a single image created by selecting complementary colors from a stereo color pair; the user perceives depth by viewing it through color-filtered glasses. We propose a technique to reconstruct the original color stereo pair from such an anaglyph. We modify SIFT-Flow and use it to initially match the different color channels across the two views. Our technique then iteratively refines the matches, selects the good matches (which define the "anchor" colors), and propagates the anchor colors. We use a diffusion-based technique for the color propagation, with an added step to suppress unwanted colors. Results on a variety of inputs demonstrate the robustness of our technique. We also extend our method to anaglyph videos by using optical flow between frames.
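The forward construction the abstract describes, combining a stereo pair into a red/cyan anaglyph, can be sketched in a few lines of NumPy. The channel assignment (red from the left view, green and blue from the right) is the common convention; `make_anaglyph` is a hypothetical helper, not the paper's code:

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine a stereo pair (H x W x 3, RGB, floats) into a red/cyan anaglyph.

    Red comes from the left view; green and blue come from the right,
    so color-filtered glasses route each half back to the matching eye.
    """
    anaglyph = right.copy()
    anaglyph[..., 0] = left[..., 0]  # overwrite red with the left view's red
    return anaglyph

# Toy example: a pure-red left view and a pure-cyan right view.
left = np.zeros((1, 2, 3)); left[..., 0] = 1.0
right = np.zeros((1, 2, 3)); right[..., 1:] = 1.0
ana = make_anaglyph(left, right)
# Every channel of `ana` is 1.0: red from the left, green/blue from the right.
```

Reconstructing the stereo pair, as the paper does, is the much harder inverse problem: two of the left view's channels and one of the right view's are simply missing and must be inferred.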

    Stereo Computation for a Single Mixture Image

    Full text link
This paper proposes an original problem of "stereo computation from a single mixture image" -- a challenging problem that has not been researched before. The goal is to separate (i.e., unmix) a single mixture image into two constituent image layers, such that the two layers form a left-right stereo image pair from which a valid disparity map can be recovered. This is a severely ill-posed problem: from one input image, one effectively aims to recover three (i.e., the left image, the right image, and a disparity map). In this work we give a novel deep-learning-based solution by jointly solving the two subtasks of image layer separation and stereo matching. Training our deep net is simple, as it does not need disparity maps. Extensive experiments demonstrate the efficacy of our method. Comment: Accepted by European Conference on Computer Vision (ECCV) 201

    Reprocessing anaglyph images

    Full text link

    Novel haptic interface For viewing 3D images

    Get PDF
In recent years there has been an explosion of devices and systems capable of displaying stereoscopic 3D images. While these systems provide an improved experience over traditional two-dimensional displays, they often fall short on user immersion, usually improving depth perception only by relying on the stereopsis phenomenon. We propose a system that improves user experience and immersion through a position-dependent rendering of the scene and the ability to touch the scene. The system uses depth maps to represent the geometry of the scene; depth maps can be obtained easily during the rendering process, or derived from binocular stereo images by calculating their horizontal disparity. This geometry is then used as input to render the scene on a 3D display, perform the haptic rendering calculations, and produce a position-dependent rendering of the scene. We present two main contributions. First, since haptic devices have a finite workspace and limited resolution, we use what we call detail mapping algorithms. These algorithms compress the geometry information contained in a depth map, by reducing the contrast among pixels, so that it can be rendered on a display medium of limited resolution without losing detail. Second, we uniquely combine a depth camera as a motion-capture system, a 3D display, and a haptic device to enhance the user experience. While developing this system we paid special attention to the cost and availability of the hardware. We decided to use only off-the-shelf, mass-consumer hardware so that our experiments can be easily implemented and replicated. As an additional benefit, the total cost of the hardware did not exceed one thousand dollars, making it affordable for many individuals and institutions.
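The abstract does not spell out its detail mapping algorithms, but the stated idea, compressing a depth map's range so it fits a device of limited resolution, can be illustrated with a naive linear remap. The function name and the level count are assumptions for illustration only; a real detail-mapping algorithm would additionally preserve local contrast:

```python
import numpy as np

def compress_depth(depth, out_levels=16):
    """Linearly remap a depth map onto a coarse set of output levels.

    This sketch only shows the global range compression; the paper's
    detail mapping also reduces contrast among pixels so local detail
    survives the limited resolution of the output medium.
    """
    d_min, d_max = float(depth.min()), float(depth.max())
    if d_max == d_min:                    # flat map: nothing to compress
        return np.zeros_like(depth)
    norm = (depth - d_min) / (d_max - d_min)  # global range -> [0, 1]
    return np.round(norm * (out_levels - 1))  # quantize to device levels

depth = np.array([[0.0, 100.0],
                  [50.0, 25.0]])
levels = compress_depth(depth, out_levels=16)
# 0 -> level 0, 100 -> level 15, 50 -> level 8, 25 -> level 4
```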

    Depth Map Estimation and Colorization of Anaglyph Images Using Local Color Prior and Reverse Intensity Distribution

    Get PDF
In this paper, we present a joint iterative anaglyph stereo matching and colorization framework for obtaining a set of disparity maps and colorized images. Conventional stereo matching algorithms fail on anaglyph images, whose two view images do not have similar intensities. To resolve this problem, we propose two novel data costs, using a local color prior and a reverse intensity distribution factor, for obtaining accurate depth maps. To colorize an anaglyph image, each pixel in one view is warped to the other view using the obtained disparity values of non-occluded regions. A colorization algorithm using optimization is then employed, with an additional constraint, to colorize the remaining occluded regions. Experimental results confirm that the proposed unified framework is robust and produces accurate depth maps and colorized stereo images. National Research Foundation of Korea (Basic Science Research Program (Ministry of Education, NRF-2012R1A1A2009495)); National Research Foundation of Korea (Korea government (MSIP), grant No. NRF-2013R1A2A2A01069181)
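The warping step, copying each non-occluded pixel from one view to the other using its disparity, can be sketched as follows. The sign convention and the helper name are assumptions for illustration, not the paper's code:

```python
import numpy as np

def warp_view(src, disparity):
    """Warp intensities from `src` into the other view using per-pixel disparity.

    For each target pixel (y, x) we look up the source pixel
    (y, x + disparity[y, x]); pixels whose source falls outside the image
    are left unfilled -- these are the occluded regions that a separate
    optimization-based colorization step must fill.
    """
    h, w = disparity.shape
    out = np.zeros_like(src)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xs = x + int(disparity[y, x])
            if 0 <= xs < w:
                out[y, x] = src[y, xs]
                filled[y, x] = True
    return out, filled

src = np.array([[10.0, 20.0, 30.0]])   # one scanline of intensities
disp = np.array([[1, 1, 1]])           # uniform disparity of one pixel
warped, filled = warp_view(src, disp)
# warped -> [[20., 30., 0.]]; the last pixel has no source (occluded)
```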

    Temporally Coherent Video De-Anaglyph

    Get PDF
Talk and Poster at SIGGRAPH 2014. For a long time, stereoscopic 3D videos were usually encoded and shown in the anaglyph format. This format combines the two stereo views into a single color image by splitting its color spectrum and assigning each view to one half of the spectrum, for example red for the left and cyan (blue+green) for the right view. Glasses with matching color filters then separate the color channels again to provide the appropriate view to each eye. This simplicity made anaglyph stereo a popular choice for showing stereoscopic content, as it works with existing screens, projectors and print media. However, modern stereo displays and projectors natively support two full-color views, and avoid the viewing discomfort associated with anaglyph videos. Our work investigates how to convert existing anaglyph videos to the full-color stereo format used by modern displays. Anaglyph videos contain only half the color information of full-color videos, and the missing color channels need to be reconstructed from the existing ones in a plausible and temporally coherent fashion. Joulin and Kang [2013] propose an approach that works well for images, but their extension to video is limited by the heavy computational complexity of their approach. Other techniques only support single images and, when applied to each frame of a video, generally produce flickering results. In our approach, we put the temporal coherence of the stereo results front and center by expressing Joulin and Kang's approach within the practical temporal consistency framework of Lang et al. [2012]. As a result, our approach is both efficient and temporally coherent. In addition, it computes temporally coherent optical flow and disparity maps that can be used for various post-processing tasks.

    Learned Dual-View Reflection Removal

    Get PDF
Traditional reflection removal algorithms either use a single image as input, which suffers from intrinsic ambiguities, or use multiple images from a moving camera, which is inconvenient for users. We instead propose a learning-based dereflection algorithm that uses stereo images as input. This is an effective trade-off between the two extremes: the parallax between two views provides cues to remove reflections, and two views are easy to capture due to the adoption of stereo cameras in smartphones. Our model consists of a learning-based reflection-invariant flow model for dual-view registration, and a learned synthesis model for combining aligned image pairs. Because no dataset for dual-view reflection removal exists, we render a synthetic dataset of dual views with and without reflections for use in training. Our evaluation on an additional real-world dataset of stereo pairs shows that our algorithm outperforms existing single-image and multi-image dereflection approaches. Comment: http://sniklaus.com/dualre

    Epipolar image rectification through geometric algorithms with unknown parameters

    Full text link
Herráez Boquera, J., Denia Rios, J.L., Navarro Esteve, P.J., Rodríguez Pereña, J., Martín Sánchez, M.T. "Epipolar image rectification through geometric algorithms with unknown parameters". J. Electron. Imaging. 22(4), 043021 (Dec 02, 2013). © (2013) Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. http://dx.doi.org/10.1117/1.JEI.22.4.043021

Image processing in photogrammetry is commonly used for scene reconstruction. Although two-dimensional applications can be solved using isolated images, reconstruction of three-dimensional scenes usually requires the use of multiple images simultaneously. Epipolar image rectification is a common technique for this purpose. It typically requires internal orientation parameters and, therefore, knowledge of camera calibration and relative orientation parameters between images. A reparameterization of the fundamental matrix through a completely geometric seven-parameter algorithm is presented, which enables epipolar rectification of a photogrammetric stereo pair without introducing any orientation parameters and without premarking ground control points. The algorithm enables the generation of different stereoscopic models with a single photogrammetric pair from unknown cameras, scanned from a book, or from frames of video sequences. Stereoscopic models with no parallaxes have been obtained with a standard deviation of <0.5 pixels. (C) 2013 SPIE and IS&T
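The epipolar geometry behind such rectification can be checked numerically: for a fundamental matrix F, corresponding homogeneous points x and x' satisfy x'ᵀ F x = 0, and for an ideally rectified pair (pure horizontal parallax) F takes the fixed form used below. The matrix and points are an illustrative toy case, not the paper's seven-parameter algorithm:

```python
import numpy as np

# Fundamental matrix of an ideally rectified stereo pair: epipolar lines
# coincide with horizontal scanlines, so matches differ only in x (disparity).
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

x = np.array([10.0, 5.0, 1.0])        # homogeneous point in the left image
x_prime = np.array([13.0, 5.0, 1.0])  # same scanline, shifted by disparity 3

residual = float(x_prime @ F @ x)     # epipolar constraint x'^T F x
# residual is 0.0 for a correct correspondence in a rectified pair;
# a point on a different scanline violates the constraint.
```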