3 research outputs found

    Motion parallax for 360° RGBD video

    We present a method for adding parallax and real-time playback of 360° videos in Virtual Reality headsets. In current video players, the playback does not respond to translational head movement, which reduces the feeling of immersion and causes motion sickness for some viewers. Given a 360° video and its corresponding depth (provided by current stereo 360° stitching algorithms), a naive image-based rendering approach would use the depth to generate a 3D mesh around the viewer, then translate it appropriately as the viewer moves their head. However, this approach breaks at depth discontinuities, showing visible distortions, whereas cutting the mesh at such discontinuities leads to ragged silhouettes and holes at disocclusions. We address these issues by improving the given initial depth map to yield cleaner, more natural silhouettes. We rely on a three-layer scene representation, made up of a foreground layer and two static background layers, to handle disocclusions by propagating information from multiple frames for the first background layer, and then inpainting for the second one. Our system works with input from many of today's most popular 360° stereo capture devices (e.g., Yi Halo or GoPro Odyssey), and works well even if the original video does not provide depth information. Our user studies confirm that our method provides a more compelling viewing experience than without parallax, increasing immersion while reducing discomfort and nausea.
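The "generate a 3D mesh around the viewer" step above boils down to back-projecting each equirectangular pixel along its viewing ray by its depth, then letting the viewer translate while the points stay fixed. The following is a minimal sketch of that back-projection; the function name, latitude/longitude convention, and `head_offset` parameter are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def equirect_depth_to_points(depth, head_offset=np.zeros(3)):
    """Back-project an (H, W) equirectangular depth map (metric
    distances) to 3D points around the viewer, then shift by a
    translational head offset to simulate parallax."""
    h, w = depth.shape
    # Pixel-center longitude in [-pi, pi), latitude in (pi/2, -pi/2).
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Spherical -> Cartesian: one unit ray direction per pixel.
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    # Scale rays by depth to get world points; the scene stays fixed
    # while the viewer translates, which is what produces parallax.
    points = dirs * depth[..., None]
    return points - head_offset

depth = np.full((4, 8), 2.0)  # toy scene: constant 2 m sphere
pts = equirect_depth_to_points(depth, head_offset=np.array([0.1, 0.0, 0.0]))
```

Triangulating these points into a mesh and warping it per frame is exactly where the depth-discontinuity problems described in the abstract appear.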

    Depth-Aware Patch-based Image Disocclusion for Virtual View Synthesis

    In this paper we propose a depth-aided patch-based inpainting method to perform the disocclusion of holes that appear when synthesizing virtual views from RGB-D scenes. Depth information is added to each key step of the classical patch-based algorithm from [Criminisi et al. 2004] to guide the synthesis of missing structures and textures. These contributions result in a new inpainting method which is efficient compared to state-of-the-art approaches (both in visual quality and computational burden), while requiring only a single easy-to-adjust additional parameter.
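In the Criminisi et al. 2004 scheme referenced above, the pixel filled next is chosen by a priority combining a confidence term and a data (isophote) term; a depth-aided variant can bias this priority toward background patches, since disoccluded holes should be filled with background content. The sketch below illustrates that idea only; the exact weighting, names, and the single extra parameter `lam` are assumptions, not the paper's formulation.

```python
import numpy as np

def fill_priority(confidence, data_term, depth, p, patch=4, lam=1.0):
    """Toy fill priority at front pixel p = (row, col), in the spirit
    of Criminisi et al. 2004 extended with a depth factor: deeper
    (background) patches get a higher priority."""
    y, x = p
    ys = slice(max(y - patch, 0), y + patch + 1)
    xs = slice(max(x - patch, 0), x + patch + 1)
    C = confidence[ys, xs].mean()   # classical confidence term
    D = data_term[y, x]             # classical data (isophote) term
    # Depth term: mean patch depth, normalized so deeper -> closer to 1.
    Z = depth[ys, xs].mean() / (depth.max() + 1e-8)
    return C * D * (1.0 + lam * Z)
```

With uniform confidence and data terms, a pixel sitting over a distant region ends up with a strictly higher priority than one over a near region, so the fill front advances from the background side first.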

    Inpainting basé motif d'images et de vidéos appliqué aux données stéréoscopiques avec carte de profondeur

    We focus on the study and the enhancement of greedy pattern-based image processing algorithms for the specific purpose of inpainting, i.e., the automatic completion of missing data in digital images and videos. We first review the state-of-the-art methods in this field and analyze the important steps of prominent greedy algorithms in the literature. Then, we propose a set of changes that significantly enhance the global geometric coherence of images reconstructed with this kind of algorithm. We also focus on the reduction of the visual block artifacts classically appearing in the reconstruction results. For this purpose, we define a tensor-inspired formalism for fast anisotropic patch blending, guided by the geometry of the local image structures and by the automatic detection of the artifact locations. We illustrate the improvement of the visual quality brought by our contributions with many examples, and show that our adaptations are generic enough to carry over to other existing pattern-based inpainting algorithms. Finally, we extend and apply our reconstruction algorithms to stereoscopic image and video data, synthesized with respect to new virtual camera viewpoints. We incorporate the estimated depth information (available from the original stereo pairs) in our inpainting and patch blending formalisms to propose a visually satisfactory solution to the non-trivial problem of automatic disocclusion of real resynthesized stereoscopic scenes.
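The "geometry of the local image structures" that guides the anisotropic patch blending above is conventionally captured by the 2x2 structure tensor, whose eigen-decomposition yields a local orientation and an anisotropy measure; a blending kernel can then be stretched along edges instead of across them. The following is a minimal, unsmoothed sketch of that measurement, not the thesis's actual tensor formalism.

```python
import numpy as np

def local_orientation(img):
    """Per-pixel structure tensor of a grayscale image, reduced to an
    anisotropy measure in [0, 1] (0 = flat area, 1 = pure edge) and an
    orientation angle of the dominant gradient direction (radians)."""
    gy, gx = np.gradient(img.astype(float))
    jxx, jxy, jyy = gx * gx, gx * gy, gy * gy
    # Eigenvalues of [[jxx, jxy], [jxy, jyy]] in closed form.
    tr, det = jxx + jyy, jxx * jyy - jxy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    aniso = (l1 - l2) / (l1 + l2 + 1e-8)
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
    return aniso, theta

img = np.zeros((8, 8))
img[:, 4:] = 1.0                 # toy vertical step edge
aniso, theta = local_orientation(img)
```

In practice the tensor components are smoothed over a neighborhood before the eigen-decomposition, which stabilizes the orientation estimate near the block-artifact locations the thesis targets.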