
    Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing

    Free-viewpoint video conferencing allows a participant to observe the remote 3D scene from any freely chosen viewpoint. An intermediate virtual viewpoint image is commonly synthesized via depth-image-based rendering (DIBR), using two pairs of transmitted texture and depth maps from two neighboring captured viewpoints. To maintain high quality in the synthesized images, it is imperative to contain the adverse effects of network packet losses that may arise during texture and depth video transmission. Towards this end, we develop an integrated approach that exploits the representation redundancy inherent in the multiple streamed videos: a voxel in the 3D scene that is visible to two captured views is sampled and coded twice, once in each view. In particular, at the receiver we first develop an error concealment strategy that adaptively blends corresponding pixels in the two captured views during DIBR, so that pixels from the more reliably transmitted view are weighted more heavily. We then couple it with a sender-side optimization of reference picture selection (RPS) during real-time video coding, so that blocks containing samples of voxels visible in both views are more error-resiliently coded in one view only, given that adaptive blending will mask errors in the other view. Further, we analyze the sensitivity of synthesized view distortion to texture versus depth errors, so that the relative importance of texture and depth code blocks can be computed for system-wide RPS optimization. Experimental results show that the proposed scheme outperforms the use of a traditional feedback channel by up to 0.82 dB on average at an 8% packet loss rate, and by as much as 3 dB for particular frames.
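
    The receiver-side blending step lends itself to a compact illustration. Below is a minimal sketch, assuming both captured views have already been warped to the virtual viewpoint by DIBR and that the decoder tracks a per-pixel reliability map for each view; the function name and inputs are illustrative, not taken from the paper.

        import numpy as np

        def adaptive_blend(warped_left, warped_right, rel_left, rel_right, eps=1e-8):
            """Blend two DIBR-warped views, weighting the more reliable view higher.

            warped_left / warped_right: HxWx3 float arrays, the two captured views
                already warped to the virtual viewpoint.
            rel_left / rel_right: HxW reliability maps in [0, 1] (hypothetical:
                e.g. 1.0 for intact blocks, lower after packet loss/concealment).
            """
            wl = rel_left[..., None]              # broadcast weights over channels
            wr = rel_right[..., None]
            return (wl * warped_left + wr * warped_right) / (wl + wr + eps)

    With both reliability maps equal this reduces to plain 50/50 view blending; as one view's reliability drops toward zero, the synthesized pixel is drawn entirely from the other view, which is what lets the sender relax error protection on doubly-covered blocks in one of the two streams.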

    Deliverable D2.2 of the PERSEE project: Texture Analysis/Synthesis

    Deliverable D2.2 of the ANR PERSEE project. This report was produced as part of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D2.2 of the project. Its title: Texture Analysis/Synthesis.

    Image Restoration using Automatic Damaged Regions Detection and Machine Learning-Based Inpainting Technique

    In this dissertation we propose two novel image restoration schemes. The first pertains to the automatic detection of damaged regions in old photographs and digital images of cracked paintings. In cases where the inpainting mask cannot be generated fully automatically, our detection algorithm facilitates precise mask creation, which is particularly useful for images containing damage that is tedious to annotate or difficult to define geometrically. The main contribution of this dissertation is the development and application of a new inpainting technique, region hiding, which repairs a single image by training a convolutional neural network on various transformations of that image. Region hiding is also effective in object removal tasks. Lastly, we present a segmentation system for distinguishing glands, stroma, and cells in slide images, together with current results, as one component of an ongoing project to aid in colon cancer prognostication.
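
    The training loop behind region hiding can be sketched in a few lines. The toy below assumes a PyTorch environment; the tiny network, the fixed 32x32 hidden region, and the masked-reconstruction loss are placeholders for illustration, not the dissertation's actual architecture or training procedure.

        import torch
        import torch.nn as nn

        # Toy fully-convolutional restorer: input is the masked image plus the
        # mask itself as a fourth channel, output is a full RGB reconstruction.
        net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)

        image = torch.rand(1, 3, 128, 128)   # stand-in for the single training image

        for step in range(1000):
            mask = torch.ones(1, 1, 128, 128)
            y0, x0 = torch.randint(0, 97, (2,))          # hide a random 32x32 region
            mask[:, :, y0:y0 + 32, x0:x0 + 32] = 0
            inp = torch.cat([image * mask, mask], dim=1) # masked image + mask channel
            out = net(inp)
            loss = ((out - image) ** 2 * (1 - mask)).mean()  # score only hidden pixels
            opt.zero_grad(); loss.backward(); opt.step()

    The network never sees the pixels it must restore, so it is forced to learn the image's own statistics well enough to fill arbitrary holes, which is also what makes the same model usable for object removal.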

    Image Completion for View Synthesis Using Markov Random Fields and Efficient Belief Propagation

    View synthesis is a process for generating novel views of a scene that has been recorded with a 3-D camera setup. It has important applications in 3-D post-production and 2-D to 3-D conversion. However, a central problem in the generation of novel views lies in the handling of disocclusions: background content that was occluded in the original view may become unveiled in the synthesized view. This leads to missing information in the generated view, which has to be filled in a visually plausible manner. We present an inpainting algorithm for disocclusion filling in synthesized views based on Markov random fields and efficient belief propagation. We compare the result to two state-of-the-art algorithms and demonstrate a significant improvement in image quality. Comment: Published version: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=673843
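
    For readers unfamiliar with the inference machinery, the following is a generic min-sum (max-product in negative-log space) loopy belief propagation sketch on a grid MRF. The labels are abstract candidate indices (in disocclusion filling they would index candidate source patches), the unary and pairwise costs are placeholders, and boundaries wrap toroidally via np.roll purely to keep the sketch short; none of this reproduces the paper's actual energy terms or efficiency optimizations.

        import numpy as np

        H, W, L = 8, 8, 5                        # grid size and number of labels
        rng = np.random.default_rng(0)
        data = rng.random((H, W, L))             # unary cost D_p(l): placeholder
        labels = np.arange(L)
        pair = 0.5 * (labels[:, None] != labels[None, :])  # Potts pairwise cost

        # incoming[d, y, x, l]: message arriving at (y, x) from its neighbor in
        # direction d (0 = from above, 1 = from below, 2 = from left, 3 = from
        # right); d ^ 1 flips a direction to its opposite.
        off = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        incoming = np.zeros((4, H, W, L))

        for _ in range(30):                      # min-sum message-passing sweeps
            h = data + incoming.sum(axis=0)      # per-node beliefs
            new = np.empty_like(incoming)
            for d, (dy, dx) in enumerate(off):
                hp = h - incoming[d ^ 1]         # sender excludes what we sent it
                hp = np.roll(hp, (-dy, -dx), axis=(0, 1))   # align sender to (y, x)
                m = (hp[..., :, None] + pair).min(axis=-2)  # min over sender labels
                new[d] = m - m.min(axis=-1, keepdims=True)  # normalize for stability
            incoming = new

        result = (data + incoming.sum(axis=0)).argmin(axis=-1)  # approximate MAP

    In the image-completion setting, the unary cost would score how well a candidate patch agrees with the known pixels around the hole, and the pairwise cost would penalize visible seams between neighboring patches; belief propagation then trades these off globally rather than filling the hole greedily.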

    Fast intra prediction in the transform domain

    In this paper, we present a fast intra prediction method based on separating the transform coefficients. The prediction block is obtained from the transformed and quantized neighboring blocks, generating minimum distortion for the DC and each AC coefficient independently. Two prediction methods are proposed: full block search prediction (FBSP) and edge-based distance prediction (EBDP), which find the best-matched transform coefficients in additional neighboring blocks. Experimental results show that the use of transform coefficients greatly enhances the efficiency of intra prediction while keeping complexity low compared to H.264/AVC.
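
    To make the transform-domain idea concrete, here is a schematic per-coefficient predictor. It only loosely mirrors FBSP/EBDP: the candidate set, the oracle-style per-coefficient selection (a real codec must derive the choice from data available at the decoder), and the use of SciPy's DCT are all assumptions for illustration.

        import numpy as np
        from scipy.fft import dctn

        def predict_transform_block(current, neighbors):
            """Predict a block's DCT coefficients from candidate neighbor blocks.

            The DC and every AC coefficient are predicted independently, each by
            the candidate whose coefficient is closest; separating coefficients
            helps when no single neighbor matches the block everywhere.
            """
            cur = dctn(current, norm="ortho")
            cands = np.stack([dctn(n, norm="ortho") for n in neighbors])  # (K, N, N)
            pick = np.abs(cands - cur).argmin(axis=0)     # best candidate per coeff
            pred = np.take_along_axis(cands, pick[None], axis=0)[0]
            return pred, cur - pred                       # prediction and residual

        rng = np.random.default_rng(1)
        cur = rng.random((8, 8))
        neighbors = [rng.random((8, 8)) for _ in range(3)]  # e.g. left, top, top-left
        pred, resid = predict_transform_block(cur, neighbors)

    Because prediction and residual both live in the transform domain, no inverse transform is needed in the prediction loop, which is one plausible source of the low complexity the abstract reports.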

    Deliverable D4.2 of the PERSEE project: 3D Representation and Coding - Interim Report - Software Definitions and Architecture

    Deliverable D4.2 of the ANR PERSEE project. This report was produced as part of the ANR PERSEE project (no. ANR-09-BLAN-0170); specifically, it corresponds to deliverable D4.2 of the project. Its title: 3D Representation and Coding - Interim Report - Software Definitions and Architecture.