6 research outputs found

    Efficient depth image compression using accurate depth discontinuity detection and prediction

    This paper presents a novel depth image compression algorithm for both 3D Television (3DTV) and Free Viewpoint Television (FVTV) services. The proposed scheme adopts the K-means clustering algorithm to segment the depth image into K segments. The resulting segmented image is losslessly compressed and transmitted to the decoder. The depth image is then compressed using a bi-modal block encoder, where smooth blocks are predicted using direct spatial prediction, while blocks containing depth discontinuities are approximated using a novel depth discontinuity predictor. The residual information is then compressed using a lossy compression strategy and transmitted to the receiver. Simulation results indicate that the proposed scheme outperforms state-of-the-art spatial coding systems such as JPEG and H.264/AVC Intra. Moreover, the proposed scheme outperforms specialized depth image compression algorithms such as the one proposed by Zanuttigh and Cortelazzo.
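
    The abstract does not give implementation details, but the segmentation step can be illustrated with a minimal sketch: a one-dimensional K-means over depth values that produces the label map which would then be losslessly coded. The function name and parameters (kmeans_segment_depth, k, iters) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a K-means segmentation of a depth map (illustrative only,
# not the authors' implementation): depth values are clustered into K levels
# and the resulting label map is what would be losslessly transmitted.
import numpy as np

def kmeans_segment_depth(depth, k=4, iters=20, seed=0):
    """Cluster a depth map into k segments by depth value (1-D K-means)."""
    rng = np.random.default_rng(seed)
    values = depth.astype(np.float64).ravel()
    # Initialise centroids from randomly sampled depth values.
    centroids = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        # Assign each pixel to the nearest centroid (1-D distance on depth).
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        # Recompute centroids; keep the old one if a cluster becomes empty.
        for j in range(k):
            members = values[labels == j]
            if members.size:
                centroids[j] = members.mean()
    return labels.reshape(depth.shape), centroids

# Example: segment a synthetic 64x64 depth map into K = 4 segments.
depth = np.linspace(0, 255, 64 * 64).reshape(64, 64).astype(np.uint8)
segments, levels = kmeans_segment_depth(depth, k=4)
```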

    Exploiting color-depth image correlation to improve depth map compression

    The multimedia signal processing community has recently identified the need to design depth map compression algorithms which preserve depth discontinuities in order to improve the rendering quality of virtual views for Free Viewpoint Video (FVV) services. This paper adopts contour detection with surround suppression on the color video to approximate the foreground edges present in the depth image. Displacement estimation and compensation are then used to improve this prediction and reduce the amount of side information required by the decoder. Simulation results indicate that the proposed method accurately predicts around 64% of the blocks. Moreover, the proposed scheme achieves a Peak Signal-to-Noise Ratio (PSNR) gain of around 4.9-6.6 dB relative to the JPEG standard and outperforms other state-of-the-art depth map compression algorithms found in the literature.
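
    As a rough illustration of the displacement estimation/compensation step, the sketch below runs a small-window block search that aligns an edge map derived from the color view with the co-located depth edges. The function name, block size, and search range are assumptions for illustration, not the paper's values.

```python
# Hedged sketch of block-wise displacement estimation between a color-derived
# edge map and the depth edge map: search a small window for the shift that
# minimises the sum of absolute differences (SAD).
import numpy as np

def estimate_block_displacement(depth_edges, color_edges, y, x, block=16, search=4):
    """Return the (dy, dx) shift of the color-edge block that best matches
    the co-located depth-edge block, together with its SAD cost."""
    target = depth_edges[y:y + block, x:x + block].astype(np.int32)
    h, w = color_edges.shape
    best_cost, best_disp = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                continue  # candidate block falls outside the image
            cand = color_edges[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if best_cost is None or sad < best_cost:
                best_cost, best_disp = sad, (dy, dx)
    return best_disp, best_cost
```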

    Exploiting color-depth image correlation to improve depth map compression


    Improved depth coding for HEVC focusing on depth edge approximation

    The latest High Efficiency Video Coding (HEVC) standard has greatly improved coding efficiency compared to its predecessor, H.264. An important share of this improvement comes from the adoption of hierarchical block partitioning structures and an extended set of modes. The structure of the existing inter-modes is appropriate mainly for rectangular and square-aligned motion patterns. However, they are not well suited to partitioning depth objects that exhibit partial foreground motion with irregular edges against the background. In such cases, the HEVC reference test model (HM) normally explores finer-level block partitioning, which requires more bits and encoding time to compensate for large residuals. Since motion detection is the underlying criterion for mode selection, in this work we use the energy concentration ratio feature of phase correlation to capture different types of motion in depth objects. For better motion modeling at depth edges, the proposed technique also uses an extra pattern mode comprising a group of templates with various rectangular and non-rectangular object shapes and edges. Since the pattern mode can save bits by encoding only the foreground areas and, once selected, supersedes all other inter-modes in a block, the proposed technique improves rate-distortion performance. It also reduces encoding time by skipping further branching once the pattern mode is chosen and by selecting a subset of modes using novel pre-processing criteria. Experimentally, it saves 29% of the average encoding time and improves the Bjontegaard Delta peak signal-to-noise ratio by 0.10 dB compared to the HM.
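
    The paper's exact definition of the energy concentration ratio is not reproduced here; the sketch below shows one common formulation of phase correlation between two co-located blocks, with the ratio taken as the share of correlation energy in the dominant peak. A single sharp peak suggests simple translational motion, while spread-out energy suggests irregular or partial motion of the kind found at depth edges.

```python
# Illustrative sketch (assumed formulation): phase correlation between the
# current and reference depth blocks, with the energy concentration ratio
# computed as peak energy over total energy of the correlation surface.
import numpy as np

def phase_correlation_ecr(cur_block, ref_block, eps=1e-8):
    F1 = np.fft.fft2(cur_block.astype(np.float64))
    F2 = np.fft.fft2(ref_block.astype(np.float64))
    cross = F1 * np.conj(F2)
    cross /= (np.abs(cross) + eps)              # normalised cross-power spectrum
    surface = np.abs(np.fft.ifft2(cross)) ** 2  # phase-correlation energy surface
    return surface.max() / (surface.sum() + eps)

# A block with pure translation yields a ratio close to 1; noisy or irregular
# motion spreads the energy and drives the ratio towards 0.
```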

    Depth Video Coding Using Adaptive Geometry Based Intra Prediction for 3-D Video Systems


    Depth-Map Image Compression Based on Region and Contour Modeling

    In this thesis, the problem of depth-map image compression is treated. The compilation of articles included in the thesis provides methodological contributions in the fields of lossless and lossy compression of depth-map images.

    The first group of methods addresses the lossless compression problem. The introduced methods use the approach of representing the depth-map image in terms of regions and contours. In the depth-map image, a segmentation defines the regions by grouping pixels having similar properties and separates them using (region) contours. The depth-map image is encoded by the contours and the auxiliary information needed to reconstruct the depth values in each region.

    One way of encoding the contours is to describe them using two matrices of horizontal and vertical contour edges. The matrices are encoded using template context coding, where each context tree is optimally pruned. In certain contexts, the contour edges are found deterministically using only the currently available information. Another way of encoding the contours is to describe them as a sequence of contour segments. Each such segment is defined by an anchor (starting) point and a string of contour edges, equivalent to a string of chain-code symbols. Here we propose efficient ways to select and encode the anchor points and to generate contour segments by using a contour crossing-point analysis and by imposing rules that help minimize the number of anchor points.

    The regions are reconstructed at the decoder using predictive coding or the piecewise constant model representation. In the first approach, the large constant regions are found and one depth value is encoded for each such region. For the rest of the image, suitable regions are generated by constraining the local variation of the depth level from one pixel to another. The nonlinear predictors selected specifically for each region combine the results of several linear predictors, each fitting optimally a subset of pixels belonging to the local neighborhood. In the second approach, the depth value of a given region is encoded using the depth values of the neighboring regions already encoded. The natural smoothness of the depth variation and the mutual exclusiveness of the values in neighboring regions are exploited to efficiently predict and encode the current region's depth value.

    The second group of methods studies the lossy compression problem. In a first contribution, different segmentations are generated by varying the threshold for the depth local variability. A lossy depth-map image is obtained for each segmentation and is encoded based on predictive coding, quantization and context tree coding. In another contribution, the lossy versions of one image are created either by successively merging the constant regions of the original image, or by iteratively splitting the regions of a template image using horizontal or vertical line segments. Merging and splitting decisions are taken greedily, according to the best slope towards the next point on the rate-distortion curve. An entropy coding algorithm is used to encode each image.

    We also propose a progressive coding method for coding the sequence of lossy versions of a depth-map image. The bitstream is encoded so that any lossy version of the original image can be generated, starting from a very low resolution up to lossless reconstruction. The partitions of the lossy versions into regions are assumed to be nested, so that a higher-resolution image is obtained by splitting some regions of a lower-resolution image. A current image in the sequence is encoded using the a priori information from a previously encoded image: the anchor points are encoded relative to the already encoded contour points, and the depth information of the newly resulting regions is recovered using the depth value of the parent region.

    As a final contribution, the dissertation includes a study of the parameterization of planar models. The quantized heights at three pixel locations are used to compute the optimal plane for each region. The three pixel locations are selected so that the distortion due to the approximation of the plane over the region is minimized. The planar model and the piecewise constant model compete in the merging process, where the two regions to be merged are those ensuring the optimal slope on the rate-distortion curve.
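
    As an illustration of the planar-model parameterization mentioned in the final contribution, the sketch below recovers a plane z = a*x + b*y + c from depth values at three pixel locations and evaluates it over a region. The selection of the three locations to minimize distortion, and the rate-distortion-driven competition with the piecewise constant model, are not shown; the function names are illustrative.

```python
# Hedged sketch: fit a plane through three sampled (x, y, depth) points and
# evaluate it over a region, as in the planar-model parameterization above.
import numpy as np

def plane_from_three_points(pts):
    """pts: three (x, y, z) tuples; returns plane coefficients (a, b, c)."""
    A = np.array([[x, y, 1.0] for x, y, _ in pts])
    z = np.array([p[2] for p in pts], dtype=np.float64)
    return np.linalg.solve(A, z)  # exact plane through the three points

def evaluate_plane(coeffs, xs, ys):
    a, b, c = coeffs
    return a * xs + b * ys + c

# Example: reconstruct a tilted 16x16 region from three sampled depth values.
coeffs = plane_from_three_points([(0, 0, 10.0), (15, 0, 25.0), (0, 15, 40.0)])
xs, ys = np.meshgrid(np.arange(16), np.arange(16))
region_depth = evaluate_plane(coeffs, xs, ys)
```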