
    Real-time complexity constrained encoding

    Complex software appliances are often deployed on hardware with limited computational resources, which places an additional constraint on the software. This is an issue for real-time applications with a fixed time budget, such as low-delay video encoding. In the context of High Efficiency Video Coding (HEVC), only a limited number of publications have focused on controlling the complexity of an HEVC video encoder. In this paper, a technique is proposed to control complexity by deciding between 2Nx2N merge mode and full encoding at different Coding Unit (CU) depths. The technique is demonstrated in two encoders. The results show fast convergence to a given complexity threshold and a limited loss in rate-distortion performance (on average 2.84% Bjontegaard delta rate for a 40% complexity reduction).
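    The abstract describes a per-CU choice between a cheap 2Nx2N merge evaluation and a full mode search, steered toward a complexity target. A minimal sketch of that kind of budget-tracking controller follows; the class interface, cost estimates, and decision rule are hypothetical illustrations, not the paper's actual algorithm.

```python
# Hypothetical sketch of a complexity controller that chooses between cheap
# 2Nx2N merge evaluation and full RDO per CU. Names and the accounting
# scheme are illustrative only, not the paper's implementation.

class ComplexityController:
    def __init__(self, target_fraction):
        # target_fraction: desired complexity relative to full RDO,
        # e.g. 0.6 for a 40% complexity reduction.
        self.target = target_fraction
        self.spent = 0.0   # accumulated (estimated) complexity
        self.budget = 0.0  # accumulated complexity budget

    def decide(self, full_cost_estimate, merge_cost_estimate):
        """Return 'full' or 'merge' for the current CU."""
        self.budget += full_cost_estimate * self.target
        # Afford the full mode search only while under budget; otherwise
        # fall back to the cheap 2Nx2N merge mode. Spending converges to
        # target_fraction of the full-RDO cost over many CU decisions.
        if self.spent + full_cost_estimate <= self.budget:
            self.spent += full_cost_estimate
            return 'full'
        self.spent += merge_cost_estimate
        return 'merge'
```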

    Depth map compression via 3D region-based representation

    In 3D video, view synthesis is used to create new virtual views between encoded camera views. Errors in the coding of the depth maps introduce geometry inconsistencies in the synthesized views. In this paper, a new 3D plane representation of the scene is presented which improves the performance of current standard video codecs in the view-synthesis domain. Two image segmentation algorithms are proposed for generating a color and a depth segmentation. Using both partitions, depth maps are segmented into regions free of sharp discontinuities, without having to explicitly signal all depth edges. The resulting regions are represented using a planar model in the 3D world scene. This 3D representation allows efficient encoding while preserving the 3D characteristics of the scene. The 3D planes also open up the possibility of coding multiview images with a single unified representation.
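    To make the planar-region idea concrete, here is a simplified sketch of fitting and reconstructing a per-region plane by least squares. Note the assumption: this fits z = ax + by + c in image coordinates, whereas the paper models planes in the 3D world scene; function names are hypothetical.

```python
import numpy as np

def fit_plane(xs, ys, zs):
    """Least-squares fit of z = a*x + b*y + c over one segmented region.
    Simplified image-space variant of the paper's 3D world-scene model."""
    A = np.column_stack([xs, ys, np.ones_like(xs, dtype=float)])
    coeffs, *_ = np.linalg.lstsq(A, zs, rcond=None)
    return coeffs  # (a, b, c): three parameters per region

def reconstruct(region_mask, coeffs):
    """Rebuild a region's depth from its three plane parameters."""
    ys, xs = np.nonzero(region_mask)
    a, b, c = coeffs
    depth = np.zeros(region_mask.shape, dtype=np.float32)
    depth[ys, xs] = a * xs + b * ys + c
    return depth
```

    The payoff is in the second function: instead of coding every depth sample (and every depth edge), each region is transmitted as three plane coefficients plus the shared segmentation.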

    Convex Optimization Based Bit Allocation for Light Field Compression under Weighting and Consistency Constraints

    Compared with conventional image and video, light field images introduce a weight channel, as well as the visual consistency of rendered views, information that has to be taken into account when compressing the pseudo-temporal sequence (PTS) created from light field images. In this paper, we propose a novel frame-level bit allocation framework for PTS coding. A joint model that measures weighted distortion and visual consistency, combined with an iterative encoding system, yields the optimal bit allocation for each frame by solving a convex optimization problem. Experimental results show that the proposed framework is effective in producing the desired distortion distribution based on weights, and achieves up to 24.7% BD-rate reduction compared to the default rate control algorithm.
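    For intuition about the convex-optimization step, here is a sketch of the classical weighted bit allocation that such frameworks build on, assuming an exponential distortion model D_i(r_i) = w_i v_i 2^(-2 r_i) and ignoring the paper's consistency constraint and nonnegativity of rates; the model and function are stand-ins, not the paper's joint formulation.

```python
import numpy as np

def allocate_bits(weights, variances, total_bits):
    """Closed-form minimizer of sum_i w_i * v_i * 2**(-2 * r_i)
    subject to sum_i r_i = total_bits (Lagrangian solution;
    nonnegativity of r_i is ignored for brevity)."""
    wv = np.asarray(weights, dtype=float) * np.asarray(variances, dtype=float)
    log_gm = np.mean(np.log2(wv))  # log2 of the geometric mean of w_i * v_i
    # Frames with higher weighted activity get bits above the average share.
    return total_bits / len(wv) + 0.5 * (np.log2(wv) - log_gm)
```

    Equalizing the weighted distortion across frames is exactly what drives the "desired distortion distribution based on weights" reported in the results.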

    Rate-Accuracy Trade-Off In Video Classification With Deep Convolutional Neural Networks

    Advanced video classification systems decode video frames to derive the necessary texture and motion representations for ingestion and analysis by spatio-temporal deep convolutional neural networks (CNNs). However, when considering visual Internet-of-Things applications, surveillance systems and semantic crawlers of large video repositories, the video capture and the CNN-based semantic analysis parts do not tend to be co-located. This necessitates the transport of compressed video over networks and incurs significant overhead in bandwidth and energy consumption, thereby significantly undermining the deployment potential of such systems. In this paper, we investigate the trade-off between the encoding bitrate and the achievable accuracy of CNN-based video classification models that directly ingest AVC/H.264 and HEVC encoded videos. Instead of retaining entire compressed video bitstreams and applying complex optical flow calculations prior to CNN processing, we retain only the motion vectors and selected texture information at significantly reduced bitrates, and apply no additional processing prior to CNN ingestion. Based on three CNN architectures and two action recognition datasets, we achieve 11%-94% savings in bitrate with marginal effect on classification accuracy. A model-based selection between multiple CNNs increases these savings further, to the point where, if up to 7% loss of accuracy can be tolerated, video classification can take place with as little as 3 kbps for the transport of the required compressed video information to the system implementing the CNN models.
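    A toy PyTorch sketch of the ingestion idea follows: decoded motion-vector fields are stacked channel-wise and fed to a small CNN with no optical-flow computation. The architecture, shapes, and class count are illustrative assumptions, not one of the paper's three evaluated networks.

```python
import torch
import torch.nn as nn

class MVClassifier(nn.Module):
    """Toy CNN over stacked motion-vector fields.
    Input: (batch, 2*T, H, W) -- x/y MV components for T frames.
    Illustrative only; not one of the paper's architectures."""
    def __init__(self, num_frames=10, num_classes=51):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2 * num_frames, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling over space
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, mv_stack):
        return self.classifier(self.features(mv_stack).flatten(1))

# Motion vectors come almost for free from the decoder, e.g. a batch of
# 8 clips of 10 frames at 56x56: torch.randn(8, 20, 56, 56).
```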

    Comparing temporal behavior of fast objective video quality measures on a large-scale database

    In many application scenarios, video quality assessment is required to be fast and reasonably accurate. The characterisation of objective algorithms by subjective assessment is well established but limited due to the small number of test samples. Verification using large-scale objectively annotated databases provides a complementary solution. In this contribution, three simple but fast measures are compared regarding their agreement on a large-scale database. In contrast to subjective experiments, not only sequence-wise but also framewise agreement can be analyzed. Insight is gained into the behavior of the measures with respect to 5952 different coding configurations of High Efficiency Video Coding (HEVC). Consistency within a video sequence is analyzed as well as across video sequences. The results show that the occurrence of discrepancies depends mostly on the configured coding structure and the source content. The detailed observations stimulate questions on the combined usage of several video quality measures for encoder optimization.
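    As a sketch of what framewise agreement analysis can look like, the snippet below computes per-frame PSNR (one of the simple, fast measures) and a pairwise-rank agreement between two measures' per-frame scores. The agreement statistic is an illustrative assumption, not necessarily the criterion used in the paper.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Per-frame PSNR in dB between two 8-bit frames."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float('inf')

def framewise_agreement(scores_a, scores_b):
    """Fraction of frame pairs (i, j) on which two quality measures agree
    about which frame is better -- a simple pairwise-rank agreement."""
    a, b = np.asarray(scores_a), np.asarray(scores_b)
    i, j = np.triu_indices(len(a), k=1)
    return np.mean(np.sign(a[i] - a[j]) == np.sign(b[i] - b[j]))
```

    Run per sequence, such a statistic separates within-sequence consistency from across-sequence consistency, which is the distinction the study draws.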

    3D video coding and transmission

    The capture, transmission, and display of 3D content has gained a lot of attention in the last few years. 3D multimedia content is no longer confined to cinema theatres but is being transmitted using stereoscopic video over satellite, shared on Blu-ray™ disks, or sent over Internet technologies. Stereoscopic displays are needed at the receiving end, and the viewer needs to wear special glasses to present the two versions of the video to the human visual system, which then generates the 3D illusion. To be more effective and improve the immersive experience, more views are acquired from a larger number of cameras and presented on different displays, such as autostereoscopic and light field displays. These multiple views, combined with depth data, also allow enhanced user experiences and new forms of interaction with the 3D content from virtual viewpoints. This type of audiovisual information is represented by a huge amount of data that needs to be compressed and transmitted over bandwidth-limited channels. Part of the COST Action IC1105 "3D Content Creation, Coding and Transmission over Future Media Networks" (3DConTourNet) focuses on this research challenge.