8 research outputs found

    Motion hints based video coding

    Full text link
    The persistent growth of video-based applications is heavily dependent on advances in video coding systems. Modern video codecs use the motion model itself to describe the geometric boundaries of moving objects in video sequences and thereby spend a significant portion of their bit rate refining the motion description in regions where motion discontinuities exist. This explicit communication of motion introduces redundancy, since some aspects of the motion can at least partially be inferred from the reference frames. In this thesis work, a novel bi-directional motion hints based prediction paradigm is proposed that moves away from the traditional redundant approach of careful partitioning around object boundaries by exploiting the spatial structure of the reference frames to infer appropriate boundaries for the intermediate ones. A motion hint provides a global description of motion over a specific domain. Fundamentally, this is related to the segmentation of foreground from background regions, where the foreground and background motions are the motion hints. The appealing property of motion hints is that they are continuous and invertible, even though the observed motion field for a frame is discontinuous and non-invertible. Experimental results show that in low bit rate applications, the motion hints based coder achieved a rate-distortion (RD) gain of 0.81 dB, or equivalently a 13.38% saving in bit rate, over the H.264/AVC reference. In a hybrid setting, this gain increased to 0.94 dB, or equivalently a 20.41% bit rate saving. If both low and high bit rate scenarios are considered, the hybrid coder showed an RD gain of 0.80 dB, or equivalently a 16.57% saving in bit rate. The use of higher fractional-pixel accuracy for the motion hints, predictive coding of the motion hints, and a memory-based initialization for motion hint estimation improved the RD gain to 0.85 dB, or equivalently a 17.55% bit rate saving. The prediction framework is highly flexible in the sense that the motion model order for the hints can be content adaptive, i.e., it can accommodate different motion models such as affine, elastic, etc. Detecting motion-discontinuity macroblocks (MBs) is a challenging task, and the prediction paradigm managed to detect a significant number of such MBs. When motion hints based prediction is used as a prediction mode for MBs, at low bit rates almost 50% of the motion-discontinuity MBs chose the affine hint mode, and this number increased to 60% if the elastic hint was used.
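
    The following is a minimal illustrative sketch, not the thesis implementation: it assumes a single global affine motion hint per prediction direction (a 2x3 parameter matrix, a hypothetical choice) and shows how an intermediate frame could be predicted bi-directionally by warping the past and future reference frames with the (continuous, invertible) hints and blending the results.

        import numpy as np

        def warp_with_hint(ref, hint):
            """Warp a reference frame with a global affine motion hint.

            ref  : 2-D numpy array (grayscale reference frame)
            hint : 2x3 affine parameter matrix mapping target -> reference
                   coordinates; because the hint is continuous and invertible,
                   every target pixel maps to a unique reference location.
            """
            h, w = ref.shape
            ys, xs = np.mgrid[0:h, 0:w]
            src_x = hint[0, 0] * xs + hint[0, 1] * ys + hint[0, 2]
            src_y = hint[1, 0] * xs + hint[1, 1] * ys + hint[1, 2]
            # Nearest-neighbour sampling keeps the sketch short; a real coder
            # would use sub-pel interpolation.
            src_x = np.clip(np.rint(src_x).astype(int), 0, w - 1)
            src_y = np.clip(np.rint(src_y).astype(int), 0, h - 1)
            return ref[src_y, src_x]

        def bidirectional_hint_prediction(past_ref, future_ref, hint_fwd, hint_bwd):
            """Average the two hint-warped references to predict the intermediate
            frame (a plain average; the thesis work infers object boundaries from
            the reference frames before blending)."""
            pred_fwd = warp_with_hint(past_ref, hint_fwd)
            pred_bwd = warp_with_hint(future_ref, hint_bwd)
            return (pred_fwd.astype(np.float64) + pred_bwd) / 2.0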

    Human-Machine Collaborative Video Coding Through Cuboidal Partitioning

    Full text link
    Video coding algorithms encode and decode an entire video frame, while feature coding techniques only preserve and communicate the most critical information needed for a given application. This is because video coding targets human perception, while feature coding aims at machine vision tasks. Recently, attempts are being made to bridge the gap between these two domains. In this work, we propose a video coding framework that leverages the commonality that exists between human vision and machine vision applications using cuboids. This is because cuboids, estimated rectangular regions over a video frame, are computationally efficient, have a compact representation, and are object centric. Such properties have already been shown to add value to traditional video coding systems. Herein, cuboidal feature descriptors are extracted from the current frame and then employed for accomplishing a machine vision task in the form of object detection. Experimental results show that a trained classifier yields superior average precision when equipped with the cuboidal feature oriented representation of the current test frame. Additionally, this representation costs 7% less in bit rate if the captured frames need to be communicated to a receiver.
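
    As a rough illustration of the idea (not the paper's exact pipeline), the sketch below assumes a cuboid is simply an axis-aligned rectangle described by (top, left, height, width) and uses the mean intensity of each cuboid as its feature descriptor; the resulting fixed-length vector could then be fed to any off-the-shelf classifier or detector.

        import numpy as np

        def cuboid_descriptors(frame, cuboids):
            """Compute one scalar descriptor (mean intensity) per cuboid.

            frame   : 2-D numpy array (grayscale video frame)
            cuboids : list of (top, left, height, width) rectangles that
                      together partition the frame
            returns : 1-D feature vector, one entry per cuboid
            """
            feats = []
            for top, left, h, w in cuboids:
                region = frame[top:top + h, left:left + w]
                feats.append(region.mean())
            return np.asarray(feats)

        # Example: a 2x2 cuboid partition of a 64x64 frame.
        frame = np.random.randint(0, 256, (64, 64)).astype(np.float64)
        cuboids = [(0, 0, 32, 32), (0, 32, 32, 32),
                   (32, 0, 32, 32), (32, 32, 32, 32)]
        features = cuboid_descriptors(frame, cuboids)   # shape (4,)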

    Human-machine collaborative video coding through cuboidal partitioning

    No full text
    Video coding algorithms encode and decode an entire video frame, while feature coding techniques only preserve and communicate the most critical information needed for a given application. This is because video coding targets human perception, while feature coding aims at machine vision tasks. Recently, attempts are being made to bridge the gap between these two domains. In this work, we propose a video coding framework that leverages the commonality that exists between human vision and machine vision applications using cuboids. This is because cuboids, estimated rectangular regions over a video frame, are computationally efficient, have a compact representation, and are object centric. Such properties have already been shown to add value to traditional video coding systems. Herein, cuboidal feature descriptors are extracted from the current frame and then employed for accomplishing a machine vision task in the form of object detection. Experimental results show that a trained classifier yields superior average precision when equipped with the cuboidal feature oriented representation of the current test frame. Additionally, this representation costs 7% less in bit rate if the captured frames need to be communicated to a receiver. © 2021 IEEE

    Dynamic point cloud geometry compression using cuboid based commonality modelling framework

    No full text
    A point cloud in its uncompressed format requires a very high data rate for storage and transmission. The video based point cloud compression (V-PCC) technique projects a dynamic point cloud into geometry and texture video sequences. The projected geometry and texture video frames are then encoded using a modern video coding standard like HEVC. However, the HEVC encoder is unable to fully exploit the global commonality that exists within a geometry frame and between successive geometry frames. This is because, in HEVC, the partitioning of the current frame starts from a rigid 64 × 64 pixel level without considering the structure of the scene to be coded. In this paper, an improved commonality modeling framework is proposed, by leveraging cuboid-based frame partitioning, to encode point cloud geometry frames. The associated frame partitioning scheme is based on statistical properties of the current geometry frame and therefore yields a flexible block partitioning structure composed of cuboids. Additionally, the proposed commonality modeling approach is computationally efficient and has a compact representation. Experimental results show that if the V-PCC reference encoder is augmented by the proposed commonality modeling technique, bit rate savings of 2.71% and 4.25% are achieved for the geometry sequences of full body and upper body human point clouds, respectively. © 2021 IEEE

    Dynamic mesh commonality modeling using the cuboidal partitioning

    No full text
    For 3D object representation, volumetric content such as meshes and point clouds provides suitable formats. However, a dynamic mesh sequence may require a significantly large amount of data because it consists of information that varies with time. Hence, efficient compression technologies are required to facilitate the storage and transmission of such content. MPEG has started standardization activities aiming to develop a mesh compression standard able to handle dynamic meshes with time-varying connectivity information and time-varying attribute maps. The attribute maps are features associated with the mesh surface and are stored as 2D images/videos. In this paper, we propose to capture the commonality information in the dynamic mesh attribute maps using the cuboidal partitioning algorithm. This algorithm is capable of modeling both the global and local commonality within an image in a compact and computationally efficient way. Experimental results show that the proposed approach can outperform the anchor HEVC codec, suggested by MPEG for encoding such sequences, with bit rate savings of up to 3.66%. © 2022 IEEE

    A commonality modeling framework for enhanced video coding leveraging on the cuboidal partitioning based representation of frames

    No full text
    Video coding algorithms attempt to minimize the significant commonality that exists within a video sequence. Each new video coding standard contains tools that can perform this task more efficiently than its predecessors. Modern video coding systems are block-based, wherein commonality modeling is carried out only from the perspective of the block that needs to be coded next. In this work, we argue for a commonality modeling approach that can provide a seamless blending of global and local homogeneity information. For this purpose, the frame to be coded is first recursively partitioned into rectangular regions based on the homogeneity information of the entire frame. After that, the feature descriptor of each obtained rectangular region is taken to be the average intensity of all the pixels within the region. In this way, the proposed approach generates a coarse representation of the current frame by minimizing both global and local commonality. This coarse frame is computationally simple and has a compact representation. It attempts to preserve important structural properties of the current frame, which can be observed subjectively as well as through the improved rate-distortion performance of a reference scalable HEVC coder that employs the coarse frame as a reference frame for encoding the current frame. © 1999-2012 IEEE
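
    A minimal sketch of the kind of recursion described above, under assumed details (splitting stops when a region's intensity variance falls below a threshold or a depth limit is reached, and the split position is chosen naively at the midpoint of the longer side; the actual scheme selects splits from the homogeneity statistics of the whole frame):

        import numpy as np

        def coarse_frame(frame, var_thresh=100.0, max_depth=6):
            """Recursively partition the frame into rectangles and replace each
            rectangle by its mean intensity, yielding a coarse representation."""
            out = np.empty_like(frame, dtype=np.float64)

            def split(top, left, h, w, depth):
                region = frame[top:top + h, left:left + w]
                if depth >= max_depth or min(h, w) < 2 or region.var() < var_thresh:
                    # Homogeneous enough: descriptor = mean of the region.
                    out[top:top + h, left:left + w] = region.mean()
                    return
                if h >= w:  # split the longer side at its midpoint
                    split(top, left, h // 2, w, depth + 1)
                    split(top + h // 2, left, h - h // 2, w, depth + 1)
                else:
                    split(top, left, h, w // 2, depth + 1)
                    split(top, left + w // 2, h, w - w // 2, depth + 1)

            split(0, 0, frame.shape[0], frame.shape[1], 0)
            return out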

    Dynamic point cloud compression using a cuboid oriented discrete cosine based motion model

    No full text
    Immersive media representation formats based on point clouds have underpinned significant opportunities for extended reality applications. A point cloud in its uncompressed format requires a very high data rate for storage and transmission. The video based point cloud compression technique projects a dynamic point cloud into geometry and texture video sequences. The projected texture video is then coded using a modern video coding standard like HEVC. Since the properties of projected texture video frames differ from those of traditional video frames, HEVC-based commonality modeling can be inefficient. An improved commonality modeling technique is proposed that employs discrete cosine basis oriented motion models, where the domain of each such model is approximated by a homogeneous region called a cuboid. Experimental results show that the proposed commonality modeling technique can yield bit rate savings of up to 4.17%. © 2021 IEEE

    Discrete cosine basis oriented motion modeling with cuboidal applicability regions for versatile video coding

    No full text
    The relentless expansion of video based applications is underpinned by video coding technologies. The latest video coding standard, i.e., versatile video coding (VVC), can provide superior compression performance compared to its predecessors. In this regard, motion modeling plays a central role. Experimental results showed that the discrete cosine basis oriented motion model can describe complex motion better than the affine motion model adopted in VVC. Hence, in this paper we propose to augment the VVC motion modeling technique with a set of discrete cosine basis oriented motion models, where the applicability region of each such motion model is determined by non-overlapping rectangular regions, known as cuboids. Experimental results show that bit rate savings of up to 2.37% are achievable with respect to a VVC reference.
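
    For reference, one commonly used separable form of a discrete cosine basis motion model over a $W \times H$ cuboid is sketched below; the basis order and normalisation adopted in the paper are not reproduced here, so this should be read as an assumed form rather than the paper's exact definition:

        \[
        m_x(x,y) = \sum_{p=0}^{P-1} \sum_{q=0}^{Q-1} a_{pq}
                   \cos\!\Big(\frac{\pi p (2x+1)}{2W}\Big)
                   \cos\!\Big(\frac{\pi q (2y+1)}{2H}\Big), \qquad
        m_y(x,y) = \sum_{p=0}^{P-1} \sum_{q=0}^{Q-1} b_{pq}
                   \cos\!\Big(\frac{\pi p (2x+1)}{2W}\Big)
                   \cos\!\Big(\frac{\pi q (2y+1)}{2H}\Big),
        \]

    where $(m_x, m_y)$ is the motion vector at pixel $(x, y)$ and $a_{pq}$, $b_{pq}$ are the model parameters estimated per cuboid. The six-parameter affine model used in VVC, $m_x = a_0 + a_1 x + a_2 y$ and $m_y = b_0 + b_1 x + b_2 y$, is restricted to motion fields that vary linearly over the block, whereas the cosine basis can represent smoothly varying non-linear motion within each cuboid.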