    Efficient high-resolution video compression scheme using background and foreground layers

    Video coding using a dynamic background frame achieves better compression than traditional techniques by encoding the background and foreground separately. This process significantly reduces the coding bits for the overall frame; however, encoding the background still requires many bits, and it can be compressed further for better coding efficiency. The cuboid coding framework has proven to be one of the most effective image compression methods: it exploits homogeneous pixel correlation within a frame and aligns better with object boundaries than traditional block-based coding. In a video sequence, the cuboid-based frame partitioning varies with changes in the foreground; however, since the background remains static for a group of pictures, cuboid coding can better exploit its spatial pixel homogeneity. In this work, the impact of cuboid coding on the background frame for high-resolution videos (Ultra-High-Definition (UHD) and 360-degree videos) is investigated using the multilayer framework of SHVC. After the cuboid partitioning, the coarse frame generation method is improved with a novel approach that preserves information to which the human visual system is sensitive. Unlike the traditional SHVC scheme, in the proposed method the cuboid-coded background and the foreground are encoded in separate layers in an implicit manner. Simulation results show that the proposed video coding method achieves an average BD-Rate reduction of 26.69% and a BD-PSNR gain of 1.51 dB against SHVC, with a significant encoding time reduction for both UHD and 360-degree videos. It also achieves an average BD-Rate reduction of 13.88% and a BD-PSNR gain of 0.78 dB compared to the existing relevant method proposed by X. Hoang Van. © 2013 IEEE
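
    The cuboid idea can be pictured as a recursive rectangle split: a region is kept whole when its pixels are sufficiently homogeneous, and is halved along its longer side otherwise. Below is a minimal Python sketch; the variance-based homogeneity test, the split rule, and the function names are illustrative assumptions, not the paper's actual partitioning criterion.

        import numpy as np

        def partition_cuboids(frame, y0, x0, h, w, var_thresh=25.0, min_size=8):
            """Recursively split a 2-D luma frame into homogeneous rectangles.

            Returns a list of (y, x, height, width) "cuboids". A region is kept
            whole when its pixel variance falls below var_thresh (illustrative
            homogeneity test); otherwise it is halved along its longer side.
            """
            region = frame[y0:y0 + h, x0:x0 + w]
            if region.var() <= var_thresh or max(h, w) <= min_size:
                return [(y0, x0, h, w)]
            if h >= w:  # split along the longer dimension
                return (partition_cuboids(frame, y0, x0, h // 2, w, var_thresh, min_size)
                        + partition_cuboids(frame, y0 + h // 2, x0, h - h // 2, w, var_thresh, min_size))
            return (partition_cuboids(frame, y0, x0, h, w // 2, var_thresh, min_size)
                    + partition_cuboids(frame, y0, x0 + w // 2, h, w - w // 2, var_thresh, min_size))

        background = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.float64)
        print(len(partition_cuboids(background, 0, 0, 64, 64)), "cuboids")

    A static background yields large homogeneous regions, so few cuboids (and few coding bits) are needed, which is the intuition behind applying the scheme to the background layer.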

    Multiview Video Coding for Virtual Reality

    Virtual reality (VR) is one of the emerging technologies of recent years. It brings a sense of real-world experience to simulated environments and is therefore used in many applications, for example live sporting events, music recordings, and many other interactive multimedia applications. VR makes use of multimedia content, and videos are a major part of it. VR videos are captured from multiple directions to cover the entire 360-degree field of view. Such systems usually employ multiple wide-field-of-view cameras, such as fisheye lenses, and the camera arrangement can vary from linear to spherical set-ups. Videos in a VR system are also subject to constraints such as variations in network bandwidth, heterogeneous mobile devices with limited decoding capacity, and adaptivity for view switching in the display. The uncompressed videos from multiview cameras are redundant and impractical for storage and transmission. The existing video coding standards compress multiview videos efficiently. However, VR systems violate certain assumptions these standards make about the video and camera arrangements: rectilinear properties for the video, a translational motion model for prediction, and a linearly arranged camera set-up. The aim of the thesis is to propose coding schemes compliant with the current video coding standards, H.264/AVC and its successor H.265/HEVC (the current state of the art), and their multiview/scalable extensions. This thesis presents methods that compress multiview videos captured from eight cameras arranged spherically, pointing radially outwards. The cameras produce circular fisheye videos with a 195-degree field of view. The final goal is to present methods which optimize the bitrate in both storage and transmission of videos for the VR system. The presented methods can be categorized into two groups: optimizing the storage bitrate and optimizing the streaming bitrate of multiview videos. In the storage bitrate category, six methods were experimented with, competing against simulcast coding of the individual views; the coding schemes were evaluated on two data sets of 8 views each. The method of scalable coding with inter-layer prediction in all frames outperformed simulcast coding by approximately 7.9%. In the case of optimizing streaming bitrates, five methods were experimented with; the method of scalable plus multiview skip-coding outperformed simulcast coding by 36% on average. Future work will focus on pre-processing the fisheye videos into rectilinear videos, in order to fit them to the current translational motion model of the video coding standards. Moreover, the methods will be tested against comprehensive application and system requirements.
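
    As a rough illustration of why inter-layer/inter-view prediction beats simulcast, the sketch below compares total bits when each view is coded independently against a scheme where seven enhancement views reference a base view. The per-view bit figures are hypothetical numbers for illustration only, not measurements from the thesis.

        # Hypothetical per-view bit counts (Mbit) for an 8-view spherical rig.
        simulcast_bits = [42.0] * 8                  # each view coded independently
        base, enhancement = 42.0, 38.5               # enhancement views borrow from the base view
        scalable_bits = [base] + [enhancement] * 7   # inter-layer prediction shrinks 7 of 8 views

        saving = 100.0 * (sum(simulcast_bits) - sum(scalable_bits)) / sum(simulcast_bits)
        print(f"bitrate saving vs. simulcast: {saving:.1f}%")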

    EMB: Efficient Multimedia Broadcast in Multi-tier Mobile Networks

    Multimedia broadcast and multicast services (MBMS) in mobile networks have been widely addressed; however, an investigation of this technology in emerging multi-tier scenarios is still lacking. Notably, user clustering and resource allocation are extremely challenging in multi-tier networks, and are imperative to maximize system capacity and improve quality of user experience (QoE) in MBMS. Thus, in this paper we propose a clustering and resource allocation approach, named EMB, which specifically addresses heterogeneous networks and accounts for the fact that multimedia content is adaptively encoded into scalable layers depending on the QoE requirements and channel conditions of the heterogeneous users. Importantly, we prove that our clustering algorithm yields Pareto-efficient broadcasting areas, multimedia encoding parameters, and resource allocation, in a way that is also fair to the users. Furthermore, numerical results obtained under realistic conditions and using real-world video content show that the proposed EMB results in a lower churn count (i.e., a higher number of served users), higher throughput, and increased QoE, while using fewer network resources.
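
    A toy version of channel-aware layered broadcast: users are grouped by channel quality, and each group is served only the scalable layers its weakest member can decode. The SNR thresholds, layer counts, and grouping rule below are illustrative assumptions, not the EMB algorithm itself.

        # Illustrative only: bucket users by SNR band; better bands decode
        # more scalable layers. Thresholds and user SNRs are made-up values.
        user_snr_db = {"u1": 4.0, "u2": 11.0, "u3": 18.0, "u4": 9.5, "u5": 25.0}
        bands = [(15.0, 3), (8.0, 2), (0.0, 1)]  # (min SNR dB, decodable layers)

        clusters = {layers: [] for _, layers in bands}
        for user, snr in user_snr_db.items():
            for threshold, layers in bands:      # first band the user clears
                if snr >= threshold:
                    clusters[layers].append(user)
                    break

        for layers, users in sorted(clusters.items()):
            print(f"layers 1..{layers}: {sorted(users)}")

    Every user receives at least the base layer, which is the intuition behind serving heterogeneous users fairly while still rewarding good channels with enhancement layers.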

    5G-QoE: QoE Modelling for Ultra-HD Video Streaming in 5G Networks

    Traffic on future fifth-generation (5G) mobile networks is predicted to be dominated by challenging video applications such as mobile broadcasting, remote surgery and augmented reality, demanding real-time, ultra-high-quality delivery. Two of the main expectations of 5G networks are that they will be able to handle ultra-high-definition (UHD) video streaming and that they will deliver services meeting the end user's perceived-quality requirements by adopting quality-of-experience (QoE) aware network management approaches. This paper proposes a 5G-QoE framework to address QoE modelling for UHD video flows in 5G networks. In particular, it focuses on providing a QoE prediction model that is both sufficiently accurate and of low enough complexity to be employed as a continuous real-time indicator of the 'health' of video application flows at the scale required in future 5G networks. The model has been developed and implemented as part of the EU 5G PPP SELFNET autonomic management framework, where it provides a primary indicator of the likely perceptual quality of UHD video application flows traversing a realistic multi-tenanted 5G mobile edge network testbed. The proposed 5G-QoE framework has been implemented in the 5G testbed, and the high accuracy of its QoE prediction has been validated by comparing the predicted QoE values not only with subjective testing results but also with empirical measurements in the testbed. As such, 5G-QoE would enable a holistic video flow self-optimisation system employing cutting-edge Scalable H.265 video encoding to transmit UHD video applications in a QoE-aware manner.
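
    A lightweight flow-health indicator of the kind described can be approximated by a parametric mapping from network-level measurements to a mean-opinion-score-like value. The sketch below is a generic illustrative model (logarithmic bitrate utility penalized by loss and stalling); the coefficients and functional form are assumptions, not the SELFNET 5G-QoE model.

        import math

        def predict_mos(bitrate_mbps, packet_loss_pct, stall_ratio,
                        a=1.0, b=1.2, c=0.5, d=4.0):
            """Map flow measurements to a 1..5 MOS-like score.

            Illustrative parametric model: logarithmic utility of bitrate,
            penalized linearly by packet loss and playback stalling. The
            coefficients a..d are made-up defaults, not fitted values.
            """
            score = a + b * math.log1p(bitrate_mbps) - c * packet_loss_pct - d * stall_ratio
            return max(1.0, min(5.0, score))

        print(predict_mos(bitrate_mbps=25.0, packet_loss_pct=0.2, stall_ratio=0.01))

    The appeal of such a closed-form model is that it can be evaluated per flow at line rate, which is what makes continuous, network-wide QoE monitoring feasible.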

    Efficient VVC Intra Prediction Based on Deep Feature Fusion and Probability Estimation

    The ever-growing multimedia traffic has underscored the importance of effective multimedia codecs. Among them, the latest lossy video coding standard, Versatile Video Coding (VVC), has been attracting the attention of the video coding community. However, the gain of VVC is achieved at the cost of significant encoding complexity, which creates the need for a fast encoder with comparable rate-distortion (RD) performance. In this paper, we propose to optimize VVC complexity at intra-frame prediction with a two-stage framework of deep feature fusion and probability estimation. At the first stage, we employ a deep convolutional network to extract the spatial-temporal neighboring coding features, then fuse all reference features obtained by different convolutional kernels to determine an optimal intra coding depth. At the second stage, we employ a probability-based model and spatial-temporal coherence to select the candidate partition modes within the optimal coding depth. Finally, only these selected depths and partitions are executed, while unnecessary computations are excluded. Experimental results on a standard database demonstrate the superiority of the proposed method, especially for High-Definition (HD) and Ultra-HD (UHD) video sequences.
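
    The first stage can be pictured as parallel convolution branches with different kernel sizes whose outputs are concatenated and mapped to a depth decision. The PyTorch sketch below is schematic, with assumed layer widths, input size, and a four-depth output; the paper's actual architecture and training are not reproduced here.

        import torch
        import torch.nn as nn

        class DepthFusionNet(nn.Module):
            """Schematic two-branch feature-fusion classifier for an intra
            coding depth decision. Layer widths and the 4-depth output are
            assumptions for illustration."""

            def __init__(self, num_depths=4):
                super().__init__()
                self.branch3 = nn.Conv2d(1, 16, kernel_size=3, padding=1)  # fine kernel
                self.branch5 = nn.Conv2d(1, 16, kernel_size=5, padding=2)  # coarse kernel
                self.head = nn.Sequential(
                    nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                    nn.Flatten(),
                    nn.Linear(32, num_depths),  # fused features -> depth logits
                )

            def forward(self, luma_block):
                fused = torch.cat([self.branch3(luma_block), self.branch5(luma_block)], dim=1)
                return self.head(fused)

        net = DepthFusionNet()
        logits = net(torch.randn(1, 1, 64, 64))  # one 64x64 luma block
        print("predicted depth:", int(logits.argmax()))

    Skipping the rate-distortion search for all depths other than the predicted one is where the encoding-time saving comes from.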

    Receiver-Driven Video Adaptation

    In the span of a single generation, video technology has made an incredible impact on daily life. Modern use cases for video are wildly diverse, including teleconferencing, live streaming, virtual reality, home entertainment, social networking, surveillance, body cameras, cloud gaming, and autonomous driving. As these applications continue to grow more sophisticated and heterogeneous, a single representation of video data can no longer satisfy all receivers. Instead, the initial encoding must be adapted to each receiver's unique needs. Existing adaptation strategies are fundamentally flawed, however, because they discard the video's initial representation and force the content to be re-encoded from scratch. This process is computationally expensive, does not scale well with the number of videos produced, and throws away important information embedded in the initial encoding. Therefore, a compelling need exists for new strategies that can adapt video content without fully re-encoding it. To better support the unique needs of smart receivers, diverse displays, and advanced applications, general-use video systems should produce and offer receivers a more flexible compressed representation that supports top-down adaptation strategies from an original, compressed-domain ground truth. This dissertation proposes an alternate model for video adaptation that addresses these challenges. The key idea is to treat the initial compressed representation of a video as the ground truth and to allow receivers to drive adaptation by dynamically selecting which subsets of the captured data to receive. In support of this model, three strategies for top-down, receiver-driven adaptation are proposed. First, a novel, content-agnostic entropy coding technique is implemented in which symbols are selectively dropped from an input abstract symbol stream, based on their estimated probability distributions, to hit a target bit rate. Receivers are able to guide the symbol-dropping process by supplying the encoder with a rate controller algorithm that fits their application needs and available bandwidth. Next, a domain-specific adaptation strategy is implemented for H.265/HEVC coded video in which the prediction data from the original source is reused directly in the adapted stream, while the residual data is recomputed as directed by the receiver. By tracking the changes made to the residual, the encoder can compensate for decoder drift and achieve near-optimal rate-distortion performance. Finally, a fully receiver-driven strategy is proposed in which the syntax elements of a pre-coded video are cataloged and exposed directly to clients through an HTTP API. Instead of requesting the entire stream at once, clients identify the exact syntax elements they wish to receive using a carefully designed query language. Although an implementation of this concept is not provided, an initial analysis shows that such a system could save bandwidth and computation when used by certain targeted applications.
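
    The first strategy can be illustrated by pricing each symbol at its Shannon information content and letting a receiver-supplied rate controller drop symbols until the stream fits a bit budget. The cost model, the drop policy (most-expensive-first), and all names below are illustrative assumptions, not the dissertation's actual coder.

        import math

        def drop_to_budget(symbols, probs, budget_bits):
            """Selectively drop symbols so the estimated coded size fits a budget.

            Each symbol's cost is its Shannon information, -log2(p). This toy
            rate controller drops the most expensive (least probable) symbols
            first; a receiver could supply any other dropping policy.
            """
            costed = [(-math.log2(probs[s]), i, s) for i, s in enumerate(symbols)]
            total = sum(cost for cost, _, _ in costed)
            kept = sorted(costed)                  # ascending cost
            while total > budget_bits and kept:
                cost, _, _ = kept.pop()            # drop the priciest symbol
                total -= cost
            kept.sort(key=lambda t: t[1])          # restore stream order
            return [s for _, _, s in kept], total

        probs = {"a": 0.7, "b": 0.2, "c": 0.1}
        print(drop_to_budget(list("aababcaacb"), probs, budget_bits=10.0))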

    Available Bandwidth Estimation for Adaptive Video Streaming in Mobile Ad Hoc Networks

    We propose in this paper an algorithm for available bandwidth estimation in mobile ad hoc networks and its integration into a conventional routing protocol such as AODV to improve rate-adaptive video streaming. Our approach introduces a local estimation of the available bandwidth as well as a prediction of the consumed bandwidth. This information allows the video application to adjust its transmission rate, avoiding network congestion. We conducted a performance evaluation of our solution through simulation experiments using two network scenarios. In the simulation study, the transmission of video streams encoded with the H.264/MPEG-4 Advanced Video Coding standard was evaluated. The results reveal performance improvements in terms of packet loss, delay and PSNR. Castellanos, W.; Guerri Cebollada, J. C.; Arce Vila, P. (2019). Available bandwidth estimation for adaptive video streaming in mobile ad hoc networks. International Journal of Wireless Information Networks, 26(3), 218-229. https://doi.org/10.1007/s10776-019-00431-0
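
    A common way to realize such local estimation, which the sketch below follows, is to derive available bandwidth from the fraction of time the channel is sensed idle and to predict the node's own consumed bandwidth with an exponentially weighted moving average. The smoothing factor and safety margin are illustrative assumptions, not the paper's tuned values.

        def available_bandwidth(idle_time_s, period_s, capacity_mbps):
            """Local estimate: the share of a monitoring period the channel
            was idle, scaled by the raw channel capacity (common MANET
            heuristic)."""
            return (idle_time_s / period_s) * capacity_mbps

        def predict_consumed(samples_mbps, alpha=0.3):
            """EWMA prediction of the bandwidth the node itself will consume.
            alpha is an assumed smoothing factor."""
            estimate = samples_mbps[0]
            for s in samples_mbps[1:]:
                estimate = alpha * s + (1 - alpha) * estimate
            return estimate

        abw = available_bandwidth(idle_time_s=0.6, period_s=1.0, capacity_mbps=11.0)
        margin = 0.8  # assumed safety margin against estimation error
        rate = margin * max(0.0, abw - predict_consumed([1.2, 1.5, 1.4]))
        print(f"suggested video send rate: {rate:.2f} Mbps")

    Feeding such an estimate back to the video source is what lets the sender throttle its rate before congestion, rather than after packet loss is observed.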

    High-Level Synthesis Based VLSI Architectures for Video Coding

    High Efficiency Video Coding (HEVC) is the state-of-the-art video coding standard. Emerging applications like free-viewpoint video, 360-degree video, augmented reality and 3D movies require standardized extensions of HEVC. These include HEVC Scalable Video Coding (SHVC), HEVC Multiview Video Coding (MV-HEVC), MV-HEVC plus depth (3D-HEVC) and HEVC Screen Content Coding. 3D-HEVC is used for applications like view synthesis generation and free-viewpoint video; the coding and transmission of depth maps in 3D-HEVC supports virtual view synthesis by algorithms like Depth Image Based Rendering (DIBR). As a first step, we performed profiling of the 3D-HEVC standard, identifying its computationally intensive parts for efficient hardware implementation. One of the computationally intensive parts of 3D-HEVC, HEVC and H.264/AVC is the interpolation filtering used for Fractional Motion Estimation (FME). The hardware implementation of the interpolation filtering is carried out using High-Level Synthesis (HLS) tools; the Xilinx Vivado Design Suite is used for the HLS implementation of the interpolation filters of HEVC and H.264/AVC. As the complexity of digital systems has greatly increased, High-Level Synthesis offers major benefits: late architectural or functional changes without time-consuming rewriting of RTL code, algorithms that can be tested and evaluated early in the design cycle, and the development of accurate models against which the final hardware can be verified.
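
    For reference, HEVC's luma half-sample interpolation uses a symmetric 8-tap filter with coefficients {-1, 4, -11, 40, 40, -11, 4, -1} normalized by 64. The sketch below applies it along one row in plain Python to show the arithmetic an HLS datapath would implement; the clipping range assumes 8-bit video, edge padding is omitted, and the function name is illustrative.

        HEVC_HALF_PEL_TAPS = (-1, 4, -11, 40, 40, -11, 4, -1)  # sums to 64

        def half_pel_row(samples):
            """Horizontal half-sample interpolation of one 8-bit luma row.

            Output sample k sits between samples[k+3] and samples[k+4];
            boundary padding is omitted for brevity.
            """
            out = []
            for k in range(len(samples) - 7):
                acc = sum(c * samples[k + i] for i, c in enumerate(HEVC_HALF_PEL_TAPS))
                out.append(min(255, max(0, (acc + 32) >> 6)))  # round, shift, clip
            return out

        row = [10, 12, 20, 40, 80, 120, 140, 150, 152, 150]
        print(half_pel_row(row))

    The fixed taps, multiply-accumulate structure and final shift-and-clip map naturally onto a pipelined FPGA datapath, which is why this filter is a popular HLS target.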