    VIDEO PREPROCESSING BASED ON HUMAN PERCEPTION FOR TELESURGERY

    Video transmission plays a critical role in robotic telesurgery because of its high bandwidth and high quality requirements. The goal of this dissertation is to find a preprocessing method based on human visual perception for telesurgical video, so that when preprocessed image sequences are passed to the video encoder, bandwidth can be reallocated from non-essential surrounding regions to the region of interest, ensuring excellent image quality in critical regions (e.g. the surgical region). It can also be considered a quality control scheme that gracefully degrades video quality in the presence of network congestion. The proposed preprocessing method consists of two major parts. First, we propose a time-varying attention map whose value is highest at the gazing point and falls off progressively towards the periphery. Second, we propose adaptive spatial filtering whose parameters are adjusted according to the attention map. By adding visual adaptation to the spatial filtering, telesurgical video data can be compressed efficiently because our algorithm removes a high degree of visual redundancy. Our experimental results have shown that with the proposed preprocessing method, bandwidth can be reduced by more than half with no significant visual effect for the observer. We have also developed an optimal parameter selection algorithm, so that when network bandwidth is limited, the overall visual distortion after preprocessing is minimized.
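
    The core idea, an attention map that peaks at the gaze point and drives spatially adaptive low-pass filtering, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the falloff constant, the grayscale frames, and the single-sigma Gaussian blend standing in for the dissertation's adaptive filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def attention_map(h, w, gaze, falloff=1e-4):
    # Highest at the gaze point, decaying with squared distance towards
    # the periphery. The falloff constant is purely illustrative.
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (ys - gaze[0]) ** 2 + (xs - gaze[1]) ** 2
    return np.exp(-falloff * d2)

def foveated_preprocess(frame, gaze, sigma=3.0):
    # Blend the original frame with a low-pass copy, weighted by the
    # attention map: full detail near the gaze point, smoothing in the
    # periphery. The smoothed periphery carries less high-frequency
    # energy, so a downstream encoder spends fewer bits on it.
    frame = frame.astype(np.float64)
    a = attention_map(frame.shape[0], frame.shape[1], gaze)
    return a * frame + (1.0 - a) * gaussian_filter(frame, sigma)

# Example: a 480x640 grayscale frame with the gaze at the centre.
out = foveated_preprocess(np.random.rand(480, 640), gaze=(240, 320))
```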

    Computational inference and control of quality in multimedia services

    Quality is the degree of excellence we expect of a service or a product. It is also one of the key factors that determine its value. For multimedia services, understanding the experienced quality means understanding how the delivered fidelity, precision and reliability correspond to the users' expectations. Yet the quality of multimedia services is inextricably linked to the underlying technology. It is developments in video recording, compression and transport as well as display technologies that enable high quality multimedia services to become ubiquitous. The constant evolution of these technologies delivers a steady increase in performance, but also a growing level of complexity. As new technologies stack on top of each other, the interactions between them and their components become more intricate and obscure. In this environment, optimizing the delivered quality of multimedia services becomes increasingly challenging. The factors that affect the experienced quality, or Quality of Experience (QoE), tend to have complex non-linear relationships. The subjectively perceived QoE is hard to measure directly and continuously evolves with the user's expectations. Faced with the difficulty of designing an expert system for QoE management that relies on painstaking measurements and intricate heuristics, we turn to an approach based on learning or inference. The set of solutions presented in this work rely on computational intelligence techniques that do inference over the large set of signals coming from the system to deliver QoE models based on user feedback. We furthermore present solutions for inference of optimized control in systems with no guarantees for resource availability. This approach offers the opportunity to be more accurate in assessing the perceived quality, to incorporate more factors and to adapt as technology and user expectations evolve. In a similar fashion, the inferred control strategies can uncover more intricate patterns coming from the sensors and therefore implement farther-reaching decisions. As in natural systems, this continuous adaptation and learning makes these systems more robust to perturbations in the environment and gives them longer-lasting accuracy and higher efficiency in dealing with increased complexity. Overcoming this increasing complexity and diversity is crucial for addressing the challenges of future multimedia systems. Through experiments and simulations, this work demonstrates that adopting an approach of learning can improve subjective and objective QoE estimation and enable the implementation of efficient and scalable QoE management as well as efficient control mechanisms.
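
    As a toy illustration of the learning-based QoE estimation the thesis argues for, the sketch below fits a regression model that maps session-level signals to user feedback scores. The feature set, the synthetic data, and the choice of a random-forest regressor are all assumptions for illustration, not the thesis's actual models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical session features: [mean bitrate (Mbps), rebuffering ratio,
# startup delay (s)] -- stand-ins for the "large set of signals" a real
# system would log.
X = rng.uniform([0.5, 0.0, 0.1], [8.0, 0.3, 5.0], size=(500, 3))

# Synthetic user feedback (MOS on a 1-5 scale) with a deliberately
# non-linear dependence on the features, as the abstract describes.
mos = 1 + 4 * (np.log1p(X[:, 0]) / np.log1p(8.0)) * (1 - X[:, 1]) ** 2
mos -= 0.1 * X[:, 2]
y = np.clip(mos + rng.normal(0, 0.2, 500), 1, 5)

# Learn the QoE model from feedback rather than hand-crafted heuristics.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[4.0, 0.05, 1.0]]))  # estimated MOS for a new session
```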

    Efficient and Effective Schemes for Streaming Media Delivery

    The rapid expansion of the Internet and the increasingly wide deployment of wireless networks provide opportunities to deliver streaming media content to users anywhere, anytime. To ensure a good user experience, it is important to combat adverse effects such as delay, loss and jitter. In this thesis, we first study efficient loss recovery schemes, which require only XOR operations. In particular, we propose a novel scheme capable of recovering up to 3 packet losses, with the lowest complexity among all known schemes. We also propose an efficient algorithm for array code decoding, which achieves significant throughput gains and energy savings over conventional codes. We believe these schemes are applicable to streaming applications, especially in wireless environments. We then study quality adaptation schemes for client buffer management. Our control-theoretic approach results in an efficient online rate control algorithm with analytically tractable performance. Extensive experimental results show that three goals are achieved: fast startup, continuous playback in the face of severe congestion, and maximal quality and smoothness over the entire streaming session. The scheme is later extended to streaming with limited quality levels, which makes it directly applicable to existing systems.
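
    The single-loss building block behind XOR-based recovery schemes fits in a few lines: one parity packet per block lets the receiver rebuild any one lost packet without retransmission. The sketch below covers only this single-loss case; the scheme proposed in the thesis extends the idea with additional parity structure to recover up to three losses.

```python
def xor_bytes(a, b):
    # Bytewise XOR of two equal-length packets.
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    # Parity packet: XOR of all data packets in the block.
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = xor_bytes(parity, p)
    return parity

def recover_one(received, parity):
    # With exactly one packet lost (marked None), XORing the parity with
    # every surviving packet reconstructs it, since x ^ x == 0.
    missing = parity
    for p in received:
        if p is not None:
            missing = xor_bytes(missing, p)
    return missing

packets = [b"abcd", b"efgh", b"ijkl"]
parity = make_parity(packets)
assert recover_one([packets[0], None, packets[2]], parity) == b"efgh"
```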

    Spatially Scalable Video Coding (SSVC) Using Motion Compensated Recursive Temporal Filtering (MCRTF)

    In the coming years, streaming providers will increasingly be tasked with supplying high-quality video streams to devices such as mobile phones and set-top boxes, along with multiple quality variants for clients consuming content over the general Internet. While there have been various approaches to this problem, including multi-bit-rate streaming, one particularly strong candidate is an H.264 extension called Scalable Video Coding (SVC). It encodes video into "layers", beginning with the "base" layer, which contains the minimal information of the bit-stream, followed by "enhancement layers", which carry the information needed to scale up the output. SVC also supports different resolutions within a single compressed bit-stream, which is known as spatial scalability. This thesis addresses a problem in SSVC: video sequences are made scalable in the spatial domain. To make the scheme more efficient for real-time applications, motion compensated recursive temporal filtering (MCRTF) has been implemented. This scheme enhances the efficiency of the components of a visual signal. The temporal filter reduces noise arising across the plurality of frames, and the denoised output is used in predictive encoding. It also eliminates the inherent drift that arises from differences between encoder and decoder. Since visual signals always exhibit temporal correlation, motion-compensating adjacent frames and using them as references during predictive coding is of primary importance. The conventional and the proposed methods have been used during the encoding of various video sequences in the spatial domain, and an analytical study of the results has been carried out.
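
    A minimal sketch of the recursive temporal filtering idea: each output frame is an IIR blend of the current input and the motion-compensated previous output, so noise is averaged over many past frames while motion is tracked. The global integer-shift "motion compensation" and the blending weight below are illustrative stand-ins; a real codec would use block-based motion estimation.

```python
import numpy as np

def translate(frame, dy, dx):
    # Toy motion compensation: shift the reference frame by a single
    # integer motion vector. A real encoder matches motion per block.
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

def mcrtf(frames, motion_vectors, alpha=0.7):
    # Recursive (IIR) temporal filter: blend the current frame with the
    # motion-compensated previous *output*, so each output implicitly
    # averages noise over all past frames along the motion trajectory.
    out = [frames[0].astype(np.float64)]
    for cur, (dy, dx) in zip(frames[1:], motion_vectors):
        pred = translate(out[-1], dy, dx)
        out.append(alpha * cur + (1.0 - alpha) * pred)
    return out

# Example: three noisy frames with a known 1-pixel rightward drift.
frames = [np.random.rand(64, 64) for _ in range(3)]
filtered = mcrtf(frames, motion_vectors=[(0, 1), (0, 1)])
```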

    Two-Pass Rate Control for Improved Quality of Experience in UHDTV Delivery


    A comprehensive video codec comparison

    In this paper, we compare the video codecs AV1 (version 1.0.0-2242 from August 2019), HEVC (HM and x265), AVC (x264), the exploration software JEM which is based on HEVC, and the VVC (successor of HEVC) test model VTM (version 4.0 from February 2019) under two fair and balanced configurations: All Intra for the assessment of intra coding, and Maximum Coding Efficiency with all codecs tuned for their best coding efficiency settings. VTM achieves the highest coding efficiency in both configurations, followed by JEM and AV1. The worst coding efficiency is achieved by x264 and x265, even in the placebo preset for highest coding efficiency. AV1 has gained considerably in coding efficiency compared to previous versions and now outperforms HM with 24% BD-rate gains. VTM gains 5% over AV1 in terms of BD-rate. By reporting separate numbers for JVET and AOM test sequences, we ensure that no bias in the test sequences exists. When comparing only intra coding tools, we observe that complexity increases exponentially for linearly increasing coding efficiency.
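
    The BD-rate percentages quoted above come from the standard Bjontegaard metric. The sketch below shows a rough version of how such numbers are computed; the paper and JVET tooling typically use piecewise-cubic interpolation rather than the plain cubic polynomial fit assumed here.

```python
import numpy as np

def bd_rate(rate_ref, psnr_ref, rate_test, psnr_test):
    # Bjontegaard delta rate: fit log(rate) as a cubic polynomial in PSNR
    # for each codec, integrate the difference over the overlapping PSNR
    # range, and convert the mean log-rate gap to a percentage. Negative
    # values mean the test codec needs less rate at equal quality.
    p_ref = np.polyfit(psnr_ref, np.log(rate_ref), 3)
    p_test = np.polyfit(psnr_test, np.log(rate_test), 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref, int_test = np.polyint(p_ref), np.polyint(p_test)
    avg_diff = ((np.polyval(int_test, hi) - np.polyval(int_test, lo))
                - (np.polyval(int_ref, hi) - np.polyval(int_ref, lo))) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0

# Example with made-up rate/PSNR operating points (four QPs per codec).
print(bd_rate(rate_ref=[1000, 2000, 4000, 8000], psnr_ref=[34, 37, 40, 43],
              rate_test=[800, 1500, 3000, 6000], psnr_test=[34, 37, 40, 43]))
```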