
    No-reference bitstream-based impairment detection for high efficiency video coding

    Video distribution over error-prone Internet Protocol (IP) networks results in visual impairments on the received video streams. Objective impairment detection algorithms are crucial for maintaining the high Quality of Experience (QoE) expected of IPTV distribution. Considerable research has been invested in H.264/AVC impairment detection models, and the question arises whether these become obsolete with the transition to the successor of H.264/AVC, High Efficiency Video Coding (HEVC). In this paper, we first show that impairments on HEVC compressed sequences are more visible compared to H.264/AVC encoded sequences. We also show that an impairment detection model designed for H.264/AVC can be reused for HEVC, but that caution is advised. A more accurate model that takes content classification into account needed slight modification to remain applicable to HEVC compressed video content.
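
    The abstract describes a no-reference, bitstream-based impairment detector. The sketch below is a minimal illustration of that general idea, assuming hypothetical per-frame bitstream features (average QP, motion activity, counts of lost slices) and an invented threshold rule; it is not the paper's actual feature set or model.

```python
# Hedged sketch: no-reference, bitstream-based impairment visibility check.
# Feature names, weights and thresholds are illustrative assumptions,
# not the model proposed in the paper.
from dataclasses import dataclass

@dataclass
class FrameFeatures:
    avg_qp: float           # average quantisation parameter parsed from the bitstream
    motion_activity: float  # e.g. mean motion-vector magnitude of the frame
    lost_slices: int        # slices missing due to packet loss
    total_slices: int

def impairment_visible(f: FrameFeatures,
                       loss_weight: float = 0.7,
                       motion_weight: float = 0.3,
                       threshold: float = 0.25) -> bool:
    """Return True if the impairment is predicted to be visible to a viewer."""
    if f.lost_slices == 0:
        return False
    loss_ratio = f.lost_slices / max(f.total_slices, 1)
    # Higher motion makes error propagation, and thus the impairment, more noticeable.
    score = loss_weight * loss_ratio + motion_weight * min(f.motion_activity / 16.0, 1.0)
    return score > threshold

print(impairment_visible(FrameFeatures(avg_qp=32, motion_activity=8.0,
                                       lost_slices=2, total_slices=8)))
```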

    Packet loss visibility across SD, HD, 3D, and UHD video streams

    The trend towards video streaming at increased spatial resolutions and dimensions, SD, HD, 3D, and 4kUHD, even for portable devices, has important implications for displayed video quality. There is an interplay between packetization, packet loss visibility, choice of codec, and viewing conditions, which implies that prior studies at lower resolutions may not be as relevant. This paper presents two sets of experiments, one at a Variable BitRate (VBR) and the other at a Constant BitRate (CBR), which highlight different aspects of this interplay. The latter experiments also compare and contrast encoding with either an H.264 or a High Efficiency Video Coding (HEVC) codec, with all results recorded as objective Mean Opinion Scores (MOS). The video quality assessments will be of interest to those considering the bitrates and expected quality in error-prone environments, or whether to use a reliable transport protocol to prevent all errors, at a cost in jitter and latency, rather than tolerate low levels of packet errors.
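
    As a companion to these experiments, the sketch below shows one common way packet-loss conditions are emulated for such studies: packets of an encoded stream are dropped at a target loss rate and the share of impaired frames is reported. The packet/frame structure and the independent-loss model are assumptions for illustration, not the paper's test setup.

```python
# Hedged sketch: emulate random packet loss on a packetized video stream
# and report how many frames are affected. Purely illustrative.
import random

def simulate_loss(packets_per_frame, loss_rate=0.01, seed=0):
    """packets_per_frame: list with the number of packets carrying each frame."""
    rng = random.Random(seed)
    impaired = 0
    for n_packets in packets_per_frame:
        # A frame counts as impaired if any of its packets is lost.
        if any(rng.random() < loss_rate for _ in range(n_packets)):
            impaired += 1
    return impaired / len(packets_per_frame)

# Example: UHD frames tend to span more packets, so the same packet loss
# rate touches a larger share of frames than at SD resolution.
print("SD :", simulate_loss([4] * 300, loss_rate=0.01))
print("UHD:", simulate_loss([40] * 300, loss_rate=0.01))
```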

    Keyframe insertion: enabling low-latency random access and packet loss repair

    From a video coding perspective, there are two challenges when performing live video distribution over error-prone networks, such as wireless networks: random access and packet loss repair. There is a scarcity of solutions that do not impact steady-state usage and users with reliable connections. The proposed solution minimizes this impact by complementing a compression-efficient video stream with a companion stream solely consisting of keyframes. Although the core idea is not new, this paper is the first work to provide the restrictions and modifications necessary to make this idea work using the High Efficiency Video Coding (H.265/HEVC) compression standard. Additionally, through thorough quantification, insight is provided on how to provide low-latency fast channel switching capabilities and error recovery at a low quality impact, i.e., less than a 0.94 decrease in average Video Multimethod Assessment Fusion (VMAF) score. Finally, worst-case drift artifacts are described and visualized such that the reader gets an overall picture of using the keyframe insertion technique.
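
    The mechanism described, a compression-efficient main stream paired with a companion stream consisting solely of keyframes, can be sketched on the receiver side as follows. The stream interfaces and decoder methods are hypothetical placeholders; the bitstream restrictions the paper actually imposes are not modelled here.

```python
# Hedged sketch: receiver-side use of a companion keyframe stream for
# random access (channel switching) and packet-loss repair.
# Interfaces (decoder, stream tuples) are illustrative assumptions only.

def play(main_stream, keyframe_stream, decoder, joined=False):
    """main_stream/keyframe_stream yield (frame_id, payload, is_lost) tuples."""
    for (fid, payload, lost), (_, key_payload, key_lost) in zip(main_stream,
                                                                keyframe_stream):
        if not joined or lost:
            # Tune-in or loss: re-synchronise from the companion keyframe,
            # which is intra-coded and therefore decodable on its own.
            if not key_lost:
                decoder.decode_intra(key_payload)
                joined = True
            continue  # skip the main-stream frame we cannot decode correctly
        # Steady state: decode the efficient main stream as usual.
        decoder.decode(payload)
```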

    An Effective Ultrasound Video Communication System Using Despeckle Filtering and HEVC

    The recent emergence of the high-efficiency video coding (HEVC) standard promises to deliver significant bitrate savings over current and prior video compression standards, while also supporting higher resolutions that can meet clinical acquisition spatiotemporal settings. The effective application of HEVC to medical ultrasound necessitates a careful evaluation of strict clinical criteria that guarantee that clinical quality will not be sacrificed in the compression process. Furthermore, the potential use of despeckle filtering prior to compression opens the possibility of significant additional bitrate savings that have not previously been considered. This paper provides a thorough comparison of the use of MPEG-2, H.263, MPEG-4, H.264/AVC, and HEVC for compressing atherosclerotic plaque ultrasound videos. For the comparisons, we use both subjective and objective criteria based on plaque structure and motion. For comparable clinical video quality, experimental evaluation on ten videos demonstrates that HEVC reduces bitrate requirements by as much as 33.2% compared to H.264/AVC and up to 71% compared to MPEG-2. The use of despeckle filtering prior to compression is also investigated as a method that can reduce bitrate requirements through the removal of higher-frequency components without sacrificing clinical quality. Based on the use of three despeckle filtering methods with both H.264/AVC and HEVC, we find that prior filtering can yield significant additional bitrate savings. The best-performing despeckle filter (DsFlsmv) achieves bitrate savings of 43.6% and 39.2% compared to standard nonfiltered HEVC and H.264/AVC encoding, respectively.
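
    For readers unfamiliar with despeckle filtering, the sketch below implements a generic local-statistics (Lee-type) filter of the kind often applied to ultrasound frames before encoding; smoothing speckle removes high-frequency content and so lowers the bitrate the encoder needs. It is a minimal illustration assuming a grayscale frame as a NumPy array and an assumed noise variance; it is not the DsFlsmv filter evaluated in the paper.

```python
# Hedged sketch: local-statistics (Lee-type) despeckle filter applied to one
# grayscale ultrasound frame before it is handed to the video encoder.
# Window size and noise variance are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def despeckle(frame: np.ndarray, window: int = 5, noise_var: float = 0.02) -> np.ndarray:
    img = frame.astype(np.float64) / 255.0
    local_mean = uniform_filter(img, size=window)
    local_sq_mean = uniform_filter(img * img, size=window)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
    # Adaptive weight: smooth heavily in flat (speckle-dominated) regions,
    # preserve detail where local variance is high (e.g. plaque boundaries).
    k = local_var / (local_var + noise_var)
    filtered = local_mean + k * (img - local_mean)
    return np.clip(filtered * 255.0, 0, 255).astype(np.uint8)
```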

    Optimized Visual Internet of Things in Video Processing for Video Streaming

    The global expansion of the Visual Internet of Things (VIoT) has enabled various new applications during the last decade through the interconnection of a wide range of devices and sensors. Frame freezing and buffering are the major artefacts in the broad area of multimedia networking applications, occurring due to significant packet loss and network congestion. Numerous studies have been carried out to understand the impact of packet loss on QoE for a wide range of applications. This paper improves video streaming quality by using the proposed Lossy Video Transmission (LVT) framework to simulate the effect of network congestion on the performance of encrypted static images sent over wireless sensor networks. The simulations are intended for analysing video quality and determining packet drop resilience during video conversations. The assessment of emerging trends in quality measurement, including picture preference, visual attention, and audio-visual quality, is also examined. To appropriately quantify the video quality loss caused by the encoding system, various encoders compress video sequences at various data rates. Simulation results for different QoE metrics on user-developed videos are demonstrated and outperform the existing metrics.
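
    Since the abstract names frame freezing and buffering as the dominant artefacts, a toy illustration of how such events can be folded into a single QoE-style score is given below. The weights and the scoring formula are invented for illustration and are not the metrics evaluated in the paper.

```python
# Hedged sketch: toy QoE score that penalises frame-freezing time, buffering
# events and packet loss for one streaming session. Weights are illustrative
# assumptions only, not a metric from the paper.

def qoe_score(freeze_seconds: float, buffering_events: int,
              packet_loss_rate: float, duration_seconds: float) -> float:
    """Return a 1..5 MOS-like score for one streaming session."""
    freeze_penalty = 2.0 * (freeze_seconds / duration_seconds)
    buffer_penalty = 0.3 * buffering_events
    loss_penalty = 10.0 * packet_loss_rate
    return max(1.0, 5.0 - freeze_penalty - buffer_penalty - loss_penalty)

print(qoe_score(freeze_seconds=3.0, buffering_events=2,
                packet_loss_rate=0.02, duration_seconds=60.0))
```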

    How to Train No Reference Video Quality Measures for New Coding Standards using Existing Annotated Datasets?

    Subjective experiments are important for developing objective Video Quality Measures (VQMs). However, they are time-consuming and resource-demanding. In this context, being able to reuse existing subjective data on previous video coding standards to train models capable of predicting the perceptual quality of video content processed with newer codecs acquires significant importance. This paper investigates the possibility of generating an HEVC encoded Processed Video Sequence (PVS) in such a way that its perceptual quality is as similar as possible to that of an AVC encoded PVS whose quality has already been assessed by human subjects. In this way, the perceptual quality of the newly generated HEVC encoded PVS can be annotated approximately with the Mean Opinion Score (MOS) of the related AVC encoded PVS. To show the effectiveness of our approach, we compared the performance of a simple, low-complexity, yet effective no-reference hybrid model trained on data generated with our approach against the same model trained on data collected in the context of a pristine subjective experiment. In addition, we merged seven subjective experiments so that they can be used as one aligned dataset containing either original HEVC bitstreams or the newly generated data described in our proposed approach. The merging process accounts for differences in quality scale, chosen assessment method and context influence factors. This yields a large annotated dataset of HEVC sequences that is made publicly available for the design and training of no-reference hybrid VQMs for HEVC encoded content.
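
    The core idea, generating an HEVC PVS whose perceptual quality matches that of an already-annotated AVC PVS so the AVC MOS can be transferred, can be sketched as a search over the HEVC encoder's quality knob. The objective metric, the QP search range, and the encode/measure helpers below are assumptions for illustration; the paper's actual matching procedure is not reproduced.

```python
# Hedged sketch: pick the HEVC QP whose encode is objectively closest to an
# AVC-encoded PVS, then reuse the AVC MOS as an approximate label.
# encode_hevc() and objective_quality() are hypothetical helpers supplied
# by the caller (e.g. an x265 wrapper and a VMAF computation).

def annotate_with_avc_mos(source, avc_pvs, avc_mos,
                          encode_hevc, objective_quality, qp_range=range(22, 43)):
    target = objective_quality(source, avc_pvs)      # quality of the annotated AVC PVS
    best_qp, best_gap, best_pvs = None, float("inf"), None
    for qp in qp_range:
        hevc_pvs = encode_hevc(source, qp=qp)
        gap = abs(objective_quality(source, hevc_pvs) - target)
        if gap < best_gap:
            best_qp, best_gap, best_pvs = qp, gap, hevc_pvs
    # The HEVC PVS approximately inherits the subjective score of its AVC counterpart.
    return best_pvs, best_qp, avc_mos
```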

    Spatio-temporal error concealment technique for high order multiple description coding schemes including subjective assessment

    Error resilience (ER) is an important tool in video coding to maximize the Quality of Experience (QoE). The prediction process in video coding has become complex, which yields unsatisfying video quality when NAL unit packets are lost in error-prone channels. There are different ER techniques, and multiple description coding (MDC) is one of the promising techniques for this problem. MDC is categorized into different types; in this paper, we focus on temporal MDC techniques and propose a new temporal MDC scheme. In the encoding process, the encoded descriptions contain primary frames and secondary frames (redundant representations). The secondary frames represent the MVs that are predicted from previous primary frames such that the residual signal is set to zero and is not part of the rate-distortion optimization. In the decoding process of the lost frames, a weighted average error concealment (EC) strategy is proposed to conceal these frames. The proposed scheme is subjectively evaluated along with other schemes, and the results show that the proposed scheme is significantly different from most of the other temporal MDC schemes.
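
    The weighted-average error concealment step can be illustrated as follows: when a frame of one description is lost, candidate reconstructions (for example, the motion-compensated prediction obtained from the secondary frame and the co-located frame of the other description) are blended pixel by pixel with weights. The candidate sources and the fixed weighting below are illustrative assumptions, not the paper's exact strategy.

```python
# Hedged sketch: weighted-average error concealment for a lost frame,
# blending two candidate reconstructions pixel by pixel. Illustrative only.
import numpy as np

def conceal(candidate_a: np.ndarray, candidate_b: np.ndarray, w_a: float = 0.5) -> np.ndarray:
    """candidate_a: e.g. motion-compensated prediction from the secondary frame;
    candidate_b: e.g. the temporally nearest frame of the other description."""
    assert candidate_a.shape == candidate_b.shape
    blended = (w_a * candidate_a.astype(np.float64)
               + (1.0 - w_a) * candidate_b.astype(np.float64))
    return np.clip(blended, 0, 255).astype(np.uint8)
```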

    Video Content-Based QoE Prediction for HEVC Encoded Videos Delivered over IP Networks

    The recently released High Efficiency Video Coding (HEVC) standard, which halves the transmission bandwidth requirement of encoded video for almost the same quality when compared to H.264/AVC, and the availability of increased network bandwidth (e.g. from 2 Mbps for 3G networks to almost 100 Mbps for 4G/LTE) have led to the proliferation of video streaming services. Based on these major innovations, the prevalence and diversity of video applications are set to increase over the coming years. However, the popularity and success of current and future video applications will depend on the perceived quality of experience (QoE) of end users. How to measure or predict the QoE of delivered services becomes an important and inevitable task for both service and network providers. Video quality can be measured either subjectively or objectively. Subjective quality measurement is the most reliable method of determining the quality of multimedia applications because of its direct link to users’ experience. However, this approach is time-consuming and expensive, hence the need for an objective method that can produce results comparable with those of subjective testing. In general, video quality is impacted by impairments caused by the encoder and the transmission network. However, videos encoded and transmitted over an error-prone network have different quality measurements even under the same encoder settings and network quality of service (NQoS). This indicates that, in addition to encoder settings and network impairment, there may be other key parameters that impact video quality. In this project, it is hypothesised that video content type is one of the key parameters that may impact the quality of streamed videos. Based on this assertion, parameters related to video content type are extracted and used to develop a single metric that quantifies the content type of different video sequences. The proposed content type metric is then used together with encoding parameter settings and NQoS to develop content-based video quality models that estimate the quality of different video sequences delivered over IP-based networks. This project led to the following main contributions: (1) a new metric for quantifying video content type based on the spatiotemporal features extracted from the encoded bitstream; (2) the development of a novel subjective test approach for video streaming services; (3) new content-based video quality prediction models for predicting the QoE of video sequences delivered over IP-based networks. The models have been evaluated using subjective and objective methods.
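
    A common way to realise such a content-type descriptor is to combine spatial and temporal activity features (in the spirit of the SI/TI measures) and feed them, together with encoder settings and network QoS parameters, into a regression model. The sketch below is an assumption-laden illustration of that pipeline, not the thesis's actual metric or model.

```python
# Hedged sketch: spatio-temporal content features plus encoder/network
# parameters feeding a regression-based quality (MOS) predictor.
# Feature definitions and the model interface are illustrative assumptions.
import numpy as np

def content_features(frames):
    """frames: list of grayscale frames as 2D numpy arrays."""
    # Spatial activity: spread of the vertical intensity gradient per frame.
    spatial = np.mean([np.std(np.gradient(f.astype(np.float64))[0]) for f in frames])
    # Temporal activity: spread of frame-to-frame differences.
    temporal = np.mean([np.std(b.astype(np.float64) - a.astype(np.float64))
                        for a, b in zip(frames, frames[1:])])
    return spatial, temporal

def predict_mos(frames, qp, bitrate_kbps, packet_loss_rate, model):
    si, ti = content_features(frames)
    x = np.array([[si, ti, qp, bitrate_kbps, packet_loss_rate]])
    return float(model.predict(x)[0])   # model: any pre-trained regressor
```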

    Secure and Efficient Video Transmission in VANET

    Currently, vehicular communications have become a reality used by various applications, especially applications that broadcast video in real time. However, the received video quality is penalized by the poor characteristics of the transmission channel (availability, non-stationarity, signal-to-noise ratio, etc.). To improve and ensure a minimum video quality at reception, we propose in this work a mechanism entitled "Secure and Efficient Transmission of Videos in VANET (SETV)". It is based on Quality of Experience (QoE) and uses hierarchical packet management driven by the importance of the images in the video stream. To this end, the use of transmission error correction with unequal error protection has proven to be effective in delivering high-quality videos with low network overhead. This is done based on the specific details of the video encoding and on actual network conditions such as the signal-to-noise ratio, network density, vehicle position and current packet loss rate (PLR), as well as the prediction of the future DPP. Machine learning models were developed in our work to estimate perceived audio-visual quality. The protocol first gathers information about its neighbouring vehicles to perform distributed jump reinforcement learning. The simulation results obtained for several types of realistic vehicular scenarios show that the proposed mechanism offers significant improvements in terms of video quality at reception and end-to-end delay compared to conventional schemes. The results show that the proposed mechanism achieves an 11% to 18% improvement in video quality and a 9% load gain compared to ShieldHEVC.
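
    Hierarchical, importance-aware protection of this kind is often realised by assigning more forward error correction to packets carrying the frames the decoder depends on most. The sketch below illustrates that idea with hypothetical frame types and redundancy levels scaled by the measured packet loss rate; it is not the SETV protocol itself.

```python
# Hedged sketch: unequal error protection that gives more FEC redundancy to
# more important frame types, scaled by the current packet loss rate (PLR).
# Frame-type weights and the scaling rule are illustrative assumptions.

BASE_REDUNDANCY = {"I": 0.50, "P": 0.25, "B": 0.10}  # fraction of extra repair packets

def fec_packets(frame_type: str, data_packets: int, plr: float) -> int:
    """Number of repair packets to generate for one frame."""
    redundancy = BASE_REDUNDANCY.get(frame_type, 0.10)
    scaled = redundancy * (1.0 + 4.0 * plr)   # protect more when the channel is worse
    n = round(data_packets * scaled)
    return max(n, 1) if frame_type == "I" else n  # always protect reference frames

for ftype in ("I", "P", "B"):
    print(ftype, fec_packets(ftype, data_packets=20, plr=0.05))
```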

    Video QoS/QoE over IEEE802.11n/ac: A Contemporary Survey

    The demand for video applications over wireless networks has increased tremendously, and IEEE 802.11 standards have provided increasing support for video transmission. However, providing Quality of Service (QoS) and Quality of Experience (QoE) for video over WLAN is still a challenge due to the error sensitivity of compressed video and dynamic channels. This thesis presents a contemporary survey of video QoS/QoE issues and solutions over WLAN. The objective of the study is to provide an overview of the issues by conducting a background study on video codecs and their features and characteristics, followed by studying QoS and QoE support in IEEE 802.11 standards. Since IEEE 802.11n is the currently most widely deployed standard and IEEE 802.11ac is the upcoming standard, this survey investigates the most recent video QoS/QoE solutions based on these two standards. The solutions are divided into two broad categories: academic solutions and vendor solutions. Academic solutions mostly target three layers, namely the Application, Media Access Control (MAC) and Physical (PHY) layers, and are further divided into two major categories: single-layer solutions and cross-layer solutions. Single-layer solutions focus on a single layer to enhance video transmission performance over WLAN. Cross-layer solutions involve two or more layers to provide a single QoS solution for video over WLAN. This thesis also presents and technically analyses QoS solutions from three popular vendors. It concludes that single-layer solutions are not directly related to video QoS/QoE, and that cross-layer solutions perform better than single-layer solutions but are much more complicated and harder to implement. Most vendors rely on their network infrastructure to provide QoS for multimedia applications. They have their own techniques and mechanisms, but the concept of providing QoS/QoE for video is almost the same because they use the same standards and rely on Wi-Fi Multimedia (WMM) to provide QoS.
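
    Cross-layer approaches of the kind surveyed often boil down to mapping application-layer knowledge (e.g. frame importance) onto the WMM access categories that 802.11n/ac honour at the MAC layer, typically via DSCP marking of outgoing packets. The mapping and socket usage below are an illustrative assumption of that idea, not a scheme taken from the thesis.

```python
# Hedged sketch: cross-layer idea of mapping video frame importance to
# Wi-Fi Multimedia (WMM) access categories via DSCP marking on the socket.
# The frame-type-to-DSCP mapping is an illustrative assumption.
import socket

# DSCP values whose upper 3 bits map to WMM's Video (AC_VI) and
# Best Effort (AC_BE) access categories.
DSCP_BY_FRAME = {"I": 34, "P": 34, "B": 0}   # AF41 for reference frames, BE otherwise

def send_frame(sock: socket.socket, addr, frame_type: str, payload: bytes) -> None:
    tos = DSCP_BY_FRAME.get(frame_type, 0) << 2   # DSCP occupies the top 6 bits of ToS
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    sock.sendto(payload, addr)

# Usage (addresses are placeholders):
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   send_frame(sock, ("192.0.2.10", 5004), "I", b"...")
```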