
    Effect of Video Streaming Space–Time Characteristics on Quality of Transmission over Wireless Telecommunication Networks

    The surge in popularity of multimedia applications has led to the need to optimize bandwidth allocation and usage in telecommunication networks. Modern telecommunication networks should, by definition, be able to maintain the quality of different applications with different Quality of Service (QoS) levels. QoS requirements generally depend on parameters of the network and application layers of the OSI model. At the application layer, QoS depends on factors such as resolution, bit rate, frame rate, video type, and audio codec. At the network layer, distortions such as delay, jitter, and packet loss are introduced. This paper presents simulation results of modeling video streaming over wireless communication networks, taking into account differences in the spatial and temporal characteristics of the different subject groups. An analysis of the influence of bit error rate (BER) and bit rate on video quality is also presented. The simulations showed that different video subject groups affect perceived quality differently when transmitted over networks. We show that in a transmission network with small error probabilities (BER = 10⁻⁶, BER = 10⁻⁵), the minimum bit rate (128 kbps) guarantees acceptable video quality, corresponding to MOS > 3 for all frame types.
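
    The kind of error exposure examined above can be illustrated with a short calculation: under the common assumption of independent bit errors, the probability that a frame is hit by at least one error grows with frame size, which is why larger I-frames are affected first as BER rises. The frame sizes and I/P/B split below are illustrative assumptions for this sketch, not figures from the paper.

```python
def frame_loss_probability(ber: float, frame_bytes: int) -> float:
    """Probability that at least one bit of a frame is corrupted,
    assuming independent bit errors at the given BER."""
    bits = 8 * frame_bytes
    return 1.0 - (1.0 - ber) ** bits

# Illustrative frame sizes (bytes) for a low-bit-rate stream; the I/P/B
# split is an assumption for this sketch, not data from the paper.
frame_sizes = {"I": 4000, "P": 1200, "B": 400}

for ber in (1e-6, 1e-5, 1e-4):
    probs = {t: frame_loss_probability(ber, size) for t, size in frame_sizes.items()}
    print(f"BER={ber:.0e}: " +
          ", ".join(f"P(corrupted {t}-frame)={p:.3f}" for t, p in probs.items()))
```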

    Comparing objective visual quality impairment detection in 2D and 3D video sequences

    The skill level of the teleoperator plays a key role in telerobotic operation; however, conventional assessment requires a large number of experiments to evaluate that skill level. In this paper, a novel brain-based method of skill assessment is introduced, and the relationship between the teleoperator's brain states and skill level is investigated for the first time using a kernel canonical correlation analysis (KCCA) method. The skill of the teleoperator (SoT) is defined statistically using the cumulative distribution function (CDF). Five indicators are extracted from the teleoperator's electroencephalograph (EEG) to represent the brain states during telerobotic operation. By using the KCCA algorithm to model the relationship between the SoT and the brain states, this correlation is demonstrated. During telerobotic operation, the skill level of the teleoperator can therefore be predicted from the brain states. © 2013 IEEE.
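
    The core statistical step, relating a set of EEG-derived indicators to a skill score through kernel CCA, can be sketched as follows. This is a minimal, generic regularized KCCA in NumPy, assuming RBF kernels, synthetic data in place of real EEG indicators and SoT values, and a simplified form of the regularized eigenproblem; it is not the authors' implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def center(K):
    """Center a kernel matrix in feature space."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca_first_correlation(X, Y, gamma=0.5, reg=1e-2):
    """First canonical correlation between kernelized views X and Y
    (simplified regularized KCCA; a sketch, not a tuned implementation)."""
    Kx, Ky = center(rbf_kernel(X, X, gamma)), center(rbf_kernel(Y, Y, gamma))
    n = Kx.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    A = np.block([[Z, Kx @ Ky], [Ky @ Kx, Z]])
    B = np.block([[Kx @ Kx + reg * I, Z], [Z, Ky @ Ky + reg * I]])
    eigvals = np.linalg.eigvals(np.linalg.solve(B, A))
    return float(np.max(eigvals.real))

# Synthetic stand-ins: 40 trials, five EEG-derived indicators per trial,
# and a scalar skill score loosely coupled to two of the indicators.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(40, 5))
skill = (0.7 * eeg[:, 0] - 0.4 * eeg[:, 3] + 0.2 * rng.normal(size=40))[:, None]
print("first canonical correlation:", round(kcca_first_correlation(eeg, skill), 3))
```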

    VIQID: a no-reference bit stream-based visual quality impairment detector

    In order to ensure adequate quality for end users at all times, video service providers are becoming more interested in monitoring their video streams. Objective video quality metrics provide a means of measuring (audio)visual quality in an automated manner. Unfortunately, most existing metrics cannot be used for real-time monitoring because they depend on the original video sequence. In this paper we present a new objective video quality metric which classifies packet loss as visible or invisible based on information extracted solely from the captured encoded H.264/AVC video bit stream. Our results show that the visibility of packet loss can be predicted with high accuracy, without the need for deep packet inspection. This enables service providers to monitor quality in real time.
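
    A no-reference visibility classifier of this kind can be prototyped from a handful of features readable directly from the bit stream, without decoding pixels. The features, toy labels, and decision-tree classifier below are illustrative assumptions used to make the idea concrete; the actual VIQID feature set and model are described in the paper itself.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative per-loss-event features (assumed, not the paper's feature set):
# [lost_macroblocks, loss_hits_reference_frame (0/1),
#  mean_motion_vector_length_px, frames_until_next_I_frame]
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.integers(1, 120, n),     # number of macroblocks lost
    rng.integers(0, 2, n),       # does the loss hit a reference frame?
    rng.uniform(0, 16, n),       # average motion vector length (pixels)
    rng.integers(0, 25, n),      # frames until the next intra refresh
])
# Toy ground truth: large losses in reference frames with high motion tend
# to be visible. Real labels would come from subjective experiments.
y = ((X[:, 0] > 40) & (X[:, 1] == 1) & (X[:, 2] > 4)).astype(int)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
print("predicted visibility for a small non-reference loss:",
      clf.predict([[8, 0, 2.0, 3]])[0])
```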

    Steered mixture-of-experts for light field images and video: representation and coding

    Research in light field (LF) processing has increased heavily over the last decade, largely driven by the desire to achieve the same level of immersion and navigational freedom for camera-captured scenes as is currently available for CGI content. Standardization organizations such as MPEG and JPEG continue to follow conventional coding paradigms in which viewpoints are discretely represented on 2-D regular grids, which are then further decorrelated through hybrid DPCM/transform techniques. However, these 2-D regular grids are less suited for high-dimensional data such as LFs. We propose a novel coding framework for higher-dimensional image modalities, called Steered Mixture-of-Experts (SMoE). Coherent areas in the higher-dimensional space are represented by single higher-dimensional entities, called kernels. These kernels hold spatially localized information about light rays arriving at a certain region from any angle. The global model thus consists of a set of kernels that define a continuous approximation of the underlying plenoptic function. We introduce the theory of SMoE and illustrate its application to 2-D images, 4-D LF images, and 5-D LF video. We also propose an efficient coding strategy to convert the model parameters into a bit stream. Even without provisions for high-frequency information, the proposed method performs comparably to the state of the art at low-to-mid bitrates with respect to the subjective visual quality of 4-D LF images. For 5-D LF video, we observe superior decorrelation and coding performance, with coding gains of a factor of 4 in bitrate at the same quality. At least equally important, our method inherently offers functionality for LF rendering that is lacking in other state-of-the-art techniques: (1) full zero-delay random access, (2) light-weight pixel-parallel view reconstruction, and (3) intrinsic view interpolation and super-resolution.
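
    The decoder side of such a model reduces to evaluating a kernel-gated regression at every sample position. The sketch below reconstructs a small 2-D patch from a few Gaussian gating kernels with linear experts; the kernel count, parameter values, and the omission of the model-fitting stage (e.g. an EM-style optimization) are simplifications for illustration, not the paper's coding pipeline.

```python
import numpy as np

def smoe_reconstruct(coords, centers, inv_covs, expert_means, expert_slopes):
    """Evaluate a toy 2-D Steered Mixture-of-Experts model at pixel coords.

    Kernel i has a Gaussian gate N(centers[i], covs[i]) and a linear expert
    m_i(x) = expert_means[i] + expert_slopes[i] @ (x - centers[i]).
    The reconstruction is the softly gated sum of the expert outputs."""
    diffs = coords[:, None, :] - centers[None, :, :]              # (P, K, 2)
    mahal = np.einsum('pki,kij,pkj->pk', diffs, inv_covs, diffs)  # (P, K)
    gates = np.exp(-0.5 * mahal)
    gates /= gates.sum(axis=1, keepdims=True) + 1e-12             # soft gating
    experts = expert_means[None, :] + np.einsum('ki,pki->pk', expert_slopes, diffs)
    return (gates * experts).sum(axis=1)

# Tiny example: 3 kernels approximating a 16x16 grayscale patch
# (all parameter values below are assumed for illustration).
K = 3
centers = np.array([[4.0, 4.0], [8.0, 12.0], [12.0, 6.0]])
inv_covs = np.stack([np.linalg.inv(np.diag([6.0, 3.0]))] * K)
expert_means = np.array([0.2, 0.7, 0.5])                   # base intensity per kernel
expert_slopes = np.array([[0.01, 0.0], [0.0, -0.02], [0.015, 0.01]])

ys, xs = np.mgrid[0:16, 0:16]
coords = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
patch = smoe_reconstruct(coords, centers, inv_covs, expert_means, expert_slopes)
print(patch.reshape(16, 16).round(2))
```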

    Advanced methods for the evaluation of television picture quality: proceedings of the MOSAIC workshop, Eindhoven, 18-19 September 1995


    Complexity management of H.264/AVC video compression.

    The H.264/AVC video coding standard offers significantly improved compression efficiency and flexibility compared to previous standards. However, the high computational complexity of H.264/AVC is a problem for codecs running on low-power handheld devices and general-purpose computers. This thesis presents new techniques to reduce, control and manage the computational complexity of an H.264/AVC codec. A new complexity reduction algorithm for H.264/AVC is developed. This algorithm predicts "skipped" macroblocks prior to motion estimation by estimating a Lagrange rate-distortion cost function. Complexity savings are achieved by not processing the macroblocks that are predicted as "skipped". The Lagrange multiplier is adaptively modelled as a function of the quantisation parameter and video sequence statistics. Simulation results show that this algorithm achieves significant complexity savings with a negligible loss in rate-distortion performance. The complexity reduction algorithm is further developed to achieve complexity-scalable control of the encoding process. The Lagrangian cost estimation is extended to incorporate computational complexity, and a target level of complexity is maintained by using a feedback algorithm to update the Lagrange multiplier associated with complexity. Results indicate that scalable complexity control of the encoding process can be achieved whilst maintaining near-optimal complexity-rate-distortion performance. A complexity management framework is proposed for maximising the perceptual quality of coded video in a real-time, processing-power-constrained environment. A real-time frame-level control algorithm and a per-frame complexity control algorithm are combined to manage the encoding process such that a high frame rate is maintained without significantly losing frame quality. Subjective evaluations show that the managed-complexity approach results in higher perceptual quality compared to a reference encoder that drops frames in computationally constrained situations. These novel algorithms are likely to be useful in implementing real-time H.264/AVC standard encoders in computationally constrained environments such as low-power mobile devices and general-purpose computers.
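
    The feedback idea behind the complexity-scalable control can be sketched in a few lines: a complexity Lagrange multiplier is raised when a frame overshoots its complexity budget and lowered when it undershoots, which changes how many macroblocks take the cheap "skip" path on the next frame. The cost model, feedback gain, and per-macroblock statistics below are illustrative assumptions, not values or formulas from the thesis.

```python
import random

def encode_frame(lambda_c, mb_costs, skip_threshold=1.0):
    """Pretend to encode one frame: a macroblock takes the cheap skip path
    when its estimated rate-distortion gain, discounted by the complexity
    price lambda_c, falls below a threshold (illustrative model only)."""
    complexity = 0.0
    for rd_gain, mb_cycles in mb_costs:
        if rd_gain - lambda_c * mb_cycles < skip_threshold:
            complexity += 0.1 * mb_cycles   # skip: only the cheap cost estimate
        else:
            complexity += mb_cycles         # full motion estimation + mode decision
    return complexity

random.seed(1)
target = 600.0       # per-frame complexity budget (arbitrary units)
lambda_c = 0.0       # complexity Lagrange multiplier, updated by feedback
gain = 5e-4          # feedback gain for the multiplier update

for frame in range(10):
    # (rate-distortion gain, processing cycles) per macroblock, 99 MBs per frame
    mb_costs = [(random.uniform(0.5, 5.0), random.uniform(5.0, 15.0))
                for _ in range(99)]
    used = encode_frame(lambda_c, mb_costs)
    lambda_c = max(0.0, lambda_c + gain * (used - target))   # feedback update
    print(f"frame {frame}: complexity {used:7.1f}  lambda_c {lambda_c:.4f}")
```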

    High dynamic range video compression exploiting luminance masking
