
    Video Streaming in Evolving Networks under Fuzzy Logic Control


    Irregular Variable Length Coding

    In this thesis, we introduce Irregular Variable Length Coding (IrVLC) and investigate its applications, characteristics and performance in the context of digital multimedia broadcast telecommunications. During IrVLC encoding, the multimedia signal is represented using a sequence of concatenated binary codewords. These are selected from a codebook comprising a number of codewords, which, in turn, comprise various numbers of bits. In contrast to regular Variable Length Coding (VLC), in which the entire multimedia signal is encoded using a single codebook, IrVLC encoding decomposes the multimedia signal into particular fractions, each of which is represented using a different codebook.

    The application of IrVLCs to joint source and channel coding is investigated in the context of a video transmission scheme. Our novel video codec represents the video signal using tessellations of Variable-Dimension Vector Quantisation (VDVQ) tiles, selected from a codebook comprising a number of tiles having various dimensions. The selected tessellation of VDVQ tiles is signalled using a corresponding sequence of concatenated codewords from a Variable Length Error Correction (VLEC) codebook, a specific joint source and channel coding case of VLCs that facilitates both compression and error correction. However, owing to their various dimensions, only particular combinations of the VDVQ tiles will perfectly tessellate during video encoding. As a result, only particular subsets of the VDVQ codebook and, hence, of the VLEC codebook may be employed to convey particular fractions of the video signal. Therefore, our novel video codec can be said to employ IrVLCs.

    The employment of IrVLCs to facilitate Unequal Error Protection (UEP) is also demonstrated. UEP may be applied when various fractions of the source signal have different error sensitivities, as is typical in audio, speech, image and video signals. Here, different VLEC codebooks having appropriately selected error correction capabilities may be employed to encode the particular fractions of the source signal. This approach may be expected to yield a higher reconstruction quality than equal protection in cases where the various fractions of the source signal have different error sensitivities.

    Finally, this thesis investigates the application of IrVLCs to near-capacity operation using EXtrinsic Information Transfer (EXIT) chart analysis. Here, a number of component VLEC codebooks having different inverted EXIT functions are employed to encode particular fractions of the source symbol frame. We show that the composite inverted IrVLC EXIT function may be obtained as a weighted average of the inverted component VLC EXIT functions. Additionally, EXIT chart matching is employed to shape the inverted IrVLC EXIT function to match the EXIT function of a serially concatenated inner channel code, creating a narrow but still open EXIT chart tunnel. In this way, iterative decoding convergence to an infinitesimally low probability of error is facilitated at near-capacity channel SNRs.
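    The weighted-average construction of the composite inverted IrVLC EXIT function lends itself to a short numerical sketch. The Python below is a minimal illustration, assuming two hypothetical component curves and a 30/70 split of the source symbol frame; the function shapes, weights and names are placeholders, not values from the thesis.

```python
import numpy as np

def composite_inverted_exit(component_fns, weights, grid):
    """Composite inverted IrVLC EXIT function as a weighted average of the
    inverted component VLEC EXIT functions, weighted by the fraction of the
    source symbol frame that each component codebook encodes."""
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "codebook fractions must sum to 1"
    curves = np.array([[f(i) for i in grid] for f in component_fns])
    return weights @ curves  # weighted average, evaluated over the grid

# Hypothetical component curves (illustrative shapes, not measured EXIT data)
strong_code = lambda i: i ** 0.5        # stronger error-correcting codebook
weak_code = lambda i: 0.2 + 0.8 * i     # weaker, higher-rate codebook

grid = np.linspace(0.0, 1.0, 11)
print(composite_inverted_exit([strong_code, weak_code], [0.3, 0.7], grid))
```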

    Error resilience and concealment techniques for high-efficiency video coding

    This thesis investigates the problem of robust coding and error concealment in High Efficiency Video Coding (HEVC). After a review of the current state of the art, a simulation study of error robustness revealed that HEVC offers weak protection against network losses, with a significant impact on the decoded video quality. Based on this evidence, the first contribution of this work is a new method to reduce the temporal dependencies between motion vectors, which improves the decoded video quality without compromising the compression efficiency.

    The second contribution of this thesis is a two-stage approach for reducing the mismatch of temporal predictions in the case of video streams received with errors or lost data. At the encoding stage, the reference pictures are dynamically distributed based on a constrained Lagrangian rate-distortion optimisation to reduce the number of predictions from a single reference. At the streaming stage, a prioritisation algorithm based on spatial dependencies selects a reduced set of motion vectors to be transmitted, as side information, to reduce mismatched motion predictions at the decoder.

    The problem of error-concealment-aware video coding is also investigated to enhance the overall error robustness. A new approach based on scalable coding and optimal error concealment selection is proposed, where the optimal error concealment modes are found by simulating transmission losses, followed by a saliency-weighted optimisation. Moreover, recovery residual information is encoded using a rate-controlled enhancement layer. Both are transmitted to the decoder to be used in case of data loss.

    Finally, an adaptive error resilience scheme is proposed to dynamically predict the video stream that achieves the highest decoded quality for a particular loss case. A neural network selects among the various video streams, encoded with different levels of compression efficiency and error protection, based on information from the video signal, the coded stream and the transmission network. Overall, the new robust video coding methods investigated in this thesis yield consistent quality gains over existing methods, including those implemented in the HEVC reference software. Furthermore, the proposed methods also achieve a better trade-off between coding efficiency and error robustness.
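    The constrained Lagrangian rate-distortion optimisation used at the encoding stage can be illustrated with the familiar cost J = D + λR. The sketch below is a simplified stand-in under assumed inputs: the candidate tuples, the reuse cap and the toy numbers are placeholders for illustration, not the thesis's actual formulation.

```python
# Minimal sketch: choose, per block, the reference picture that minimises the
# Lagrangian cost J = D + lambda * R, while limiting how many blocks may
# predict from any single reference (a crude stand-in for the constraint).

def assign_references(blocks, lmbda, max_share):
    usage = {}                       # blocks already assigned per reference
    cap = max_share * len(blocks)    # assumed reuse limit per reference
    decisions = []
    for candidates in blocks:        # candidates: list of (ref, distortion, rate)
        feasible = [c for c in candidates if usage.get(c[0], 0) < cap]
        ref, _d, _r = min(feasible or candidates,
                          key=lambda c: c[1] + lmbda * c[2])
        usage[ref] = usage.get(ref, 0) + 1
        decisions.append(ref)
    return decisions

# Toy example: two blocks, each with two candidate references
blocks = [
    [(0, 10.0, 100), (1, 12.0, 90)],
    [(0, 8.0, 110), (1, 9.0, 95)],
]
print(assign_references(blocks, lmbda=0.05, max_share=0.5))  # e.g. [0, 1]
```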

    Dynamic adaptation of streamed real-time E-learning videos over the internet

    Even though e-learning is becoming increasingly popular in the academic environment, the quality of synchronous e-learning video is still substandard and significant work needs to be done to improve it. The improvements have to take into consideration both the network requirements and the psycho-physical aspects of the human visual system. One of the problems of synchronous e-learning video is that mostly a head-and-shoulders video of the instructor is transmitted. This presentation can be made more interesting by transmitting shots from different angles and zooms. Unfortunately, the transmission of such multi-shot videos will increase packet delay, jitter and other artifacts caused by frequent scene changes. To some extent these problems may be reduced by a controlled reduction of the video quality so as to minimise uncontrolled corruption of the stream. Hence, there is a need for controlled streaming of a multi-shot e-learning video in response to the changing availability of bandwidth, while utilising the available bandwidth to the maximum.

    The quality of the transmitted video can be improved by removing redundant background data and utilising the available bandwidth for sending high-resolution foreground information. While a number of schemes exist to identify and remove the background from the foreground, very few studies exist on identifying and separating the two based on an understanding of the human visual system. Research has been carried out to define foreground and background in the context of e-learning video on the basis of human psychology, and the results have been utilised to propose methods for improving the transmission of e-learning videos.

    In order to transmit the video sequence efficiently, this research proposes the use of Feed-Forward Controllers that dynamically characterise the ongoing scene and adjust the streaming of video based on the available bandwidth. In order to satisfy a number of receivers connected by links of varied bandwidth in a heterogeneous environment, the use of a Multi-Layer Feed-Forward Controller has been researched. This controller dynamically characterises the complexity (number of macroblocks per frame) of the ongoing video sequence and combines it with knowledge of the bandwidth available to the various receivers to divide the video sequence into layers in an optimal way before transmitting it over the network. The Single-Layer Feed-Forward Controller takes as input the complexity (Spatial Information and Temporal Information) of the ongoing video sequence along with the bandwidth available to a receiver, and adjusts the resolution and frame rate of individual scenes so that the transmitted sequence gives the most acceptable perceptual quality within the bandwidth constraints. The performance of the Feed-Forward Controllers has been evaluated under simulated conditions, and they have been found to effectively regulate the streaming of real-time e-learning videos, providing perceptually improved video quality within the constraints of the available bandwidth.
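    As a rough illustration of how a single-layer feed-forward controller might map scene complexity and available bandwidth to an operating point, the sketch below selects a resolution and frame rate from a candidate ladder using a crude bitrate model. The ladder, the model and its constants are assumptions made for illustration; the thesis's controller may use different relations.

```python
# Hypothetical single-layer feed-forward controller sketch: pick the highest-
# quality (resolution, frame rate) whose estimated bitrate fits the bandwidth.

CANDIDATES = [            # (width, height, fps), best quality first (assumed)
    (1280, 720, 30),
    (854, 480, 30),
    (640, 360, 25),
    (426, 240, 15),
]

def estimate_bitrate_kbps(w, h, fps, si, ti, k=0.07):
    """Crude bits-per-pixel model scaled by spatial/temporal complexity (SI/TI)."""
    complexity = 1.0 + 0.01 * si + 0.02 * ti
    return k * w * h * fps * complexity / 1000.0

def select_operating_point(si, ti, bandwidth_kbps):
    """Return the first candidate whose estimated bitrate fits the bandwidth."""
    for w, h, fps in CANDIDATES:
        if estimate_bitrate_kbps(w, h, fps, si, ti) <= bandwidth_kbps:
            return (w, h, fps)
    return CANDIDATES[-1]  # fall back to the lowest operating point

# Example: a low-motion head-and-shoulders scene over a 1.5 Mbit/s link
print(select_operating_point(si=40, ti=10, bandwidth_kbps=1500))
```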

    The Efficient Design of Time-to-Digital Converters
