
    Advanced solutions for quality-oriented multimedia broadcasting

    Multimedia content is increasingly being delivered via different types of networks to viewers in a variety of locations and contexts, using a variety of devices. The ubiquitous nature of multimedia services comes at a cost, however. The successful delivery of multimedia services requires overcoming numerous technological challenges, many of which have a direct effect on the quality of the multimedia experience. For example, due to dynamically changing requirements and networking conditions, the delivery of multimedia content has traditionally adopted a best-effort approach. However, this approach has often led to the end-user-perceived quality of multimedia-based services being negatively affected. Yet the quality of multimedia content is a vital issue for the continued acceptance and proliferation of these services. Indeed, end-users are becoming increasingly quality-aware in their expectations of the multimedia experience and demand an ever-widening spectrum of rich multimedia-based services. As a consequence, there is a continuous and extensive research effort, by both industry and academia, to find solutions for improving the quality of multimedia content delivered to users; in addition, international standards bodies, such as the International Telecommunication Union (ITU), are renewing their efforts on the standardization of multimedia technologies. Research has pursued many different directions in the attempt to improve the quality of rich media content delivered over various network types. It is in this context that this special issue on broadcast multimedia quality of the IEEE Transactions on Broadcasting illustrates some of these avenues and presents some of the most significant research results obtained by various teams of researchers from many countries. This special issue provides an example, albeit inevitably limited, of the richness and breadth of current research on multimedia broadcasting services. The research issues addressed in this special issue include, among others, factors that influence user-perceived quality, encoding-related quality assessment and control, transmission- and coverage-based solutions, and objective quality measurements

    Slight-Delay Shaped Variable Bit Rate (SD-SVBR) Technique for Video Transmission

    The aim of this thesis is to present a new shaped Variable Bit Rate (VBR) technique for video transmission, which plays a crucial role in delivering video traffic over the Internet. This is driven by the surge of video applications on the Internet and by the fact that video traffic is typically highly bursty, which leads to fluctuations in the available Internet bandwidth. The new shaping algorithm, referred to as Slight-Delay Shaped Variable Bit Rate (SD-SVBR), is aimed at controlling the video rate for video application transmission. It is designed based on the Shaped VBR (SVBR) algorithm and was implemented in the Network Simulator 2 (ns-2). The SVBR algorithm is devised for real-time video applications, but it has several limitations and weaknesses due to its embedded estimation and prediction processes: it suffers from unwanted sharp decreases in data rate, buffer overflow, periods of low data rate, and the generation of a cyclical negative fluctuation. The new algorithm is capable of producing a high data rate and, at the same time, a video sequence with more stable quantization parameter (QP) values. In addition, the data rate is shaped efficiently to prevent unwanted sharp increments or decrements and to avoid buffer overflow. To achieve this, SD-SVBR uses three strategies: processing the next Group of Pictures (GoP) in the video sequence to obtain a QP-to-data-rate list, dimensioning the data rate towards a higher utilization of the leaky bucket, and applying a QP smoothing method that carefully measures the effect of following the previous QP value. The algorithm has to be combined with a network feedback algorithm to produce better overall video rate control. A combination of several video clips with varied video rates was used to evaluate SD-SVBR performance. The results showed that SD-SVBR achieves an impressive overall Peak Signal-to-Noise Ratio (PSNR). In addition, in almost all cases it attains a high video rate without buffer overflow, utilizes the buffer well and, interestingly, still obtains smoother QP fluctuation
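
    The shaping strategy summarised above can be illustrated with a minimal per-GoP sketch (this is not the thesis code; the QP-to-rate table, the leaky-bucket parameters and the smoothing step size are illustrative assumptions):

```python
# Illustrative sketch only: per-GoP rate shaping in the spirit of SD-SVBR.
# qp_rate_table is assumed to contain every candidate QP (e.g. 20..40) mapped
# to the predicted number of bits for the next GoP at that QP.

def choose_qp(qp_rate_table, budget, prev_qp, max_qp_step=2):
    """Pick the lowest QP whose predicted GoP size fits the budget, while
    limiting the jump from the previous QP to keep quality fluctuation smooth."""
    feasible = [qp for qp, bits in qp_rate_table.items() if bits <= budget]
    qp = min(feasible) if feasible else max(qp_rate_table)
    # QP smoothing: never move more than max_qp_step away from the previous QP.
    return max(prev_qp - max_qp_step, min(prev_qp + max_qp_step, qp))

def shape_gop(qp_rate_table, bucket_fill, bucket_size, drain_per_gop, prev_qp,
              target_fill=0.8):
    """Dimension the GoP budget so the leaky bucket is driven towards a high
    utilisation (target_fill) without overflowing."""
    budget = drain_per_gop + target_fill * bucket_size - bucket_fill
    qp = choose_qp(qp_rate_table, budget, prev_qp)
    bits = qp_rate_table[qp]
    bucket_fill = max(0.0, bucket_fill + bits - drain_per_gop)  # leaky-bucket update
    return qp, bits, bucket_fill
```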

    Dynamic bandwidth allocation in ATM networks

    This thesis investigates bandwidth allocation methodologies for transporting new emerging bursty traffic types in ATM networks. Existing ATM traffic management solutions are not readily able to handle the congestion that inevitably results from the bursty traffic generated by these new services. This research addresses bandwidth allocation for bursty traffic by proposing and exploring the concept of dynamic bandwidth allocation and comparing it to traditional static bandwidth allocation schemes
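
    As a rough, hypothetical illustration of the difference between static and dynamic allocation (the thesis does not prescribe this particular rule; all names and parameters below are assumed):

```python
# Static allocation reserves the peak rate for the whole connection; a dynamic
# scheme periodically re-negotiates the allocation from traffic measurements.

def static_allocation(peak_rate):
    # Traditional approach: reserve the peak cell rate regardless of actual usage.
    return peak_rate

def dynamic_allocation(measured_rate, current_allocation, headroom=1.2, min_rate=0.0):
    """Re-compute the allocation from the recently measured rate."""
    target = measured_rate * headroom      # keep a safety margin above the measurement
    if target > current_allocation:
        return target                      # burst detected: request more bandwidth now
    # Smooth or idle traffic: release unused bandwidth gradually.
    return max(min_rate, 0.5 * (current_allocation + target))
```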

    Fuzzy Logic Control of Adaptive ARQ for Video Distribution over a Bluetooth Wireless Link

    Bluetooth's default automatic repeat request (ARQ) scheme is not suited to video distribution, as it results in missed display and decode deadlines. Adaptive ARQ with active discard of expired packets from the send buffer is an alternative approach. However, even with the addition of cross-layer adaptation to picture-type packet importance, ARQ is not ideal in conditions of a deteriorating RF channel. The paper presents fuzzy logic control of ARQ based on send-buffer fullness and the head-of-line packet's deadline. The advantage of the fuzzy logic approach, which also scales its output according to picture-type importance, is that the impact of delay can be directly introduced into the model, causing retransmissions to be reduced compared to all other schemes. The scheme considers the delay constraints of the video stream and at the same time avoids send-buffer overflow. Tests explore a variety of Bluetooth send buffer sizes and channel conditions. For adverse channel conditions and buffer sizes, the tests show an improvement of at least 4 dB in video quality compared to non-fuzzy schemes. The scheme can be applied to any codec with I-, P-, and (possibly) B-slices by inspection of packet headers, without the need for encoder intervention.
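
    A minimal sketch of such a fuzzy controller, assuming triangular membership functions and a small hand-written rule base (the paper's actual membership functions and rules are not reproduced here; all names and constants are illustrative):

```python
# Two fuzzy inputs: send-buffer fullness and head-of-line deadline slack, both
# normalised to [0, 1]. The defuzzified output is a retransmission allowance,
# scaled by picture-type importance (I > P > B).

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def retransmission_limit(fullness, slack, importance):
    low_full, high_full = tri(fullness, -0.5, 0.0, 0.6), tri(fullness, 0.4, 1.0, 1.5)
    low_slack, high_slack = tri(slack, -0.5, 0.0, 0.6), tri(slack, 0.4, 1.0, 1.5)

    # Rule base (defuzzified by a weighted average):
    #   ample slack and an empty buffer -> retransmit freely
    #   little slack or a full buffer   -> retransmit sparingly or give up
    rules = [
        (min(high_slack, low_full), 1.0),   # keep retransmitting
        (min(high_slack, high_full), 0.5),  # moderate
        (min(low_slack, low_full), 0.3),    # moderate/low
        (min(low_slack, high_full), 0.0),   # give up, let the packet be discarded
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return (num / den) * importance         # e.g. importance: I=1.0, P=0.6, B=0.3
```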

    Video Smoothing of Aggregates of Streams with Bandwidth Constraints

    Compressed variable bit rate (VBR) video transmission is acquiring growing importance in the telecommunication world. The high data-rate variability of compressed video over multiple time scales makes efficient bandwidth resource utilization difficult to obtain. One of the approaches developed to face this problem is smoothing. Various smoothing algorithms that exploit client buffers have been proposed, reducing the peak rate and high rate variability by efficiently scheduling the video data to be transmitted over the network. The novel smoothing algorithm proposed in this paper, which represents a significant improvement over existing methods, performs data scheduling both for a single stream and for stream aggregations, taking available bandwidth constraints into account. It modifies, whenever possible, the smoothing schedule so as to eliminate frame losses due to available bandwidth limitations. This technique can be applied to any smoothing algorithm in the literature and can be usefully exploited to minimize losses in multiplexed-stream scenarios, such as Terrestrial Digital Video Broadcasting (DVB-T), where a specific known available bandwidth must be shared by several multimedia flows. The algorithm has been applied to smoothing stored video, although it can also be adapted quite easily for real-time smoothing. The numerical results, compared with MVBA, a smoothing algorithm already presented and discussed in the literature, show the effectiveness of the proposed algorithm, in terms of lost video frames, for different multiplexed scenarios
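
    The buffer- and bandwidth-constrained scheduling idea can be sketched as follows (this is not the paper's algorithm; it is a simple per-slot work-ahead schedule with assumed names and units):

```python
# The cumulative amount sent must stay between the playback curve (data consumed
# by the decoder) and that curve plus the client buffer, and can never grow
# faster than the available link bandwidth.

def smooth_schedule(frame_sizes, client_buffer, bandwidth_per_slot):
    """Per-slot schedule aiming at the average (smoothed) rate, work-ahead-bounded
    by the client buffer and capped by the link bandwidth; frames whose deadline
    cannot be met under the bandwidth cap are counted as late (lost)."""
    target = sum(frame_sizes) / len(frame_sizes)     # ideal constant rate
    consumed, sent = 0.0, 0.0
    schedule, late_frames = [], 0
    for size in frame_sizes:
        consumed += size
        need = consumed - sent                       # bits required by this deadline
        room = consumed + client_buffer - sent       # bits the client buffer can absorb
        tx = min(max(target, need), room, bandwidth_per_slot)
        if sent + tx < consumed:                     # bandwidth too small: frame late
            late_frames += 1
        sent += tx
        schedule.append(tx)
    return schedule, late_frames
```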

    Error resilience and concealment techniques for high-efficiency video coding

    This thesis investigates the problem of robust coding and error concealment in High Efficiency Video Coding (HEVC). After a review of the current state of the art, a simulation study of error robustness revealed that HEVC has weak protection against network losses, with a significant impact on video quality degradation. Based on this evidence, the first contribution of this work is a new method to reduce the temporal dependencies between motion vectors, improving the decoded video quality without compromising compression efficiency. The second contribution is a two-stage approach for reducing the mismatch of temporal predictions when video streams are received with errors or lost data. At the encoding stage, the reference pictures are dynamically distributed based on a constrained Lagrangian rate-distortion optimization to reduce the number of predictions from a single reference. At the streaming stage, a prioritization algorithm based on spatial dependencies selects a reduced set of motion vectors to be transmitted, as side information, to reduce mismatched motion predictions at the decoder. The problem of error-concealment-aware video coding is also investigated to enhance overall error robustness. A new approach based on scalable coding and optimal error concealment selection is proposed, where the optimal error concealment modes are found by simulating transmission losses, followed by a saliency-weighted optimisation. Moreover, recovery residual information is encoded using a rate-controlled enhancement layer. Both are transmitted to the decoder for use in case of data loss. Finally, an adaptive error resilience scheme is proposed to dynamically predict the video stream that achieves the highest decoded quality for a particular loss case. A neural network selects among several video streams, encoded with different levels of compression efficiency and error protection, based on information from the video signal, the coded stream and the transmission network. Overall, the new robust video coding methods investigated in this thesis yield consistent quality gains in comparison with other existing methods, including those implemented in the HEVC reference software. Furthermore, the trade-off between coding efficiency and error robustness is also better in the proposed methods
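
    The constrained Lagrangian selection of reference pictures described above might, in simplified form, look like the following sketch (the cost terms, the penalty weight mu and the usage counter are illustrative assumptions, not the thesis implementation):

```python
# Choose a reference picture by minimising a Lagrangian cost J = D + lambda*R,
# augmented with a penalty on references that are already heavily used, so that
# predictions are spread over several pictures and the loss of any single
# reference causes less temporal-prediction mismatch.

def pick_reference(candidates, lmbda, mu, usage):
    """candidates: dict mapping reference id -> (distortion, rate) for this block.
    usage: running count of how often each reference has been selected."""
    best_ref, best_cost = None, float("inf")
    for ref, (distortion, rate) in candidates.items():
        cost = distortion + lmbda * rate + mu * usage.get(ref, 0)
        if cost < best_cost:
            best_ref, best_cost = ref, cost
    usage[best_ref] = usage.get(best_ref, 0) + 1   # update the usage statistics
    return best_ref
```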