
    Ubiquitous Scalable Graphics: An End-to-End Framework using Wavelets

    Advances in ubiquitous displays and wireless communications have fueled the emergence of exciting mobile graphics applications, including 3D virtual product catalogs, 3D maps, security monitoring systems and mobile games. The current trend of using cameras to capture geometry, material reflectance and other graphics elements means that very high resolution inputs are available for rendering extremely photorealistic scenes. However, captured graphics content can be many gigabytes in size and must be simplified before it can be used on small mobile devices, which have limited resources such as memory, screen size and battery energy. Scaling and converting graphics content to a suitable rendering format involves running several software tools, and selecting the best resolution for a target mobile device is often done by trial and error, all of which takes time. Wireless errors can also corrupt transmitted content, and aggressive compression is needed for low-bandwidth wireless networks. Most rendering algorithms are currently optimized for visual realism and speed, but are not resource or energy efficient on mobile devices. This dissertation focuses on improving rendering performance by reducing the impact of these problems with UbiWave, an end-to-end framework that enables real-time mobile access to high-resolution graphics using wavelets. The framework tackles the simplification, transmission, and resource-efficient rendering of graphics content on mobile devices using wavelets, by utilizing 1) a Perceptual Error Metric (PoI) for automatically computing the best resolution of graphics content for a given mobile display, eliminating guesswork and saving resources, 2) Unequal Error Protection (UEP) to improve resilience to wireless errors, 3) an Energy-efficient Adaptive Real-time Rendering (EARR) heuristic to balance energy consumption, rendering speed and image quality, and 4) an energy-efficient streaming technique. The results facilitate a new class of mobile graphics applications that can gracefully adapt the lowest acceptable rendering resolution to the wireless network conditions and to the availability of resources and battery energy on the mobile device.
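    As a rough illustration of the resolution-selection idea, the sketch below picks the coarsest Haar-style wavelet approximation whose reconstruction error on a reference image stays under a display-dependent budget. The RMSE error measure, the budget value and the pure-NumPy Haar averaging are illustrative assumptions, not the thesis's actual PoI metric.

```python
# A minimal sketch, assuming an RMSE budget stands in for a perceptual metric.
import numpy as np

def haar_approx(img: np.ndarray, levels: int) -> np.ndarray:
    """Repeatedly average 2x2 blocks (the Haar approximation band)."""
    out = img.astype(float)
    for _ in range(levels):
        h, w = out.shape[0] // 2 * 2, out.shape[1] // 2 * 2
        out = out[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return out

def select_resolution(img: np.ndarray, max_levels: int, rmse_budget: float) -> int:
    """Return the coarsest level whose upsampled reconstruction meets the budget."""
    best = 0
    for lvl in range(1, max_levels + 1):
        approx = haar_approx(img, lvl)
        recon = np.kron(approx, np.ones((2 ** lvl, 2 ** lvl)))  # nearest-neighbour upsample
        recon = recon[:img.shape[0], :img.shape[1]]
        rmse = np.sqrt(np.mean((recon - img) ** 2))
        if rmse <= rmse_budget:
            best = lvl          # coarser level is still acceptable
        else:
            break               # error exceeds the display budget; stop coarsening
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    texture = rng.normal(128, 20, size=(256, 256))
    print("chosen wavelet level:", select_resolution(texture, max_levels=4, rmse_budget=25.0))
```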

    Error and Congestion Resilient Video Streaming over Broadband Wireless

    In this paper, error resilience is achieved by adaptive, application-layer rateless channel coding, which is used to protect data-partitioned video encoded with the H.264/Advanced Video Coding (AVC) codec. A packetization strategy is an effective tool for controlling error rates and, in this paper, source-coded data partitioning serves to allocate smaller packets to more important compressed video data. The scheme is applied to real-time streaming across a broadband wireless link, and the advantages of rateless code rate adaptivity are then demonstrated. Because the data partitions of a video slice are each assigned to different network packets, in congestion-prone wireless networks the increased number of packets per slice and their size disparity may increase the packet loss rate from buffer overflows. As a form of congestion resilience, this paper recommends packet-size dependent scheduling as a relatively simple way of alleviating the buffer-overflow problem arising from data-partitioned packets. The paper also contributes an analysis of data partitioning and packet sizes as a prelude to considering scheduling regimes. The combination of adaptive channel coding and prioritized packetization for error resilience with packet-size dependent scheduling results in a robust streaming scheme specialized for broadband wireless and real-time streaming applications such as video conferencing, video telephony, and telemedicine.
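    The sketch below illustrates the general idea of prioritized, packet-size dependent scheduling for data-partitioned video. The A > B > C partition ordering follows the usual importance of H.264/AVC data partitions; the shortest-first tie-break and the per-round byte budget are illustrative assumptions, not the paper's exact scheduler.

```python
# A minimal sketch, assuming a per-round byte budget models the send opportunity.
from dataclasses import dataclass, field
import heapq

PARTITION_PRIORITY = {"A": 0, "B": 1, "C": 2}   # lower value = more important

@dataclass(order=True)
class Packet:
    sort_key: tuple = field(init=False, repr=False)
    partition: str
    size: int            # bytes
    slice_id: int

    def __post_init__(self):
        # Important partitions first; among equals, smaller packets first so
        # they are less likely to be dropped by a filling buffer.
        self.sort_key = (PARTITION_PRIORITY[self.partition], self.size)

def schedule(packets, byte_budget):
    """Pop packets in priority order until the next best packet no longer fits."""
    heap = list(packets)
    heapq.heapify(heap)
    sent = []
    while heap and byte_budget >= heap[0].size:
        pkt = heapq.heappop(heap)
        byte_budget -= pkt.size
        sent.append(pkt)
    return sent

if __name__ == "__main__":
    queue = [Packet("C", 900, 1), Packet("A", 120, 1), Packet("B", 400, 1),
             Packet("A", 110, 2), Packet("C", 850, 2)]
    for p in schedule(queue, byte_budget=1500):
        print(p.slice_id, p.partition, p.size)
```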

    Resource allocation for multimedia streaming over the Internet


    Error resilience and concealment techniques for high-efficiency video coding

    This thesis investigates the problem of robust coding and error concealment in High Efficiency Video Coding (HEVC). After a review of the current state of the art, a simulation study of error robustness reveals that HEVC has weak protection against network losses, with a significant impact on video quality degradation. Based on this evidence, the first contribution of this work is a new method for reducing the temporal dependencies between motion vectors, which improves the decoded video quality without compromising compression efficiency. The second contribution is a two-stage approach for reducing the mismatch of temporal predictions when video streams are received with errors or lost data. At the encoding stage, the reference pictures are dynamically distributed based on a constrained Lagrangian rate-distortion optimization to reduce the number of predictions from a single reference. At the streaming stage, a prioritization algorithm based on spatial dependencies selects a reduced set of motion vectors to be transmitted as side information, reducing mismatched motion predictions at the decoder. The problem of error-concealment-aware video coding is also investigated to enhance the overall error robustness. A new approach based on scalable coding and optimal error concealment selection is proposed, in which the optimal error concealment modes are found by simulating transmission losses, followed by a saliency-weighted optimisation. Moreover, recovery residual information is encoded using a rate-controlled enhancement layer. Both are transmitted to the decoder to be used in case of data loss. Finally, an adaptive error resilience scheme is proposed to dynamically predict the video stream that achieves the highest decoded quality for a particular loss case. A neural network selects among the various video streams, encoded with different levels of compression efficiency and error protection, based on information from the video signal, the coded stream and the transmission network. Overall, the new robust video coding methods investigated in this thesis yield consistent quality gains in comparison with existing methods, including those implemented in the HEVC reference software. Furthermore, the trade-off between coding efficiency and error robustness is also better in the proposed methods.
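    To make the constrained Lagrangian idea concrete, the toy sketch below lets each block pick the reference picture minimising D + lambda * R, subject to a cap on how many blocks may predict from the same reference. The cost numbers, the lambda value, the cap and the greedy assignment are all assumptions for illustration, not the thesis's encoder-side algorithm.

```python
# A minimal sketch, assuming per-block (distortion, rate) costs are already known.
LAMBDA = 0.85          # Lagrange multiplier (assumed value)
MAX_PER_REF = 2        # constraint: at most 2 blocks may use the same reference

def pick_reference(costs, lam=LAMBDA, cap=MAX_PER_REF):
    """costs[block][ref] = (distortion, rate). Returns block -> chosen reference."""
    usage = {}
    choice = {}
    for block, options in costs.items():
        ranked = sorted(options.items(),
                        key=lambda kv: kv[1][0] + lam * kv[1][1])  # J = D + lambda * R
        for ref, _ in ranked:
            if usage.get(ref, 0) < cap:          # respect the per-reference cap
                usage[ref] = usage.get(ref, 0) + 1
                choice[block] = ref
                break
    return choice

if __name__ == "__main__":
    # hypothetical distortion/rate pairs for three candidate references per block
    costs = {
        0: {"ref0": (10, 40), "ref1": (14, 30), "ref2": (20, 25)},
        1: {"ref0": (9, 38),  "ref1": (15, 28), "ref2": (22, 24)},
        2: {"ref0": (8, 45),  "ref1": (13, 32), "ref2": (19, 27)},
    }
    print(pick_reference(costs))
```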

    Resource allocation and adaptive scheduling for scalable video streaming

    The obvious recent advances in areas such as video compression and network architectures allow for the deployment of novel video distribution applications. These have the potential to provide ubiquitous media access to end users. In recent years, applications based on audio and video streaming have turned out to be immensely popular, and the Internet has become the most widely used vector for media content distribution, due to its high availability and connectivity. However, the nature of the Internet infrastructure is not adapted to the specific characteristics of multimedia traffic, which presents a certain tolerance to losses, but strict delay and high bandwidth requirements. In this thesis, our goal is to improve the efficiency of media delivery over the existing network architecture. In order to do so, we consider the delivery of scalable video in three main delivery scenarios, namely one-to-one client-server architectures, one-to-many broadcasting architectures, and many-to-one distributed streaming architectures. First, we propose a distributed media-friendly rate allocation algorithm for the delivery of both finely and coarsely scalable video streams. Unlike existing solutions, our algorithm explicitly takes the characteristics of media streams into consideration. As a result, it provides rate allocations that better fit the heterogeneous characteristics of media streams. We outline an implementation that is robust to random feedback delays and that permits a scalable deployment of the algorithm. The rate allocation computed by our algorithm achieves network stability and high bandwidth utilization. It moreover makes it possible to maximize the average received quality over all streams delivered in the network. While considering the transmission of coarsely layered streams, we derive conditions on the encoding rates of the video layers. These conditions depend on the allowed end-to-end delay and on the rate allocation algorithm that controls the sending rates. They allow us to take full advantage of the allocated transmission rates. Second, we investigate the problem of jointly addressing the needs of multiple receivers that consume different versions of a layered media stream in a broadcasting scenario. We provide optimal scheduling algorithms that jointly optimize the playback delay and the buffer occupancy at all of these receivers when the channel is known. Furthermore, we analyze low-complexity, heuristics-based optimization techniques that provide close-to-optimal results when only limited channel knowledge is available. Finally, we explore the possibility of exploiting the inherent network diversity provided by the Internet infrastructure. In particular, we consider media delivery schemes where multiple senders are available for the transmission of a scalable video stream to a single client. Such an architecture is referred to as a distributed streaming architecture. It has the benefit of aggregating multiple unreliable channels into a single, more robust channel with high availability. Through the use of Fountain codes, we are able to transform the distributed streaming problem into a rate allocation problem of lower complexity. The solution to this problem is shown to depend not only on the average packet loss rate, but also on the average length of the packet loss bursts observed on each of the available channels. The coding scheme that we suggest enables our system to adapt the streamed content to the network characteristics, as well as to the needs of the receiving client.
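    The heuristic below is only an illustration of the distributed-streaming observation above: it splits the target stream rate across multiple senders according to an effective-goodput estimate that discounts each channel by its average loss rate and further penalises long loss bursts. The burst penalty factor and the proportional split are assumptions, not the thesis's rate allocation solution.

```python
# A minimal sketch, assuming loss rate and mean burst length are measured per channel.
def allocate_rates(channels, target_rate_kbps, burst_penalty=0.05):
    """channels: list of dicts with 'capacity', 'loss_rate', 'avg_burst_len' (kbps, fraction, packets)."""
    weights = []
    for ch in channels:
        goodput = ch["capacity"] * (1.0 - ch["loss_rate"])
        # Longer loss bursts are harder for erasure codes to smooth out,
        # so discount bursty channels beyond their average loss rate.
        goodput *= 1.0 / (1.0 + burst_penalty * (ch["avg_burst_len"] - 1.0))
        weights.append(goodput)
    total = sum(weights)
    return [min(ch["capacity"], target_rate_kbps * w / total)
            for ch, w in zip(channels, weights)]

if __name__ == "__main__":
    senders = [
        {"capacity": 1200, "loss_rate": 0.02, "avg_burst_len": 1.5},
        {"capacity": 800,  "loss_rate": 0.05, "avg_burst_len": 4.0},
        {"capacity": 600,  "loss_rate": 0.01, "avg_burst_len": 1.1},
    ]
    print(allocate_rates(senders, target_rate_kbps=1500))
```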

    Implementation and validation of an adaptive FEC mechanism for video transmission

    This research focuses on investigating the FEC mechanism as an error recovery technique over a wireless network. The existing adaptive FEC mechanism faces a major drawback, which is the reduction of recovery performance caused by injecting too many excessive FEC packets into the network. Thus, this paper proposes the implementation of an enhanced adaptive FEC (EnAFEC) mechanism for video transmission together with its validation process. There are two propositions in the EnAFEC enhancement: block length adaptation and implementation, and determination of a suitable smoothing factor value. EnAFEC adjusts the FEC packets based on the wireless network condition so that excessive FEC packets can be reduced. The proposed enhancement is implemented in a simulation environment using the NS-2 network simulator. The simulation results show that EnAFEC generates fewer FEC packets than the other types of adaptive FEC (EAFEC and Mend FEC). In addition, a validation phase is conducted to verify that the proposed enhancement functions correctly and represents a real network situation. In the validation phase, the results obtained from the simulation are compared to the outputs of the other adaptive FEC mechanisms. The validation results show that the mechanism is successfully implemented in NS-2, since the number of packet losses falls within the overlapping confidence intervals.
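    The sketch below shows a generic adaptive-FEC loop of the kind described above, not the EnAFEC algorithm itself: the sender smooths the observed loss rate with an exponentially weighted moving average and sizes the redundancy of each FEC block to cover the expected losses. The smoothing factor and block length are assumed values.

```python
# A minimal sketch, assuming per-block loss feedback from the receiver.
import math

ALPHA = 0.3            # smoothing factor (assumed)
BLOCK_LEN = 16         # source packets per FEC block (assumed)

class AdaptiveFEC:
    def __init__(self, alpha=ALPHA, block_len=BLOCK_LEN):
        self.alpha = alpha
        self.block_len = block_len
        self.smoothed_loss = 0.0

    def report_loss(self, observed_loss_rate):
        """Feed back the loss rate measured over the last block."""
        self.smoothed_loss = (self.alpha * observed_loss_rate
                              + (1 - self.alpha) * self.smoothed_loss)

    def redundancy(self):
        """Number of FEC packets to add so the block survives the expected losses."""
        expected_lost = self.smoothed_loss * self.block_len
        return math.ceil(expected_lost)

if __name__ == "__main__":
    fec = AdaptiveFEC()
    for loss in [0.05, 0.20, 0.10, 0.02]:
        fec.report_loss(loss)
        print(f"smoothed={fec.smoothed_loss:.3f}  fec_packets={fec.redundancy()}")
```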

    Packet prioritizing and delivering for multimedia streaming

    Ph.D. (Doctor of Philosophy)

    Research and developments of distributed video coding

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The recently developed Distributed Video Coding (DVC) is typically suited to applications such as wireless/wired video sensor networks and mobile cameras, where traditional video coding standards are not feasible due to the constrained computation at the encoder. With DVC, the computational burden is moved from the encoder to the decoder, and compression efficiency is achieved via joint decoding at the decoder. The practical application of DVC is referred to as Wyner-Ziv (WZ) video coding, where side information is available at the decoder to perform joint decoding. This joint decoding inevitably results in a very complex decoder. Much current work on WZ video coding emphasises how to improve the coding performance but neglects the huge complexity incurred at the decoder, even though the complexity of the decoder has a direct influence on the system output. The first stage of this research targets optimisation of the decoder in pixel-domain WZ video coding (PDWZ) while still achieving similar compression performance. More specifically, four issues are addressed, optimising the input block size, the side information generation, the side information refinement process and the feedback channel, respectively. Transform-domain WZ video coding (TDWZ) has distinctly superior performance to PDWZ because spatial redundancy is exploited during encoding. However, since there is no motion estimation at the encoder in WZ video coding, the temporal correlation is not exploited at the encoder in any current WZ video coding scheme. In the middle stage of this research, the 3D DCT is adopted in TDWZ to remove redundancy in both the spatial and temporal directions, thus providing even higher coding performance. In the next step, the performance of transform-domain Distributed Multiview Video Coding (DMVC) is investigated. In particular, three types of transform-domain DMVC frameworks are studied: transform-domain DMVC using TDWZ based on the 2D DCT, transform-domain DMVC using TDWZ based on the 3D DCT, and transform-domain residual DMVC using TDWZ based on the 3D DCT. One important application of the WZ coding principle is error resilience, and there have been several attempts to apply WZ error-resilient coding to current video coding standards, e.g. H.264/AVC or MPEG-2. The final stage of this research is the design of a WZ error-resilient scheme for a wavelet-based video codec. To balance the trade-off between error resilience and bandwidth consumption, the proposed scheme emphasises the protection of the Region of Interest (ROI) area. Efficient bandwidth utilisation is achieved by the mutual efforts of WZ coding and sacrificing the quality of unimportant areas. In summary, this research contributes several advances in WZ video coding. First, it builds an efficient PDWZ codec with an optimised decoder. Secondly, it builds an advanced TDWZ based on the 3D DCT, which is then applied to multiview video coding to realise advanced transform-domain DMVC. Finally, it designs an efficient error-resilient scheme for a wavelet video codec, with which the trade-off between bandwidth consumption and error resilience can be better balanced.
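    The toy sketch below illustrates the pixel-domain Wyner-Ziv idea in the most reduced form: side information is generated by interpolating the two neighbouring key frames, and each pixel is reconstructed by clamping the side information into the quantisation bin signalled by the Wyner-Ziv bits, here assumed to be received error-free. Real PDWZ decoders recover the bin indices with channel decoding (e.g. turbo or LDPC codes); that step, the quantisation step size and the frame values are omitted or assumed here.

```python
# A minimal sketch, assuming error-free quantisation-bin indices at the decoder.
import numpy as np

Q_STEP = 16  # quantisation step for the Wyner-Ziv frame (assumed)

def side_information(key_prev: np.ndarray, key_next: np.ndarray) -> np.ndarray:
    """Simple temporal interpolation between the adjacent key frames."""
    return (key_prev.astype(float) + key_next.astype(float)) / 2.0

def reconstruct(bin_idx: np.ndarray, side_info: np.ndarray, q=Q_STEP) -> np.ndarray:
    """Clamp the side information into the decoded quantisation bin."""
    lo, hi = bin_idx * q, (bin_idx + 1) * q - 1
    return np.clip(side_info, lo, hi)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    wz_frame = rng.integers(0, 256, size=(4, 4))
    key_prev = np.clip(wz_frame - 3, 0, 255)       # toy neighbouring key frames
    key_next = np.clip(wz_frame + 5, 0, 255)
    y = side_information(key_prev, key_next)        # decoder-side estimate
    bins = wz_frame // Q_STEP                       # bin indices the encoder conveys
    x_hat = reconstruct(bins, y)
    print("mean abs error:", np.abs(x_hat - wz_frame).mean())
```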