    Error and Congestion Resilient Video Streaming over Broadband Wireless

    In this paper, error resilience is achieved by adaptive, application-layer rateless channel coding, which is used to protect H.264/Advanced Video Coding (AVC) data-partitioned video. Packetization is an effective tool for controlling error rates and, in the paper, source-coded data partitioning serves to allocate smaller packets to the more important compressed video data. The scheme is applied to real-time streaming across a broadband wireless link, and the advantages of rateless code rate adaptivity are then demonstrated. Because the data partitions of a video slice are each assigned to different network packets, in congestion-prone wireless networks the increased number of packets per slice and their size disparity may increase the packet loss rate from buffer overflows. As a form of congestion resilience, the paper recommends packet-size dependent scheduling as a relatively simple way of alleviating the buffer-overflow problem arising from data-partitioned packets. The paper also contributes an analysis of data partitioning and packet sizes as a prelude to considering scheduling regimes. The combination of adaptive channel coding and prioritized packetization for error resilience with packet-size dependent scheduling results in a robust streaming scheme specialized for broadband wireless and real-time applications such as video conferencing, video telephony, and telemedicine.
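
    As a rough illustration of the packet-size dependent scheduling idea summarised above, the Python sketch below serves smaller queued packets first, so the small, high-importance data-partition packets (such as partition A) are least exposed to buffer overflow. The queue model and field names are assumptions made for illustration, not the paper's actual scheduler.

```python
import heapq
from itertools import count

class SizeAwareScheduler:
    """Serve smaller packets first, so the small, high-importance
    data-partition packets (e.g. partition A) are the least likely to be
    delayed or dropped when the wireless send buffer fills up."""

    def __init__(self):
        self._queue = []
        self._tie = count()  # preserves FIFO order among equal-sized packets

    def enqueue(self, size_bytes, partition, payload=b""):
        heapq.heappush(self._queue, (size_bytes, next(self._tie), partition, payload))

    def dequeue(self):
        if not self._queue:
            return None
        size_bytes, _, partition, payload = heapq.heappop(self._queue)
        return partition, size_bytes, payload

# Example: the small partition-A packet leaves the buffer before larger B/C packets.
sched = SizeAwareScheduler()
sched.enqueue(900, 'B')
sched.enqueue(120, 'A')
sched.enqueue(700, 'C')
print(sched.dequeue()[0])  # -> 'A'
```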

    Error Resilience in Heterogeneous Visual Communications

    A critical and challenging aspect of visual communication technologies is to immunize visual information against transmission errors. In order to protect visual content effectively against such errors, the various kinds of heterogeneity involved in multimedia delivery need to be considered, such as heterogeneity in compressed stream characteristics, channel conditions, and multi-user and multi-hop configurations. The main theme of this dissertation is to explore these heterogeneities in error-resilient visual communications so as to deliver different visual content over heterogeneous networks with good visual quality. Concurrently transmitting multiple video streams in error-prone environments faces many challenges: video content characteristics are heterogeneous, transmission bandwidth is limited, and user device capabilities vary. These challenges prompt the need for an integrated approach to error protection and resource allocation. One motivation of this dissertation is to develop such an integrated approach for an emerging application of multi-stream video aggregation, i.e., multi-point video conferencing. We propose a distributed multi-point video conferencing system that employs packet division multiplexing access (PDMA)-based error protection and resource allocation, and explores multi-hop awareness to deliver good and fair visual quality to end users. When the transport-layer mechanism, such as forward error correction (FEC), cannot provide sufficient protection for the payload stream, unrecovered transmission errors may lead to visual distortions at the decoder. To mitigate these distortions, concealment techniques can be applied at the decoder to provide an approximation of the original content. Because image characteristics are heterogeneous, different concealment approaches are needed to accommodate the different nature of the lost image content. We address this heterogeneity and propose a classification framework that adaptively selects a suitable error concealment technique for each damaged image area. The analysis and extensive experimental results in this dissertation demonstrate that the proposed integrated approach of FEC and resource allocation, as well as the new classification-based error concealment approach, can significantly outperform conventional error-resilient approaches.
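
    To make the classification idea concrete, the following sketch shows a hypothetical decoder-side selector that chooses a concealment method for each damaged block from simple local features. The feature names, thresholds and the two concealment routines are illustrative assumptions, not the dissertation's actual classification framework.

```python
import numpy as np

def spatial_interpolation(neighbour_blocks):
    # Conceal by averaging the available spatial neighbour blocks.
    return np.mean(neighbour_blocks, axis=0)

def temporal_copy(reference_block):
    # Conceal by copying the co-located block from the previous frame.
    return reference_block.copy()

def conceal_block(neighbour_blocks, reference_block, motion_activity, edge_strength):
    """Select a concealment mode from simple local features.

    High motion makes the previous frame a poor predictor, so spatial
    interpolation is preferred; a static, strongly edged area is better
    served by a temporal copy, which preserves structure.  Thresholds are
    illustrative only.
    """
    if motion_activity > 4.0:
        return spatial_interpolation(neighbour_blocks)
    if edge_strength > 20.0:
        return temporal_copy(reference_block)
    return spatial_interpolation(neighbour_blocks)

# Example with 8x8 blocks.
neighbours = [np.full((8, 8), v, dtype=float) for v in (100, 110, 105, 95)]
reference = np.full((8, 8), 102, dtype=float)
recovered = conceal_block(neighbours, reference, motion_activity=1.2, edge_strength=35.0)
print(recovered.mean())  # -> 102.0, i.e. the temporal copy was chosen
```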

    Content-Aware Multimedia Communications

    The demands for fast, economic and reliable dissemination of multimedia information are steadily growing within our society. While people and the economy increasingly rely on communication technologies, engineers still struggle with their growing complexity. Complexity in multimedia communication originates from several sources. The most prominent is the unreliability of packet networks like the Internet. Recent advances in scheduling and error control mechanisms for streaming protocols have shown that the quality and robustness of multimedia delivery can be improved significantly when protocols are aware of the content they deliver. However, the proposed mechanisms require close cooperation between transport systems and application layers, which increases the overall system complexity. Current approaches also require expensive metrics and focus only on particular encoding formats. A general and efficient model has so far been missing. This thesis presents efficient and format-independent solutions to support cross-layer coordination in system architectures. In particular, the first contribution of this work is a generic dependency model that enables transport layers to access content-specific properties of media streams, such as dependencies between data units and their importance. The second contribution is the design of a programming model for streaming communication and its implementation as a middleware architecture. The programming model hides the complexity of protocol stacks behind simple programming abstractions, but exposes cross-layer control and monitoring options to application programmers. For example, our interfaces allow programmers to choose appropriate failure semantics at design time while refining error protection and the visibility of low-level errors at run-time. Using several examples, we show how our middleware simplifies the integration of stream-based communication into large-scale application architectures. An important result of this work is that, despite cross-layer cooperation, neither application nor transport protocol designers experience an increase in complexity. Application programmers can even reuse existing streaming protocols, which effectively increases system robustness.
    Our society's demand for cost-effective and reliable communication is growing steadily. While we make ourselves ever more dependent on modern communication technologies, the engineers of these technologies must both satisfy the demand for the rapid introduction of new products and master the growing complexity of the systems. The transmission of multimedia content such as video and audio data, in particular, is not trivial. One of the most prominent reasons for this is the unreliability of today's networks, such as the Internet. Packet losses and fluctuating delays can massively degrade presentation quality. As recent developments in the area of streaming protocols show, however, the quality and robustness of transmission can be controlled efficiently when streaming protocols exploit information about the content of the data they carry. Existing approaches that describe the content of multimedia data streams, however, are mostly specialized for individual compression schemes and use computationally expensive metrics. This considerably reduces their practical value. Moreover, the exchange of information requires close cooperation between applications and transport layers. Since the interfaces of current system architectures are not prepared for this, either the interfaces must be extended or alternative architectural concepts must be created. The danger of both variants, however, is that the complexity of a system may thereby increase further. The central goal of this dissertation is therefore to achieve cross-layer coordination while simultaneously reducing complexity. Here the work makes two contributions to the current state of research. First, it defines a universal model for describing content attributes, such as importance values and dependency relationships within a data stream. Transport layers can use this knowledge for efficient error control. Second, the work describes the Noja programming model for multimedia middleware. Noja defines abstractions for the transmission and control of multimedia streams that enable the coordination of streaming protocols with applications. For example, programmers can select appropriate failure semantics and communication topologies and then refine and control the concrete error protection at run-time.
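
    As an illustration of what a generic, format-independent dependency model can give a transport layer, the sketch below records each data unit's importance and its dependencies, so the transport layer can work out which received units remain decodable after a loss. The class and method names are assumptions made for illustration, not the thesis's actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class DataUnit:
    uid: int
    importance: float                                  # relative weight for scheduling / FEC
    depends_on: list = field(default_factory=list)     # uids this unit needs to decode

class DependencyModel:
    """Format-independent view of a media stream for the transport layer."""

    def __init__(self, units):
        self.units = {u.uid: u for u in units}

    def decodable(self, received):
        """Return the uids that were received and whose dependencies are decodable."""
        received = set(received) & self.units.keys()
        ok = set()
        changed = True
        while changed:                 # propagate until no unit's status changes
            changed = False
            for uid in received - ok:
                if all(d in ok for d in self.units[uid].depends_on):
                    ok.add(uid)
                    changed = True
        return ok

# Example: a reference unit (e.g. an I-frame) and two units predicted from it.
units = [DataUnit(0, 1.0), DataUnit(1, 0.4, [0]), DataUnit(2, 0.4, [1])]
model = DependencyModel(units)
print(sorted(model.decodable({1, 2})))   # -> []  : losing unit 0 breaks the chain
print(sorted(model.decodable({0, 2})))   # -> [0] : unit 2 misses its reference 1
```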

    Video Streaming over Vehicular Ad Hoc Networks: A Comparative Study and Future Perspectives

    Vehicular Ad Hoc Networks (VANETs) have emerged as an important research area that provides ubiquitous short-range connectivity among moving vehicles. Such networks enable efficient traffic safety and infotainment applications. One of the promising applications is video transmission in vehicle-to-vehicle or vehicle-to-infrastructure environments. However, video streaming over vehicular environments is a daunting task due to the high mobility of vehicles. This paper presents a survey of the state of the art in video streaming over VANETs. Furthermore, a taxonomy of vehicular video transmission is presented, with special focus on significant applications and their requirements and challenges, video content sharing, multi-source video streaming, and video broadcast services. The comparative study compares video streaming schemes in terms of the error resilience technique used, the objective of the study, a summary of the approach, the simulator employed, and the type of video sharing. Lastly, we discuss open issues and research directions related to video communication over VANETs.

    Enhanced Multimedia Exchanges over the Internet

    Although the Internet was not originally designed for exchanging multimedia streams, consumers heavily depend on it for audiovisual data delivery. The intermittent nature of multimedia traffic, the unguaranteed underlying communication infrastructure, and dynamic user behavior collectively result in the degradation of Quality-of-Service (QoS) and Quality-of-Experience (QoE) perceived by end-users. Consequently, the volume of signalling messages is inevitably increased to compensate for the degradation of the desired service qualities. Improved multimedia services could leverage adaptive streaming as well as blockchain-based solutions to enhance media-rich experiences over the Internet at the cost of increased signalling volume. Many recent studies in the literature provide signalling reduction and blockchain-based methods for authenticated media access over the Internet while utilizing resources quasi-efficiently. To further increase the efficiency of multimedia communications, novel signalling overhead and content access latency reduction solutions are investigated in this dissertation, including: (1) the first two research topics utilize steganography to reduce signalling bandwidth utilization while increasing the capacity of the multimedia network; and (2) the third research topic utilizes multimedia content access request management schemes to guarantee throughput values for servicing users, end-devices, and the network. Signalling of multimedia streaming is generated at every layer of the communication protocol stack: at the highest layer, segment requests are generated, and at the lower layers, byte tracking messages are exchanged. Through leveraging steganography, essential signalling information is encoded within multimedia payloads to reduce the amount of resources consumed by non-payload data. The first steganographic solution hides signalling messages within multimedia payloads, thereby freeing intermediate node buffers from queuing non-payload packets. Consequently, source nodes are capable of delivering control information to receiving nodes at no additional network overhead. A utility function is designed to minimize the volume of overhead exchanged while minimizing visual artifacts. Therefore, the proposed scheme is designed to leverage the fidelity of the multimedia stream to reduce the largest amount of control overhead with the lowest negative visual impact. The second steganographic solution enables protocol translation through embedding packet header information within payload data to alternatively utilize lightweight headers. The protocol translator leverages a proposed utility function to enable the maximum number of translations while maintaining QoS and QoE requirements in terms of packet throughput and playback bit-rate. As the number of multimedia users and sources increases, decentralized content access and management over a blockchain-based system is inevitable. Blockchain technologies suffer from large processing latencies, which reduce the throughput of a multimedia network. Reducing blockchain-based access latencies is therefore essential to maintaining a decentralized, scalable model with seamless functionality and efficient utilization of resources. Adapting blockchains to feeless applications will then port the utility of ledger-based networks to audiovisual applications in a faultless manner. 
The proposed transaction processing scheme will enable ledger maintainers to sustain the desired throughputs necessary for delivering expected QoS and QoE values for decentralized audiovisual platforms. A block slicing algorithm is designed to ensure that the ledger maintenance strategy benefits the operations of the blockchain-based multimedia network. Using the proposed algorithm, the throughput and latency of operations within the multimedia network are then maintained at a desired level.
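
    As a toy illustration of hiding signalling inside the media payload, the sketch below embeds a short control message in the least-significant bits of payload bytes and recovers it at the receiver. The payload format, embedding positions and distortion model are illustrative assumptions; the dissertation's utility function that trades overhead reduction against visual impact is not modelled here.

```python
def embed_bits(payload, bits):
    """Hide signalling bits in the LSBs of payload bytes (1 bit per byte)."""
    if len(bits) > len(payload):
        raise ValueError("payload too short for the signalling message")
    stego = bytearray(payload)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite the LSB only
    return bytes(stego)

def extract_bits(stego, n_bits):
    """Recover the first n_bits signalling bits from the payload LSBs."""
    return [stego[i] & 0x01 for i in range(n_bits)]

# Example: a 4-bit control message rides inside the media payload,
# so no extra signalling packet has to be queued at intermediate nodes.
payload = bytes([120, 121, 119, 122, 118, 120])
message = [1, 0, 1, 1]
stego = embed_bits(payload, message)
assert extract_bits(stego, 4) == message
print(max(abs(a - b) for a, b in zip(payload, stego)))  # per-byte distortion <= 1
```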

    Video over DSL with LDGM Codes for Interactive Applications

    Digital Subscriber Line (DSL) network access is subject to error bursts, which, for interactive video, can introduce unacceptable latencies if video packets need to be re-sent. If the video packets are protected against errors with Forward Error Correction (FEC), calculation of the application-layer channel codes themselves may also introduce additional latency. This paper proposes Low-Density Generator Matrix (LDGM) codes rather than other popular codes because they are more suitable for interactive video streaming, not only for their computational simplicity but also for their licensing advantage. The paper demonstrates that a reduction of up to 4 dB in video distortion is achievable with LDGM Application Layer (AL) FEC. In addition, an extension to the LDGM scheme is demonstrated, which works by rearranging the columns of the parity check matrix so as to make it even more resilient to burst errors. Telemedicine and video conferencing are typical target applications.
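
    The following sketch shows the basic LDGM principle at packet level: each parity packet is the XOR of a small, random subset of source packets, which keeps encoding cheap enough for interactive latencies. The degree, packet size and random structure are illustrative assumptions, not the code construction or the burst-oriented column rearrangement evaluated in the paper.

```python
import random

def ldgm_encode(source_packets, n_parity, degree=3, seed=1):
    """Produce parity packets, each the XOR of `degree` random source packets.

    The sparse (low-density) generator keeps the encoding cost low, which is
    the property that makes LDGM attractive for interactive video streaming.
    """
    rng = random.Random(seed)
    size = len(source_packets[0])
    parity = []
    for _ in range(n_parity):
        rows = rng.sample(range(len(source_packets)), degree)
        p = bytearray(size)
        for r in rows:
            for i in range(size):
                p[i] ^= source_packets[r][i]   # bytewise XOR of the chosen packets
        parity.append((rows, bytes(p)))        # keep the connections for decoding
    return parity

# Example: 8 source packets of 4 bytes each, protected by 4 parity packets.
src = [bytes([k] * 4) for k in range(8)]
for rows, p in ldgm_encode(src, n_parity=4):
    print(rows, p.hex())
```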

    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely Slepian–Wolf and Wyner–Ziv. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
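
    For reference, the Slepian–Wolf theorem underlying DVC states that two correlated sources X and Y can be encoded separately and decoded jointly without loss at any rate pair satisfying

\[
R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y),
\]

    while the Wyner–Ziv theorem extends this to lossy coding of X with side information Y available only at the decoder.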

    Error resilience and concealment techniques for high-efficiency video coding

    This thesis investigates the problem of robust coding and error concealment in High Efficiency Video Coding (HEVC). After a review of the current state of the art, a simulation study of error robustness revealed that HEVC offers weak protection against network losses, with a significant impact on video quality degradation. Based on this evidence, the first contribution of this work is a new method to reduce the temporal dependencies between motion vectors, improving the decoded video quality without compromising the compression efficiency. The second contribution of this thesis is a two-stage approach for reducing the mismatch of temporal predictions when video streams are received with errors or lost data. At the encoding stage, the reference pictures are dynamically distributed based on a constrained Lagrangian rate-distortion optimisation to reduce the number of predictions from a single reference. At the streaming stage, a prioritization algorithm, based on spatial dependencies, selects a reduced set of motion vectors to be transmitted as side information, in order to reduce mismatched motion predictions at the decoder. The problem of error-concealment-aware video coding is also investigated to enhance the overall error robustness. A new approach based on scalable coding and optimal error concealment selection is proposed, in which the optimal error concealment modes are found by simulating transmission losses, followed by a saliency-weighted optimisation. Moreover, recovery residual information is encoded using a rate-controlled enhancement layer. Both are transmitted to the decoder to be used in case of data loss. Finally, an adaptive error resilience scheme is proposed to dynamically predict the video stream that achieves the highest decoded quality for a particular loss case. A neural network selects among the various video streams, encoded with different levels of compression efficiency and error protection, based on information from the video signal, the coded stream and the transmission network. Overall, the new robust video coding methods investigated in this thesis yield consistent quality gains in comparison with other existing methods, as well as those implemented in the HEVC reference software. Furthermore, the trade-off between coding efficiency and error robustness is also better in the proposed methods.
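
    As a minimal sketch of the constrained Lagrangian rate-distortion optimisation mentioned above, the code below picks, for each block, the reference candidate with the lowest cost J = D + λR, subject to a cap on how many predictions may come from any single reference. The candidate layout, λ value and cap are illustrative assumptions, not the thesis's encoder.

```python
def select_reference(candidates, lam=0.85, max_uses_per_ref=2, uses=None):
    """Pick the reference minimising J = D + lambda * R among those still
    allowed by the per-reference usage constraint.

    candidates: list of (ref_id, distortion, rate_bits).
    uses: running count of how often each reference has been chosen, so that
          predictions are spread over several references (error resilience).
    """
    uses = uses if uses is not None else {}
    allowed = [c for c in candidates if uses.get(c[0], 0) < max_uses_per_ref]
    if not allowed:                      # constraint exhausted: fall back to all
        allowed = candidates
    ref, dist, rate = min(allowed, key=lambda c: c[1] + lam * c[2])
    uses[ref] = uses.get(ref, 0) + 1
    return ref, dist + lam * rate, uses

# Example: reference 0 is cheapest, but the cap forces later blocks onto ref 1,
# limiting the damage if reference 0 is lost in transmission.
uses = {}
for block in range(4):
    cands = [(0, 10.0, 12), (1, 12.0, 14)]
    ref, cost, uses = select_reference(cands, uses=uses)
    print(block, "-> ref", ref)
```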

    Copyright protection of scalar and multimedia sensor network data using digital watermarking

    This thesis records research on watermarking techniques that address the issue of copyright protection of scalar data in WSNs and of image data in WMSNs, in order to ensure that proprietary information remains safe between the sensor nodes in both. The first objective is to develop the LKR watermarking technique for the copyright protection of scalar data in WSNs. The second objective is to develop the GPKR watermarking technique for the copyright protection of image data in WMSNs.
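
    To illustrate the general idea of watermarking scalar sensor readings for copyright protection (not the LKR or GPKR techniques themselves, which are defined in the thesis), the sketch below embeds a key-derived bit into the least-significant bit of each reading and lets the sink verify ownership. The key handling and bit derivation are illustrative assumptions.

```python
import hashlib

def watermark_bit(key: bytes, index: int) -> int:
    # Derive a pseudo-random watermark bit from the secret key and sample index.
    digest = hashlib.sha256(key + index.to_bytes(4, "big")).digest()
    return digest[0] & 0x01

def embed(readings, key):
    """Embed the key-derived bit sequence into the LSB of each integer reading."""
    return [(r & ~1) | watermark_bit(key, i) for i, r in enumerate(readings)]

def verify(readings, key):
    """Fraction of readings whose LSB matches the expected watermark bit."""
    hits = sum((r & 1) == watermark_bit(key, i) for i, r in enumerate(readings))
    return hits / len(readings)

# Example: the sink can check the provenance of scalar data with the shared key.
key = b"node-7-secret"
marked = embed([231, 230, 228, 233, 235, 229], key)
print(verify(marked, key))                   # -> 1.0 for untampered, watermarked data
print(verify([r ^ 1 for r in marked], key))  # flipped LSBs -> low match ratio
```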