26 research outputs found

    A robust error detection mechanism for H.264/AVC coded video sequences based on support vector machines

    Get PDF
    Current trends in wireless communications provide fast and location-independent access to multimedia services. Due to its high compression efficiency, H.264/AVC is expected to become the dominant underlying technology in the delivery of future wireless video applications. The error resilient mechanisms adopted by this standard alleviate the problem of spatio-temporal propagation of visual artifacts caused by transmission errors by dropping and concealing all macroblocks (MBs) contained within corrupted segments, including uncorrupted MBs. Concealing these uncorrupted MBs generally causes a reduction in quality of the reconstructed video sequence. Peer reviewed.

    Resilient Digital Video Transmission over Wireless Channels using Pixel-Level Artefact Detection Mechanisms

    Get PDF
    Recent advances in communications and video coding technology have brought multimedia communications into everyday life, where a variety of services and applications are being integrated within different devices such that multimedia content is provided everywhere and on any device. H.264/AVC provides a major advance on preceding video coding standards, obtaining as much as twice the coding efficiency of those standards (Richardson I.E.G., 2003; Wiegand T. & Sullivan G.J., 2007). Furthermore, this new codec inserts video-related information within network abstraction layer units (NALUs), which facilitates the transmission of H.264/AVC coded sequences over a variety of network environments (Stockhammer T. & Hannuksela M.M., 2005), making it applicable to a broad range of applications such as TV broadcasting, mobile TV, video-on-demand, digital media storage, high definition TV, multimedia streaming and conversational applications. Real-time wireless conversational and broadcast applications are particularly challenging as, in general, reliable delivery cannot be guaranteed (Stockhammer T. & Hannuksela M.M., 2005). The H.264/AVC standard specifies several error resilient strategies to minimise the effect of transmission errors on the perceptual quality of the reconstructed video sequences. However, these methods assume a packet-loss scenario where the receiver discards and conceals all the video information contained within a corrupted NALU packet. This implies that the error resilient methods adopted by the standard operate at a lower bound, since not all the information contained within a corrupted NALU packet is unusable (Stockhammer T. et al., 2003). Peer reviewed.

    Adaptive robust video broadcast via satellite

    Get PDF
    © 2016 Springer Science+Business Media New York. With increasing demand for multimedia content over channels with limited bandwidth and heavy packet losses, higher coding efficiency and stronger error resiliency are required more than ever before. Coding efficiency and error resiliency are two opposing processes that require appropriate balancing. On the source encoding side, the H.264/AVC video encoder can provide higher compression with strong error resiliency, while on the channel error correction coding side, the raptor code has proven its effectiveness, with only modest overhead required for the recovery of lost data. This paper compares the efficiency and overhead of both the raptor codes and the error resiliency techniques of video standards so that both can be balanced for better compression and quality. The result is further improved by confining the robust stream to periods of poor channel conditions by adaptively switching between the video streams using the switching frames introduced in H.264/AVC. In this case the video stream is initially transmitted without error resiliency, assuming the channel to be completely error free, and robustness is then increased based on the channel conditions and/or user demand. The results showed that although switching can increase the peak signal-to-noise ratio in the presence of losses, its excessive repetition can be irritating to viewers. Therefore, to evaluate the perceptual quality of the video streams and to find the optimum number of switches during a session, these streams were scored by different viewers for quality of enhancement. The results of the proposed scheme show an increase of 3 to 4 dB in peak signal-to-noise ratio with acceptable quality of enhancement.
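The 3 to 4 dB gains above are in peak signal-to-noise ratio, which is derived from the mean squared error between a reference frame and its reconstruction. A minimal pixel-domain sketch (8-bit samples assumed; this is an illustration, not the paper's evaluation code):

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equally sized frames."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# A flat 8-bit frame with a uniform error of 16 grey levels:
ref = np.full((64, 64), 128, dtype=np.uint8)
bad = ref + 16
print(round(psnr(ref, bad), 2))  # → 24.05
```

Because the scale is logarithmic, a 3 dB improvement corresponds to roughly halving the mean squared error.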

    A support vector machine approach for detection and localization of transmission errors within standard H.263++ decoders

    Get PDF
    Wireless multimedia services are becoming increasingly popular, boosting the need for better quality-of-experience (QoE) at minimal cost. The standard codecs employed by these systems remove spatio-temporal redundancies to minimize the bandwidth required. However, this increases the exposure of the system to transmission errors, thus presenting a significant degradation in perceptual quality of the reconstructed video sequences. A number of mechanisms were investigated in the past to make these codecs more robust against transmission errors. Nevertheless, these techniques achieved little success, forcing the transmission to be held at lower bit-error rates (BERs) to guarantee acceptable quality. This paper presents a novel solution to this problem based on the error detection capabilities of the transport protocols to identify potentially corrupted groups-of-blocks (GOBs). The algorithm uses a support vector machine (SVM) at its core to localize visually impaired macroblocks (MBs) that require concealment within these GOBs. Hence, this method drastically reduces the region to be concealed compared to state-of-the-art error resilient strategies, which assume a packet loss scenario. Testing on a standard H.263++ codec confirms that a significant gain in quality is achieved, with error detection rates of 97.8% and peak signal-to-noise ratio (PSNR) gains of up to 5.33 dB. Moreover, most of the undetected errors produce minimal visual artifacts and are thus of little influence on the perceived quality of the reconstructed sequences. Peer reviewed.
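As a rough illustration of the SVM-based localization step: once trained, a linear SVM reduces to the decision rule sign(w·x + b) over a per-macroblock feature vector. The features, weights and bias below are hypothetical (the paper's actual feature set, kernel and trained parameters are not reproduced here):

```python
import numpy as np

# Hypothetical per-macroblock features, e.g. [boundary pixel discontinuity,
# inter-frame difference, texture activity].
def svm_decision(features: np.ndarray, w: np.ndarray, b: float) -> int:
    """Linear SVM decision rule sign(w.x + b); 1 = conceal this macroblock."""
    return 1 if float(np.dot(w, features) + b) > 0 else 0

w = np.array([0.8, 0.5, -0.2])  # illustrative weights, not learned values
b = -1.0

clean_mb = np.array([0.1, 0.2, 0.5])      # weak discontinuities
corrupted_mb = np.array([2.5, 1.8, 0.4])  # strong edge mismatch at MB borders

print(svm_decision(clean_mb, w, b))      # → 0 (keep as decoded)
print(svm_decision(corrupted_mb, w, b))  # → 1 (conceal)
```

Running this classifier per macroblock within a flagged GOB is what lets the decoder conceal only the impaired MBs instead of the whole segment.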

    Novel source coding methods for optimising real time video codecs.

    Get PDF
    The quality of the decoded video is affected by errors occurring in the various layers of the protocol stack. In this thesis, disjoint errors occurring in different layers of the protocol stack are investigated with the primary objective of demonstrating the flexibility of the source coding layer. In the first part of the thesis, the errors occurring in the editing layer, due to the coexistence of different video standards in the broadcast market, are addressed. The problems investigated are ‘Field Reversal’ and ‘Mixed Pulldown’. Field Reversal is caused when the interlaced video fields are not shown in the same order as they were captured. This results in a shaky video display, as the fields are not displayed in chronological order. Additionally, Mixed Pulldown occurs when the video frame-rate is up-sampled and down-sampled, when digitised film material is being standardised to suit standard televisions. Novel image processing algorithms are proposed to solve these problems from the source coding layer. In the second part of the thesis, the errors occurring in the transmission layer due to data corruption are addressed. The usage of block-level source error-resilient methods over bit-level channel coding methods is investigated and improvements are suggested. The secondary objective of the thesis is to optimise the proposed algorithms' architecture for real-time implementation, since the problems are of a commercial nature. The Field Reversal and Mixed Pulldown algorithms were tested in real time at MTV (Music Television) and are made available commercially through ‘Cerify’, a Linux-based media testing box manufactured by Tektronix Plc. The channel error-resilient algorithms were tested in a laboratory environment using Matlab and performance improvements were obtained.

    Content-Aware Multimedia Communications

    Get PDF
    The demands for fast, economic and reliable dissemination of multimedia information are steadily growing within our society. While people and economy increasingly rely on communication technologies, engineers still struggle with their growing complexity. Complexity in multimedia communication originates from several sources. The most prominent is the unreliability of packet networks like the Internet. Recent advances in scheduling and error control mechanisms for streaming protocols have shown that the quality and robustness of multimedia delivery can be improved significantly when protocols are aware of the content they deliver. However, the proposed mechanisms require close cooperation between transport systems and application layers which increases the overall system complexity. Current approaches also require expensive metrics and focus on special encoding formats only. A general and efficient model is missing so far. This thesis presents efficient and format-independent solutions to support cross-layer coordination in system architectures. In particular, the first contribution of this work is a generic dependency model that enables transport layers to access content-specific properties of media streams, such as dependencies between data units and their importance. The second contribution is the design of a programming model for streaming communication and its implementation as a middleware architecture. The programming model hides the complexity of protocol stacks behind simple programming abstractions, but exposes cross-layer control and monitoring options to application programmers. For example, our interfaces allow programmers to choose appropriate failure semantics at design time while they can refine error protection and visibility of low-level errors at run-time. Based on some examples we show how our middleware simplifies the integration of stream-based communication into large-scale application architectures. 
An important result of this work is that despite cross-layer cooperation, neither application nor transport protocol designers experience an increase in complexity. Application programmers can even reuse existing streaming protocols, which effectively increases system robustness.
The demand in our society for fast, economical, and reliable communication is growing steadily. While we make ourselves ever more dependent on modern communication technologies, the engineers of these technologies must both satisfy the demand for rapid introduction of new products and master the growing complexity of the systems. The transmission of multimedia content such as video and audio data, in particular, is not trivial. One of the most prominent reasons is the unreliability of today's networks, such as the Internet: packet losses and fluctuating delays can massively impair presentation quality. As recent developments in the area of streaming protocols show, however, the quality and robustness of a transmission can be controlled efficiently if streaming protocols exploit information about the content of the transported data. Existing approaches that describe the content of multimedia data streams, though, are mostly specialized to individual compression schemes and use computationally intensive metrics, which clearly reduces their practical value. Moreover, the information exchange requires close cooperation between applications and transport layers. Since the interfaces of current system architectures are not prepared for this, either the interfaces must be extended or alternative architectural concepts must be created; the danger of both variants, however, is that the complexity of a system can thereby increase further. The central goal of this dissertation is therefore to achieve cross-layer coordination while at the same time reducing complexity.
Here the work makes two contributions to the current state of research. First, it defines a universal model for describing content attributes, such as importance values and dependency relationships within a data stream; transport layers can use this knowledge for efficient error control. Second, the work describes the Noja programming model for multimedia middleware. Noja defines abstractions for the transmission and control of multimedia streams that enable the coordination of streaming protocols with applications. For example, programmers can select suitable failure semantics and communication topologies and then refine and control the concrete error protection at run time.
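The generic dependency model can be pictured as a small graph of data units annotated with an importance score and links to the units they depend on; a transport layer can consult it to decide what is worth (re)transmitting. The names and fields below are illustrative, not the thesis's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class DataUnit:
    uid: int
    importance: float               # e.g. contribution to decoded quality
    depends_on: list = field(default_factory=list)  # units this one needs

def droppable(unit: DataUnit, scheduled: set) -> bool:
    """A unit is useless (safe to drop) if any unit it depends on is
    missing from the set of scheduled/delivered unit ids."""
    return any(dep.uid not in scheduled for dep in unit.depends_on)

# A typical video dependency chain: I <- P <- B.
i_frame = DataUnit(uid=0, importance=1.0)
p_frame = DataUnit(uid=1, importance=0.6, depends_on=[i_frame])
b_frame = DataUnit(uid=2, importance=0.3, depends_on=[i_frame, p_frame])

# If the I frame was lost, transmitting the P frame wastes bandwidth:
print(droppable(p_frame, scheduled={1, 2}))  # → True
```

Because the model only records ids, importances and dependency edges, it stays independent of any particular encoding format, which is the point made above.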

    Multiple Description Coding Using Data Hiding and Regions of Interest for Broadcasting Applications

    Get PDF
    We propose an innovative scheme for multiple description coding (MDC) with region-of-interest (ROI) support to be adopted in high-quality television. The scheme splits the stream into two separate descriptors and preserves the quality of the region of interest even in case one descriptor is completely lost. The residual part of the frame (the background) is instead modeled through a checkerboard pattern, alternating the strength of the quantization. The decoder is provided with the necessary side information to reconstruct the frame properly, namely the ROI parameters and location, via a suitable data hiding procedure. Using data hiding, reconstruction parameters are embedded in the transform coefficients, thus allowing an improvement in PSNR of the single descriptions at the cost of a negligible overhead. To demonstrate its effectiveness, the algorithm has been implemented in two different scenarios, using the reference H.264/AVC codec and an MJPEG framework, to evaluate the performance in the absence of motion-compensated frames on 720p video sequences.
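The checkerboard splitting of the background can be sketched as follows. This simplified version quantizes pixel values directly at macroblock granularity, whereas the actual scheme alternates quantization strength on transform coefficients; block size and quantizer steps are illustrative:

```python
import numpy as np

def split_descriptors(frame: np.ndarray, q_fine: int, q_coarse: int):
    """Build two descriptors with complementary checkerboard quantization:
    where descriptor 1 is finely quantized, descriptor 2 is coarse, and
    vice versa (16x16 macroblock granularity assumed)."""
    h, w = frame.shape
    mb = 16
    yy, xx = np.meshgrid(np.arange(h) // mb, np.arange(w) // mb, indexing="ij")
    mask = (yy + xx) % 2 == 0  # True where descriptor 1 gets the fine quantizer

    def quantize(x, q):
        return (x // q) * q  # uniform quantization to multiples of q

    d1 = np.where(mask, quantize(frame, q_fine), quantize(frame, q_coarse))
    d2 = np.where(mask, quantize(frame, q_coarse), quantize(frame, q_fine))
    return d1, d2

frame = (np.arange(64 * 64, dtype=np.int64).reshape(64, 64)) % 256
d1, d2 = split_descriptors(frame, q_fine=4, q_coarse=32)
```

If both descriptors arrive, the decoder keeps the finely quantized half of each, so every block is reconstructed at fine quality; if one is lost, the whole frame is still decodable, only half the blocks at coarse quality.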

    Resource-Constrained Low-Complexity Video Coding for Wireless Transmission

    Get PDF

    Reducing Internet Latency: A Survey of Techniques and Their Merits

    Get PDF
    Bob Briscoe, Anna Brunstrom, Andreas Petlund, David Hayes, David Ros, Ing-Jyh Tsang, Stein Gjessing, Gorry Fairhurst, Carsten Griwodz, Michael Welzl. Peer reviewed. Preprint.

    ERROR CORRECTION CODE-BASED EMBEDDING IN ADAPTIVE RATE WIRELESS COMMUNICATION SYSTEMS

    Get PDF
    In this dissertation, we investigated methods for developing embedded channels within the error correction mechanisms used to support adaptive rate communication systems. We developed an error correction code-based embedding scheme suitable for application in modern wireless data communication standards. We implemented the scheme for both low-density parity check block codes and binary convolutional codes. While error correction code-based information hiding has been previously presented in the literature, we sought to take advantage of the fact that these wireless systems can change their modulation and coding rates in response to changing channel conditions. We utilized this functionality to incorporate knowledge of the channel state into the scheme, which led to an increase in embedding capacity. We conducted extensive simulations to establish the performance of our embedding methodologies. Results from these simulations enabled the development of models to characterize the behavior of the embedded channels and identify sources of distortion in the underlying communication system. Finally, we developed expressions to define limitations on the capacity of these channels subject to a variety of constraints, including the selected modulation type and coding rate of the communication system, the current channel state, and the specific embedding implementation.
    Commander, United States Navy
    Approved for public release; distribution is unlimited.
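One classic way to realize an embedded channel inside an error correction code, in the spirit described above (though not necessarily this dissertation's exact scheme), is syndrome embedding: the sender injects a deliberate, correctable "error" into a valid codeword, and the receiver's decoder recovers the hidden bits from the error syndrome while still correcting the word. A toy Hamming(7,4) sketch:

```python
import numpy as np

# Columns of H are the binary representations of 1..7, so the syndrome of a
# single bit flip at (1-based) position p equals p written in binary.
H = np.array([[(p >> i) & 1 for p in range(1, 8)] for i in range(3)])

def embed(codeword: np.ndarray, hidden: int) -> np.ndarray:
    """Embed 3 hidden bits (value 0..7) by flipping at most one bit of a
    valid Hamming(7,4) codeword; hidden == 0 means 'flip nothing'."""
    out = codeword.copy()
    if hidden:
        out[hidden - 1] ^= 1
    return out

def extract(received: np.ndarray) -> int:
    """The decoder's syndrome directly yields the hidden value."""
    s = H.dot(received) % 2
    return int(s[0] | (s[1] << 1) | (s[2] << 2))

cw = np.zeros(7, dtype=int)        # the all-zeros word is a valid codeword
stego = embed(cw, hidden=5)
print(extract(stego))              # → 5
```

The cost is that the code's own correction budget is spent on the embedded bit, which is why channel-state-aware schemes like the one above only embed when the link can afford it.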