5 research outputs found

    Error resilient image transmission using T-codes and edge-embedding

    Current image communication applications involve image transmission over noisy channels, where the image is damaged. The loss of synchronization at the decoder due to these errors increases the damage in the reconstructed image. Our main goal in this research is to develop an algorithm that can detect errors, achieve synchronization, and conceal errors. In this thesis we study the performance of T-codes in comparison with Huffman codes and develop an algorithm for selecting the best T-code set, showing that T-codes exhibit better synchronization properties than Huffman codes. We also develop an algorithm that extracts edge patterns from each 8x8 block and classifies them into different classes. In addition, we propose a novel scrambling algorithm that hides the edge pattern of a block in neighbouring 8x8 blocks of the image; this scrambled hidden data is used to detect and conceal errors. Finally, we develop an algorithm to protect the hidden data from damage during transmission.
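    The thesis does not reproduce its implementation here; as a rough, hypothetical sketch of the block-classification step (Sobel gradients and a four-way direction set are illustrative assumptions, not the thesis's actual feature extraction):

```python
import numpy as np

def classify_edge_pattern(block: np.ndarray) -> str:
    """Classify the dominant edge orientation of one 8x8 block.

    Illustrative sketch only: Sobel gradients and a four-class
    labelling are assumptions for demonstration, not the thesis's
    actual feature extraction or class set.
    """
    assert block.shape == (8, 8)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # Sobel x
    ky = kx.T                                                          # Sobel y
    gx = gy = 0.0
    # Accumulate absolute gradient responses over the block interior.
    for i in range(1, 7):
        for j in range(1, 7):
            patch = block[i - 1:i + 2, j - 1:j + 2]
            gx += abs(np.sum(patch * kx))
            gy += abs(np.sum(patch * ky))
    if gx + gy < 1e-3:
        return "none"        # essentially flat block, no edge pattern
    angle = np.degrees(np.arctan2(gy, gx))
    if angle < 22.5:
        return "vertical"    # strong horizontal gradient => vertical edge
    if angle > 67.5:
        return "horizontal"
    return "diagonal"

flat = np.full((8, 8), 128.0)
vert_edge = np.hstack([np.zeros((8, 4)), np.full((8, 4), 255.0)])
```

    The class label of each block would then be scrambled into neighbouring blocks as side information for error detection and concealment.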

    A support vector machine approach for detection and localization of transmission errors within standard H.263++ decoders

    Wireless multimedia services are becoming increasingly popular, boosting the need for better quality of experience (QoE) at minimal cost. The standard codecs employed by these systems remove spatio-temporal redundancies to minimize the required bandwidth. However, this increases the system's exposure to transmission errors, causing significant degradation in the perceptual quality of the reconstructed video sequences. A number of mechanisms have been investigated in the past to make these codecs more robust against transmission errors. Nevertheless, these techniques achieved little success, forcing transmission to be held at lower bit-error rates (BERs) to guarantee acceptable quality. This paper presents a novel solution to this problem based on the error detection capabilities of the transport protocols to identify potentially corrupted group-of-blocks (GOBs). The algorithm uses a support vector machine (SVM) at its core to localize the visually impaired macroblocks (MBs) that require concealment within these GOBs. Hence, this method drastically reduces the region to be concealed compared to state-of-the-art error-resilient strategies, which assume a packet-loss scenario. Testing on a standard H.263++ codec confirms that a significant gain in quality is achieved, with error detection rates of 97.8% and peak signal-to-noise ratio (PSNR) gains of up to 5.33 dB. Moreover, most of the undetected errors produce minimal visual artifacts and thus have little influence on the perceived quality of the reconstructed sequences.
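    The core classification idea can be sketched with scikit-learn; the per-macroblock features, kernel choice, and synthetic training data below are hypothetical stand-ins, not the paper's actual feature set or corpus:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row holds per-macroblock features
# (e.g. boundary-pixel discontinuity, texture deviation). The paper's
# actual feature set and training corpus are not reproduced here.
n = 400
clean = rng.normal(loc=0.0, scale=1.0, size=(n, 3))    # intact MBs
corrupt = rng.normal(loc=3.0, scale=1.0, size=(n, 3))  # impaired MBs
X = np.vstack([clean, corrupt])
y = np.array([0] * n + [1] * n)                        # 1 = conceal this MB

# RBF-kernel SVM with feature standardisation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

# Within a GOB flagged by the transport layer, only the MBs the SVM
# marks as impaired would be passed to the concealment stage.
flags = clf.predict([[0.1, -0.2, 0.0], [2.9, 3.1, 3.2]])
```

    Concealing only the flagged MBs, rather than every MB in a lost packet, is what shrinks the concealed region relative to packet-loss-based strategies.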

    Resilient Digital Video Transmission over Wireless Channels using Pixel-Level Artefact Detection Mechanisms

    Recent advances in communications and video coding technology have brought multimedia communications into everyday life, where a variety of services and applications are being integrated within different devices such that multimedia content is provided everywhere and on any device. H.264/AVC provides a major advance on preceding video coding standards, achieving as much as twice their coding efficiency (Richardson I.E.G., 2003; Wiegand T. & Sullivan G.J., 2007). Furthermore, this codec inserts video-related information within network abstraction layer units (NALUs), which facilitates the transmission of H.264/AVC coded sequences over a variety of network environments (Stockhammer, T. & Hannuksela M.M., 2005), making it applicable to a broad range of applications such as TV broadcasting, mobile TV, video-on-demand, digital media storage, high definition TV, multimedia streaming and conversational applications. Real-time wireless conversational and broadcast applications are particularly challenging as, in general, reliable delivery cannot be guaranteed (Stockhammer, T. & Hannuksela M.M., 2005). The H.264/AVC standard specifies several error-resilient strategies to minimise the effect of transmission errors on the perceptual quality of the reconstructed video sequences. However, these methods assume a packet-loss scenario in which the receiver discards and conceals all the video information contained within a corrupted NALU packet. This implies that the error-resilient methods adopted by the standard operate at a lower bound, since not all the information contained within a corrupted NALU packet is unusable (Stockhammer, T. et al., 2003).
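    The NALU encapsulation mentioned above can be illustrated with a minimal Annex-B byte-stream splitter; this is a sketch that assumes 3- or 4-byte start codes and strips trailing zero bytes, not a complete bitstream parser:

```python
def split_nalus(stream: bytes) -> list[tuple[int, bytes]]:
    """Split an H.264/AVC Annex-B byte stream into NAL units.

    Minimal sketch: returns (nal_unit_type, payload) pairs. Handles
    3- and 4-byte start codes (a 4-byte code is a zero byte followed
    by the 3-byte pattern) and strips trailing zero bytes.
    """
    positions = []
    i = 0
    while i + 2 < len(stream):
        if stream[i:i + 3] == b"\x00\x00\x01":   # 3-byte start code
            positions.append(i)
            i += 3
        else:
            i += 1
    nalus = []
    for k, p in enumerate(positions):
        begin = p + 3
        end = positions[k + 1] if k + 1 < len(positions) else len(stream)
        payload = stream[begin:end].rstrip(b"\x00")  # drop next code's leading zeros
        nal_unit_type = payload[0] & 0x1F            # low 5 bits of the NAL header
        nalus.append((nal_unit_type, payload))
    return nalus

# SPS (type 7), PPS (type 8), IDR slice (type 5) with dummy payloads:
stream = (b"\x00\x00\x00\x01\x67\xAA"
          b"\x00\x00\x01\x68\xBB"
          b"\x00\x00\x00\x01\x65\xCC\xDD")
```

    A packet-loss strategy would discard a whole corrupted NALU payload; the chapter's point is that much of that payload may still be decodable.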

    Digital rights management techniques for H.264 video

    This work aims to present a number of low-complexity digital rights management (DRM) methodologies for the H.264 standard. Initially, requirements to enforce DRM are analyzed and understood. Based on these requirements, a framework is constructed which puts forth different possibilities that can be explored to satisfy the objective. To implement computationally efficient DRM methods, watermarking and content based copy detection are then chosen as the preferred methodologies. The first approach is based on robust watermarking which modifies the DC residuals of 4×4 macroblocks within I-frames. Robust watermarks are appropriate for content protection and proving ownership. Experimental results show that the technique exhibits encouraging rate-distortion (R-D) characteristics while at the same time being computationally efficient. The problem of content authentication is addressed with the help of two methodologies: irreversible and reversible watermarks. The first approach utilizes the highest frequency coefficient within 4×4 blocks of the I-frames after CAVLC entropy encoding to embed a watermark. The technique was found to be very effective in detecting tampering. The second approach applies the difference expansion (DE) method on IPCM macroblocks within P-frames to embed a high-capacity reversible watermark. Experiments prove the technique to be not only fragile and reversible but also exhibiting minimal variation in its R-D characteristics. The final methodology adopted to enforce DRM for H.264 video is based on the concept of signature generation and matching. Specific types of macroblocks within each predefined region of an I-, B- and P-frame are counted at regular intervals in a video clip and an ordinal matrix is constructed based on their count. The matrix is considered to be the signature of that video clip and is matched with longer video sequences to detect copies within them.
Simulation results show that the matching methodology is capable not only of detecting copies but also of locating them within a longer video sequence. Performance analysis shows acceptable false positive and false negative rates and encouraging receiver operating characteristics. Finally, the time taken to match and locate copies is significantly low, which makes the method ideal for use in broadcast and streaming applications.
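    The difference expansion (DE) step mentioned above has a well-known pixel-pair form (Tian's method); a minimal sketch, omitting the overflow handling a real embedder on IPCM macroblocks would need:

```python
def de_embed(x: int, y: int, bit: int) -> tuple[int, int]:
    """Embed one bit into a pixel pair by difference expansion (Tian's DE).

    Overflow checks (keeping results in [0, 255]) are omitted; a real
    embedder would skip non-expandable pairs and record a location map.
    """
    l = (x + y) // 2        # integer average, invariant under the transform
    h = x - y               # difference
    h2 = 2 * h + bit        # expand the difference, append the payload bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2: int, y2: int) -> tuple[int, int, int]:
    """Recover the original pixel pair and the embedded bit (fully reversible)."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1            # the payload bit is the LSB of the expanded difference
    h = h2 >> 1             # floor division restores the original difference
    return l + (h + 1) // 2, l - h // 2, bit
```

    Because the integer average is preserved by the transform, extraction exactly inverts embedding, which is what makes the watermark reversible.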

    Watermarking of compressed multimedia using error-resilient VLCs

    Abstract: Error-resilient variable length codes (VLCs) have been proposed to counter bit errors over error-prone channels. In this work we establish a link between channel coding and watermarking by observing that watermark bits are, in effect, intentional bit errors. Using a recently introduced resynchronizing VLC, we develop a compressed-domain watermarking algorithm in which the inherent error-resilient property of the code is exploited to implement lossless, oblivious watermarking. The algorithm is implemented on MPEG-2 video. Keywords: watermarking, MPEG-2, error-resilient coding.