
    Maximum a posteriori joint source/channel coding

    A maximum a posteriori probability (MAP) approach to joint source/channel coder design is presented in this paper. This method explores a technique for designing joint source/channel codes, rather than ways of distributing bits between source coders and channel coders. For a nonideal source coder, MAP arguments are used to design a decoder which takes advantage of redundancy in the source coder output to perform error correction. Once the decoder is obtained, it is analyzed with the purpose of obtaining 'desirable properties' of the channel input sequence for improving overall system performance. Finally, an encoder design which incorporates these properties is proposed.
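    As a rough sketch of the MAP idea summarized above (and not the paper's design), the example below decodes a binary first-order Markov source received over a binary symmetric channel by maximizing log P(y|x) + log P(x) with a Viterbi-style dynamic program; the transition matrix, channel error rate, and function names are assumptions made for illustration.

    ```python
    # Hypothetical sketch: MAP sequence decoding that exploits residual Markov
    # redundancy in the source output to correct channel errors (BSC model).
    import numpy as np

    def map_decode(y, p_trans, eps):
        """y: received bits; p_trans[a][b] = P(x_t = b | x_{t-1} = a); eps: BSC error prob."""
        log_chan = lambda x_bit, y_bit: np.log(1 - eps) if x_bit == y_bit else np.log(eps)
        n = len(y)
        # delta[s]: best log-probability of any sequence ending in state s at time t
        delta = np.array([np.log(0.5) + log_chan(s, y[0]) for s in (0, 1)])
        back = np.zeros((n, 2), dtype=int)
        for t in range(1, n):
            new = np.empty(2)
            for s in (0, 1):
                cand = [delta[p] + np.log(p_trans[p][s]) for p in (0, 1)]
                back[t, s] = int(np.argmax(cand))
                new[s] = max(cand) + log_chan(s, y[t])
            delta = new
        # trace back the most probable transmitted sequence
        x_hat = [int(np.argmax(delta))]
        for t in range(n - 1, 0, -1):
            x_hat.append(int(back[t, x_hat[-1]]))
        return x_hat[::-1]

    # A "sticky" source (P(stay) = 0.9) lets the decoder correct an isolated bit flip.
    print(map_decode([0, 0, 1, 0, 0, 0], [[0.9, 0.1], [0.1, 0.9]], eps=0.1))  # -> all zeros
    ```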

    Design of source coders and joint source/channel coders for noisy channels

    A theory behind a proposed joint source/channel coding approach is developed, and a variable-rate design approach which provides substantial improvement over current joint source/channel coder designs is obtained. The Rice algorithm as applied to the output of the Gamma Ray Detector of the Mars Orbiter is evaluated. An alternative algorithm is obtained which outperforms the Rice algorithm in terms of both data compression and noisy channel performance. A high-fidelity, low-rate image compression algorithm is developed which provides almost distortionless compression of high-resolution images.
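    To make the coding step concrete, here is a minimal sketch of Golomb-Rice encoding and decoding with parameter k, the core operation behind the Rice algorithm named above; the report's exact variant, preprocessing, and parameter selection for the Gamma Ray Detector data are not reproduced, and the function names are invented for the example.

    ```python
    # Illustrative Golomb-Rice coding of a nonnegative integer (not the report's variant).
    def rice_encode(value: int, k: int) -> str:
        """Unary-coded quotient, a '0' separator, then the k-bit remainder."""
        q, r = value >> k, value & ((1 << k) - 1)
        return "1" * q + "0" + format(r, f"0{k}b")

    def rice_decode(bits: str, k: int) -> int:
        q = bits.index("0")                  # length of the unary prefix
        r = int(bits[q + 1:q + 1 + k], 2)    # k-bit remainder
        return (q << k) | r

    codeword = rice_encode(19, k=2)          # 19 = 4 * 4 + 3 -> "1111" + "0" + "11"
    assert rice_decode(codeword, k=2) == 19
    print(codeword)                          # "1111011"
    ```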

    Watermarking on Compressed Image: A New Perspective

    A constrained joint source/channel coder design and vector quantization of nonstationary sources

    The emergence of broadband ISDN as the network for the future brings with it the promise of integrating all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period, a study was conducted on bridging network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM network. Two aspects of the network that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm; these are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years, and numerous compression techniques that incorporate VQ have been proposed. While LBG VQ provides excellent compression, there are several drawbacks to the use of LBG quantizers, including search complexity, memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
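    To make the VQ terminology concrete, the sketch below shows plain full-search VQ encoding and decoding against a fixed codebook, the setting in which the search-complexity and codebook-mismatch problems mentioned above arise; it is not the adaptive RISQ-based technique proposed here, and the array shapes and names are illustrative assumptions.

    ```python
    # Minimal full-search vector quantizer (LBG-style use of a fixed codebook).
    import numpy as np

    def vq_encode(blocks: np.ndarray, codebook: np.ndarray) -> np.ndarray:
        """Map each input block to the index of its nearest codevector (full search)."""
        # blocks: (N, d) image blocks; codebook: (K, d) trained codevectors
        d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d2.argmin(axis=1)             # one index (log2 K bits) per block

    def vq_decode(indices: np.ndarray, codebook: np.ndarray) -> np.ndarray:
        return codebook[indices]             # reconstruction is the chosen codevectors

    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(16, 4))      # 16 codevectors of dimension 4
    blocks = rng.normal(size=(8, 4))         # 8 input blocks to quantize
    indices = vq_encode(blocks, codebook)
    reconstruction = vq_decode(indices, codebook)
    print(indices, float(((blocks - reconstruction) ** 2).mean()))  # indices and MSE
    ```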

    Multimedia Protection using Content and Embedded Fingerprints

    Improved digital connectivity has made the Internet an important medium for multimedia distribution and consumption in recent years. At the same time, this increased proliferation of multimedia has raised significant challenges in secure multimedia distribution and intellectual property protection. This dissertation examines two complementary aspects of the multimedia protection problem that utilize content fingerprints and embedded collusion-resistant fingerprints. The first aspect considered is the automated identification of multimedia using content fingerprints, which is emerging as an important tool for detecting copyright violations on user-generated content websites. A content fingerprint is a compact identifier that captures robust and distinctive properties of multimedia content, which can be used for uniquely identifying the multimedia object. In this dissertation, we describe a modular framework for theoretical modeling and analysis of content fingerprinting techniques. Based on this framework, we analyze the impact of distortions in the features on the corresponding fingerprints and also consider the problem of designing a suitable quantizer for encoding the features in order to improve the identification accuracy. The interaction between the fingerprint designer and a malicious adversary seeking to evade detection is studied under a game-theoretic framework, and optimal strategies for both parties are derived. We then focus on analyzing and understanding the matching process at the fingerprint level. Models for fingerprints with different types of correlations are developed, and the identification accuracy under each model is examined. Through this analysis we obtain useful guidelines for designing practical systems and also uncover connections to other areas of research.

    A complementary problem considered in this dissertation concerns tracing the users responsible for unauthorized redistribution of multimedia. Collusion-resistant fingerprints, which are signals that uniquely identify the recipient, are proactively embedded in the multimedia before redistribution and can be used for identifying the malicious users. We study the problem of designing collusion-resistant fingerprints for embedding in compressed multimedia. Our study indicates that directly adapting traditional fingerprinting techniques to this new setting of compressed multimedia results in low collusion resistance. To withstand attacks, we propose an anti-collusion dithering technique for embedding fingerprints that significantly improves the collusion resistance compared to traditional fingerprints.
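    As a hedged illustration of the identification step (not the dissertation's models), the sketch below matches a binary content fingerprint of a query against a small database using normalized Hamming distance and a decision threshold; the fingerprint length, threshold value, and all names are assumptions made for the example.

    ```python
    # Toy content-fingerprint matcher: nearest database entry under a distance threshold.
    import numpy as np

    def match_fingerprint(query: np.ndarray, database: dict, tau: float = 0.35):
        """Return (content_id, distance) for the best match, or (None, distance) if none is below tau."""
        best_id, best_dist = None, 1.0
        for content_id, fingerprint in database.items():
            dist = float(np.mean(query != fingerprint))   # fraction of differing bits
            if dist < best_dist:
                best_id, best_dist = content_id, dist
        return (best_id, best_dist) if best_dist < tau else (None, best_dist)

    rng = np.random.default_rng(1)
    original = rng.integers(0, 2, size=256)               # fingerprint of the registered content
    noisy_copy = original ^ (rng.random(256) < 0.1)       # ~10% of bits flipped by distortion
    print(match_fingerprint(noisy_copy, {"video_42": original}))
    ```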

    Study and simulation of low rate video coding schemes

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color-mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token ring networks, and joint source/channel coder design.

    Image Compression & Transmission through Digital Communication System

    Image compression addresses the problem of reducing the amount of data required to represent a digital image. Compression is achieved by removing one or more of three basic data redundancies: (1) coding redundancy, which is present when the code words used are longer than optimal (i.e., not of the smallest possible length); (2) interpixel redundancy, which results from correlations between the pixels of an image; and/or (3) psychovisual redundancy, which is due to data that is ignored by the human visual system (i.e., visually nonessential information). Huffman codes contain the smallest possible number of code symbols (e.g., bits) per source symbol (e.g., grey-level value) subject to the constraint that the source symbols are coded one at a time. Huffman coding and Shannon coding, which remove coding redundancy, therefore compress the image data to a very good extent when combined with image compression based on the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). For the efficient transmission of an image across a channel, source coding in the form of image compression at the transmitter side and image recovery at the receiver side are integral processes in any digital communication system. Other processes, such as channel encoding and signal modulation at the transmitter side and their corresponding inverse processes at the receiver side, along with channel equalization, help greatly in minimizing the bit error rate caused by noise and the bandwidth limitations (or channel capacity) of the channel.
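    Since the abstract centers on removing coding redundancy, a small sketch of Huffman code construction from symbol frequencies follows; it illustrates only that step (not the DCT/DWT pipeline), and the example grey-level counts are made up.

    ```python
    # Build a Huffman code from symbol frequencies: frequent symbols get short codewords.
    import heapq
    from itertools import count

    def huffman_code(freqs: dict) -> dict:
        """Return a prefix-free bit string for each symbol."""
        tie = count()                                    # breaks ties between equal weights
        heap = [(weight, next(tie), {sym: ""}) for sym, weight in freqs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            w0, _, c0 = heapq.heappop(heap)              # two lightest subtrees
            w1, _, c1 = heapq.heappop(heap)
            merged = {s: "0" + bits for s, bits in c0.items()}
            merged.update({s: "1" + bits for s, bits in c1.items()})
            heapq.heappush(heap, (w0 + w1, next(tie), merged))
        return heap[0][2]

    # Hypothetical grey-level counts from an image histogram
    print(huffman_code({0: 45, 64: 13, 128: 12, 192: 16, 255: 9}))
    ```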

    Robust decoder-based error control strategy for recovery of H.264/AVC video content

    Real-time wireless conversational and broadcasting multimedia applications pose particular transmission challenges, as reliable content delivery cannot be guaranteed. The undelivered and erroneous content causes significant degradation in quality of experience. The H.264/AVC standard includes several error resilience tools to mitigate this effect on video quality. However, the methods implemented by the standard are based on a packet-loss scenario, where corrupted slices are dropped and the lost information is concealed. Partially damaged slices still contain valuable information that can be used to enhance the quality of the recovered video. This study presents a novel error recovery solution that relies on a joint source-channel decoder to recover only feasible slices. A major advantage of this decoder-based strategy is that it grants additional robustness while keeping the same transmission data rate. Simulation results show that the proposed approach completely recovers 30.79% of the corrupted slices, providing frame-by-frame peak signal-to-noise ratio (PSNR) gains of up to 18.1 dB, a result which, to the knowledge of the authors, is superior to all other joint source-channel decoding methods found in the literature. Furthermore, this error resilience strategy can be combined with the other error resilience tools adopted by the standard to enhance their performance.
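    The following is a simplified, hypothetical illustration of the decoder-based idea of keeping only feasible slices, not the authors' algorithm: it flips a few of the least-reliable bits of a corrupted slice and accepts a candidate only if a supplied validity test says the slice still parses; the parses_ok callable, the parity-based stand-in, and all parameters are placeholders.

    ```python
    # Sketch of decoder-side slice recovery: test a few candidate bit corrections and
    # keep one only if it yields a syntactically valid slice; otherwise conceal as usual.
    from itertools import combinations

    def recover_slice(slice_bits, suspect_positions, parses_ok, max_flips=2):
        """slice_bits: list of 0/1; suspect_positions: least-reliable bit indices;
        parses_ok: callable returning True if the bitstream decodes to a valid slice."""
        if parses_ok(slice_bits):
            return slice_bits                      # slice is usable as received
        for n_flips in range(1, max_flips + 1):
            for positions in combinations(suspect_positions, n_flips):
                candidate = list(slice_bits)
                for p in positions:
                    candidate[p] ^= 1              # flip one suspected bit error
                if parses_ok(candidate):
                    return candidate               # feasible corrected slice
        return None                                # not recoverable: drop and conceal

    # Toy usage with a stand-in validity test (even parity), purely for illustration:
    parity_ok = lambda bits: sum(bits) % 2 == 0
    print(recover_slice([1, 0, 1, 1], suspect_positions=[3, 1], parses_ok=parity_ok))
    ```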