9 research outputs found

    Reduction of coding artifacts in transform image coding by using local statistics of transform coefficients

    An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post-processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model, which results in improved image decompression. These studies are summarized, and the technical papers are included in the appendices.
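    The Huber Markov random field model mentioned above penalizes differences between neighbouring pixels quadratically when they are small and only linearly when they are large, which smooths block discontinuities while leaving genuine edges intact. Below is a minimal sketch of that idea used as a post-processing smoother; it is not the project's decoder-feedback system, and the threshold T, step size, and plain gradient descent are illustrative assumptions:

```python
import numpy as np

def huber_grad(d, T):
    """Derivative of the Huber penalty: linear near zero, clipped beyond T."""
    return np.where(np.abs(d) <= T, 2.0 * d, 2.0 * T * np.sign(d))

def hmrf_smooth(img, T=10.0, step=0.1, iters=20):
    """Gradient descent on a Huber-MRF prior over 4-neighbour differences.

    Small differences (blockiness) are penalized quadratically and smoothed;
    large differences (true edges) are penalized only linearly and preserved.
    """
    x = img.astype(float).copy()
    for _ in range(iters):
        g = np.zeros_like(x)
        dv = huber_grad(np.diff(x, axis=0), T)   # vertical neighbour pairs
        dh = huber_grad(np.diff(x, axis=1), T)   # horizontal neighbour pairs
        g[1:, :] += dv
        g[:-1, :] -= dv
        g[:, 1:] += dh
        g[:, :-1] -= dh
        x -= step * g                            # descend on the prior energy
    return x
```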

    Projection based edge recovery in low bit rate vector quantizers

    Data compression is probably the single most important factor in every information service that is being envisioned and proposed by engineers. The effectiveness of such services is dependent upon achievable compression of real-time speech and video signals. Several approaches to signal encoding have been proposed and realized, each with its unique advantages and costs. Large compression ratios can only be achieved through lossy source encoding methods. One such method is Vector Quantization (VQ). The lossy nature of such encoders implies that the process of encoding is non-invertible. At low bit rates, lossy compression with conventional decoders (realized as a simple 'inverse' of the encoder) results in large subjective and objective distortions. Therefore, the thrust of research is to build 'intelligent' decoders that use a priori knowledge of human visual properties in the decoding process. In such a scenario, signal decoding poses itself as a recovery problem based on some known a priori information. This is a study of the use of image recovery methods in lossy image encoding. Specifically, it is a study of the problems and costs associated with low bit rate coding of photographic grayscale images and of recovery approaches to alleviate those problems. This study investigates the possibility of applying the theory of Convex Projections (CP) to problems in image recovery. The lossy compression method used as a target is the standard Vector Quantization (VQ) approach. In particular, the study looks at an implementation of VQ with single-codebook encoding and multiple-codebook decoding. The method uses a convex projections (CP) based algorithm to iteratively project a coarsely encoded image onto a better codebook (or codebooks) during decoding, based on certain a priori constraints. The objective of this approach is to make encoding independent of the edge regions of an image. This drastically reduces the number of edge vector representations at the encoder and hence results in fast searches. Such an approach will also work better on images outside the training sets, since encoding is less dependent on edges.
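    The convex-projections (POCS) machinery the dissertation builds on alternates projections onto constraint sets until the iterate satisfies all of them. The skeleton below is a generic illustration of that loop, not the multiple-codebook VQ decoder itself; the particular constraint sets (a fidelity band around the coarsely decoded image, band-limitation of the 2-D DCT, and an amplitude range) and their parameters are assumptions chosen because their projections are easy to compute exactly:

```python
import numpy as np
from scipy.fft import dctn, idctn

def project_fidelity(x, coarse, delta):
    """Projection onto {x : |x - coarse| <= delta pixelwise}: stay close to
    the coarsely decoded data."""
    return np.clip(x, coarse - delta, coarse + delta)

def project_smoothness(x, keep):
    """Projection onto the subspace of images whose 2-D DCT energy lies in
    the lowest `keep` x `keep` coefficients (the rest are zeroed)."""
    c = dctn(x, norm="ortho")
    mask = np.zeros_like(c)
    mask[:keep, :keep] = 1.0
    return idctn(c * mask, norm="ortho")

def pocs_recover(coarse, delta=8.0, keep=64, iters=10):
    """Alternate projections onto the convex constraint sets, starting from
    the coarsely decoded image."""
    x = coarse.astype(float).copy()
    for _ in range(iters):
        x = project_smoothness(x, keep)
        x = project_fidelity(x, coarse, delta)
        x = np.clip(x, 0.0, 255.0)   # amplitude constraint, also convex
    return x
```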

    Visual Data Compression for Multimedia Applications

    The compression of visual information in the framework of multimedia applications is discussed. To this end, major approaches to compress still as well as moving pictures are reviewed. The most important objective in any compression algorithm is that of compression efficiency. High-compression coding of still pictures can be split into three categories: waveform, second-generation, and fractal coding techniques. Each coding approach introduces a different artifact at the target bit rates. The primary objective of most ongoing research in this field is to mask these artifacts as much as possible from the human visual system. Video-compression techniques have to deal with data enriched by one more component, namely, the temporal coordinate. Either compression techniques developed for still images can be generalized for three-dimensional signals (space and time), or a hybrid approach can be defined based on motion compensation. Video-compression techniques can then be classified into the following four classes: waveform, object-based, model-based, and fractal coding techniques. This paper provides the reader with a tutorial on major visual data-compression techniques and a list of references for further information on the details of each method.

    Reduction of blocking artifacts using side information

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 95-96). Block-based image and video coding systems are used extensively in practice. In low bit-rate applications, however, they suffer from annoying discontinuities, called blocking artifacts. Prior research shows that incorporating systems that reduce blocking artifacts into codecs is useful because visual quality is improved. Existing methods reduce blocking artifacts by applying various post-processing techniques to the compressed image. Such methods require neither any modification to current encoders nor an increase in the bit-rate. This thesis examines a framework where blocking artifacts are reduced using side information transmitted from the encoder to the decoder. Using side information enables the use of the original image in deblocking, which improves performance. Furthermore, the computational burden at the decoder is reduced. The principal question that arises is whether the gains in performance of this choice can compensate for the increase in the bit-rate due to the transmission of side information. Experiments are carried out to answer this question with the following sample system: The encoder determines block boundaries that exhibit blocking artifacts as well as filters (from a predefined set of filters) that best deblock these block boundaries. Then it transmits side information that conveys the determined block boundaries together with their selected filters to the decoder. The decoder uses the received side information to perform deblocking. The proposed sample system is compared against an ordinary coding system and a post-processing type deblocking system, with the bit-rate of these systems being equal to the overall bit-rate (regular encoding bits + side information bits) of the proposed system. The results of the comparisons indicate that, both for images and video sequences, the proposed system can perform better in terms of both visual quality and PSNR for some range of coding bit-rates. By Fatih Kamisli. S.M.
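    The sample system described in the abstract can be pictured with a toy sketch: the encoder tries each filter from a predefined set on every block boundary of the decoded image, keeps the one closest to the original, and ships the (boundary, filter index) pairs as side information; the decoder simply replays them. The filter bank, the vertical-boundary-only handling, and the squared-error selection rule below are illustrative assumptions, not the thesis' actual codec:

```python
import numpy as np

# A small predefined filter bank: 1-D kernels applied across a vertical block
# boundary. The actual filter set used in the thesis is not reproduced here.
FILTERS = [
    np.array([1.0]),                      # identity (no deblocking)
    np.array([0.25, 0.5, 0.25]),          # mild smoothing
    np.array([0.2, 0.2, 0.2, 0.2, 0.2]),  # strong smoothing
]

def filter_boundary(img, col, f):
    """Replace the boundary column `col` by a weighted average of the pixels
    straddling it."""
    out = img.astype(float).copy()
    half = len(f) // 2
    lo, hi = col - half, col + half + 1
    if half == 0 or lo < 0 or hi > img.shape[1]:
        return out
    out[:, col] = out[:, lo:hi] @ f
    return out

def choose_side_info(original, decoded, block=8):
    """Encoder side: for every vertical block boundary, pick the filter whose
    output is closest to the original column; the (column, filter index)
    pairs are the side information."""
    side_info = []
    for col in range(block, decoded.shape[1], block):
        errs = [np.sum((filter_boundary(decoded, col, f)[:, col]
                        - original[:, col]) ** 2) for f in FILTERS]
        side_info.append((col, int(np.argmin(errs))))
    return side_info

def apply_side_info(decoded, side_info):
    """Decoder side: replay the transmitted choices."""
    out = decoded.astype(float)
    for col, idx in side_info:
        out = filter_boundary(out, col, FILTERS[idx])
    return out
```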

    Adaptive filtering techniques for acquisition noise and coding artifacts of digital pictures

    The quality of digital pictures is often degraded by various processes (e.g., acquisition or capture, compression, filtering, transmission). In digital image/video processing systems, random noise appearing in images is mainly generated during the capture process, while artifacts (or distortions) are generated in compression or filtering processes. This dissertation looks at digital image/video quality degradations and proposes post-processing solutions for coding-artifact and acquisition-noise reduction in images/videos. Three major issues associated with image/video degradation are addressed in this work. The first issue is the temporal fluctuation artifact in digitally compressed videos. In the state-of-the-art video coding standard, H.264/AVC, temporal fluctuations are noticeable between intra picture frames or between an intra picture frame and neighbouring inter picture frames. To resolve this problem, a novel robust statistical temporal filtering technique is proposed. It utilises a re-descending robust statistical model with an outlier rejection feature to reduce the temporal fluctuations while preserving picture details and motion sharpness. PSNR and sum of square difference (SSD) results show the improvement of the proposed filters over other benchmark filters. Even for videos containing high motion, the proposed temporal filter shows good performance in fluctuation reduction and motion clarity preservation compared with other baseline temporal filters. The second issue concerns both the spatial and temporal artifacts (e.g., blocking, ringing, and temporal fluctuation artifacts) appearing in compressed video. To address this issue, a novel joint spatial and temporal filtering framework is constructed for artifact reduction. Both the spatial and the temporal filters employ a re-descending robust statistical model (RRSM) in the filtering processes. The robust statistical spatial filter (RSSF) reduces spatial blocking and ringing artifacts whilst the robust statistical temporal filter (RSTF) suppresses the temporal fluctuations. Performance evaluations demonstrate that the proposed joint spatio-temporal filter is superior to the H.264 loop filter in terms of spatial and temporal artifact reduction and motion clarity preservation. The third issue is random noise, commonly modeled as mixed Gaussian and impulse noise (MGIN), which appears in the image/video acquisition process. An effective way to estimate MGIN is through a robust estimator, the median absolute deviation normalized (MADN). The MADN estimator is used to separate the MGIN model into its impulse and additive Gaussian noise portions. Based on this estimation, the proposed filtering process is composed of a modified median filter for impulse noise reduction and a DCT-based denoising filter for additive Gaussian noise reduction. However, this DCT-based denoising filter produces temporal fluctuations for videos. To solve this problem, a temporal filter is added to the filtering process. Therefore, another joint spatio-temporal filtering scheme is built to achieve the best visual quality of denoised videos. Extensive experiments show that the proposed joint spatio-temporal filtering scheme outperforms other benchmark filters in noise and distortion suppression.
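    Two of the ingredients named in the abstract, a re-descending influence function for temporal filtering and the MADN scale estimate, can be sketched compactly. The Tukey biweight used here is one common re-descending choice and the blending rule is an assumption; the dissertation's RRSM parameters are not reproduced:

```python
import numpy as np

def madn_sigma(residuals):
    """MAD-normalized scale estimate: consistent with the Gaussian standard
    deviation through the factor 1.4826, robust to impulsive outliers."""
    med = np.median(residuals)
    return 1.4826 * np.median(np.abs(residuals - med))

def tukey_weight(d, c):
    """Tukey biweight: a re-descending weight that decays to exactly zero for
    |d| > c, so strongly moving content is left untouched."""
    w = (1.0 - (d / c) ** 2) ** 2
    return np.where(np.abs(d) <= c, w, 0.0)

def robust_temporal_filter(prev, curr, c_scale=3.0):
    """Blend the previous frame into the current one with per-pixel
    re-descending weights on the temporal difference."""
    d = prev.astype(float) - curr.astype(float)
    c = c_scale * max(madn_sigma(d.ravel()), 1e-6)  # data-driven threshold
    w = tukey_weight(d, c)
    return curr + 0.5 * w * d   # small differences averaged, outliers kept
```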

    Image enhancements for low-bitrate videocoding

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (p. 71). By Brian C. Davison. M.Eng.

    REDUCTION OF CODING ARTIFACTS IN TRANSFORM IMAGE CODING BY USING LOCAL STATISTICS OF TRANSFORM COEFFICIENTS

    This paper proposes a new approach to reduce coding artifacts in transform image coding. We approach the problem as the estimation of each transform coefficient from the quantized data by using its local mean and variance. The proposed method can substantially reduce the coding artifacts of low bit-rate coded images and at the same time guarantee that the resulting images satisfy the quantization error constraint.

    Reduction Of Coding Artifacts In Transform Image Coding By Using Local Statistics Of Transform Coefficients

    This paper proposes a new approach to reduce coding artifacts in transform image coding. We approach the problem as the estimation of each transform coefficient from the quantized data by using its local mean and variance. The proposed method can substantially reduce the coding artifacts of low bit-rate coded images and at the same time guarantee that the resulting images satisfy the quantization error constraint. 1. INTRODUCTION Block transform-based image coding offers a good tradeoff between bit rate and subjective image quality, and hence is the most widely used technique in image compression. Unfortunately, noise caused by the coarse quantization of transform coefficients is noticeable in the form of visible block boundaries when the compression ratio is sufficiently high. Various techniques have been proposed to remove blocking artifacts from low bit-rate coded images. In order to keep the encoder efficient, most of them involve post-processing at the decoding side, rather than approaching the pro..
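    The core idea of the paper, estimating each transform coefficient from its local mean and variance and then enforcing the quantization error constraint, resembles a local Wiener (LMMSE) shrinkage followed by clipping to the quantization cell. The sketch below illustrates that pattern for a single DCT subband; the window size, uniform quantization-noise model, and per-subband layout are assumptions rather than the paper's exact estimator:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def refine_subband(coeff, qstep, win=3):
    """Re-estimate one DCT subband from its dequantized values.

    `coeff[i, j]` holds that subband's dequantized coefficient in block
    (i, j) and `qstep` is the quantizer step size for the subband. Local
    mean/variance come from a small window of neighbouring blocks, each
    coefficient is shrunk toward its local mean by a Wiener-style gain,
    and the result is clipped back into its quantization cell so the
    quantization error constraint still holds.
    """
    c = coeff.astype(float)
    mean = uniform_filter(c, size=win)
    var = np.maximum(uniform_filter(c ** 2, size=win) - mean ** 2, 0.0)
    qvar = qstep ** 2 / 12.0                    # uniform quantization noise
    signal_var = np.maximum(var - qvar, 0.0)
    gain = signal_var / np.maximum(signal_var + qvar, 1e-12)
    est = mean + gain * (c - mean)
    return np.clip(est, c - qstep / 2.0, c + qstep / 2.0)
```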