
    Weighted universal image compression

    We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy-coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
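
    The core of the two-stage structure can be illustrated with a short sketch: for each image block the encoder searches a family of codebooks, sends the index of the chosen codebook (first stage) and then the index of the chosen codeword (second stage). The sketch below is illustrative only; the function and variable names are invented here, and the rate term of the Lagrangian cost and the optimal parsing discussed in the abstract are omitted.

    import numpy as np

    def encode_block_two_stage(block, codebooks):
        """Pick the codebook (first-stage index) and codeword (second-stage index)
        that minimize squared-error distortion for one vectorized image block."""
        best = None
        for cb_idx, cb in enumerate(codebooks):
            # squared distances from the block to every codeword in this codebook
            d = np.sum((cb - block) ** 2, axis=1)
            cw_idx = int(np.argmin(d))
            if best is None or d[cw_idx] < best[2]:
                best = (cb_idx, cw_idx, float(d[cw_idx]))
        return best

    # Toy usage: two 4-dimensional codebooks, e.g. trained on different source classes
    rng = np.random.default_rng(0)
    codebooks = [rng.normal(0.0, 1.0, (8, 4)), rng.normal(5.0, 1.0, (8, 4))]
    block = np.array([4.8, 5.1, 4.9, 5.2])
    print(encode_block_two_stage(block, codebooks))  # the second codebook should win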

    Removal Of Blocking Artifacts From JPEG-Compressed Images Using Neural Network

    The goal of this research was to develop a neural network that produces a considerable improvement in the quality of JPEG-compressed images, irrespective of the level of compression present in the images. To obtain a computationally efficient algorithm for reducing blocking and Gibbs oscillation artifacts from JPEG-compressed images, we integrated artificial intelligence into the post-processing stage. In this approach, an alpha blend filter [7] was used to post-process JPEG-compressed images to reduce noise and artifacts without losing image detail. The alpha blending was controlled by a limit factor that considers the amount of compression present, together with local information derived from applying a Prewitt filter to the input JPEG image. The output of the modified alpha blend was further improved by a trained neural network and compared with various other published works [7][9][11][14][20][23][30][32][33][35][37] whose authors used post-compression filtering methods.
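
    As a rough illustration of the kind of post-filter described, the sketch below blends a smoothed copy of the decoded image with the original, suppressing the blend weight where a Prewitt filter indicates edges. It is a minimal stand-in rather than the paper's method: the 3x3 mean smoother, the fixed max_alpha limit, and the function name are assumptions, and the trained neural-network refinement stage is not modelled.

    import numpy as np
    from scipy import ndimage

    def deblock_alpha_blend(img, max_alpha=0.8):
        """Blend a smoothed image with the original; the per-pixel weight is
        reduced near edges so detail is kept while flat (blocky) regions are
        smoothed. max_alpha stands in for the compression-dependent limit factor."""
        smoothed = ndimage.uniform_filter(img, size=3)   # simple 3x3 smoother
        gx = ndimage.prewitt(img, axis=1)                # horizontal gradients
        gy = ndimage.prewitt(img, axis=0)                # vertical gradients
        edge = np.hypot(gx, gy)
        edge = edge / (edge.max() + 1e-9)                # normalize to [0, 1]
        alpha = max_alpha * (1.0 - edge)                 # less smoothing on edges
        return alpha * smoothed + (1.0 - alpha) * img

    # Toy usage on a synthetic blocky image
    img = np.kron(np.arange(16, dtype=float).reshape(4, 4) * 16, np.ones((8, 8)))
    out = deblock_alpha_blend(img)
    print(out.shape, round(out.min(), 1), round(out.max(), 1))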

    Removal Of Blocking Artifacts From JPEG-Compressed Images Using An Adaptive Filtering Algorithm

    The aim of this research was to develop an algorithm that produces a considerable improvement in the quality of JPEG images by removing blocking and ringing artifacts, irrespective of the level of compression present in the image. We review multiple related published works and then present a computationally efficient algorithm for reducing the blocking and Gibbs oscillation artifacts commonly present in JPEG-compressed images. The algorithm alpha-blends a smoothed version of the image with the original image; the blending is controlled by a limit factor that considers the amount of compression present and any local edge information derived from the application of a Prewitt filter. In addition, the actual value of the blending coefficient (α) is derived from the local Mean Structural Similarity Index Measure (MSSIM), which is further adjusted by a factor that also considers the amount of compression present. We also present our results alongside the results of a variety of other papers whose authors used other post-compression filtering methods.
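
    The derivation of the blending coefficient from local structural similarity can be sketched as follows, assuming scikit-image's structural_similarity for the local SSIM map: where smoothing barely changes local structure (flat, blocky regions) α approaches the limit factor, and where it changes structure strongly (edges, texture) α shrinks toward zero. The limit value and function name are illustrative, and the paper's compression-dependent adjustment is collapsed into a single constant.

    import numpy as np
    from scipy import ndimage
    from skimage.metrics import structural_similarity

    def local_alpha_from_mssim(img, smoothed, limit=0.9):
        """Per-pixel blending coefficient derived from the SSIM map between the
        decoded image and its smoothed version; limit stands in for the
        compression-dependent limit factor."""
        _, ssim_map = structural_similarity(img, smoothed, data_range=255.0, full=True)
        return np.clip(limit * ssim_map, 0.0, limit)

    # Toy usage
    img = np.kron(np.arange(16, dtype=float).reshape(4, 4) * 16, np.ones((8, 8)))
    smoothed = ndimage.uniform_filter(img, size=3)
    alpha = local_alpha_from_mssim(img, smoothed)
    deblocked = alpha * smoothed + (1.0 - alpha) * img
    print(round(alpha.min(), 2), round(alpha.max(), 2))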

    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely Slepian–Wolf and Wyner–Ziv. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.

    GPU acceleration of predictive partitioned vector quantization for ultraspectral sounder data compression

    For large-volume ultraspectral sounder data, compression is desirable to save storage space and transmission time. To retrieve the geophysical parameters without losing precision, ultraspectral sounder data compression has to be lossless. Recently, there has been a surge in the use of graphics processing units (GPUs) to speed up scientific computations. By identifying the time-dominant portions of the code that can be executed in parallel, significant speedup can be achieved with GPUs. Predictive partitioned vector quantization (PPVQ) has been proven to be an effective lossless compression scheme for ultraspectral sounder data. It consists of linear prediction, bit depth partitioning, vector quantization, and entropy coding. The two most time-consuming stages, linear prediction and vector quantization, were chosen for GPU-based implementation. By exploiting the data-parallel characteristics of these two stages, a spatial division design achieves a speedup of 72x in our four-GPU-based implementation of the PPVQ compression scheme.
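
    The two stages selected for GPU offload are both data-parallel, which the sketch below tries to convey with numpy broadcasting standing in for the actual GPU kernels. The predictor order, codebook size, and function names are assumptions; the bit depth partitioning and entropy coding stages of PPVQ are omitted.

    import numpy as np

    def linear_prediction_residual(vectors):
        """First-order linear prediction: each vector is predicted by its
        predecessor and only the residual is kept (a simplified stand-in for the
        prediction stage of PPVQ)."""
        residual = vectors.copy()
        residual[1:] -= vectors[:-1]
        return residual

    def vq_assign(vectors, codebook):
        """Nearest-codeword search, the data-parallel kernel that the paper maps
        onto GPUs by splitting the vectors spatially across devices."""
        # (N, K) matrix of squared distances between N vectors and K codewords
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)

    # Toy usage: 1000 vectors of dimension 8 against a 64-codeword codebook
    rng = np.random.default_rng(1)
    vectors = rng.normal(size=(1000, 8))
    codebook = rng.normal(size=(64, 8))
    indices = vq_assign(linear_prediction_residual(vectors), codebook)
    print(indices[:10])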

    A study of data coding technology developments in the 1980-1985 time frame, volume 2

    The source parameters of digitized analog data are discussed. Different data compression schemes are outlined and analyses of their implementation are presented. Finally, bandwidth compression techniques are given for video signals.

    Image compression techniques using vector quantization


    Image Segmentation using Human Visual System Properties with Applications in Image Compression

    In order to represent a digital image, a very large number of bits is required. For example, a 512 × 512 pixel, 256 gray level image requires over two million bits. This large number of bits is a substantial drawback when it is necessary to store or transmit a digital image. Image compression, often referred to as image coding, attempts to reduce the number of bits used to represent an image, while keeping the degradation in the decoded image to a minimum. One approach to image compression is segmentation-based image compression. The image to be compressed is segmented, i.e., the pixels in the image are divided into mutually exclusive spatial regions based on some criteria. Once the image has been segmented, information is extracted describing the shapes and interiors of the image segments. Compression is achieved by efficiently representing the image segments. In this thesis we propose an image segmentation technique which is based on centroid-linkage region growing and takes advantage of human visual system (HVS) properties. We systematically determine, through subjective experiments, the parameters for our segmentation algorithm which produce the most visually pleasing segmented images, and demonstrate the effectiveness of our method. We also propose a method for the quantization of segmented images based on HVS contrast sensitivity, and investigate the effect of quantization on segmented images.
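
    A minimal version of centroid-linkage region growing is sketched below, assuming a raster scan in which each pixel is compared against the running mean (centroid) of its causal neighbours' regions. The threshold value and function name are invented for illustration; the HVS-based parameter selection described in the abstract is not modelled.

    import numpy as np

    def centroid_linkage_segment(img, threshold=16.0):
        """Raster-scan centroid-linkage region growing: a pixel joins the region
        of its left or upper neighbour if its gray level is within `threshold`
        of that region's running mean, otherwise it starts a new region.
        Returns a label image and the final region means."""
        h, w = img.shape
        labels = -np.ones((h, w), dtype=int)
        sums, counts = [], []                         # per-region running statistics
        for y in range(h):
            for x in range(w):
                v = float(img[y, x])
                best = -1
                for ny, nx in ((y, x - 1), (y - 1, x)):   # causal neighbours
                    if ny < 0 or nx < 0:
                        continue
                    r = labels[ny, nx]
                    if abs(v - sums[r] / counts[r]) <= threshold:
                        best = r
                        break
                if best < 0:                          # no close region: start a new one
                    best = len(sums)
                    sums.append(0.0)
                    counts.append(0)
                labels[y, x] = best
                sums[best] += v                       # update the region centroid
                counts[best] += 1
        return labels, np.array(sums) / np.array(counts)

    # Toy usage on a two-level synthetic image
    img = np.zeros((16, 16)); img[:, 8:] = 200.0
    labels, means = centroid_linkage_segment(img)
    print(labels.max() + 1, "regions with means", np.round(means, 1))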

    Simple high-quality lossy image coding scheme

    A simple yet efficient image data compression method is presented. This method is based on coding only those segments of the image that are perceptually significant to the reconstruction of the image. Sequences of image pixels whose gray-level differences from the pixels of the previous row exceed two prespecified thresholds are considered significant. These pixels are coded using a differential pulse code modulation scheme that uses a 15-level recursively indexed nonuniform quantizer for the first pixel in a segment and a 7-level recursively indexed nonuniform quantizer for all other pixels in the segment. The quantizer outputs are Huffman coded. Simulation results show that this scheme can obtain subjectively satisfactory reconstructed images at low bit rates. It is also computationally very simple, which makes it amenable to fast implementation.
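
    The recursively indexed quantizer at the heart of the scheme can be sketched with a simplified uniform variant: an input inside the quantizer's range produces a single index, while a larger input produces the extreme index one or more times followed by an index for the remainder, so a small alphabet (here 7 levels) covers an unbounded range of prediction errors. The step size and the uniform level spacing are assumptions made for this sketch; the paper uses nonuniform quantizers with 15 and 7 levels.

    def ri_quantize(x, step=8, levels=7):
        """Recursively indexed uniform quantizer: indices run from 0 to levels-1,
        with the middle index representing zero."""
        half = (levels - 1) // 2              # e.g. 3 for a 7-level quantizer
        indices = []
        while x > half * step:                # too large: emit the top level, keep the rest
            indices.append(levels - 1)
            x -= half * step
        while x < -half * step:               # too small: emit the bottom level, keep the rest
            indices.append(0)
            x += half * step
        indices.append(half + round(x / step))   # final index for the remainder
        return indices

    def ri_dequantize(indices, step=8, levels=7):
        """Inverse mapping: sum the reconstruction values of all emitted indices."""
        half = (levels - 1) // 2
        return sum((i - half) * step for i in indices)

    # Toy usage: a large prediction error still passes through the 7-level alphabet
    idx = ri_quantize(100)
    print(idx, ri_dequantize(idx))   # [6, 6, 6, 6, 3] -> 96, within half a step of 100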

    Analysis of the impact of data compression on condition monitoring algorithms for ball screws

    The overall equipment effectiveness (OEE) is a management ratio used to evaluate the added value of machine tools. Unplanned machine downtime reduces operational availability and therefore the OEE. Increased machine costs are the consequence. An important cause of unplanned machine downtime is the total failure of the ball screws of the feed axes due to wear. Monitoring the condition of ball screws is therefore important. Common concepts rely on high-frequency acceleration sensors from external control systems to detect a change of condition. For trend and detailed damage analysis, large amounts of data are generated and stored over a long time period (>5 years), resulting in corresponding data storage costs. Additional axes or machine tools increase the data volume further, adding to the total storage costs. To minimize these costs, data compression or source coding has to be applied. To achieve maximum compression ratios, lossy coding algorithms have to be used, which introduce distortion into the signal. In this work, the influence of lossy coding algorithms on a condition monitoring algorithm (CMA) using acceleration signals is investigated. The CMA is based on principal component analysis and uses 17 features, such as the standard deviation, to predict the preload condition of a ball screw. It is shown that bit rate reduction through lossy compression algorithms is possible without affecting the condition monitoring, as long as the compression algorithm is known. In contrast, an unknown compression algorithm reduces the classification accuracy of condition monitoring by about 20% when coding with a quantizer resolution of 4 bits/sample.
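
    A toy illustration of the kind of experiment described, assuming a plain uniform quantizer as the lossy coder and the standard deviation as one representative condition feature: a synthetic acceleration signal is coded at 4 bits per sample and the feature is compared before and after. The paper's actual coding algorithms and its 17-feature PCA-based classifier are not reproduced here.

    import numpy as np

    def uniform_quantize(signal, bits=4):
        """Uniform mid-rise quantizer at `bits` per sample over the signal's own
        range, a simple stand-in for the lossy coding stage."""
        lo, hi = signal.min(), signal.max()
        levels = 2 ** bits
        step = (hi - lo) / levels
        idx = np.clip(np.floor((signal - lo) / step), 0, levels - 1)
        return lo + (idx + 0.5) * step

    # Compare the feature before and after 4 bit/sample coding
    rng = np.random.default_rng(2)
    accel = np.sin(np.linspace(0, 40 * np.pi, 4096)) + 0.1 * rng.normal(size=4096)
    coded = uniform_quantize(accel, bits=4)
    print(round(float(accel.std()), 4), round(float(coded.std()), 4))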