    Model for Estimation of Bounds in Digital Coding of Seabed Images

    This paper proposes a novel model for estimating bounds in digital coding of images. Entropy coding of images is exploited to measure the useful information content of the data. The bit rate achieved by reversible compression, analyzed through a rate-distortion theory approach, takes into account both the contribution of the observation noise and the intrinsic information of a hypothetical noise-free image. Assuming a Laplacian probability density function for the quantizer input signal, SQNR gains are calculated for an image predictive coding system with a non-adaptive quantizer, for white and correlated noise respectively. The proposed model is evaluated on seabed images. However, the model presented in this paper can be applied to any signal with a Laplacian distribution.
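
    As an illustration of the kind of bound involved, here is a minimal sketch, assuming prediction residuals are available as a NumPy array; the maximum-likelihood fit of the Laplacian scale and the entropy formula are textbook results, not taken from the paper.

        import numpy as np

        def laplacian_rate_bound(residuals, step=1.0):
            """Approximate bit-rate bound for Laplacian-modeled residuals.

            The scale b is fit by maximum likelihood (mean absolute
            deviation about the median). A Laplacian source has
            differential entropy log2(2*e*b) bits; for a fine uniform
            quantizer with step d, the discrete entropy is roughly that
            value minus log2(d) bits per sample.
            """
            r = np.asarray(residuals, dtype=np.float64).ravel()
            b = np.mean(np.abs(r - np.median(r)))   # ML scale estimate
            h = np.log2(2.0 * np.e * b)             # differential entropy (bits)
            return h - np.log2(step)

        # Toy example: residuals of a first-order (previous-pixel) predictor
        row = np.array([10.0, 12.0, 11.0, 14.0, 13.0, 13.0, 15.0, 14.0])
        print(laplacian_rate_bound(np.diff(row)))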

    Hybrid LZW compression

    The Science Data Management and Science Payload Operations subpanel reports from the NASA Conference on Scientific Data Compression (Snowbird, Utah, 1988) indicate the need for both lossless and lossy image data compression systems. The ranges developed by the subpanel suggest ratios of 2:1 to 4:1 for lossless coding and 2:1 to 6:1 for lossy predictive coding. For the NASA Freedom Science Video Processing Facility, it would be highly desirable to implement one baseline compression system that meets both of these criteria. Presented here is such a system, utilizing a hybrid LZW coding scheme that is adaptable to either type of compression. Simulation results are presented with the hybrid LZW algorithm operating in each of its modes.
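
    The hybrid system itself is not reproduced here; the following is a minimal sketch of the plain LZW dictionary coder that underlies its lossless mode, operating on a byte string.

        def lzw_encode(data: bytes) -> list[int]:
            """Minimal LZW encoder: emit dictionary indices for a byte string."""
            # Initialize the dictionary with all single-byte strings.
            table = {bytes([i]): i for i in range(256)}
            w, out = b"", []
            for byte in data:
                wc = w + bytes([byte])
                if wc in table:
                    w = wc                      # extend the current match
                else:
                    out.append(table[w])        # emit code for longest match
                    table[wc] = len(table)      # add the new string
                    w = bytes([byte])
            if w:
                out.append(table[w])            # flush the final match
            return out

        print(lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT"))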

    Virtually Lossless Compression of Astrophysical Images

    We describe an image compression strategy potentially capable of preserving the scientific quality of astrophysical data while simultaneously allowing a considerable bandwidth reduction to be achieved. Unlike strictly lossless techniques, by which only moderate compression ratios are attainable, and conventional lossy techniques, in which the mean square error of the decoded data is globally controlled by users, near-lossless methods are capable of locally constraining the maximum absolute error, based on users' requirements. An advanced lossless/near-lossless differential pulse code modulation (DPCM) scheme, recently introduced by the authors and relying on causal spatial prediction, is adjusted to the specific characteristics of astrophysical image data (high radiometric resolution, generally low noise, etc.). The background noise is preliminarily estimated to drive the quantization stage for high quality, which is the primary concern in most astrophysical applications. Extensive experimental results of lossless, near-lossless, and lossy compression of astrophysical images acquired by the Hubble Space Telescope show the advantages of the proposed method compared to standard techniques like JPEG-LS and JPEG2000. Finally, the rationale of virtually lossless compression, that is, noise-adjusted lossless/near-lossless compression, is highlighted and found to be in accordance with concepts well established in the astronomical community.
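
    A minimal sketch of the near-lossless DPCM principle the abstract describes: a causal predictor plus a uniform quantizer whose step guarantees |error| <= delta at every pixel. The previous-pixel predictor below is an assumption for brevity; the authors' scheme uses a more elaborate causal spatial prediction.

        import numpy as np

        def dpcm_near_lossless(img, delta):
            """Row-wise closed-loop DPCM with max absolute error <= delta.

            Integer residuals quantized with odd step 2*delta + 1 leave a
            remainder of at most delta, so each reconstructed pixel differs
            from the original by at most delta. Prediction uses the
            previously *reconstructed* pixel, keeping encoder and decoder
            in sync.
            """
            img = np.asarray(img, dtype=np.int64)
            step = 2 * delta + 1
            codes = np.zeros_like(img)
            rec = np.zeros_like(img)
            for i, row in enumerate(img):
                pred = 0
                for j, x in enumerate(row):
                    q = int(np.round((x - pred) / step))  # quantized residual
                    codes[i, j] = q
                    rec[i, j] = pred + q * step           # reconstruction
                    pred = rec[i, j]
            return codes, rec

        img = np.array([[100, 102, 105, 103], [101, 104, 108, 107]])
        codes, rec = dpcm_near_lossless(img, delta=1)
        assert np.max(np.abs(img - rec)) <= 1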

    Quality criteria benchmark for hyperspectral imagery

    Hyperspectral data have attracted growing interest over the past few years. However, applications for hyperspectral data are still in their infancy, as handling the significant size of the data presents a challenge for the user community. Efficient compression techniques are required, and lossy compression, specifically, will have a role to play, provided its impact on remote sensing applications remains insignificant. To assess the data quality, suitable distortion measures relevant to end-user applications are required. Quality criteria are also of major interest for the conception and development of new sensors, to define their requirements and specifications. This paper proposes a method to evaluate quality criteria in the context of hyperspectral images. The purpose is to provide quality criteria relevant to the impact of degradations on several classification applications. Different quality criteria are considered. Some are traditionally used in image and video coding and are adapted here to hyperspectral images; others are specific to hyperspectral data. We also propose the adaptation of two advanced criteria in the presence of different simulated degradations on AVIRIS hyperspectral images. Finally, five criteria are selected to give an accurate representation of the nature and the level of the degradation affecting hyperspectral data.
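
    The paper's five selected criteria are not listed in the abstract; as an illustration, the sketch below computes three measures commonly applied to hyperspectral cubes of shape (bands, rows, cols): RMSE, maximum absolute deviation, and the mean spectral angle (SAM) between per-pixel spectra.

        import numpy as np

        def hsi_quality(ref, deg):
            """Three common distortion criteria for hyperspectral cubes."""
            ref = np.asarray(ref, dtype=np.float64)
            deg = np.asarray(deg, dtype=np.float64)
            rmse = np.sqrt(np.mean((ref - deg) ** 2))    # global RMSE
            mad = np.max(np.abs(ref - deg))              # worst-case error
            # Flatten to (pixels, bands) and compute the angle per spectrum.
            r = ref.reshape(ref.shape[0], -1).T
            d = deg.reshape(deg.shape[0], -1).T
            cos = np.sum(r * d, axis=1) / (
                np.linalg.norm(r, axis=1) * np.linalg.norm(d, axis=1) + 1e-12)
            sam = np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
            return rmse, mad, sam

        ref = np.random.rand(4, 8, 8)
        deg = ref + 0.01 * np.random.randn(*ref.shape)
        print(hsi_quality(ref, deg))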

    RLFC: Random Access Light Field Compression using Key Views and Bounded Integer Encoding

    We present a new hierarchical compression scheme for encoding light field images (LFI) that is suitable for interactive rendering. Our method (RLFC) exploits redundancies in the light field images by constructing a tree structure. The top level (root) of the tree captures the common high-level details across the LFI, and other levels (children) of the tree capture specific low-level details of the LFI. Our decompression algorithm corresponds to tree traversal operations that gather the values stored at different levels of the tree. Furthermore, we use bounded integer sequence encoding, which provides random access and fast hardware decoding, for compressing the blocks of children of the tree. We have evaluated our method on 4D two-plane parameterized light fields. The compression rates vary from 0.08 to 2.5 bits per pixel (bpp), resulting in compression ratios of around 200:1 to 20:1 for a PSNR quality of 40 to 50 dB. The decompression times for decoding the blocks of LFI are 1-3 microseconds per channel on an NVIDIA GTX-960, and we can render new views with a resolution of 512×512 at 200 fps. Our overall scheme is simple to implement and involves only bit manipulations and integer arithmetic operations. Comment: Accepted for publication at the Symposium on Interactive 3D Graphics and Games (I3D '19).
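
    The details of RLFC's bounded integer sequence encoding are not given in the abstract; the sketch below only illustrates the general idea behind random-access bit packing: each block of integers is stored with just enough bits for its largest value, so any single element can be decoded with shifts and masks alone.

        def pack_blocks(values, block=16):
            """Pack non-negative ints in fixed-width blocks for random access."""
            blocks = []
            for i in range(0, len(values), block):
                chunk = values[i:i + block]
                width = max(v.bit_length() for v in chunk) or 1  # bits/value
                word = 0
                for k, v in enumerate(chunk):
                    word |= v << (k * width)                     # bit-pack
                blocks.append((width, word))
            return blocks

        def unpack_one(blocks, index, block=16):
            """Decode a single element: one block lookup plus shifts/masks."""
            width, word = blocks[index // block]
            return (word >> ((index % block) * width)) & ((1 << width) - 1)

        data = [3, 17, 5, 0, 255, 9]
        blocks = pack_blocks(data, block=4)
        assert all(unpack_one(blocks, i, block=4) == v
                   for i, v in enumerate(data))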