
    Optimal modeling for complex system design

    The article begins with a brief introduction to the theory describing optimal data compression systems and their performance. A brief outline is then given of a representative algorithm that employs these lessons for optimal data compression system design. The implications of rate-distortion theory for practical data compression system design are then described, followed by a description of the tensions between theoretical optimality and system practicality and a discussion of common tools used in current algorithms to resolve these tensions. Next, the generalization of rate-distortion principles to the design of optimal collections of models is presented. The discussion focuses initially on data compression systems, but later widens to describe how rate-distortion principles generalize to model design for a wide variety of modeling applications. The article ends with a discussion of the performance benefits achievable with the multiple-model design algorithms.
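    A standard way to operationalize these rate-distortion lessons in a concrete design is Lagrangian optimization: among candidate designs, pick the one minimizing the cost J = D + lambda * R. The sketch below is our illustration, not the article's algorithm; the Gaussian source, the uniform quantizer family, and the lambda value are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)           # unit-variance Gaussian source (assumed)

def rate_distortion(x, step):
    """Empirical rate (first-order entropy, bits/sample) and MSE
    of a mid-tread uniform scalar quantizer with the given step size."""
    q = np.round(x / step) * step
    mse = np.mean((x - q) ** 2)
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    rate = -np.sum(p * np.log2(p))
    return rate, mse

lam = 0.1                              # Lagrange multiplier: trades rate vs. distortion
costs = []
for step in np.linspace(0.05, 2.0, 40):
    r, d = rate_distortion(x, step)
    costs.append((d + lam * r, step))  # Lagrangian cost J = D + lambda * R

best_cost, best_step = min(costs)
print(f"best step {best_step:.2f}, cost {best_cost:.3f}")
```

    Sweeping lambda traces out the operational rate-distortion curve of the quantizer family; this is one of the common tools deployed coders use to negotiate the tension between theoretical optimality and practicality.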

    Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding

    Hyperspectral images have specific characteristics that an efficient compression system should exploit. In compression, wavelets have shown good adaptability to a wide range of data while remaining of reasonable complexity, and wavelet-based compression algorithms have been used successfully on several hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm is defined to find the 3-D wavelet decomposition that is optimal in a rate-distortion sense. It is then shown that a specific fixed decomposition achieves almost the same performance while being preferable in terms of complexity, and that this decomposition significantly improves on the classical isotropic decomposition. One of its most useful properties is that it permits zerotree algorithms. Various tree structures, each creating a relationship between coefficients, are compared. Two efficient zerotree coding methods (EZW and SPIHT) are adapted to this near-optimal decomposition with the best tree structure found. Performance is compared with that of a JPEG 2000 adaptation for hyperspectral images on six areas with different statistical properties.
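    To make the anisotropic idea concrete, here is a minimal numpy sketch (our illustration, not the paper's optimized decomposition) that applies more Haar decomposition levels along the spectral axis than along the spatial axes. For brevity it tracks only the approximation path, whereas a full decomposition would also retain every cross subband; the level counts and cube shape are assumptions.

```python
import numpy as np

def haar_1d(a, axis):
    """One level of the Haar DWT along a given axis; returns (approx, detail)."""
    a = np.moveaxis(a, axis, -1)
    lo = (a[..., 0::2] + a[..., 1::2]) / np.sqrt(2)
    hi = (a[..., 0::2] - a[..., 1::2]) / np.sqrt(2)
    return np.moveaxis(lo, -1, axis), np.moveaxis(hi, -1, axis)

def anisotropic_decompose(cube, spectral_levels=4, spatial_levels=2):
    """Decompose the spectral axis (0) more deeply than the spatial axes (1, 2)."""
    details = []
    approx = cube
    for _ in range(spectral_levels):      # extra levels along the wavelength axis
        approx, d = haar_1d(approx, axis=0)
        details.append(d)
    for _ in range(spatial_levels):       # standard 2-D spatial levels
        approx, d1 = haar_1d(approx, axis=1)
        approx, d2 = haar_1d(approx, axis=2)
        details.extend([d1, d2])
    return approx, details

rng = np.random.default_rng(0)
cube = rng.random((64, 32, 32))           # bands x rows x cols (assumed sizes)
approx, details = anisotropic_decompose(cube)
print(approx.shape)                       # (4, 8, 8)
```

    Deeper spectral decomposition reflects the strong inter-band correlation of hyperspectral data, which is what makes the anisotropic structure outperform the isotropic one.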

    On the rate-distortion performance and computational efficiency of the Karhunen-Loeve transform for lossy data compression

    We examine the rate-distortion performance and computational complexity of linear transforms for lossy data compression. The goal is to better understand the performance/complexity tradeoffs associated with using the Karhunen-Loeve transform (KLT) and its fast approximations. Since the optimal transform for transform coding is unknown in general, we investigate the performance penalties of using the KLT by examining cases where the KLT fails, developing a new transform that corrects the KLT's failures in those examples, and then empirically testing the performance difference between this new transform and the KLT. Experiments demonstrate that while the worst KLT can yield transform coding performance at least 3 dB worse than that of alternative block transforms, the penalty incurred on real data sets appears to be significantly smaller, at most 0.5 dB in our experiments. The KLT and the fast variations studied here range in complexity from O(n^2) to O(n log n) when coding vectors of dimension n, and we empirically investigate the rate-distortion performance tradeoffs across this range of options. For example, an algorithm with O(n^(3/2)) complexity and O(n) memory gives a 0.4 dB performance loss relative to the full KLT in our image compression experiment.
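    As a concrete reference point for this discussion, the numpy sketch below (our illustration; the first-order Gauss-Markov training source with rho = 0.9 is an assumed stand-in for image data) computes a KLT from an empirical covariance matrix, at the full O(n^2)-per-vector transform cost, and reports its high-rate transform coding gain.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 50_000
# Assumed training source: first-order Gauss-Markov with rho = 0.9,
# a common stand-in for rows of natural images.
rho = 0.9
cov = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
x = rng.multivariate_normal(np.zeros(n), cov, size=m)

# KLT basis: eigenvectors of the empirical covariance matrix.
C = np.cov(x, rowvar=False)
_, klt = np.linalg.eigh(C)

y = x @ klt                             # decorrelated coefficients
var = y.var(axis=0)
# High-rate transform coding gain: arithmetic over geometric mean
# of the coefficient variances, expressed in dB.
gain_db = 10 * np.log10(var.mean() / np.exp(np.log(var).mean()))
print(f"KLT coding gain: {gain_db:.2f} dB")
```

    Fast approximations trade some of this coding gain for cheaper transforms, which is the tradeoff the paper quantifies empirically.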

    Non-local Attention Optimized Deep Image Compression

    This paper proposes a novel Non-Local Attention Optimized Deep Image Compression (NLAIC) framework built on top of the popular variational auto-encoder (VAE) structure. The NLAIC framework embeds non-local operations in the encoders and decoders for both the image and the latent-feature probability information (known as the hyperprior) to capture both local and global correlations, and applies an attention mechanism to generate masks that weight the image and hyperprior features, implicitly adapting the bit allocation to the importance of each feature. Furthermore, both the hyperpriors and the spatial-channel neighbors of the latent features are used to improve entropy coding. The proposed model outperforms existing methods on the Kodak dataset, including learned (e.g., Balle2019, Balle2018) and conventional (e.g., BPG, JPEG2000, JPEG) image compression methods, under both the PSNR and MS-SSIM distortion metrics.
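    For readers unfamiliar with non-local operations, the PyTorch sketch below shows a generic non-local attention block of the kind such frameworks embed: global pairwise affinities over all spatial positions produce a mask that reweights the features. It is a simplified stand-in (the channel sizes, the sigmoid masking, and the single-block layout are our assumptions), not NLAIC's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalAttention(nn.Module):
    """Generic non-local block: every position attends to every other,
    and the aggregated response is turned into a feature-weighting mask."""
    def __init__(self, channels):
        super().__init__()
        inter = channels // 2
        self.theta = nn.Conv2d(channels, inter, 1)   # query projection
        self.phi = nn.Conv2d(channels, inter, 1)     # key projection
        self.g = nn.Conv2d(channels, inter, 1)       # value projection
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.phi(x).flatten(2)                     # (b, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, hw, c')
        attn = F.softmax(q @ k, dim=-1)                # global pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        mask = torch.sigmoid(self.out(y))              # attention mask in [0, 1]
        return x * mask                                # reweight features

x = torch.randn(1, 64, 16, 16)
print(NonLocalAttention(64)(x).shape)                  # torch.Size([1, 64, 16, 16])
```

    Because the mask scales features before quantization, positions deemed important keep more fidelity, which is the sense in which attention implicitly steers bit allocation.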