Data compression system
A data compression system is described in which TV PCM data for each line scan is received as a succession of multibit pixel words. All or selected bits of each word are compressed by forming difference values between successive pixel words and coding the differences of a selected number of pixel words (a block) into a fundamental sequence (FS). Depending on its length and the number of words per block, the FS is either transmitted as the compressed data, or the FS (or its complement) is used to generate a code FS (or code FS-bar). When a code FS is generated, its length is compared with that of the original block of PCM data, and the code is transmitted only if it is the shorter of the two. Selected bits per pixel word may be compressed while the remaining bits are transmitted directly, or some of them may be omitted altogether.
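As a minimal sketch of the block-coding idea (not the patented circuit itself), the following assumes a zig-zag mapping of signed differences to non-negative integers and a unary "fundamental sequence" code; the function names, block size, and 8-bit word width are illustrative.

```python
def zigzag(d):
    # Map a signed difference to a non-negative integer: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4.
    return 2 * d if d >= 0 else -2 * d - 1

def fundamental_sequence(values):
    # Unary ("fundamental sequence") code: value n becomes n zeros followed by a one.
    return "".join("0" * v + "1" for v in values)

def encode_block(pixels, bits_per_word=8):
    # Differences between successive pixel words (a real system would also carry the first word).
    diffs = [b - a for a, b in zip(pixels, pixels[1:])]
    fs = fundamental_sequence(zigzag(d) for d in diffs)
    raw_len = len(pixels) * bits_per_word
    # Transmit the FS only if it is shorter than the original block of PCM words.
    if len(fs) < raw_len:
        return ("FS", fs)
    return ("PCM", pixels)

# A smooth scan line compresses well: 19 coded bits versus 64 raw PCM bits here.
print(encode_block([100, 101, 101, 103, 102, 102, 104, 103]))
```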
Hardware architecture for lossless image compression based on context-based modeling and arithmetic coding
In this paper we present a novel hardware architecture for context-based statistical lossless image compression, as part of a dynamically reconfigurable architecture for universal lossless compression. A gradient-adjusted prediction and context-modeling algorithm is adapted to a pipelined scheme for low complexity and high throughput. The proposed system improves the image compression ratio while keeping hardware complexity low. The system is designed for a Xilinx Virtex-4 FPGA core and optimized to achieve a 123 MHz clock frequency for real-time processing.
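For reference, the gradient-adjusted prediction the paper adapts is the GAP predictor from CALIC; the sketch below follows the standard CALIC formulation (including its conventional thresholds), which may differ in detail from the pipelined hardware version.

```python
def gap_predict(W, N, NE, NW, WW, NN, NNE):
    # Horizontal and vertical gradient estimates from causal neighbors.
    dh = abs(W - WW) + abs(N - NW) + abs(N - NE)
    dv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)
    if dv - dh > 80:        # sharp horizontal edge: predict from the left neighbor
        return W
    if dh - dv > 80:        # sharp vertical edge: predict from the neighbor above
        return N
    pred = (W + N) / 2 + (NE - NW) / 4
    if dv - dh > 32:        # weaker edges: blend the prediction toward W or N
        pred = (pred + W) / 2
    elif dv - dh > 8:
        pred = (3 * pred + W) / 4
    elif dh - dv > 32:
        pred = (pred + N) / 2
    elif dh - dv > 8:
        pred = (3 * pred + N) / 4
    return pred

# In a smooth neighborhood the prediction stays close to the neighboring values.
print(gap_predict(W=100, N=102, NE=101, NW=101, WW=99, NN=103, NNE=102))
```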
Optimal modeling for complex system design
The article begins with a brief introduction to the theory describing optimal data compression systems and their performance. A brief outline is then given of a representative algorithm that employs these lessons for optimal data compression system design. The implications of rate-distortion theory for practical data compression system design are then described, followed by a description of the tensions between theoretical optimality and system practicality and a discussion of common tools used in current algorithms to resolve these tensions. Next, the generalization of rate-distortion principles to the design of optimal collections of models is presented. The discussion focuses initially on data compression systems, but later widens to describe how rate-distortion principles generalize to model design for a wide variety of modeling applications. The article ends with a discussion of the performance benefits achievable with the multiple-model design algorithms.
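As a concrete instance of the theory being surveyed (not a formula quoted from the article), the rate-distortion function of a memoryless Gaussian source with variance $\sigma^2$ under squared-error distortion is

$$R(D) = \begin{cases} \tfrac{1}{2}\log_2\dfrac{\sigma^2}{D}, & 0 < D \le \sigma^2,\\[4pt] 0, & D > \sigma^2, \end{cases}$$

which lower-bounds the rate (in bits per sample) of any compression system achieving mean squared error $D$, and thus serves as the benchmark against which practical designs are measured.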
Advanced inference in fuzzy systems by rule base compression
This paper describes a method for rule base compression of fuzzy systems. The method compresses a fuzzy system with an arbitrarily large number of rules into a smaller fuzzy system by removing the redundancy in the fuzzy rule base. As a result of this compression, the number of on-line operations during the fuzzy inference process is significantly reduced without compromising the solution. This rule base compression method significantly outperforms other known methods for fuzzy rule base reduction.
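A toy illustration of the general idea, not the authors' specific algorithm: rules that are exact duplicates, or that can never fire, add on-line inference cost without affecting the output and can be dropped. The rule representation and the None marker for a zero-membership term below are assumptions for the sketch.

```python
def compress_rule_base(rules):
    """Toy rule-base compression: drop exact duplicates and rules that can never fire.

    Each rule is (antecedents, consequent); antecedents maps input names to fuzzy-set
    labels, and a label of None marks a term with zero membership everywhere, so the
    rule cannot contribute to the inference result.
    """
    seen = set()
    compressed = []
    for antecedents, consequent in rules:
        key = (tuple(sorted(antecedents.items())), consequent)
        if key in seen:
            continue                       # redundant duplicate rule
        if None in antecedents.values():
            continue                       # rule can never fire
        seen.add(key)
        compressed.append((antecedents, consequent))
    return compressed

rules = [
    ({"temp": "high", "load": "low"}, "fan_fast"),
    ({"temp": "high", "load": "low"}, "fan_fast"),   # duplicate
    ({"temp": None, "load": "high"}, "fan_slow"),    # never fires
]
print(compress_rule_base(rules))   # -> one remaining rule
```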
