Compressed/reconstructed test images for CRAF/Cassini
A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near-lossless, high-compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root-mean-square error of approximately one or two gray levels on an 8-bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, indicates that additional compression is possible at a small sacrifice in fidelity.
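As a rough illustration of the transform-quantize-reconstruct pipeline the abstract describes, the following Python sketch applies a block DCT with uniform quantization and reports the reconstruction RMSE in gray levels. The block size, quantizer step, and stand-in image are illustrative assumptions, not the project's actual parameters.

import numpy as np
from scipy.fft import dctn, idctn

def compress_reconstruct(image, block=8, step=4.0):
    """Block DCT, uniform quantization, inverse DCT; returns the reconstruction."""
    h, w = image.shape
    out = np.empty((h, w), dtype=np.float64)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = image[y:y+block, x:x+block].astype(np.float64)
            coeffs = dctn(tile, norm='ortho')
            quantized = np.round(coeffs / step)                # the lossy step
            out[y:y+block, x:x+block] = idctn(quantized * step, norm='ortho')
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)     # stand-in image
rec = compress_reconstruct(img)
rmse = np.sqrt(np.mean((img.astype(float) - rec.astype(float)) ** 2))
print(f"RMSE: {rmse:.2f} gray levels")                        # abstract: ~1-2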
Hybrid compression of video with graphics in DTV communication systems
Advanced broadcast manipulation of TV sequences and enhanced user interfaces for TV systems have led to an increased amount of pre- and post-editing of video sequences in which graphical information is inserted. However, the current broadcasting chain makes no provision for efficient transmission/storage of these mixed video and graphics signals, and at this emerging stage of DTV systems, introducing new standards is not desired. Nevertheless, in the professional video communication chain between content provider and broadcaster, and locally in the DTV receiver, proprietary video-graphics compression schemes can be used to enable more efficient transmission/storage of mixed video and graphics signals. In the DTV receiver case, for example, this leads to a significant memory-cost reduction. To preserve a high overall image quality, the video and graphics data require independent coding systems matched to their specific visual and statistical properties. We introduce several efficient algorithms that support both lossless (contour, runlength, and arithmetic coding) and lossy (block predictive coding) compression of graphics data. If the graphics data are mixed with video a priori and the graphics position is unknown at compression time, an accurate detection mechanism is applied to distinguish the two signals, so that independent coding algorithms can be employed for each data type. In the DTV memory-reduction scenario, an overall bit-rate control completes the system, ensuring a fixed compression factor of 2-3 per frame without sacrificing the quality of the graphics.
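To make the lossless graphics tools concrete, here is a minimal runlength coding sketch in Python; in a scheme like the one described, such runs would typically feed an arithmetic coder. The function names are illustrative, not from the paper.

def rle_encode(pixels):
    """Encode a 1-D pixel sequence as (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs):
    return [value for value, length in runs for _ in range(length)]

scanline = [0, 0, 0, 0, 255, 255, 0, 0, 0]    # graphics rows have long runs
runs = rle_encode(scanline)
print(runs)                                   # [(0, 4), (255, 2), (0, 3)]
assert rle_decode(runs) == scanline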
A joint coding concept for runlength and charge-limited channels
By making the conventional (d,k) constraint time-dependent as a function of the channel process, we define the wide-sense runlength-limited (RLL) channel. With the help of this new concept, several existing constraints can be described in an alternative way and many new ones can be constructed. A bit stuff algorithm is suggested for coding wide-sense RLL channels. We determine the rate of the bit stuff algorithm as a function of the stuffing probability, and we present a few examples of calculating the rate of different constrained codes complying with the newly introduced constraint.
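The bit stuff idea is easy to illustrate for the classical d-constraint (at least d 0's between consecutive 1's); the paper's wide-sense, time-dependent variant is not reproduced in this Python sketch.

def stuff_encode(bits, d):
    """Insert d zeros after every 1, so any two 1's are separated by >= d zeros."""
    out = []
    for b in bits:
        out.append(b)
        if b == 1:
            out.extend([0] * d)
    return out

def stuff_decode(coded, d):
    out, i = [], 0
    while i < len(coded):
        out.append(coded[i])
        i += d + 1 if coded[i] == 1 else 1    # skip the stuffed zeros
    return out

data = [1, 1, 0, 1, 0, 0, 1]
coded = stuff_encode(data, d=2)
print(coded)                 # [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
assert stuff_decode(coded, d=2) == data
print(f"rate: {len(data) / len(coded):.2f}")  # data bits per coded bit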
Group Testing with Runlength Constraints for Topological Molecular Storage
Motivated by applications in topological DNA-based data storage, we introduce and study a novel setting of Non-Adaptive Group Testing (NAGT) with runlength constraints on the columns of the test matrix, in the sense that any two 1's must be separated by a run of at least d 0's. We describe and analyze a probabilistic construction of a runlength-constrained scheme in the zero-error and vanishing error settings, and show that the number of tests required by this construction is optimal up to logarithmic factors in the runlength constraint d and the number of defectives k in both cases. Surprisingly, our results show that runlength-constrained NAGT is not more demanding than unconstrained NAGT when d=O(k), and that for almost all choices of d and k it is not more demanding than NAGT with a column Hamming weight constraint only. Towards obtaining runlength-constrained Quantitative NAGT (QNAGT) schemes with good parameters, we also provide lower bounds for this setting and a nearly optimal probabilistic construction of a QNAGT scheme with a column Hamming weight constraint.
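As a toy illustration (not the paper's construction), the following Python sketch samples test-matrix columns that satisfy the runlength constraint by forcing d 0's after every 1; all parameters are illustrative.

import random

def constrained_column(t, d, p):
    """Sample a length-t 0/1 column; after each 1, skip d positions."""
    col, i = [0] * t, 0
    while i < t:
        if random.random() < p:
            col[i] = 1
            i += d + 1           # enforce the runlength constraint
        else:
            i += 1
    return col

def is_valid(col, d):
    ones = [i for i, b in enumerate(col) if b]
    return all(b - a > d for a, b in zip(ones, ones[1:]))

random.seed(0)
t, d, n = 20, 2, 5               # tests, runlength constraint, items
matrix = [constrained_column(t, d, p=0.3) for _ in range(n)]
assert all(is_valid(col, d) for col in matrix)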
Implementation issues in source coding
An edge-preserving image coding scheme that can operate in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars Observer spectral data, and it can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.
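The lossless DPCM idea the scheme builds on can be sketched in a few lines of Python; the previous-sample predictor used here is the simplest possible choice, not the edge-preserving predictor the report describes.

def dpcm_encode(samples):
    prev, residuals = 0, []
    for s in samples:
        residuals.append(s - prev)    # residuals are small for smooth data
        prev = s
    return residuals

def dpcm_decode(residuals):
    prev, samples = 0, []
    for r in residuals:
        prev += r
        samples.append(prev)
    return samples

row = [100, 102, 103, 103, 180, 181]  # an "edge" at the 103 -> 180 jump
res = dpcm_encode(row)
print(res)                            # [100, 2, 1, 0, 77, 1]
assert dpcm_decode(res) == row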
Stack-run adaptive wavelet image compression
We report on the development of an adaptive wavelet image coder based on a stack-run representation of the quantized coefficients. The coder works by selecting an optimal wavelet packet basis for the given image and encoding the quantization indices for significant coefficients, together with the zero runs between them, using a 4-ary arithmetic coder. Because the coder exploits the redundancies present within individual subbands, its addressing complexity is much lower than that of wavelet zerotree coding algorithms. Experimental results show coding gains of up to 1.4 dB over the benchmark wavelet coding algorithm.
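A toy Python sketch of the run/value bookkeeping behind a stack-run representation follows; the paper's actual 4-ary symbol alphabet and arithmetic coder are omitted, so this only shows how a subband is reduced to zero-run lengths and significant values.

def run_value_pairs(coeffs):
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))    # zero-run length, then the value
            run = 0
    if run:
        pairs.append((run, None))     # trailing zeros carry no value
    return pairs

subband = [0, 0, 5, 0, 0, 0, -3, 1, 0, 0]   # toy quantized coefficients
print(run_value_pairs(subband))             # [(2, 5), (3, -3), (0, 1), (2, None)]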
Comparison of CELP speech coder with a wavelet method
This thesis compares the speech quality of the Code-Excited Linear Prediction (CELP, Federal Standard 1016) speech coder with that of a new wavelet method for compressing speech. The two are compared through subjective listening tests. The test signals are clean signals (i.e., with no background noise), speech signals with room noise, and speech signals with artificial noise added. Results indicate that for clean signals and signals with predominantly voiced components the CELP standard performs better than the wavelet method, whereas for signals with room noise the wavelet method performs much better than CELP. For signals with artificial noise added, the results are mixed, depending on the noise level: CELP performs better at low noise levels and the wavelet method at higher ones.
A high-speed distortionless predictive image-compression scheme
A high-speed distortionless predictive image-compression scheme is introduced, based on differential pulse code modulation (DPCM) output modeling combined with efficient source-code design. Experimental results show that this scheme achieves compression very close to the difference entropy of the source.
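The benchmark cited here, the difference entropy, is the first-order entropy of the prediction differences, which lower-bounds the rate of any memoryless code for the residuals. A short Python sketch with synthetic data:

import math
from collections import Counter

def entropy_bits(symbols):
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

samples = [100, 102, 103, 103, 104, 106, 106, 107]   # synthetic scanline
diffs = [b - a for a, b in zip(samples, samples[1:])]
print(f"difference entropy: {entropy_bits(diffs):.2f} bits/sample")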
A new adaptive interframe transform coding using directional classification
Wavelet-based distributed source coding of video
Publication in the conference proceedings of EUSIPCO, Antalya, Turkey, 200
- …