
    Data compression for satellite images

    An efficient data compression system is presented for satellite pictures and two grey-level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background-skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
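    The core idea behind delta-style coding of scanlines — transmitting differences between adjacent picture elements rather than raw values, so that the strong inter-pixel correlation yields small, cheaply-coded residuals — can be sketched as below. This is a minimal single-delta illustration of the principle, not the paper's double delta scheme; function names are my own.

```python
def delta_encode(pixels):
    """Encode a scanline as its first value plus successive differences."""
    if not pixels:
        return []
    out = [pixels[0]]
    for prev, cur in zip(pixels, pixels[1:]):
        out.append(cur - prev)  # adjacent pixels are correlated, so diffs are small
    return out

def delta_decode(deltas):
    """Invert delta_encode by cumulative summation."""
    out = []
    total = 0
    for d in deltas:
        total += d
        out.append(total)
    return out

line = [100, 102, 101, 101, 105]
encoded = delta_encode(line)      # [100, 2, -1, 0, 4]
assert delta_decode(encoded) == line
```

    The residuals cluster near zero, which is what makes a short entropy code (and the background-skipping refinement mentioned above) effective.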

    Data compression experiments with LANDSAT thematic mapper and Nimbus-7 coastal zone color scanner data

    A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. The SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image the LCP is 1.89, though when only a cloud-free section of the image is considered the LCP increases to 3.48. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using the Bayes' classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.
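    A common first-order proxy for lossless compression potential is the ratio of the raw bit depth to the zeroth-order entropy of the pixel histogram. The sketch below shows that proxy only; the LCP figures of 2.79 and 1.89 quoted above come from the paper's own segmentation-based measure, and the function name is mine.

```python
import math
from collections import Counter

def lossless_compression_potential(pixels, bits_per_pixel=8):
    """Estimate compression potential as raw bit depth divided by the
    zeroth-order (histogram) entropy of the pixel values."""
    counts = Counter(pixels)
    n = len(pixels)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return bits_per_pixel / entropy if entropy > 0 else float('inf')

# Two equiprobable grey levels have 1 bit of entropy, so 8-bit pixels
# could in principle shrink by a factor of 8.
ratio = lossless_compression_potential([0, 1] * 50)   # 8.0
```

    Real coders exploit spatial structure as well, which is why segmentation-based schemes like SCC can beat this histogram-only bound.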

    Weighted universal image compression

    We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
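    The two-stage structure can be made concrete with a toy nearest-codeword vector quantizer: stage one picks, per block, the codebook with the lowest distortion; stage two encodes each vector with that codebook. This is an illustration of the structure only, with hypothetical function names — the weighted universal codes in the paper are jointly optimized, which this sketch does not attempt.

```python
def sqdist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def quantize(vector, codebook):
    """Map a vector to the index of its nearest codeword."""
    return min(range(len(codebook)), key=lambda i: sqdist(vector, codebook[i]))

def two_stage_encode(block, codebooks):
    """Stage 1: choose the codebook with the lowest total distortion for
    this block.  Stage 2: encode each vector with the chosen codebook.
    Returns (codebook_index, list_of_codeword_indices)."""
    def block_cost(cb):
        return sum(min(sqdist(v, cw) for cw in cb) for v in block)
    best = min(range(len(codebooks)), key=lambda i: block_cost(codebooks[i]))
    return best, [quantize(v, codebooks[best]) for v in block]

codebooks = [[(0, 0), (10, 10)], [(100, 100), (200, 200)]]
choice, indices = two_stage_encode([(101, 99), (198, 203)], codebooks)
# the bright block selects the second codebook: choice == 1, indices == [0, 1]
```

    The first-stage index is the extra side information that a single-stage code does not pay for; the gain comes when no single codebook fits all blocks.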

    The 1995 Science Information Management and Data Compression Workshop

    This document is the proceedings from the 'Science Information Management and Data Compression Workshop,' which was held on October 26-27, 1995, at the NASA Goddard Space Flight Center, Greenbelt, Maryland. The Workshop explored promising computational approaches for handling the collection, ingestion, archival, and retrieval of large quantities of data in future Earth and space science missions. It consisted of fourteen presentations covering a range of information management and data compression approaches that are being or have been integrated into actual or prototypical Earth or space science data information systems, or that hold promise for such an application. The Workshop was organized by James C. Tilton and Robert F. Cromp of the NASA Goddard Space Flight Center.

    Study of information transfer optimization for communication satellites

    The results are presented of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high data rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes, and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which exclude nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding with emphasis on video data compression is reviewed, and the experimental facility utilized to test promising techniques is fully described.
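    Analyses on an additive white Gaussian noise channel model typically come down to simulating symbol decisions in noise. The Monte-Carlo sketch below estimates the bit error rate of uncoded BPSK over AWGN; it is my minimal illustration of that idealized channel, not the study's actual satellite channel with filters, limiter, and TWT.

```python
import math
import random

def bpsk_awgn_ber(num_bits, ebn0_db, seed=1):
    """Monte-Carlo bit error rate for uncoded BPSK over an AWGN channel."""
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))  # noise std dev for unit-energy symbols
    errors = 0
    for _ in range(num_bits):
        bit = rng.randrange(2)
        symbol = 1.0 if bit else -1.0            # map bit to antipodal symbol
        received = symbol + rng.gauss(0, sigma)  # add white Gaussian noise
        decided = 1 if received > 0 else 0       # hard decision at the detector
        errors += (decided != bit)
    return errors / num_bits
```

    At moderate Eb/N0 the simulated error rate approaches the theoretical Q-function curve, which is the baseline any channel coding scheme must beat.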

    a brief overview

    We give a brief survey of the framework, relevant issues and existing methods in the field of digital image compression. Except for Section 3, where additional references are given, we basically follow the exposition of the field as presented in: Digital Pictures: Representation and Compression, Arun N. Netravali and Barry G. Haskell, Bell Laboratories, Murray Hill, New Jersey, Plenum Press, London and New York, 1988. All figures that contain page numberings are taken from the above.

    Signal processing techniques for mobile multimedia systems

    Recent trends in wireless communication systems show a significant demand for the delivery of multimedia services and applications over mobile networks - mobile multimedia - like video telephony, multimedia messaging, mobile gaming, interactive and streaming video, etc. However, despite the ongoing development of key communication technologies that support these applications, the communication resources and bandwidth available to wireless/mobile radio systems are often severely limited. It is well known that these bottlenecks are inherently due to the processing capabilities of mobile transmission systems, and the time-varying nature of wireless channel conditions and propagation environments. Therefore, new ways of processing and transmitting multimedia data over mobile radio channels have become essential, which is the principal focus of this thesis. In this work, the performance and suitability of various signal processing techniques and transmission strategies in the application of multimedia data over wireless/mobile radio links are investigated. The proposed transmission systems for multimedia communication employ different data encoding schemes which include source coding in the wavelet domain, transmit diversity coding (space-time coding), and adaptive antenna beamforming (eigenbeamforming). By integrating these techniques into a robust communication system, the quality (SNR, etc.) of multimedia signals received on mobile devices is maximised while mitigating the fast fading and multi-path effects of mobile channels. To support the transmission of high data-rate multimedia applications, a well known multi-carrier transmission technology known as Orthogonal Frequency Division Multiplexing (OFDM) has been implemented. As shown in this study, this results in significant performance gains when combined with other signal-processing techniques such as space-time block coding (STBC).
To optimise signal transmission, a novel unequal adaptive modulation scheme for the communication of multimedia data over MIMO-OFDM systems has been proposed. In this system, discrete wavelet transform/subband coding is used to compress data into their respective low-frequency and high-frequency components. Unlike traditional methods, however, data representing the low-frequency components are processed and modulated separately as they are more sensitive to the distortion effects of mobile radio channels. To make use of a desirable subchannel state, such that the quality (SNR) of the multimedia data recovered at the receiver is optimised, we employ a lookup matrix-adaptive bit and power allocation (LM-ABPA) algorithm. Apart from improving the spectral efficiency of OFDM, the modified LM-ABPA scheme sorts and allocates subcarriers with the highest SNR to low-frequency data and the remaining to the least important data. To maintain a target system SNR, the LM-ABPA loading scheme assigns appropriate signal constellation sizes and transmit power levels (modulation type) across all subcarriers and is adapted to the varying channel conditions such that the average system error-rate (SER/BER) is minimised. When configured for a constant data-rate load, simulation results show significant performance gains over non-adaptive systems. In addition to the above studies, the simulation framework developed in this work is applied to investigate the performance of other signal processing techniques for multimedia communication such as blind channel equalization, and to examine the effectiveness of a secure communication system based on a logistic chaotic generator (LCG) for chaos shift-keying (CSK).
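The subcarrier-sorting step of the LM-ABPA idea — give the strongest subcarriers to the perceptually important low-frequency subband data — can be sketched as below. This is a simplified illustration of the allocation rule only, with a hypothetical function name; the thesis's full algorithm also performs lookup-matrix bit and power loading per subcarrier.

```python
def allocate_subcarriers(subcarrier_snrs, num_low_freq_streams):
    """Assign the highest-SNR subcarriers to the low-frequency (most
    important) data streams and the remainder to high-frequency data.
    Returns (low_freq_indices, high_freq_indices)."""
    order = sorted(range(len(subcarrier_snrs)),
                   key=lambda i: subcarrier_snrs[i], reverse=True)
    return order[:num_low_freq_streams], order[num_low_freq_streams:]

snrs = [12.1, 3.4, 25.0, 8.7, 19.2]           # per-subcarrier SNR estimates (dB)
low, high = allocate_subcarriers(snrs, 2)     # low-frequency data gets 2 and 4
```

Because channel conditions vary, the allocation must be recomputed as the SNR estimates change, which is what makes the scheme adaptive.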

    Conjoint probabilistic subband modeling

    Thesis (Ph.D.), Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1997. Includes bibliographical references (leaves 125-133). By Ashok Chhabedia Popat.

    Studies and simulations of the DigiCipher system

    During this period, development of simulators for the various high definition television (HDTV) systems proposed to the FCC was continued. The FCC has indicated that it wants the various proposers to collaborate on a single system. Based on all available information, this system will look very much like the advanced digital television (ADTV) system with major contributions only from the DigiCipher system. The results of our simulations of the DigiCipher system are described. This simulator was tested using test sequences from the MPEG committee, and the results are extrapolated to HDTV video sequences. Once again, some caveats are in order. The sequences used for testing the simulator and generating the results are those used for testing the MPEG algorithm. These sequences are of much lower resolution than HDTV sequences would be, and therefore the extrapolations are not totally accurate. One would expect to get significantly higher compression in terms of bits per pixel with sequences of higher resolution. However, the simulator itself is a valid one, and should HDTV sequences become available, they could be used directly with the simulator. A brief overview of the DigiCipher system is given. Some coding results obtained using the simulator are examined and compared to those obtained using the ADTV system. The results are evaluated in the context of the CCSDS specifications, and suggestions are made as to how the DigiCipher system could be implemented in the NASA network. Simulations such as the ones reported can be biased depending on the particular source sequence used. In order to get more complete information about the system, one needs a reasonable set of models which mirror the various kinds of sources encountered during video coding. A set of models which can be used to effectively model the various possible scenarios is provided.
As this is somewhat tangential to the other work reported, the results are included as an appendix.