    Efficient SAR Raw Data Compression in Frequency Domain

    SAR raw data compression is necessary to reduce the huge volume of SAR data held in on-board memory on a satellite, space shuttle or aircraft, and to limit the volume of the later downlink to a ground station. In view of interferometric and polarimetric applications of SAR data, it becomes increasingly important to pay attention to phase errors caused by data compression. Herein, a detailed comparison of block adaptive quantization in the time domain (BAQ) and in the frequency domain (FFT-BAQ) is given. Including raw data compression in the processing chain allows efficient use of the FFT-BAQ and makes its implementation for on-board data compression feasible. The FFT-BAQ outperforms the BAQ in terms of signal-to-quantization-noise ratio and phase error, and allows a direct decimation of the oversampled data, equivalent to FIR filtering in the time domain. Impacts on interferometric phase and coherence are also given.
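    The core of BAQ, as described in the abstract, is to adapt a coarse quantizer to the local signal statistics block by block. A minimal sketch of that idea (the 3-sigma clipping range, block handling, and exact scaling are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def baq(block, bits=3):
    """Block Adaptive Quantization (sketch): normalize a block of raw samples
    by its own standard deviation, then uniformly quantize to 2**bits levels
    spanning roughly +/-3 sigma. Only the integer codes and one sigma per
    block need to be stored or downlinked."""
    sigma = float(block.std())
    sigma = sigma if sigma > 0 else 1.0           # guard against constant blocks
    levels = 2 ** bits
    # map +/-3 sigma onto the available levels, round, and clip
    q = np.clip(np.round(block / sigma * (levels / 6)),
                -levels // 2, levels // 2 - 1)
    return q.astype(np.int8), sigma

def debaq(q, sigma, bits=3):
    """Invert the normalization to reconstruct approximate sample values."""
    levels = 2 ** bits
    return q.astype(float) * sigma * (6 / levels)
```

    The FFT-BAQ variant discussed in the paper would apply the same per-block quantization to the FFT of the raw data, where discarding out-of-band frequency bins performs the decimation mentioned in the abstract.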

    Video data compression using artificial neural network differential vector quantization

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. Because vector quantization produces fixed-length codes, the need for Huffman coding is eliminated, resulting in greater robustness to channel bit errors than methods that use variable-length codes.
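    The two ingredients named in this abstract can be sketched together: frequency-sensitive competitive learning to train the codebook, and differential VQ to quantize prediction residuals rather than raw blocks. This is a minimal illustrative version; the block size, win-count weighting, and predictor here are assumptions, not the paper's design:

```python
import numpy as np

def fscl_codebook(train, k=8, epochs=10, seed=0):
    """Frequency-Sensitive Competitive Learning (sketch): the distance to each
    codeword is scaled by its win count, so under-used codewords eventually
    win; the winner moves toward the input with a shrinking step size."""
    rng = np.random.default_rng(seed)
    cb = train[rng.choice(len(train), size=k, replace=False)].astype(float).copy()
    wins = np.ones(k)
    for _ in range(epochs):
        for v in train:
            j = int(np.argmin(np.linalg.norm(cb - v, axis=1) * wins))
            wins[j] += 1.0
            cb[j] += (v - cb[j]) / wins[j]    # learning rate 1/wins[j]
    return cb

def dvq_encode(blocks, prediction, cb):
    """Differential VQ (sketch): vector-quantize the residual between each
    block and its prediction; residuals concentrate energy near zero, which
    helps preserve edges at a given codebook size."""
    res = blocks - prediction
    return np.array([int(np.argmin(np.linalg.norm(cb - r, axis=1))) for r in res])
```

    Because every residual maps to one fixed-length codebook index, a single corrupted index damages only one block, which is the bit-error robustness the abstract contrasts with variable-length Huffman codes.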

    Hashing for Similarity Search: A Survey

    Similarity search (nearest neighbor search) is the problem of finding, in a large database, the data items whose distances to a query item are smallest. Various methods have been developed to address this problem, and recently considerable effort has been devoted to approximate search. In this paper, we present a survey of one of the main solutions, hashing, which has been widely studied since the pioneering work on locality sensitive hashing. We divide the hashing algorithms into two main categories: locality sensitive hashing, which designs hash functions without exploiting the data distribution, and learning to hash, which learns hash functions according to the data distribution. We review them from various aspects, including hash function design, distance measure, and the search scheme in the hash coding space.
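    The data-independent branch of the survey's taxonomy can be illustrated with the classic random-hyperplane LSH family for cosine similarity (sometimes called SimHash): each random hyperplane contributes one sign bit, and vectors at a small angle share most bits. The code count and dimensions below are arbitrary choices for illustration:

```python
import numpy as np

def simhash(x, planes):
    """Random-hyperplane LSH (sketch): project onto random hyperplanes and
    keep only the sign bits. P(bits agree) = 1 - angle(x, y) / pi, so the
    Hamming distance between codes estimates the angular distance."""
    return (planes @ x > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.sum(a != b))
```

    At search time, items are bucketed by their code (or by sub-codes over several tables), so a query only needs to be compared against items in colliding buckets rather than the whole database.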

    Information preserved guided scan pixel difference coding for medical images

    This paper analyzes the information content of medical images, with 3-D MRI images as an example, in terms of information entropy. The results of the analysis justify the use of Pixel Difference Coding for preserving all information contained in the original pictures, i.e., lossless coding. The experimental results also indicate that a compression ratio of CR = 2:1 can be achieved under the lossless constraint. A practical implementation of Pixel Difference Coding that allows interactive retrieval of a local ROI (Region of Interest), while keeping the code rate near the information-entropy lower bound, is discussed.
    Comment: 5 pages and 5 figures. Published in IEEE WESCANEX proceedings
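    The principle behind pixel difference coding is that neighboring pixels are correlated, so differences have a much more peaked histogram (lower zeroth-order entropy) than raw intensities, yet the transform is exactly invertible. A minimal row-wise sketch, assuming a simple left-neighbor predictor rather than whatever scan order the paper's guided-scan implementation uses:

```python
import numpy as np

def pde_encode(img):
    """Pixel difference coding (sketch): keep the first column, replace every
    other pixel with its difference from the left neighbor. Exactly
    invertible, hence lossless."""
    d = img.astype(np.int32).copy()
    d[:, 1:] = d[:, 1:] - d[:, :-1]   # RHS is evaluated before assignment
    return d

def pde_decode(d):
    """Invert by cumulative sum along each row."""
    return np.cumsum(d, axis=1).astype(np.uint8)

def entropy_bits(a):
    """Zeroth-order entropy in bits/pixel, estimated from the histogram."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

    An entropy coder applied to the differences then approaches the entropy lower bound the abstract refers to, and restricting the decode to a rectangle of rows and columns supports ROI retrieval.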