278 research outputs found

    Data compression in remote sensing applications

    Get PDF
    A survey is provided of current data compression techniques used to reduce the amount of data in remote sensing applications. The survey is far from complete, reflecting the substantial activity in this area; its purpose is to exemplify the different approaches being taken rather than to provide an exhaustive list of the proposed methods.

    VHDL design and simulation for embedded zerotree wavelet quantisation

    Get PDF
    This thesis discusses a highly effective still image compression algorithm: the Embedded Zerotree Wavelet (EZW) coding technique. The technique is simple but achieves remarkable results. The image is wavelet-transformed, symbolically coded and successively quantised, so compression and transmission/storage savings are achieved by exploiting the zerotree structure. The algorithm was first proposed by Jerome M. Shapiro in 1993; however, to minimise memory usage and speed up the EZW processor, a depth-first search is used to traverse the image rather than the breadth-first search originally described in Shapiro's paper (Shapiro, 1993). The project's primary objective is to simulate the EZW algorithm, from a basic building block of an 8 by 8 matrix up to a well-known reference image such as Lenna at 256 by 256, so that the algorithm's performance can be measured, for instance by calculating its peak signal-to-noise ratio. The software environment used for the simulation is a Very High Speed Integrated Circuits Hardware Description Language (VHDL) tool, Peak VHDL, in its PC-based version. This leads to the second phase of the project: the secondary objective is to test the algorithm at the hardware level, for example on an FPGA for rapid prototyping, if project time permits.
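
    The zerotree test and the PSNR measurement described above are compact enough to sketch. Below is an illustrative Python sketch (not the thesis's VHDL design) of a single EZW dominant pass using a depth-first descendant test, together with a PSNR calculation; the random coefficient matrix, the simplified parent/child rule for a full-depth transform with a 1x1 LL band, and the omission of the subordinate (refinement) passes are all assumptions made for brevity.

```python
# Illustrative sketch only: one simplified EZW dominant pass plus a PSNR measure.
import numpy as np

def children(i, j, n):
    """Children of coefficient (i, j) in an n x n coefficient array (full-depth
    transform with a 1x1 LL band assumed; the DC coefficient has three children)."""
    if i == 0 and j == 0:
        return [(0, 1), (1, 0), (1, 1)]
    if 2 * i >= n or 2 * j >= n:
        return []
    return [(2 * i, 2 * j), (2 * i, 2 * j + 1), (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]

def is_zerotree(coeffs, i, j, T):
    """True if (i, j) and all descendants are insignificant w.r.t. threshold T,
    checked with a depth-first traversal as in the thesis."""
    if abs(coeffs[i, j]) >= T:
        return False
    return all(is_zerotree(coeffs, ci, cj, T) for ci, cj in children(i, j, coeffs.shape[0]))

def dominant_pass(coeffs, T):
    """Assign one EZW symbol per coded coefficient: POS, NEG, ZTR or IZ.
    Descendants of a zerotree root are skipped, which is where the saving comes from."""
    n = coeffs.shape[0]
    symbols, skipped = {}, set()

    def visit(i, j):
        if (i, j) in skipped:
            return
        c = coeffs[i, j]
        if abs(c) >= T:
            symbols[(i, j)] = 'POS' if c >= 0 else 'NEG'
        elif is_zerotree(coeffs, i, j, T):
            symbols[(i, j)] = 'ZTR'
            stack = children(i, j, n)
            while stack:                      # mark all descendants as implicitly coded
                ci, cj = stack.pop()
                skipped.add((ci, cj))
                stack.extend(children(ci, cj, n))
        else:
            symbols[(i, j)] = 'IZ'
        for ci, cj in children(i, j, n):      # depth-first descent
            visit(ci, cj)

    visit(0, 0)
    return symbols

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

coeffs = np.round(8 * np.random.randn(8, 8))                 # stand-in for wavelet coefficients
T = 2 ** int(np.floor(np.log2(max(np.abs(coeffs).max(), 1))))  # initial threshold, as in Shapiro (1993)
print(dominant_pass(coeffs, T))
```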

    Map online system using internet-based image catalogue

    Get PDF
    Digital maps carry geodata, such as coordinates, that is important in topographic and thematic maps. This geodata is especially meaningful in the military field. Because the maps carry this information, the image files are large; larger images require more storage and longer loading times. These conditions make them unsuitable for an image catalogue delivered over the Internet. With compression techniques, the image size can be reduced while the image quality is preserved with little change. This report focuses on an image compression technique based on wavelet technology, which compares favourably with other current image compression techniques. The compressed images are applied in a system called Map Online, which uses an Internet-based image catalogue approach. The system allows users to buy maps online, download the maps they have bought, and search for maps based on several meaningful keywords. The system is intended for use by Jabatan Ukur dan Pemetaan Malaysia (JUPEM) in support of the organisation's vision.
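
    As a rough illustration of the wavelet-based compression idea described above, the sketch below uses the PyWavelets library to transform an image, discard small detail coefficients, and reconstruct it; the random test image, the bior4.4 wavelet, and the threshold value are assumptions for illustration, not the report's actual settings.

```python
# A minimal sketch of wavelet-based image compression by coefficient thresholding.
import numpy as np
import pywt

image = np.random.rand(256, 256) * 255            # stand-in for a scanned map tile

# Forward 2-D wavelet transform (3 decomposition levels).
coeffs = pywt.wavedec2(image, wavelet='bior4.4', level=3)

# "Compress" by zeroing small detail coefficients (hard thresholding).
threshold = 20.0
compressed = [coeffs[0]] + [
    tuple(pywt.threshold(band, threshold, mode='hard') for band in detail)
    for detail in coeffs[1:]
]

# The fraction of surviving nonzero coefficients approximates the size reduction
# an entropy coder could exploit.
total = coeffs[0].size + sum(band.size for detail in coeffs[1:] for band in detail)
nonzero = np.count_nonzero(compressed[0]) + sum(
    np.count_nonzero(band) for detail in compressed[1:] for band in detail)
print(f"retained coefficients: {nonzero / total:.1%}")

# Reconstruct and check the quality loss.
reconstructed = pywt.waverec2(compressed, wavelet='bior4.4')
mse = np.mean((image - reconstructed[:256, :256]) ** 2)
print(f"MSE after thresholding: {mse:.2f}")
```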

    Study and simulation of low rate video coding schemes

    Get PDF
    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token ring networks, and joint source/channel coder design.

    A Reference-Free Lossless Compression Algorithm for DNA Sequences Using a Competitive Prediction of Two Classes of Weighted Models

    Get PDF
    The development of efficient data compressors for DNA sequences is crucial not only for reducing the storage and the bandwidth for transmission, but also for analysis purposes. In particular, the development of improved compression models directly influences the outcome of anthropological and biomedical compression-based methods. In this paper, we describe a new lossless compressor with improved compression capabilities for DNA sequences representing different domains and kingdoms. The reference-free method uses a competitive prediction model to estimate, for each symbol, the best class of models to be used before applying arithmetic encoding. There are two classes of models: weighted context models (including substitutional tolerant context models) and weighted stochastic repeat models. Both classes of models use specific sub-programs to handle inverted repeats efficiently. The results show that the proposed method attains a higher compression ratio than state-of-the-art approaches, on a balanced and diverse benchmark, using a competitive level of computational resources. An efficient implementation of the method is publicly available, under the GPLv3 license. Peer reviewed.
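
    The competitive, per-symbol selection between model classes can be sketched with much simpler ingredients than the paper's. In the sketch below, the two "classes" are plain finite-context models of different orders (standing in for the weighted substitution-tolerant context models and repeat models), the class with the best exponentially decayed recent performance is chosen for each symbol, and the arithmetic coder is replaced by summing ideal code lengths; the model orders and decay factor are arbitrary assumptions.

```python
# Illustrative sketch of per-symbol competitive model selection for DNA.
import math
from collections import defaultdict

ALPHABET = "ACGT"

class ContextModel:
    """Order-k finite-context model with Laplace smoothing."""
    def __init__(self, order, alpha=1.0):
        self.order, self.alpha = order, alpha
        self.counts = defaultdict(lambda: defaultdict(int))

    def prob(self, context, symbol):
        ctx = context[-self.order:]
        c = self.counts[ctx]
        total = sum(c.values()) + self.alpha * len(ALPHABET)
        return (c[symbol] + self.alpha) / total

    def update(self, context, symbol):
        self.counts[context[-self.order:]][symbol] += 1

def compress_length(sequence, models, gamma=0.95):
    """Total ideal code length (bits), choosing per symbol the model class
    whose recent (exponentially decayed) code length is smallest."""
    perf = [0.0] * len(models)                # decayed code length per model class
    bits = 0.0
    for i, s in enumerate(sequence):
        context = sequence[:i]
        best = min(range(len(models)), key=lambda m: perf[m])
        bits += -math.log2(models[best].prob(context, s))
        for m, model in enumerate(models):    # all models keep learning
            perf[m] = gamma * perf[m] - math.log2(model.prob(context, s))
            model.update(context, s)
    return bits

seq = "ACGTACGTACGGTTACGTACGTTTAGGCAC" * 20
models = [ContextModel(order=2), ContextModel(order=6)]
total_bits = compress_length(seq, models)
print(f"{total_bits / len(seq):.3f} bits per base (2.0 = no compression)")
```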

    Optimal Prefix Codes for Infinite Alphabets with Nonlinear Costs

    Full text link
    Let $P = \{p(i)\}$ be a measure of strictly positive probabilities on the set of nonnegative integers. Although the countable number of inputs prevents usage of the Huffman algorithm, there are nontrivial $P$ for which known methods find a source code that is optimal in the sense of minimizing expected codeword length. For some applications, however, a source code should instead minimize one of a family of nonlinear objective functions, $\beta$-exponential means, those of the form $\log_a \sum_i p(i) a^{n(i)}$, where $n(i)$ is the length of the $i$th codeword and $a$ is a positive constant. Applications of such minimizations include a novel problem of maximizing the chance of message receipt in single-shot communications ($a<1$) and a previously known problem of minimizing the chance of buffer overflow in a queueing system ($a>1$). This paper introduces methods for finding codes optimal for such exponential means. One method applies to geometric distributions, while another applies to distributions with lighter tails. The latter algorithm is applied to Poisson distributions and both are extended to alphabetic codes, as well as to minimizing maximum pointwise redundancy. The aforementioned application of minimizing the chance of buffer overflow is also considered. Comment: 14 pages, 6 figures, accepted to IEEE Trans. Inform. Theory.
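
    For a finite alphabet, the β-exponential mean objective above can be minimised (for a > 1) by the well-known exponential variant of Huffman's algorithm, which merges the two smallest weights w1, w2 into a·(w1 + w2). The sketch below illustrates that objective and merge rule on a truncated geometric distribution standing in for the paper's infinite alphabet; the truncation length and the value of a are assumptions, and the paper's constructions for infinite alphabets are not reproduced here.

```python
# Exponential-cost Huffman coding on a finite, truncated alphabet (a > 1 case).
import heapq
import math

def exponential_huffman_lengths(p, a):
    """Codeword lengths minimising sum_i p[i] * a**lengths[i] for a > 1:
    repeatedly merge the two smallest weights into a * (w1 + w2)."""
    lengths = [0] * len(p)
    heap = [(w, i, [i]) for i, w in enumerate(p)]
    heapq.heapify(heap)
    counter = len(p)
    while len(heap) > 1:
        w1, _, leaves1 = heapq.heappop(heap)
        w2, _, leaves2 = heapq.heappop(heap)
        for leaf in leaves1 + leaves2:        # every leaf under the merge gets one bit longer
            lengths[leaf] += 1
        heapq.heappush(heap, (a * (w1 + w2), counter, leaves1 + leaves2))
        counter += 1
    return lengths

def exponential_mean(p, lengths, a):
    """The objective log_a sum_i p(i) a**n(i) from the abstract."""
    return math.log(sum(pi * a ** ni for pi, ni in zip(p, lengths)), a)

# Truncated geometric distribution as a finite stand-in for the infinite alphabet.
q = 0.5
p = [(1 - q) * q ** i for i in range(12)]
total = sum(p)
p = [pi / total for pi in p]

a = 2.0                                       # a > 1: the buffer-overflow regime
lengths = exponential_huffman_lengths(p, a)
print("lengths:", lengths)
print("beta-exponential mean:", round(exponential_mean(p, lengths, a), 4))
```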

    Slowing and Loss of Complexity in Alzheimer's EEG: Two Sides of the Same Coin?

    Get PDF
    Medical studies have shown that the EEG of Alzheimer's disease (AD) patients is “slower” (i.e., contains more low-frequency power) and less complex than that of age-matched healthy subjects. The relation between these two phenomena has not yet been studied, and they are often silently assumed to be independent. In this paper, it is shown that both phenomena are strongly related. A strong correlation between slowing and loss of complexity is observed in two independent EEG datasets: (1) EEG of predementia patients (mild cognitive impairment; MCI) and control subjects; (2) EEG of mild AD (MiAD) patients and control subjects. The two datasets are from different patients and hospitals and were obtained with different recording systems. The paper also investigates the potential of EEG slowing and loss of EEG complexity as indicators of AD onset. In particular, relative power and complexity measures are used as features to classify the MCI and MiAD patients versus age-matched control subjects. When combined with two synchrony measures (Granger causality and stochastic event synchrony), classification rates of 83% (MCI) and 98% (MiAD) are obtained. By including the compression ratios as features, slightly better classification rates are obtained than with relative power and synchrony measures alone.
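
    The two feature families used above, relative band power ("slowing") and a compression-based complexity estimate, can be sketched as follows; the sampling rate, the theta band edges, and the use of zlib as a generic stand-in for the paper's compressors are assumptions for illustration only.

```python
# Sketch of two EEG features: relative theta power and a compression-ratio complexity proxy.
import zlib
import numpy as np
from scipy.signal import welch

fs = 128                                      # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 20)            # stand-in for a 20 s EEG epoch

# Relative power in the theta band (4-8 Hz): increases with EEG "slowing".
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
band = (freqs >= 4) & (freqs < 8)
relative_theta = psd[band].sum() / psd.sum()

# Complexity proxy: compression ratio of the coarsely quantised epoch;
# a more regular (less complex) signal compresses better, giving a lower ratio.
quantised = np.digitize(eeg, np.linspace(eeg.min(), eeg.max(), 64)).astype(np.uint8)
ratio = len(zlib.compress(quantised.tobytes())) / quantised.nbytes

print(f"relative theta power: {relative_theta:.3f}")
print(f"compression ratio:    {ratio:.3f}")
# Per channel and per epoch, these values would be combined with synchrony
# measures and fed to a classifier separating patients from controls.
```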

    Adaptive scalar quantization without side information

    Get PDF
    In this paper, we introduce a novel technique for adaptive scalar quantization. Adaptivity is useful in applications, including image compression, where the statistics of the source are either not known a priori or will change over time. Our algorithm uses previously quantized samples to estimate the distribution of the source, and does not require that side information be sent in order to adapt to changing source statistics. Our quantization scheme is thus backward adaptive. We propose that an adaptive quantizer can be separated into two building blocks, namely, model estimation and quantizer design. The model estimation produces an estimate of the changing source probability density function, which is then used to redesign the quantizer using standard techniques. We introduce nonparametric estimation techniques that only assume smoothness of the input distribution. We discuss the various sources of error in our estimation and argue that, for a wide class of sources with a smooth probability density function (pdf), we provide a good approximation to a “universal” quantizer, with the approximation becoming better as the rate increases. We study the performance of our scheme and show how the loss due to adaptivity is minimal in typical scenarios. In particular, we provide examples and show how our technique can achieve signal-to-noise ratios (SNRs) within 0.05 dB of the optimal Lloyd–Max quantizer (LMQ) for a memoryless source, while achieving over 1.5 dB gain over a fixed quantizer for a bimodal source.
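
    A minimal sketch of the backward-adaptive loop described above is given below: because the decoder sees exactly the same quantised outputs as the encoder, both sides can re-estimate the source pdf from the dequantised samples and redesign the codebook with Lloyd iterations, with no side information. The Gaussian kernel smoother, block size, codebook size and bimodal test source are illustrative assumptions, not the paper's exact estimator.

```python
# Backward-adaptive scalar quantization sketch: estimate pdf from dequantised
# samples, then redesign the codebook with Lloyd iterations.
import numpy as np

def lloyd(levels, grid, pdf, iters=20):
    """Standard Lloyd iterations of reconstruction levels on a discretised pdf."""
    for _ in range(iters):
        edges = np.concatenate(([-np.inf], (levels[:-1] + levels[1:]) / 2, [np.inf]))
        cells = np.digitize(grid, edges[1:-1])
        for k in range(len(levels)):
            mass = pdf[cells == k]
            if mass.sum() > 0:
                levels[k] = np.average(grid[cells == k], weights=mass)
    return levels

def quantize_stream(samples, n_levels=8, block=256, bandwidth=0.3):
    levels = np.linspace(-1, 1, n_levels)       # initial codebook
    recon, history = [], []
    for i, x in enumerate(samples):
        k = int(np.argmin(np.abs(levels - x)))  # encode: nearest reconstruction level
        recon.append(levels[k])
        history.append(levels[k])               # the decoder knows exactly this value
        if (i + 1) % block == 0:                # backward adaptation step (both sides)
            grid = np.linspace(min(history) - 1, max(history) + 1, 512)
            pdf = sum(np.exp(-0.5 * ((grid - h) / bandwidth) ** 2) for h in history)
            levels = lloyd(np.sort(levels.copy()), grid, pdf / pdf.sum())
    return np.array(recon)

rng = np.random.default_rng(1)
source = np.where(rng.random(4000) < 0.5,
                  rng.normal(-2, 0.5, 4000),    # bimodal source, as in the abstract's example
                  rng.normal(2, 0.5, 4000))
recon = quantize_stream(source)
print("SNR (dB):", 10 * np.log10(np.var(source) / np.mean((source - recon) ** 2)))
```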

    Geometric Prior Based Deep Human Point Cloud Geometry Compression

    Full text link
    The emergence of digital avatars has driven an exponential increase in the demand for human point clouds with realistic and intricate details. Compressing such data is challenging, as the volumes involved comprise millions of points. Herein, we leverage the human geometric prior to remove geometric redundancy from point clouds, greatly improving compression performance. More specifically, the prior provides topological constraints as a geometry initialization, allowing adaptive adjustments with a compact parameter set that can be represented with only a few bits. We can therefore envisage high-resolution human point clouds as a combination of geometric priors and structural deviations. The prior is first derived from an aligned point cloud, and the difference of features is subsequently compressed into a compact latent code. The proposed framework operates in a plug-and-play fashion with existing learning-based point cloud compression methods. Extensive experimental results show that our approach significantly improves compression performance without deteriorating quality, demonstrating its promise in a variety of applications.
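
    The "prior plus structural deviation" decomposition can be illustrated very crudely without any learning: the sketch below replaces the learned human prior and the latent entropy model with a given template point cloud and uniform quantisation of per-point residuals, only to show where the bit savings come from; the template, noise level, and quantisation step are assumptions.

```python
# Crude sketch of "prior + deviation" point cloud coding with a template prior.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
template = rng.random((5000, 3))                              # stand-in for an aligned prior
point_cloud = template + rng.normal(0, 0.01, template.shape)  # actual scan = prior + deviation

# Encode: match each point to its nearest prior point and keep only the residual.
tree = cKDTree(template)
_, idx = tree.query(point_cloud)
residuals = point_cloud - template[idx]

# Quantise the residuals coarsely; these few bits per point are the compact part,
# while the prior itself costs only its alignment parameters.
step = 0.005
q = np.round(residuals / step).astype(np.int8)

# Decode: prior + dequantised residuals.
reconstructed = template[idx] + q * step
print("max reconstruction error:", np.abs(reconstructed - point_cloud).max())
print("residual bits/point (naive):", 3 * 8)   # vs 3 * 32 for raw float coordinates
```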