
    Real-time demonstration hardware for enhanced DPCM video compression algorithm

    The lack of available wideband digital links, as well as the complexity of implementing bandwidth-efficient digital video CODECs (encoder/decoder), has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high-quality video from space and has therefore developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, a non-uniform quantizer, and a multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast-quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor, and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. 
Hardware implementation of the multilevel Huffman encoder/decoder is currently under development, along with implementation of a buffer control algorithm to accommodate the variable data-rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high-quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems, or cable television distribution to system headends and direct-to-the-home).
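The pipeline this abstract describes (a fixed predictor, a non-uniform quantizer, and a Huffman code over the quantizer levels) can be sketched in software. This is an illustrative model, not NASA's hardware: the level table, the 0-255 clamping range, and the Huffman construction are assumptions chosen for clarity, operating on a 1-D run of pixel values.

```python
import heapq
from collections import Counter

# Non-uniform quantizer: levels are dense near zero, where prediction
# errors cluster, and sparse for large errors (values are illustrative).
LEVELS = [-48, -16, -6, -2, 0, 2, 6, 16, 48]

def quantize(diff):
    """Map a prediction error to the nearest quantizer level."""
    return min(LEVELS, key=lambda v: abs(v - diff))

def dpcm_encode(samples):
    recon = 0                        # decoder-side reconstruction of previous pixel
    out = []
    for s in samples:
        pred = recon                 # non-adaptive previous-pixel predictor
        level = quantize(s - pred)   # quantize the prediction error
        out.append(level)
        recon = max(0, min(255, pred + level))  # track the decoder exactly (no drift)
    return out

def dpcm_decode(levels):
    recon, out = 0, []
    for level in levels:
        recon = max(0, min(255, recon + level))
        out.append(recon)
    return out

def huffman_lengths(symbols):
    """Code length per symbol via standard Huffman merging of the two rarest nodes."""
    freqs = Counter(symbols)
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:               # degenerate single-symbol case
        return {next(iter(freqs)): 1}
    tick = len(heap)                 # unique tiebreaker so dicts are never compared
    while len(heap) > 1:
        na, _, a = heapq.heappop(heap)
        nb, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (na + nb, tick,
                              {s: d + 1 for s, d in {**a, **b}.items()}))
        tick += 1
    return heap[0][2]
```

Because common (small) error levels receive short codewords, the average bits/pixel falls below the fixed-length cost of straight DPCM; the variable-rate output is what makes the buffer control algorithm mentioned above necessary.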

    Speech Compression by using Adaptive Differential Pulse Code Modulation (ADPCM) technique With Microcontroller

    Compression is the process of reducing an input bit stream (here, a speech signal) to a smaller number of bits while preserving quality. An analog signal is continuous, and once digitized at full resolution it occupies considerable memory; compression stores the same data in a reduced format with comparable quality by eliminating redundant information. The main purpose of speech compression is to reduce the number of bits needed to transmit the original data from one place to another, and to store it, while maintaining quality close to that of the original signal. Analog-to-digital conversion (ADC) plays an important role here, because it produces the quantized sample stream, and successive speech samples exhibit high correlation. Adaptive Differential Pulse Code Modulation (ADPCM) exploits this correlation: rather than encoding each sample directly, it encodes the difference between the predicted sample and the actual sample, as explained in detail below. ADPCM is thus an efficient method for compressing a signal by reducing the number of bits per sample while maintaining signal quality. Many compression techniques exist, but some do not preserve the actual quality of the signal after compression; such techniques are called lossy, and with a poor lossy algorithm the human ear cannot recognize the words. The human voice occupies roughly 300 Hz to 3400 Hz. ADPCM is a well-known encoding scheme used for speech processing. 
This project focuses on simplifying the technique so that hardware complexity can be reduced for portable speech compression and decompression devices. The project uses an ARM controller, the heart of the system, which provides a 10-bit ADC channel, i.e., up to 1024 quantization levels; these samples feed the ADPCM algorithm. The ARM controller also provides a digital-to-analog conversion (DAC) pin for comparing the reconstructed signal with the original, which reduces the external circuitry required. A digital storage oscilloscope (DSO) and a personal computer are used to examine the signal behaviour. Compression saves memory and transmission time at the same quality, and the stored data can be retrieved from memory when needed. Many government offices, private colleges, laboratories, and industries need to store original data in computers or memory devices; storing it uncompressed requires many memory devices and wastes money. Using adaptive differential pulse code modulation with a microcontroller, the project achieves compression of the signal while preserving its quality.
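The core ADPCM loop the abstract outlines — predict, take the difference from the actual sample, quantize it with a step size that adapts to the signal, and mirror the decoder's state inside the encoder — can be sketched as below. The 2-bit magnitude, sign bit, and step-size factors are hypothetical choices for illustration, not the project's firmware (real codecs such as IMA ADPCM use tabulated step sizes).

```python
STEP_UP, STEP_DOWN = 2.0, 0.75   # illustrative step-size adaptation factors

def adpcm_codec(samples, step0=4.0):
    """Encode and decode in lockstep; returns ((sign, mag) codes, reconstruction)."""
    step, pred = step0, 0.0
    codes, recon = [], []
    for s in samples:
        diff = s - pred                        # prediction error, not the raw sample
        sign = 1 if diff >= 0 else -1
        mag = min(3, int(abs(diff) / step))    # 2-bit magnitude -> 3 bits/sample total
        codes.append((sign, mag))
        pred += sign * mag * step              # decoder applies this same update
        if mag == 3:
            step = min(step * STEP_UP, 512.0)    # quantizer saturated: widen steps
        elif mag == 0:
            step = max(step * STEP_DOWN, 1.0)    # error tiny: refine steps
        recon.append(pred)
    return codes, recon
```

Each sample costs 3 bits here instead of the 10 bits the ADC produces, and because the encoder updates `pred` and `step` exactly as the decoder will, the two stay synchronized without transmitting any side information.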

    New Algorithms and Lower Bounds for Sequential-Access Data Compression

    This thesis concerns sequential-access data compression, i.e., compression by algorithms that read the input one or more times from beginning to end. In one chapter we consider adaptive prefix coding, for which we must read the input character by character, outputting each character's self-delimiting codeword before reading the next one. We show how to encode and decode each character in constant worst-case time while producing an encoding whose length is worst-case optimal. In another chapter we consider one-pass compression with memory bounded in terms of the alphabet size and context length, and prove a nearly tight tradeoff between the amount of memory we can use and the quality of the compression we can achieve. In a third chapter we consider compression in the read/write streams model, which allows a number of passes and an amount of memory both polylogarithmic in the size of the input. We first show how to achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for achieving good grammar-based compression. Finally, we show that two streams are necessary and sufficient for achieving entropy-only bounds. Comment: draft of PhD thesis.
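The adaptive prefix coding setting of the first chapter — emit a self-delimiting codeword for each character before reading the next, with encoder and decoder maintaining identical statistics so no code table is transmitted — can be illustrated with a toy coder. This is not the thesis's constant-time, worst-case-optimal construction; it is a simple assumed scheme in which symbols are ranked by frequency seen so far and the rank is emitted as an Elias gamma codeword (a prefix-free, self-delimiting code).

```python
def elias_gamma(n):                       # n >= 1; prefix-free and self-delimiting
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def elias_gamma_decode(bits, i):
    z = 0
    while bits[i] == "0":                 # count leading zeros = extra bit length
        z, i = z + 1, i + 1
    return int(bits[i:i + z + 1], 2), i + z + 1

def ranks(counts, alphabet):
    """Symbols ordered by descending count; ties broken by alphabet order."""
    return sorted(alphabet, key=lambda c: (-counts[c], alphabet.index(c)))

def encode(text, alphabet):
    counts = {c: 0 for c in alphabet}
    out = []
    for ch in text:
        out.append(elias_gamma(ranks(counts, alphabet).index(ch) + 1))
        counts[ch] += 1                   # update AFTER coding, as the decoder will
    return "".join(out)

def decode(bits, length, alphabet):
    counts = {c: 0 for c in alphabet}
    out, i = [], 0
    for _ in range(length):
        n, i = elias_gamma_decode(bits, i)
        ch = ranks(counts, alphabet)[n - 1]
        out.append(ch)
        counts[ch] += 1
    return "".join(out)
```

Frequent symbols drift toward rank 1 and cost a single bit, so the code adapts to the source; achieving this effect in constant worst-case time per character, with worst-case-optimal length, is the nontrivial result the chapter proves.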

    Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes

    I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively simpler and more beautiful. Curiosity is the desire to create or discover more non-random, non-arbitrary, regular data that is novel and surprising not in the traditional sense of Boltzmann and Shannon but in the sense that it allows for compression progress because its regularity was not yet known. This drive maximizes interestingness, the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and (since 1990) artificial systems. Comment: 35 pages, 3 figures; based on KES 2008 keynote and ALT 2007 / DS 2007 joint invited lecture.
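The "compression progress" quantity can be made concrete with an off-the-shelf compressor standing in for the observer's adaptive model. In this loose numerical illustration (an assumption for clarity, not the paper's formal definition), zlib's preset-dictionary feature plays the role of what the observer has already learned, and the reward is the drop in a new chunk's compressed size once prior observations serve as context.

```python
import zlib

def compressed_size(data, context=b""):
    """Bytes needed for `data`, optionally given `context` as a preset dictionary."""
    if context:
        c = zlib.compressobj(level=9, zdict=context[-32768:])  # zlib window limit
    else:
        c = zlib.compressobj(9)
    return len(c.compress(data) + c.flush())

def compression_progress(history, chunk):
    """Reward proxy: bytes saved on `chunk` by having "learned" from `history`."""
    return compressed_size(chunk) - compressed_size(chunk, context=history)
```

Data whose regularities are already captured by the history yields little progress, and purely random data yields none; the reward peaks for data that is regular but whose regularity was not yet known, matching the paper's notion of interestingness.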