
    Encoding of probability distributions for Asymmetric Numeral Systems

    Many data compressors regularly encode probability distributions for entropy coding, a task that calls for minimum-description-length-style optimization. Canonical prefix/Huffman coding usually just writes the lengths of the bit sequences, thereby approximating probabilities with powers of 2. Operating on more accurate probabilities usually allows better compression ratios and is possible e.g. with arithmetic coding and the Asymmetric Numeral Systems (ANS) family. In particular, the multiplication-free tabled variant of the latter (tANS) builds an automaton that often replaces Huffman coding thanks to better compression at similar computational cost, e.g. in the popular Facebook Zstandard and Apple LZFSE compressors. This paper discusses the encoding of probability distributions for such applications, especially a Pyramid Vector Quantizer (PVQ)-based approach with deformation, as well as tuned symbol spread for tANS. (Comment: 5 pages, 4 figures)
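
As a concrete illustration of the quantization problem this abstract addresses: before a tANS automaton can be built, the symbol probabilities must be turned into integer slot counts summing to the table size L = 2^table_log, and the accuracy of that step directly affects the compression ratio. The sketch below is a deliberately naive version of it; the function name and the round-then-correct heuristic are this example's own, production coders such as Zstandard's FSE use more careful heuristics, and the paper's PVQ-based approach is a more accurate way to encode the same information.

```python
def quantize_probs(probs, table_log=8):
    """Quantize a probability distribution to integer slot counts
    summing to L = 2**table_log, as needed before building a tANS
    table. Naive sketch: round, then correct the total."""
    L = 1 << table_log
    # Initial rounding; every nonzero symbol must keep at least 1 slot.
    counts = [max(1, round(p * L)) for p in probs]
    # Fix up the total by nudging the most probable symbols.
    excess = sum(counts) - L
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    i = 0
    while excess != 0:
        s = order[i % len(order)]
        step = -1 if excess > 0 else 1
        if counts[s] + step >= 1:
            counts[s] += step
            excess += step
        i += 1
    return counts

# Example: a skewed 4-symbol source quantized to a 256-entry table.
print(quantize_probs([0.70, 0.15, 0.10, 0.05]))  # [179, 38, 26, 13]
```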

    Video data compression using artificial neural network differential vector quantization

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application-Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, yielding superior robustness to channel bit errors compared with methods that use variable-length codes.
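
The codebook training rule named in this abstract, Frequency-Sensitive Competitive Learning, can be sketched in a few lines: each codeword's distortion is scaled by how often it has already won, so rarely used codewords stay competitive and the codebook is used evenly. The NumPy version below follows the commonly published form of the rule; the function name, learning rate, epoch count, and initialization are illustrative assumptions, and the paper's VLSI ASIC realization is necessarily different.

```python
import numpy as np

def fscl_codebook(vectors, codebook_size, epochs=10, lr=0.05, seed=0):
    """Frequency-Sensitive Competitive Learning: each codeword's
    distortion is weighted by its win count, so frequent winners
    look 'farther away' and rarely used codewords stay competitive."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook from randomly chosen training vectors.
    codebook = vectors[rng.choice(len(vectors), codebook_size,
                                  replace=False)].astype(float)
    wins = np.ones(codebook_size)  # usage counts (fairness term)
    for _ in range(epochs):
        for x in vectors[rng.permutation(len(vectors))]:
            # Fairness-weighted squared-error distortion.
            d = wins * np.sum((codebook - x) ** 2, axis=1)
            k = int(np.argmin(d))
            codebook[k] += lr * (x - codebook[k])  # move winner toward input
            wins[k] += 1
    return codebook

# Example: train a 64-entry codebook on flattened 4x4 image blocks.
blocks = np.random.default_rng(1).random((1000, 16))
cb = fscl_codebook(blocks, codebook_size=64)
print(cb.shape)  # (64, 16)
```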

    Analysis of the Huffman Algorithm for Digital Image Compression Based on the Contourlet Transform

    ABSTRACT: The transmission and storage of digital image data are constrained by bandwidth and capacity: the larger the data to be transmitted, the more bandwidth is needed to keep transmission time short. To use bandwidth and storage capacity efficiently, digital image compression is performed with the goal of minimizing the number of bits in the image.

    This final project uses an image compression technique combining the contourlet transform, vector quantization, and Huffman coding (Huffman's algorithm). The contourlet transform decomposes a digital image into several subbands, which are obtained from a Laplacian Pyramid (LP) followed by Directional Filter Banks (DFB); the combination of these two processes is called the (discrete) contourlet transform. Vector quantization quantizes the input data by dividing it into input vectors, from which codevectors (codewords) are formed; the collection of codevectors makes up a codebook, which serves as the quantizer and dequantizer. Huffman coding is a lossless algorithm that compresses data based on its statistics. The combination of these methods yields a lossy compression system. PSNR and compression ratio are calculated to measure the performance of the compression system.

    Based on the test results, the compression system using Huffman coding based on the contourlet transform performs worse in terms of compression ratio than JPEG: JPEG produces an average PSNR of 30.23 dB and an average compression ratio of 92.75%, while the contourlet-based system produces an average PSNR of 33.50 dB and an average compression ratio of 60.63%.

    Keywords: Huffman coding, Contourlet Transform, Vector Quantization, JPEG, Laplacian Pyramid, Directional Filter Banks
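
For reference, the two figures of merit quoted in this abstract can be computed as follows. This is a minimal sketch: the PSNR formula is the standard one for 8-bit images, while the compression-ratio convention (space saving, so 92.75% means the output is 7.25% of the input size) is an assumed reading of the abstract, which does not define it.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    mse = np.mean((original.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio_percent(original_bits, compressed_bits):
    # Space saving: 92.75% would mean the output is 7.25% of the
    # input size. (Assumed reading of the abstract's "compression ratio".)
    return 100.0 * (1.0 - compressed_bits / original_bits)

# Example: a 512x512 8-bit image compressed to 30,000 bytes.
orig_bits = 512 * 512 * 8
comp_bits = 30_000 * 8
print(round(compression_ratio_percent(orig_bits, comp_bits), 2))  # 88.56
```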