A COMPARATIVE STUDY OF THE HUFFMAN CODING AND ADAPTIVE HUFFMAN CODING COMPRESSION ALGORITHMS
ABSTRACT
According to David Salomon (2007:2), a compression algorithm is the process of converting an
input data stream (the source stream, or original raw data) into another data stream (the output
bitstream, or compressed stream) of smaller size. Compression algorithm types include: Huffman,
LIFO, LZHUF, LZ77 and its variants (LZ78, LZW, GZIP), Dynamic Markov Compression (DMC),
Block-Sorting Lossless, Run-Length, Shannon-Fano, Arithmetic, PPM (Prediction by Partial
Matching), Burrows-Wheeler, Block Sorting, and Half Byte.
Huffman Coding and Adaptive Huffman Coding are the two compression algorithm types
examined in this final project. Huffman Coding is an optimal code type commonly used for lossless
data compression. It was invented by David A. Huffman while he was still a student at MIT; he
published his work in 1952 under the title "A Method for the Construction of Minimum-Redundancy
Codes". Adaptive Huffman Coding is an adaptive coding technique based on Huffman coding; its
implementations include the FGK algorithm and Vitter's algorithm.
The result of this final project's comparative study is an account of the respective strengths of
Huffman coding and Adaptive Huffman coding.
Keywords: compression algorithm, compression algorithm types, Huffman Coding, Adaptive Huffman
Coding
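Since the abstract describes Huffman's method only in prose, here is a minimal illustrative sketch (not taken from the thesis itself) of static Huffman code construction in Python: a min-heap repeatedly merges the two lowest-weight subtrees, prepending one bit to every codeword in each merged subtree.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table {symbol: bitstring} for the symbols in text."""
    freq = Counter(text)
    # Heap entries: (weight, tiebreak, {symbol: code}); the unique tiebreak
    # keeps tuple comparison away from the dicts and makes merges deterministic.
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol input: give it a 1-bit code
        return {s: "0" for s in heap[0][2]}
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)  # lightest subtree
        w2, _, c2 = heapq.heappop(heap)  # second lightest
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
encoded = "".join(codes[s] for s in "abracadabra")
```

For "abracadabra" this yields a 1-bit code for the frequent symbol 'a' and 3-bit codes for the rest, encoding the 11 symbols in 23 bits.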
Dynamic Shannon Coding
We present a new algorithm for dynamic prefix-free coding, based on Shannon
coding. We give a simple analysis and prove a better upper bound on the length
of the encoding produced than the corresponding bound for dynamic Huffman
coding. We show how our algorithm can be modified for efficient
length-restricted coding, alphabetic coding and coding with unequal letter
costs.
Comment: 6 pages; conference version presented at ESA 2004; journal version
submitted to IEEE Transactions on Information Theory
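The static Shannon coding that the dynamic scheme above builds on can be sketched as follows; this is an illustrative Python fragment, not the paper's algorithm. Each symbol, taken in order of decreasing probability, receives the first ceil(-log2 p) bits of the binary expansion of the cumulative probability of the symbols before it, which is enough to guarantee a prefix-free code.

```python
import math

def shannon_code(probs):
    """Static Shannon code: symbol i gets the first ceil(-log2 p_i) bits
    of the binary expansion of the cumulative probability before it."""
    items = sorted(probs.items(), key=lambda kv: -kv[1])
    codes, cum = {}, 0.0
    for sym, p in items:
        length = max(1, math.ceil(-math.log2(p)))
        # Take `length` bits of the binary expansion of `cum`.
        frac, bits = cum, []
        for _ in range(length):
            frac *= 2
            bit = int(frac)
            bits.append(str(bit))
            frac -= bit
        codes[sym] = "".join(bits)
        cum += p
    return codes

codes = shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
```

For this dyadic distribution the Shannon lengths coincide with the Huffman-optimal ones (1, 2, 3, 3 bits); for non-dyadic distributions Shannon coding can lose up to one bit per symbol against Huffman, which is the gap the paper's analysis concerns.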
A Comparison of the LZ77, Huffman, and Deflate Methods for Text Data Compression
Data compression is a very important process in a world that relies heavily on digital files, such as texts, images, sounds, and videos. These digital files vary in size and often consume considerable disk storage space. To overcome this problem, many experts have created compression algorithms, both lossy and lossless. This research discusses the testing of four lossless compression schemes applied to text files: LZ77, static Huffman, LZ77 combined with static Huffman, and Deflate. The performance of the four schemes is compared by measuring the compression ratio. From the test results it can be concluded that the Deflate algorithm is the best, owing to its use of multiple modes, i.e. uncompressed mode, LZ77 combined with static Huffman coding mode, and LZ77 combined with dynamic Huffman coding mode. The results also showed that the Deflate algorithm can compress text files with an average compression ratio of 38.84%.
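Deflate as described above (LZ77 matching followed by Huffman coding) is implemented in Python's standard zlib module, so a compression-ratio measurement of the kind the abstract reports can be sketched as follows. This is an illustrative snippet, not the paper's test setup, and the sample text is made up; note also that "compression ratio" has several competing definitions, so the 38.84% figure above need not use this one.

```python
import zlib

# Deflate = LZ77 back-references + Huffman coding of literals/lengths/distances.
text = ("the quick brown fox jumps over the lazy dog " * 50).encode("utf-8")
compressed = zlib.compress(text, level=9)  # level 9: strongest Deflate setting

# Ratio here = compressed size / original size (smaller is better).
ratio = 100 * len(compressed) / len(text)
print(f"{len(text)} -> {len(compressed)} bytes ({ratio:.2f}%)")
```

Highly repetitive input like this compresses far below its original size; real text corpora, with less redundancy, land closer to the averages reported in the paper.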
Parallel data compression
Data compression schemes remove data redundancy in communicated and stored data and increase the effective capacities of communication and storage devices. Parallel algorithms and implementations for textual data compression are surveyed. Related concepts from parallel computation and information theory are briefly discussed. Static and dynamic methods for codeword construction and transmission on various models of parallel computation are described. Included are parallel methods which boost system speed by coding data concurrently, and approaches which employ multiple compression techniques to improve compression ratios. Theoretical and empirical comparisons are reported and areas for future research are suggested.