118 research outputs found

    Transform Based And Search Aware Text Compression Schemes And Compressed Domain Text Retrieval

    Get PDF
    In recent times, we have witnessed unprecedented growth of textual information via the Internet, digital libraries, and archival text in many applications. While a good fraction of this information is of transient interest, useful information of archival value will continue to accumulate. We need ways to manage, organize, and transport this data over data communication links with limited bandwidth, and we must also be able to find the information we need quickly within this huge mass of data. A single site may itself contain a large collection, such as a library database, requiring an efficient search mechanism even for local data. To facilitate information retrieval, an emerging ad hoc standard for uncompressed text is XML, which preprocesses the text by adding user-defined metadata such as DTDs or hyperlinks to enable more efficient and effective searching. This increases the file size considerably, underscoring the importance of applying text compression. For efficiency in both space and time, there is a need to keep the data in compressed form for as long as possible. Text compression is concerned with techniques for representing digital text data in alternate representations that take less space. Not only does it conserve storage space for archival and online data, it also improves system performance by requiring fewer secondary storage (disk or CD-ROM) accesses, and it improves network bandwidth utilization by reducing transmission time. Unlike static images or video, text compression has no international standard, although compressed formats such as .zip, .gz, and .Z files are increasingly used. In general, data compression methods are classified as lossless or lossy. Lossless compression allows the original data to be recovered exactly.
Although used primarily for text data, lossless compression algorithms are also useful for special classes of images, such as medical images, fingerprint data, astronomical images, and databases containing mostly vital numerical data, tables, and text. Many lossy algorithms use lossless methods at the final stage of encoding, underscoring the importance of lossless methods for both lossy and lossless compression applications. To effectively utilize the full potential of compression techniques in future retrieval systems, we need efficient information retrieval in the compressed domain. This means that techniques must be developed to search compressed text without decompression, or with only partial decompression, independent of whether the search is done on the text itself or on an inversion table corresponding to a set of keywords for the text. In this dissertation, we make the following contributions. (1) Star family compression algorithms: We propose a reversible transformation that can be applied to a source text to improve the ability of existing algorithms to compress it. We use a static dictionary to convert English words into predefined symbol sequences. These transformed sequences create additional context information that is superior to that of the original text, so we already achieve some compression at the preprocessing stage, and we present a series of transforms that improve performance further. The star transform requires a static dictionary of a certain size; to avoid the considerable cost of conversion, we employ a ternary search tree that maps the words in the text to the words in the star dictionary in linear time. (2) Exact and approximate pattern matching in Burrows-Wheeler transformed (BWT) files: We propose a method to extract useful context information in linear time from BWT-transformed text.
The auxiliary arrays obtained from the BWT inverse transform yield logarithmic search time. Approximate pattern matching can then build on the results of exact pattern matching to extract candidate regions, and a fast verification algorithm can be applied to those candidates, which may be only small parts of the original text. We present algorithms for both k-mismatch and k-approximate pattern matching in BWT-compressed text. A typical compression system based on the BWT has Move-to-Front and Huffman coding stages after the transformation; we propose a novel approach that replaces the Move-to-Front stage in order to extend compressed-domain search capability all the way to the entropy coding stage. A modification to Move-to-Front makes it possible to randomly access any part of the compressed text without referring to the part before the access point. (3) A modified LZW algorithm that allows random access and partial decoding for compressed text retrieval: Although many compression algorithms provide good compression ratios and/or time complexity, LZW was the first studied for compressed pattern matching because of its simplicity and efficiency. Our modifications to the LZW algorithm provide fast random access and partial decoding, which are especially useful for text retrieval systems. Based on this algorithm, we can provide a dynamic hierarchical semantic structure for the text, so that search can be performed at the expected level of granularity; for example, the user can choose to retrieve a single line, a paragraph, or an entire file containing the keywords. More importantly, we show that parallel encoding and decoding are trivial with the modified LZW: both can be performed easily with multiple processors, and the encoding and decoding processes are independent of the number of processors.
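    The star-family preprocessing described in this abstract amounts to a reversible word-for-codeword substitution. The sketch below is a toy illustration only: the dictionary entries and codewords are invented, a plain dict stands in for the dissertation's ternary search tree, and escaping of literal '*' characters in the input is omitted.

```python
# Toy sketch of a star-family reversible transform.
# The real scheme maps dictionary words to '*'-heavy codewords (and must
# escape any literal '*' appearing in the source text); a ternary search
# tree gives linear-time lookup. A plain dict is enough to show the idea.

STAR_DICT = {"the": "*", "compression": "**", "text": "*a"}  # invented toy entries
INVERSE = {v: k for k, v in STAR_DICT.items()}

def star_encode(text: str) -> str:
    # Replace known words with short, highly repetitive codewords, creating
    # artificial context that a downstream compressor can exploit.
    return " ".join(STAR_DICT.get(w, w) for w in text.split())

def star_decode(encoded: str) -> str:
    # Reversible: unknown tokens pass through unchanged.
    return " ".join(INVERSE.get(w, w) for w in encoded.split())
```

    The round trip `star_decode(star_encode(s)) == s` holds for any text containing no literal '*', which is the property that lets the transform sit transparently in front of an ordinary compressor.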

    Building Digital Libraries: Data Capture

    Get PDF

    A simple data compression scheme for binary images of bacteria compared with commonly used image data compression schemes

    Get PDF
    A run-length code compression scheme of extreme simplicity, used for image storage in an automated bacterial morphometry system, is compared with more common compression schemes, such as those used in the Tag Image File Format (TIFF). These schemes are Lempel-Ziv-Welch (LZW), Macintosh PackBits, and CCITT Group 3 Facsimile one-dimensional modified Huffman run-length code. In a set of 25 images consisting of full microscopic fields of view of bacterial slides, the method gave a 10.3-fold compression: 1.074 times better than LZW. In a second set of images of single areas of interest within each field of view, compression ratios of over 600 were obtained, 12.8 times that of LZW. The drawback of the system is its poor worst-case performance. The method could be used in any application requiring storage of binary images of relatively small objects with fairly large spaces in between.
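    The extreme simplicity of such a run-length code can be sketched as follows. This is a generic illustration, not the paper's exact storage format; the (value, count) pair representation is an assumption made here.

```python
def rle_encode(row):
    # Encode one binary image row as (pixel_value, run_length) pairs.
    # Sparse images of small objects yield few, long background runs,
    # which is where the very high compression ratios come from.
    runs = []
    for pixel in row:
        if runs and runs[-1][0] == pixel:
            runs[-1][1] += 1
        else:
            runs.append([pixel, 1])
    return [(value, count) for value, count in runs]

def rle_decode(runs):
    # Exact (lossless) reconstruction of the row.
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out
```

    The poor worst case noted in the abstract is visible here: a row alternating 0, 1, 0, 1, ... produces one pair per pixel, so the "compressed" form is larger than the input.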

    The 1993 Space and Earth Science Data Compression Workshop

    Get PDF
    The Earth Observing System Data and Information System (EOSDIS) is described in terms of its data volume, data rate, and data distribution requirements. Opportunities for data compression in EOSDIS are discussed.

    The 1995 Science Information Management and Data Compression Workshop

    Get PDF
    This document is the proceedings from the 'Science Information Management and Data Compression Workshop,' which was held on October 26-27, 1995, at the NASA Goddard Space Flight Center, Greenbelt, Maryland. The Workshop explored promising computational approaches for handling the collection, ingestion, archival, and retrieval of large quantities of data in future Earth and space science missions. It consisted of fourteen presentations covering a range of information management and data compression approaches that are being or have been integrated into actual or prototypical Earth or space science data information systems, or that hold promise for such an application. The Workshop was organized by James C. Tilton and Robert F. Cromp of the NASA Goddard Space Flight Center.

    Text compression for Chinese documents.

    Get PDF
    by Chi-kwun Kan. Thesis (M.Phil.), Chinese University of Hong Kong, 1995. Includes bibliographical references (leaves 133-137).
    Contents: Abstract; Acknowledgement.
    Chapter 1, Introduction: 1.1 Importance of Text Compression; 1.2 Historical Background of Data Compression; 1.3 The Essences of Data Compression; 1.4 Motivation and Objectives of the Project; 1.5 Definition of Important Terms (1.5.1 Data Models; 1.5.2 Entropy; 1.5.3 Statistical and Dictionary-based Compression; 1.5.4 Static and Adaptive Modelling; 1.5.5 One-Pass and Two-Pass Modelling); 1.6 Benchmarks and Measurements of Results; 1.7 Sources of Testing Data; 1.8 Outline of the Thesis.
    Chapter 2, Literature Survey: 2.1 Data Compression Algorithms (2.1.1 Statistical Compression Methods; 2.1.2 Dictionary-based Compression Methods (Ziv-Lempel Family)); 2.2 Cascading of Algorithms; 2.3 Problems of Current Compression Programs on Chinese; 2.4 Previous Chinese Data Compression Literature.
    Chapter 3, Chinese-related Issues: 3.1 Characteristics in Chinese Data Compression (3.1.1 Large and Non-fixed-size Character Set; 3.1.2 Lack of Word Segmentation; 3.1.3 Rich Semantic Meaning of Chinese Characters; 3.1.4 Grammatical Variance of the Chinese Language); 3.2 Definition of Different Coding Schemes (3.2.1 Big5 Code; 3.2.2 GB (Guo Biao) Code; 3.2.3 Unicode; 3.2.4 HZ (Hanzi) Code); 3.3 Entropy of Chinese and Other Languages.
    Chapter 4, Huffman Coding on Chinese Text: 4.1 The Use of the Chinese Character Identification Routine; 4.2 Result; 4.3 Justification of the Result; 4.4 Time and Memory Resources Analysis; 4.5 The Heuristic Order-n Huffman Coding for Chinese Text Compression (4.5.1 The Algorithm; 4.5.2 Result; 4.5.3 Justification of the Result); 4.6 Chapter Conclusion.
    Chapter 5, The Ziv-Lempel Compression on Chinese Text: 5.1 The Chinese LZSS Compression (5.1.1 The Algorithm; 5.1.2 Result; 5.1.3 Justification of the Result; 5.1.4 Time and Memory Resources Analysis; 5.1.5 Effects of Controlling the Parameters); 5.2 The Chinese LZW Compression (5.2.1 The Algorithm; 5.2.2 Result; 5.2.3 Justification of the Result; 5.2.4 Time and Memory Resources Analysis; 5.2.5 Effects of Controlling the Parameters); 5.3 A Comparison of the Performance of the LZSS and the LZW; 5.4 Chapter Conclusion.
    Chapter 6, Chinese Dictionary-based Huffman Coding: 6.1 The Algorithm; 6.2 Result; 6.3 Justification of the Result; 6.4 Effects of Changing the Size of the Dictionary; 6.5 Chapter Conclusion.
    Chapter 7, Cascading of Huffman Coding and LZW Compression: 7.1 Static Cascading Model (7.1.1 The Algorithm; 7.1.2 Result; 7.1.3 Explanation and Analysis of the Result); 7.2 Adaptive (Dynamic) Cascading Model (7.2.1 The Algorithm; 7.2.2 Result; 7.2.3 Explanation and Analysis of the Result); 7.3 Chapter Conclusion.
    Chapter 8, Concluding Remarks: 8.1 Conclusion; 8.2 Future Work Directions (8.2.1 Improvement in Efficiency and Resource Consumption; 8.2.2 The Compressibility of Chinese and Other Languages; 8.2.3 Use of a Grammar Model; 8.2.4 Lossy Compression); 8.3 Epilogue.
    Bibliography.

    Digital Image Access & Retrieval

    Get PDF
    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval. Published or submitted for publication.

    An investigation of music analysis by the application of grammar-based compressors

    Get PDF
    Many studies have presented computational models of musical structure as an important aspect of musicological analysis. However, the use of grammar-based compressors to automatically recover such information is a relatively new and promising technique. We investigate their performance extensively using a collection of nearly 8000 scores, on tasks including error detection, classification, and segmentation, and compare them with a range of more traditional compressors. Further, we detail a novel method for locating transcription errors based on grammar compression. Despite its lack of domain knowledge, we conclude that grammar-based compression offers competitive performance on a variety of musicological tasks.
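    Compression-based analysis of this kind rests on the idea that a compressor measures shared structure between sequences. As a hedged illustration only (using zlib rather than a grammar-based compressor, and not the authors' actual method), the standard normalized compression distance can drive tasks such as classification:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: if x and y share structure, the
    # concatenation x+y compresses almost as well as the larger input
    # alone, so the distance approaches 0; unrelated inputs approach 1.
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

    For example, an unlabelled score (serialized to bytes) could be assigned the label of the labelled example with the smallest NCD to it; a grammar-based compressor would simply replace zlib in the three length measurements.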