
    Transform Based And Search Aware Text Compression Schemes And Compressed Domain Text Retrieval

    In recent times we have witnessed an unprecedented growth of textual information via the Internet, digital libraries and archival text in many applications. While a good fraction of this information is of transient interest, useful information of archival value will continue to accumulate. We need ways to manage, organize and transport this data over data communication links with limited bandwidth, and we must have means to speedily find the information we need in this huge mass of data. Sometimes a single site may contain large collections of data, such as a library database, requiring an efficient search mechanism even within local data. To facilitate information retrieval, an emerging ad hoc standard for uncompressed text is XML, which preprocesses the text with additional user-defined metadata such as DTDs or hyperlinks to enable searching with better efficiency and effectiveness. This increases the file size considerably, underscoring the importance of applying text compression. For efficiency in terms of both space and time, there is a need to keep the data in compressed form for as long as possible. Text compression is concerned with techniques for representing digital text data in alternate representations that take less space. Not only does it conserve storage space for archival and online data, it also improves system performance by requiring fewer secondary storage (disk or CD-ROM) accesses, and it improves network bandwidth utilization by reducing transmission time. Unlike static images or video, there is no international standard for text compression, although compressed formats such as .zip, .gz and .Z files are increasingly being used. In general, data compression methods are classified as lossless or lossy. Lossless compression allows the original data to be recovered exactly. Although used primarily for text data, lossless compression algorithms are also useful for special classes of images, such as medical images, fingerprint data and astronomical images, and for databases containing mostly vital numerical data, tables and text information. Many lossy algorithms use lossless methods in the final stage of encoding, underscoring the importance of lossless methods for both lossy and lossless compression applications. To effectively utilize the full potential of compression techniques in future retrieval systems, we need efficient information retrieval in the compressed domain. This means that techniques must be developed to search the compressed text without decompression, or with only partial decompression, independent of whether the search is done on the text itself or on an inversion table corresponding to a set of keywords for the text. In this dissertation, we make the following contributions.

    (1) Star family compression algorithms: we propose a reversible transformation that can be applied to a source text to improve existing algorithms' ability to compress it. We use a static dictionary to convert English words into predefined symbol sequences; the transformed sequences create context information that is superior to that of the original text, so some compression is achieved already at the preprocessing stage (a minimal sketch of the idea follows this abstract). We present a series of transforms that progressively improve performance; the star transform requires a static dictionary of a certain size.
To avoid the considerable complexity of this conversion, we employ a ternary tree data structure that maps the words in the text to the words in the star dictionary in linear time.

    (2) Exact and approximate pattern matching in Burrows-Wheeler transformed (BWT) files: we propose a method to extract useful context information in linear time from BWT-transformed text. The auxiliary arrays obtained from the BWT inverse transform yield logarithmic search time. Approximate pattern matching can then build on the results of exact pattern matching: candidate regions, which may be only small parts of the original text, are extracted and checked with a fast verification algorithm. We present algorithms for both k-mismatch and k-approximate pattern matching in BWT-compressed text. A typical BWT-based compression system applies Move-to-Front and Huffman coding stages after the transformation; we propose a novel replacement for the Move-to-Front stage that extends compressed-domain search capability all the way to the entropy coding stage, making it possible to randomly access any part of the compressed text without referring to the part before the access point.

    (3) A modified LZW algorithm that allows random access and partial decoding for compressed text retrieval: although many compression algorithms provide a good compression ratio and/or time complexity, LZW was the first to be studied for compressed pattern matching because of its simplicity and efficiency. Our modifications to LZW provide fast random access and partial decoding, which are especially useful for text retrieval systems. Based on this algorithm, we can provide a dynamic hierarchical semantic structure for the text, so that searching can be performed at the desired level of granularity; for example, the user can choose to retrieve a single line, a paragraph or a file that contains the keywords. More importantly, we show that parallel encoding and decoding are trivial with the modified LZW: both can be performed easily with multiple processors, and the encoding and decoding processes are independent of the number of processors.
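
    To make the star-transform idea concrete, here is a minimal Python sketch of a reversible dictionary transform in the same spirit. The tiny dictionary and the codeword scheme are illustrative assumptions, not the dissertation's actual mapping (which also uses a ternary search tree for linear-time word lookup); the sketch additionally assumes that '*' does not occur in the source text.

```python
# A toy star-style transform: each dictionary word maps to a codeword made
# mostly of '*' characters, with a short letter suffix encoding the word's
# dictionary index so the mapping stays reversible. Long runs of '*' give
# the backend compressor (e.g. bzip2 or a PPM coder) highly skewed context.
DICTIONARY = ["the", "and", "compression", "text", "search"]

def codeword(index: int, length: int) -> str:
    suffix = ""
    while True:
        suffix = chr(ord("a") + index % 26) + suffix
        index //= 26
        if index == 0:
            break
    return "*" * max(1, length - len(suffix)) + suffix

ENCODE = {w: codeword(i, len(w)) for i, w in enumerate(DICTIONARY)}
DECODE = {v: k for k, v in ENCODE.items()}

def star_transform(text: str) -> str:
    # Words outside the dictionary pass through unchanged.
    return " ".join(ENCODE.get(w, w) for w in text.split())

def inverse_transform(text: str) -> str:
    return " ".join(DECODE.get(w, w) for w in text.split())

if __name__ == "__main__":
    s = "the text compression and search"
    t = star_transform(s)
    print(t)                        # mostly runs of '*' with short suffixes
    assert inverse_transform(t) == s
```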
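
    Searching BWT output directly can be illustrated with the standard FM-index backward search, sketched below. This is a textbook construction shown for orientation, not the dissertation's specific auxiliary-array algorithm; the per-prefix Occ table is an assumption made to keep the sketch short.

```python
# Exact pattern counting on BWT output via backward search.
def bwt(s: str) -> str:
    s += "\0"                                   # unique smallest sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def count_occurrences(L: str, pattern: str) -> int:
    chars = sorted(set(L))
    # C[c] = number of text characters strictly smaller than c.
    C, total = {}, 0
    for c in chars:
        C[c] = total
        total += L.count(c)
    # occ[c][i] = occurrences of c in L[:i].
    occ = {c: [0] for c in chars}
    for ch in L:
        for c in chars:
            occ[c].append(occ[c][-1] + (c == ch))
    lo, hi = 0, len(L)                          # suffix-array interval [lo, hi)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo, hi = C[c] + occ[c][lo], C[c] + occ[c][hi]
        if lo >= hi:
            return 0
    return hi - lo

if __name__ == "__main__":
    print(count_occurrences(bwt("banana"), "ana"))   # prints 2
```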
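
    For the random-access idea in contribution (3), a common illustration is to restart the LZW dictionary at fixed block boundaries and keep an index of the blocks, so any block can be decoded without touching the codes before it. The dissertation's actual modification differs in detail; the sketch below, with an assumed block size, shows only the principle, and it also makes clear why encoding and decoding parallelize trivially: blocks are mutually independent, whatever the processor count.

```python
# Block-restarted LZW: each block is self-contained, enabling random access,
# partial decoding and embarrassingly parallel encode/decode.
def lzw_encode_block(data: str) -> list[int]:
    table = {chr(i): i for i in range(256)}
    w, out = "", []
    for ch in data:
        if w + ch in table:
            w += ch
        else:
            out.append(table[w])
            table[w + ch] = len(table)
            w = ch
    if w:
        out.append(table[w])
    return out

def lzw_decode_block(codes: list[int]) -> str:
    table = {i: chr(i) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for code in codes[1:]:
        entry = table.get(code, w + w[0])        # handles the KwKwK case
        out.append(entry)
        table[len(table)] = w + entry[0]
        w = entry
    return "".join(out)

def encode_blocks(text: str, block_size: int = 64) -> list[list[int]]:
    return [lzw_encode_block(text[i:i + block_size])
            for i in range(0, len(text), block_size)]

if __name__ == "__main__":
    text = "to be or not to be, that is the question. " * 8
    blocks = encode_blocks(text)
    # Partial decoding: reconstruct only the fourth block, independently.
    assert lzw_decode_block(blocks[3]) == text[192:256]
```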

    A survey of computer uses in music

    This thesis covers research into the mathematical basis inherent in music, including a review of projects related to optical character recognition (OCR) of musical symbols. Research was done on fractals, which can create new pieces by assigning pitches to numbers. Existing musical pieces can be taken apart and reassembled, creating new ideas for composers. Musical notation understanding is covered, and its role in enabling a computer to recognize a music sheet for editing and reproduction purposes is explained. The first phase of a musical OCR was created in this thesis: the recognition of staff lines in a good-quality image. Modifications will need to be made to handle noise and tilted images that may result from scanning.
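
    Staff-line recognition on a clean binary image is classically done with a horizontal projection profile: a staff line shows up as a row that is almost entirely black. The sketch below illustrates that first phase; the fill-ratio threshold is an illustrative assumption, not the thesis's exact method.

```python
import numpy as np

def find_staff_lines(binary_image: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    # Fraction of black (True) pixels in each row of the page.
    row_fill = binary_image.sum(axis=1) / binary_image.shape[1]
    return np.where(row_fill >= threshold)[0]    # row indices of staff lines

if __name__ == "__main__":
    page = np.zeros((40, 100), dtype=bool)
    for r in (5, 12, 19, 26, 33):                # a synthetic five-line staff
        page[r, :] = True
    print(find_staff_lines(page))                # [ 5 12 19 26 33]
```

    Noise and tilt, which the thesis leaves to future work, would respectively blur and spread these row peaks, which is why a plain projection only works on good-quality, well-aligned scans.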

    Algorithms for image analysis of neural tissue for the evaluation of HIV therapies

    The effects of the progressive degeneration of neural tissue caused by the HIV virus can be simulated in neuron cultures by administering the neurotoxic viral compound 3NP. The ability of Paroxetine-based therapeutic compounds to reduce neural tissue degeneration and death can be quantified by evaluating the health of neuron cultures, analyzing images of the cultures obtained with a fluorescence microscope. The objective of this project is to analyze this type of image in order to quantify three main characteristics: the total area of the neuron cell bodies, the total length of the neurite branches, and the total number of small beads. Through the implementation of an image analysis program and an evaluation of its results, it was possible to confirm that the Paroxetine compounds used have an axo-dendritic protective effect on the neuron cultures.
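
    A minimal sketch of how the three measurements could be computed with scikit-image follows. The Otsu threshold and the size cut-off separating cell bodies from beads are illustrative assumptions, not the project's calibrated pipeline.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import remove_small_objects, skeletonize

def quantify_culture(image: np.ndarray, body_min_area: int = 200):
    mask = image > threshold_otsu(image)               # segment fluorescence
    bodies = remove_small_objects(mask, body_min_area)
    body_area = int(bodies.sum())                      # total cell-body area (px)
    neurites = mask & ~bodies                          # thin structures left over
    neurite_length = int(skeletonize(neurites).sum())  # length ~ skeleton pixels
    # Beads: small blobs counted as connected components below the body cut-off.
    beads = sum(1 for r in regionprops(label(mask)) if r.area < body_min_area)
    return body_area, neurite_length, beads
```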

    Gbit/second lossless data compression hardware

    This thesis investigates how to improve the performance of lossless data compression hardware as a tool to reduce the cost per bit stored in a computer system or transmitted over a communication network. Lossless data compression allows the exact reconstruction of the original data after decompression. Its deployment in some high-bandwidth applications has been hampered by performance limitations in the compression hardware, which must match the performance of the host system to avoid becoming a bottleneck. Advancing lossless data compression hardware therefore offers a valid motivation, with the potential of doubling the performance of the system that incorporates it with minimal investment. This work starts by presenting an analysis of current compression methods, with the objective of identifying the factors that limit performance as well as those that increase it. [Continues.

    Memory access management in high performance computers

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadoras y Automática, defended on 12-04-2013.

    Exploiting similarities between secret and cover images for improved embedding efficiency and security in digital steganography

    The rapid advancements in digital communication technology and the huge increase in computing power have generated exponential growth in the use of the Internet for commercial, governmental and social interactions that involve the transmission of a variety of complex data and multimedia objects. Securing the content of sensitive and personal transactions over open networks while ensuring the privacy of information has become essential but increasingly challenging. The information and multimedia security research area therefore attracts growing interest, and its scope of applications continues to expand. Communication security mechanisms have been investigated and developed to protect information privacy, with encryption and steganography providing the two most obvious solutions. Encrypting a secret message transforms it into noise-like data that is observable but meaningless, while steganography conceals the very existence of secret information by hiding it in mundane communication that does not attract unwelcome snooping. Digital steganography is concerned with using images, videos and audio signals as cover objects for hiding secret bit-streams; media files suit this purpose because of their high degree of redundancy and because they are the most widely exchanged digital data. Over the last two decades there has been a plethora of research aiming to develop new hiding schemes that overcome challenges relating to the imperceptibility of the hidden secrets, payload capacity, embedding efficiency and robustness against steganalysis attacks. Most existing techniques treat secrets as random bit-streams even when dealing with non-random signals such as images, which may add to the toughness of these challenges.

    This thesis is devoted to investigating and developing steganography schemes for embedding secret images in image files. While many existing schemes perform well with respect to one or more of the above objectives, we aim to achieve optimal performance in terms of all of them. We are only concerned with embedding secret images in the spatial domain of cover images. The main difficulty in addressing the different challenges stems from the fact that embedding inevitably changes cover image pixel values, even if those changes are not easy to detect by the human eye. These pixel changes are a consequence of dissimilarity between the cover LSB plane and the secret-image bit-stream, and they alter the statistical parameters of stego-image bit-planes as well as local image features; steganalysis tools exploit these effects to mount targeted as well as blind attacks. These challenges are usually dealt with by randomising the changes to the LSB, using different or multiple bit-planes to embed one or more secret bits in elaborate schemes, or embedding in noise-tolerant regions. Our innovative approach is first to develop image procedures and models that increase the similarity between the cover image LSB plane and the secret image bit-stream. This is achieved in two novel steps, involving manipulation of both the secret image and the cover image prior to embedding, that result in a higher 0:1 ratio in both the secret bit-stream and the cover pixels' LSB plane.

    For the secret images, we exploit the fact that image pixel values are in general neither uniformly distributed, as is the case for random secrets, nor spatially stationary. We develop three secret-image pre-processing algorithms that transform the secret image bit-stream to increase its 0:1 ratio. Two of these are similar, one operating in the spatial domain and the other in the Wavelet domain; in both cases, the most frequent pixel values are mapped onto bytes with more 0s. The third method processes blocks by subtracting their means from their pixel values, thereby reducing the number of bits required to represent these blocks; in other words, this third algorithm also reduces the length of the secret image bit-stream without loss of information. We demonstrate that these algorithms yield a significant increase in the secret image bit-stream's 0:1 ratio, with the Wavelet-domain version performing best at an 80% ratio.

    For the cover images, we exploit the fact that pixel value decomposition schemes based on Fibonacci or other defining sequences, which differ from the usual binary scheme, expand the number of bit-planes and may thereby help increase the 0:1 ratio in the cover image LSB plane. We investigate some existing techniques of this kind and demonstrate that they indeed lead to an increased 0:1 ratio in the corresponding cover image LSB plane. We also develop a new extension of the binary decomposition scheme that is the best-performing one, with a 77% ratio. We exploit this two-step strategy to propose a bit-plane mapping embedding technique, instead of bit-plane replacement, that makes each cover pixel usable for secret embedding. This is motivated by the observation that non-binary pixel decomposition schemes also reduce the number of possible patterns for the first three bit-planes to 4 or 5 instead of 8. We demonstrate that the combination of the mapping-based embedding scheme and the two-step strategy produces stego-images with minimal distortion, i.e. it reduces the number of cover pixel changes after message embedding and increases embedding efficiency. We also demonstrate that these schemes result in reasonable stego-image quality and are robust against all the targeted steganalysis tools, but not against the blind SRM tool. We finally identify possible future work to achieve robustness against SRM at some payload rates and to further improve stego-image quality.
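
    As an illustration of the third pre-processing algorithm, the sketch below subtracts each block's mean so the residuals need fewer bits than raw 8-bit pixels. The block size and the bit-count rule are assumptions made for the example, not the thesis's exact coding scheme.

```python
import numpy as np

def mean_subtract_blocks(img: np.ndarray, bs: int = 8):
    """Represent each bs-by-bs block as (mean, bits-per-residual, residuals)."""
    blocks = []
    for y in range(0, img.shape[0], bs):
        for x in range(0, img.shape[1], bs):
            block = img[y:y + bs, x:x + bs].astype(np.int16)
            mean = int(block.mean())
            resid = block - mean                  # small signed residuals
            # Bits needed to code this block's symmetric residual range.
            bits = max(1, int(np.ceil(np.log2(2 * int(np.abs(resid).max()) + 1))))
            blocks.append((mean, bits, resid))    # exactly invertible: mean + resid
    return blocks
```

    Because smooth blocks have tiny residuals, the packed stream is shorter than the raw bit-stream, and, as the abstract notes, the shortening is lossless.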
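
    The cover-side idea can be sketched with a Fibonacci pixel decomposition: by Zeckendorf's theorem, the greedy expansion over Fibonacci weights never places two adjacent 1s, so the expanded bit-planes carry noticeably more 0s than the usual binary planes.

```python
# Greedy Zeckendorf decomposition of a pixel value over Fibonacci weights.
FIBS = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]   # covers 0..255

def fib_decompose(value: int) -> list[int]:
    bits = []
    for f in reversed(FIBS):                 # most significant plane first
        if f <= value:
            bits.append(1)
            value -= f
        else:
            bits.append(0)
    return bits                              # 12 planes instead of 8

if __name__ == "__main__":
    for v in (0, 7, 128, 255):
        print(v, fib_decompose(v))           # no two adjacent 1s in any row
```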