
    Duncode Characters Shorter

    This paper investigates the use of various encoders for text transformation, converting characters into bytes. It discusses local encoders such as ASCII and GB-2312, which encode specific character sets into short byte sequences, and universal encoders such as UTF-8 and UTF-16, which can encode the complete Unicode set at a greater space cost and have gained widespread acceptance. Other encoders, including SCSU, BOCU-1, and binary encoders, lack self-synchronizing capabilities. Duncode is introduced as a novel encoding method that aims to encode the entire Unicode character set with the high space efficiency of local encoders. It can compress multiple characters of a string into a single Duncode unit using fewer bytes. Although it carries less self-synchronization information, Duncode surpasses UTF-8 in space efficiency. The application is available at \url{https://github.com/laohur/duncode}. Additionally, we have developed a benchmark for evaluating character encoders across different languages; it covers 179 languages and can be accessed at \url{https://github.com/laohur/wiki2txt}.
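    As a rough illustration of the space-efficiency gap the paper targets, the following sketch compares byte lengths of the same text under a local encoder (GB2312) and the universal encoders UTF-8 and UTF-16. Duncode itself is not reimplemented here; see the linked repository.

```python
# Byte lengths of the same text under a local encoder (GB2312) and
# the universal encoders UTF-8/UTF-16. The local encoder is shorter
# for the characters it covers, which is the gap Duncode targets.
samples = {"English": "hello", "Chinese": "你好世界"}

for name, text in samples.items():
    for codec in ("utf-8", "utf-16", "gb2312"):
        try:
            print(f"{name:8s} {codec:7s} {len(text.encode(codec))} bytes")
        except UnicodeEncodeError:
            print(f"{name:8s} {codec:7s} not encodable")
```

    For the Chinese sample, GB2312 needs 8 bytes where UTF-8 needs 12; for characters outside a local encoder's repertoire, encoding simply fails, which is why universal coverage matters.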

    Content-aware partial compression for textual big data analysis in Hadoop

    A substantial amount of information in companies and on the Internet is present in the form of text. The value of this semi-structured and unstructured data has been widely acknowledged, with consequent scientific and commercial exploitation. The ever-increasing volume of data being produced, however, pushes data analytic platforms to their limits. Compression, as an effective means to reduce data size, has been employed by many emerging data analytic platforms, for which the main purposes of compression are to save storage space and to reduce data transmission cost over the network. Because general-purpose compression methods achieve higher compression ratios by leveraging data transformation techniques and contextual data, this context dependency forces access to the compressed data to be sequential. Processing such compressed data in parallel, as is desirable in a distributed environment, is extremely challenging. This work proposes techniques for more efficient textual big data analysis, with an emphasis on content-aware compression schemes suitable for the Hadoop analytic platform. The compression schemes have been evaluated on a number of standard MapReduce analysis tasks using a collection of public and private real-world datasets. In comparison with existing solutions, they show substantial improvements in performance and significant reductions in system resource requirements.
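    The splittability problem described above can be sketched as follows: compressing a whole stream ties every record to its predecessors, while compressing records independently, as content-aware schemes do, lets workers decompress their own splits in parallel. The record granularity and zlib codec below are illustrative, not the thesis's actual scheme.

```python
import zlib

# Synthetic text records standing in for semi-structured log data.
records = [f"user_{i},event,{i * 7}" for i in range(1000)]

# Whole-stream compression: best ratio, but access is sequential-only,
# because each block's decoding depends on the preceding context.
stream = zlib.compress("\n".join(records).encode())

# Per-record compression: each unit stands alone, so compressed splits
# can be handed to different workers.
units = [zlib.compress(r.encode()) for r in records]

# Any worker can decode its own unit without touching the others.
assert zlib.decompress(units[500]).decode() == records[500]
print(len(stream), sum(len(u) for u in units))
```

    The per-unit total is larger than the stream total, which is exactly the ratio-versus-parallelism trade-off that content-aware schemes try to balance by choosing the compression granularity carefully.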

    Text Augmentation: Inserting markup into natural language text with PPM Models

    This thesis describes a new optimisation and new heuristics for automatically marking up XML documents, implemented in CEM using PPM models. CEM is significantly more general than previous systems, marking up large numbers of hierarchical tags, using n-gram models for large n and a variety of escape methods. Four corpora are discussed, including a bibliography corpus of 14,682 bibliographies laid out in seven standard styles using the BibTeX system and marked up in XML with every field from the original BibTeX. The other corpora are the ROCLING Chinese text segmentation corpus, the Computists’ Communique corpus and the Reuters corpus. A detailed examination is presented of methods for evaluating markup algorithms, including computational complexity measures and correctness measures from the fields of information retrieval, string processing, machine learning and information theory. A new taxonomy of markup complexities is established, and the properties of each taxon are examined in relation to the complexity of marked-up documents. The performance of the new heuristics and optimisation is examined using the four corpora.
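    The counting step at the heart of an n-gram/PPM-style character model, which such a markup system uses to judge how well each candidate tag context predicts the upcoming text, can be sketched as follows. This is an illustration of order-n context counting only, not the CEM implementation.

```python
from collections import defaultdict

def train(text, n=2):
    """Count, for each n-character context, how often each next
    character follows it. PPM builds its predictions (with escape
    probabilities for unseen symbols) on top of counts like these."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(text) - n):
        context, nxt = text[i:i + n], text[i + n]
        counts[context][nxt] += 1
    return counts

model = train("the theatre near the thicket")
# After "th", the model has seen 'e' three times and 'i' once.
print(dict(model["th"]))
```

    A real PPM coder would back off from order n to shorter contexts via escape methods when a character has not been seen, which is the "variety of escape methods" the thesis refers to.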

    Encryption by using base-n systems with many characters

    It is possible to interpret text as numbers (and vice versa) if one interprets letters and other characters as digits and assumes that they have an inherent, immutable ordering. This is demonstrated by the conventional digit set of the hexadecimal number system, where the letters ABCDEF, in this exact alphabetic sequence, each stand for a digit and thus a numerical value. In this article, we elaborate on this idea and include all symbols, in the standard ordering, of the Unicode standard for digital character coding. We show how this can be used to form digit sets of different sizes, and how simple subsequent conversion between bases can produce encryption that mimics the results of wrong encoding and accidental noise. Because of encoding peculiarities, however, switching to a higher base does not automatically yield more efficient use of disk space.
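    The core idea, interpreting a string as a base-n number over an ordered digit set and re-expressing it in another base with different characters as digits, can be sketched as follows. The two digit sets here are illustrative; the article draws its digit sets from the full Unicode ordering.

```python
# Illustrative digit sets: letters as base-10 digits on the source
# side, conventional hex characters on the destination side.
src_digits = "abcdefghij"            # 'a' = 0, 'b' = 1, ..., 'j' = 9
dst_digits = "0123456789ABCDEF"      # base 16

def to_int(text, digits):
    """Interpret `text` as a number in base len(digits)."""
    base = len(digits)
    value = 0
    for ch in text:
        value = value * base + digits.index(ch)
    return value

def from_int(value, digits):
    """Express `value` using the characters of `digits` as digits."""
    base = len(digits)
    out = []
    while True:
        value, rem = divmod(value, base)
        out.append(digits[rem])
        if value == 0:
            break
    return "".join(reversed(out))

n = to_int("badcafe", src_digits)    # letters read as decimal digits
print(from_int(n, dst_digits))       # the same number in hex digits
```

    The round trip is lossless as long as both sides agree on the digit sets and their ordering, which is why the result of a base switch can pass for mis-encoded text or noise.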

    Hybrid information security system via combination of compression, cryptography, and image steganography

    Today, the world is experiencing a new paradigm characterized by dynamism and rapid change, driven by revolutions in information and digital communication technologies. This has raised many concerns about the security and capacity of information transmitted over the Internet. Cryptography and steganography are two of the techniques most extensively used to ensure information security, but either technique alone does not provide sufficiently high security. In this paper, a new system is proposed that hides information within an image to optimize both security and capacity. The system proceeds in a sequence of steps: the secret image is compressed using the discrete wavelet transform (DWT), the compressed data is encrypted with the advanced encryption standard (AES), and the least significant bit (LSB) technique is then applied to hide the encrypted data. The results show that the proposed system achieves good stego-image quality (a PSNR of 47.8 dB and a structural similarity index, SSIM, of 0.92). The experiments further show that, compared with using each technique alone, the combination maintains stego-image quality by 68%, improves system performance by 44%, and increases the size of the secret data that can be carried. This study may contribute to solving the problems of security and capacity when information is sent over the Internet.
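    The final embedding step of such a pipeline can be sketched as follows, with pixels modeled as a flat list of 0-255 intensities. The DWT compression and AES encryption stages are assumed to have already produced the payload upstream; all names here are illustrative, not the paper's implementation.

```python
def embed(pixels, payload):
    """Hide `payload` bytes in the least significant bits of `pixels`."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit   # overwrite only the LSB
    return stego

def extract(stego, length):
    """Recover `length` bytes from the LSBs of the stego pixels."""
    bits = [p & 1 for p in stego[: length * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )

pixels = list(range(256)) * 4            # stand-in for image data
payload = b"ciphertext"                  # placeholder for AES output
stego = embed(pixels, payload)
assert extract(stego, len(payload)) == payload
```

    Because each pixel changes by at most one intensity level, distortion stays small, which is what keeps PSNR and SSIM high in schemes of this kind; compressing before encryption increases how much data fits in the available LSBs.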