
    3D oil reservoir visualisation using octree compression techniques utilising logical grid co-ordinates

    Octree compression techniques have been used for several years to compress large three-dimensional data sets into homogeneous regions. The technique is ideally suited to datasets whose similar values occur in clusters. Oil engineers represent reservoirs as a three-dimensional grid in which hydrocarbons occur naturally in clusters. This research examines the efficiency of storing these grids using octree compression, with grid cells partitioned into active and inactive regions. Initial experiments yielded high compression ratios, as only active leaf nodes and their ancestor (header) nodes are stored as a bitstream on disk. Savings in computational time and memory were also possible at decompression, as only active leaf nodes are sent to the graphics card, eliminating the need to reconstruct the original matrix. This results in a more compact vertex table, which can be loaded into the graphics card more quickly, giving shorter refresh delay times.
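    The abstract's scheme can be sketched in a few lines. This is an assumed minimal illustration, not the paper's code: an octree is built over a cubic boolean grid of active/inactive cells, one header bit is emitted per node (1 if its subtree contains any active cell), and only active subtrees are descended, so the bitstream records exactly the active leaves and their ancestor header nodes.

```python
def compress(grid, x, y, z, size, bits):
    """Append one bit per octree node: 1 if the subtree holds any active cell.
    Only active subtrees are descended, so inactive regions cost a single bit."""
    active = any(grid[i][j][k]
                 for i in range(x, x + size)
                 for j in range(y, y + size)
                 for k in range(z, z + size))
    bits.append(1 if active else 0)
    if active and size > 1:
        h = size // 2
        for dx in (0, h):
            for dy in (0, h):
                for dz in (0, h):
                    compress(grid, x + dx, y + dy, z + dz, h, bits)
    return bits

def active_leaves(bits, x, y, z, size, pos=0):
    """Walk the bitstream and return (active unit cells, next bit index),
    without ever rebuilding the full original grid."""
    if bits[pos] == 0:
        return [], pos + 1
    pos += 1
    if size == 1:
        return [(x, y, z)], pos
    cells, h = [], size // 2
    for dx in (0, h):
        for dy in (0, h):
            for dz in (0, h):
                sub, pos = active_leaves(bits, x + dx, y + dy, z + dz, h, pos)
                cells += sub
    return cells, pos
```

On a 4x4x4 grid with a single three-cell cluster this emits 17 bits instead of 64, and decompression yields the active cells directly, ready to populate a compact vertex table as the abstract describes.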

    On the Use of Suffix Arrays for Memory-Efficient Lempel-Ziv Data Compression

    Much research has been devoted to optimizing algorithms of the Lempel-Ziv (LZ) 77 family, both in terms of speed and memory requirements. Binary search trees and suffix trees (ST) are data structures that have often been used for this purpose, as they allow fast searches at the expense of memory usage. In recent years, there has been interest in suffix arrays (SA), due to their simplicity and low memory requirements. One key observation is that an SA can solve the sub-string problem almost as efficiently as an ST, using less memory. This paper proposes two new SA-based algorithms for LZ encoding, which require no modifications on the decoder side. Experimental results on standard benchmarks show that our algorithms, though not faster, use 3 to 5 times less memory than their ST counterparts. Another important feature of the SA-based algorithms is that the amount of memory used is independent of the text to be searched, so the memory to be allocated can be defined a priori. These features of low and predictable memory requirements are of the utmost importance in several scenarios, such as embedded systems, where memory is at a premium and speed is not critical. Finally, we point out that the new algorithms are general, in the sense that they are adequate for applications other than LZ compression, such as text retrieval and forward/backward sub-string search.
    Comment: 10 pages, submitted to the IEEE Data Compression Conference 200
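    The "key observation" above is easy to demonstrate. The sketch below is an assumed toy illustration, not one of the paper's two algorithms: a suffix array answers the sub-string problem by binary search over sorted suffixes, and its memory footprint is fixed at one integer per text position, independent of the pattern searched.

```python
import bisect

def suffix_array(text):
    # O(n^2 log n) toy construction; production encoders use O(n log n) or O(n).
    return sorted(range(len(text)), key=lambda i: text[i:])

def occurrences(text, sa, pattern):
    """All start positions of pattern, via two binary searches over the SA.
    Building the keys list is an O(n*m) shortcut kept for clarity."""
    keys = [text[i:i + len(pattern)] for i in sa]
    lo = bisect.bisect_left(keys, pattern)
    hi = bisect.bisect_right(keys, pattern)
    return sorted(sa[lo:hi])
```

An LZ77 match finder built this way searches in O(m log n) time, close to a suffix tree's O(m), while storing only the array of n suffix indices, which is what makes the memory requirement predictable a priori.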

    New Algorithms and Lower Bounds for Sequential-Access Data Compression

    This thesis concerns sequential-access data compression, i.e., compression by algorithms that read the input one or more times from beginning to end. In one chapter we consider adaptive prefix coding, for which we must read the input character by character, outputting each character's self-delimiting codeword before reading the next one. We show how to encode and decode each character in constant worst-case time while producing an encoding whose length is worst-case optimal. In another chapter we consider one-pass compression with memory bounded in terms of the alphabet size and context length, and prove a nearly tight tradeoff between the amount of memory we can use and the quality of the compression we can achieve. In a third chapter we consider compression in the read/write streams model, which allows a number of passes and an amount of memory that are both polylogarithmic in the size of the input. We first show how to achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for achieving good grammar-based compression. Finally, we show that two streams are necessary and sufficient for achieving entropy-only bounds.
    Comment: draft of PhD thesis
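    The symmetry that makes adaptive prefix coding work can be sketched as follows. This is an assumed illustration, not the thesis's algorithm: encoder and decoder maintain identical symbol counts, so each side can rebuild the same prefix code without it ever being transmitted. Rebuilding the Huffman code per character costs O(sigma log sigma) here; the thesis achieves constant worst-case time per character with more careful code maintenance.

```python
import heapq
from collections import Counter

def huffman(counts):
    """Build a prefix code from the current counts (alphabet size >= 2).
    The tie-breaking index keeps the construction deterministic, so the
    encoder and decoder always derive the identical code."""
    heap = [[c, i, {s: ""}] for i, (s, c) in enumerate(sorted(counts.items()))]
    heapq.heapify(heap)
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in a[2].items()}
        merged.update({s: "1" + w for s, w in b[2].items()})
        heapq.heappush(heap, [a[0] + b[0], a[1], merged])
    return heap[0][2]

def encode(text, alphabet):
    counts = Counter({s: 1 for s in alphabet})   # shared starting model
    out = []
    for ch in text:
        out.append(huffman(counts)[ch])  # emit codeword under the current model...
        counts[ch] += 1                  # ...then update, exactly as the decoder will
    return "".join(out)

def decode(bits, n, alphabet):
    counts = Counter({s: 1 for s in alphabet})   # identical starting model
    out, pos = [], 0
    for _ in range(n):
        inv = {w: s for s, w in huffman(counts).items()}
        j = pos + 1
        while bits[pos:j] not in inv:    # prefix-freeness makes this unambiguous
            j += 1
        ch = inv[bits[pos:j]]
        out.append(ch)
        counts[ch] += 1
        pos = j
    return "".join(out)
```

Because each codeword is self-delimiting under the current model, the decoder can recover every character before reading the next, mirroring the character-by-character constraint described in the abstract.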

    PDF/A-3u as an archival format for Accessible mathematics

    Including the LaTeX source of mathematical expressions within the PDF document of a textbook or research paper has definite benefits regarding `Accessibility' considerations. Here we describe three ways in which this can be done, fully compatibly with the international standards ISO 32000, ISO 19005-3, and the forthcoming ISO 32000-2 (PDF 2.0). Two methods use embedded files, also known as `attachments', holding information in either LaTeX or MathML format, but use different PDF structures to relate these attachments to regions of the document window. One uses structure, so is applicable in a fully `Tagged PDF' context, while the other uses /AF tagging of the relevant content. The third method requires no tagging at all, instead including the source coding as the /ActualText replacement of a so-called `fake space'. Information provided this way is extracted via simple Select/Copy/Paste actions, and is available to existing screen-reading software and assistive technologies.
    Comment: This is a post-print version of the original in S.M. Watt et al. (Eds.): CICM 2014, LNAI 8543, pp. 184-199, 2014; available at http://link.springer.com/search?query=LNAI+8543, along with a supplementary PDF. This version, with the supplement as an attachment, is enriched to validate as PDF/A-3u, modulo an error in white-space handling in the pdfTeX version used to generate it.

    Holographic and 3D teleconferencing and visualization: implications for terabit networked applications

    Abstract not available