4,570 research outputs found

    A simple online competitive adaptation of Lempel-Ziv compression with efficient random access support

    Get PDF
    We present a simple adaptation of the Lempel-Ziv 1978 (LZ78) compression scheme (IEEE Transactions on Information Theory, 1978) that supports efficient random access to the input string. Namely, given query access to the compressed string, it is possible to efficiently recover any symbol of the input string. The compression algorithm is given as input a parameter $\epsilon > 0$, and with very high probability increases the length of the compressed string by at most a factor of $(1+\epsilon)$. The access time is $O(\log n + 1/\epsilon^2)$ in expectation, and $O((\log n)/\epsilon^2)$ with high probability. The scheme relies on sparse transitive-closure spanners. Any (consecutive) substring of the input string can be retrieved at an additional additive cost in the running time of the length of the substring. We also formally establish the necessity of modifying LZ78 to allow efficient random access: we construct a family of strings for which $\Omega(n/\log n)$ queries to the LZ78-compressed string are required to recover a single symbol of the input string. The main benefit of the proposed scheme is that it preserves the online nature and simplicity of LZ78, and that for every input string, the length of the compressed string is only a small factor larger than that obtained by running LZ78.
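    To see why unmodified LZ78 resists random access, recall that each phrase is stored as a (parent phrase, symbol) pair, so recovering even one symbol may require following a long chain of parents. The Python sketch below shows plain LZ78 parsing and this pointer-chasing recovery; it is illustrative only and does not implement the paper's spanner-based scheme.

    # Sketch: plain LZ78 parsing and naive phrase recovery (illustrative;
    # not the paper's spanner-augmented scheme).

    def lz78_parse(s):
        """Parse s into phrases, each stored as (parent phrase id, symbol)."""
        children = {0: {}}      # phrase id -> {symbol: child phrase id}
        phrases = [(0, '')]     # phrase 0 is the empty phrase
        node = 0
        for c in s:             # any trailing partial phrase is dropped
            if c in children[node]:
                node = children[node][c]
            else:
                pid = len(phrases)
                children[node][c] = pid
                children[pid] = {}
                phrases.append((node, c))
                node = 0
        return phrases

    def expand(phrases, pid):
        """Recover one phrase by following parent pointers; long chains are
        exactly what makes single-symbol access expensive in plain LZ78."""
        out = []
        while pid != 0:
            parent, c = phrases[pid]
            out.append(c)
            pid = parent
        return ''.join(reversed(out))

    phrases = lz78_parse('abababababab')
    assert expand(phrases, 4) == 'aba'   # phrase 4 decodes via 2 parent hops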

    NEEXP is Contained in MIP*

    Get PDF
    We study multiprover interactive proof systems. The power of classical multiprover interactive proof systems, in which the provers do not share entanglement, was characterized in a famous work by Babai, Fortnow, and Lund (Computational Complexity 1991), whose main result was the equality MIP = NEXP. The power of quantum multiprover interactive proof systems, in which the provers are allowed to share entanglement, has proven much more difficult to characterize. The best known lower bound on MIP* is $\mathrm{NEXP} \subseteq \mathrm{MIP}^*$, due to Ito and Vidick (FOCS 2012). As for upper bounds, MIP* could be as large as RE, the class of recursively enumerable languages. The main result of this work is the inclusion $\mathrm{NEEXP} = \mathrm{NTIME}[2^{2^{\mathrm{poly}(n)}}] \subseteq \mathrm{MIP}^*$. This is an exponential improvement over the prior lower bound and shows that proof systems with entangled provers are at least exponentially more powerful than those with classical provers. In our protocol the verifier delegates a classical, exponentially large MIP protocol for NEEXP to two entangled provers: the provers obtain their exponentially large questions by measuring their shared state, and use a classical PCP to certify the correctness of their exponentially long answers. For the soundness of our protocol, it is crucial that each prover not only sample its own question correctly but also avoid performing measurements that would reveal the other prover's sampled question. We ensure this by commanding the provers to perform a complementary measurement, relying on the Heisenberg uncertainty principle to prevent the forbidden measurements from being performed.
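    For orientation, the containments mentioned above can be collected in one display (the RE upper bound is the trivial known bound, not a contribution of this work):

    \begin{align*}
      \mathrm{MIP} &= \mathrm{NEXP}  &&\text{(Babai, Fortnow, Lund 1991)} \\
      \mathrm{NEXP} &\subseteq \mathrm{MIP}^{*}  &&\text{(Ito, Vidick 2012)} \\
      \mathrm{NEEXP} = \mathrm{NTIME}\bigl[2^{2^{\mathrm{poly}(n)}}\bigr]
        &\subseteq \mathrm{MIP}^{*} \subseteq \mathrm{RE}  &&\text{(this work; trivial upper bound)}
    \end{align*}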

    Space-Efficient Representations of Semi-structured Document Formats Using Succinct Data Structures

    Get PDF
    Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School, Department of Electrical and Computer Engineering, College of Engineering, February 2021. Advisor: Srinivasa Rao Satti.
    Vast amounts of big data are generated from a plethora of sources. Most of the data stored as files have no fixed schema, so the files are best maintained in semi-structured document formats. A number of such formats, including XML (eXtensible Markup Language), JSON (JavaScript Object Notation), and YAML (YAML Ain't Markup Language), have been proposed to preserve the hierarchy of the original data. Several data models for structuring gathered data, including RDF (Resource Description Framework), rely on semi-structured document formats for serialization and transfer before further processing. Because semi-structured document formats favor readability and are verbose, redundant space is needed to organize and maintain a document. Although general-purpose compression schemes are widely used to compact such documents, applying them hinders further handling of the corpora because the internal structure is lost. Succinct data structures, which answer queries while occupying space close to the information-theoretic lower bound, are widely investigated in theory; bit vectors and trees are the notable examples. Nevertheless, there have been few attempts to apply succinct data structures to represent semi-structured documents space-efficiently. In this dissertation we propose a unified, space-efficient representation of various semi-structured document formats. Its core features are compactness and queryability, derived from the rich functionality of succinct data structures. The representation combines (a) bit-indexed arrays, (b) succinct ordinal trees, and (c) compression techniques. We implement it and show by experiments that it decreases disk usage by up to 60% while occupying 90% less RAM. We also support partial processing of a document, so that even large corpora of big data can be processed in constrained environments. Alongside this compact representation of semi-structured documents, we strengthen several existing compression schemes. First, we propose an encoding for integer arrays that are not necessarily sorted. This compaction scheme improves upon existing universal code systems with the aid of a succinct bit vector structure. We show that the proposed algorithm reduces space usage by up to 44% while consuming 15% less time than the original code system, and in addition supports random access to elements of the encoded array. We also reinforce the SBH bitmap index compression algorithm, whose main strength is the use of an intermediate super-bucket during operations, improving query performance over combinations of compressed bitmap indexes. Inspired by the splits performed in the intermediate steps of SBH, we give an improved compression mechanism that supports parallelism on both CPUs and GPUs. Experiments show that the CPU-parallel optimization reduces compression and decompression times by up to 38% on a 4-core machine without modifying the compressed bitmap form.
    For GPUs, the new algorithm gives 48% faster query processing time in our experiments than previously existing bitmap index compression schemes.
GPU 병렬 μ΅œμ ν™”λŠ” 기쑴에 μ‘΄μž¬ν•˜λŠ” GPU λΉ„νŠΈλ§΅ μ••μΆ• 기법에 λΉ„ν•΄ 48\% λΉ λ₯Έ 질의 처리 μ‹œκ°„μ„ ν•„μš”λ‘œ 함을 ν™•μΈν•œλ‹€.Chapter 1 Introduction 1 1.1 Contribution 3 1.2 Organization 5 Chapter 2 Background 6 2.1 Model of Computation 6 2.2 Succinct Data Structures 7 Chapter 3 Space-efficient Representation of Integer Arrays 9 3.1 Introduction 9 3.2 Preliminaries 10 3.2.1 Universal Code System 10 3.2.2 Bit Vector 13 3.3 Algorithm Description 13 3.3.1 Main Principle 14 3.3.2 Optimization in the Implementation 16 3.4 Experimental Results 16 Chapter 4 Space-efficient Parallel Compressed Bitmap Index Processing 19 4.1 Introduction 19 4.2 Related Work 23 4.2.1 Byte-aligned Bitmap Code (BBC) 24 4.2.2 Word-Aligned Hybrid (WAH) 27 4.2.3 WAH-derived Algorithms 28 4.2.4 GPU-based WAH Algorithms 31 4.2.5 Super Byte-aligned Hybrid (SBH) 33 4.3 Parallelizing SBH 38 4.3.1 CPU Parallelism 38 4.3.2 GPU Parallelism 39 4.4 Experimental Results 40 4.4.1 Plain Version 41 4.4.2 Parallelized Version 46 4.4.3 Summary 49 Chapter 5 Space-efficient Representation of Semi-structured Document Formats 50 5.1 Preliminaries 50 5.1.1 Semi-structured Document Formats 50 5.1.2 Resource Description Framework 57 5.1.3 Succinct Ordinal Tree Representations 60 5.1.4 String Compression Schemes 64 5.2 Representation 66 5.2.1 Bit String Indexed Array 67 5.2.2 Main Structure 68 5.2.3 Single Document as a Collection of Chunks 72 5.2.4 Supporting Queries 73 5.3 Experimental Results 75 5.3.1 Datasets 76 5.3.2 Construction Time 78 5.3.3 RAM Usage during Construction 80 5.3.4 Disk Usage and Serialization Time 83 5.3.5 Chunk Division 83 5.3.6 String Compression 88 5.3.7 Query Time 89 Chapter 6 Conclusion 94 Bibliography 96 μš”μ•½ 109 Acknowledgements 111Docto

    LeCo: Lightweight Compression via Learning Serial Correlations

    Full text link
    Lightweight data compression is a key technique that allows column stores to exhibit superior performance for analytical queries. Despite comprehensive studies of dictionary-based encodings that approach Shannon's entropy, few prior works have systematically exploited the serial correlation in a column for compression. In this paper, we propose LeCo (i.e., Learned Compression), a framework that uses machine learning to automatically remove the serial redundancy in a value sequence, achieving an outstanding compression ratio and decompression performance simultaneously. LeCo presents a general approach to this end, making existing (ad hoc) algorithms such as Frame-of-Reference (FOR), Delta Encoding, and Run-Length Encoding (RLE) special cases under our framework. Our microbenchmark with three synthetic and six real-world data sets shows that a prototype of LeCo achieves a Pareto improvement on both compression ratio and random access speed over existing solutions. When integrating LeCo into widely used applications, we observe up to a 3.9x speedup in filter-scanning a Parquet file and a 16% increase in RocksDB's throughput.
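    From the abstract alone, the framework can be sketched as "model plus residuals" (illustrative Python, not LeCo's actual code or API): fit a simple model over the value positions, store the model parameters and the small residuals, and reconstruct any element in O(1) by evaluating the model and adding back its residual. A constant model degenerates to Frame-of-Reference-style coding, and runs of zero residuals are what RLE exploits.

    # Sketch of learned lightweight compression: linear model + residuals.
    # Based on the abstract's description, not on LeCo's implementation.

    def compress(values):
        n = len(values)
        # Least-squares fit of values[i] ~ slope * i + intercept.
        xm, ym = (n - 1) / 2, sum(values) / n
        sxx = sum((i - xm) ** 2 for i in range(n))
        sxy = sum((i - xm) * (v - ym) for i, v in enumerate(values))
        slope = sxy / sxx if sxx else 0.0
        intercept = ym - slope * xm
        # Residuals stay small for well-modeled data -> few bits each.
        residuals = [v - round(slope * i + intercept)
                     for i, v in enumerate(values)]
        return slope, intercept, residuals

    def access(model, i):
        """O(1) random access: evaluate the model, add the residual."""
        slope, intercept, residuals = model
        return round(slope * i + intercept) + residuals[i]

    vals = [100, 103, 107, 109, 113, 116]
    m = compress(vals)
    assert [access(m, i) for i in range(len(vals))] == vals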

    A Space-Optimal Grammar Compression

    Get PDF
    A grammar compression is a context-free grammar (CFG) that derives a single string deterministically. For an input string of length $N$ over an alphabet of size $\sigma$, the smallest CFG is $O(\log N)$-approximable in the offline setting and $O(\log N \log^* N)$-approximable in the online setting. In addition, an information-theoretic lower bound for representing a CFG in Chomsky normal form with $n$ variables is $\log(n!/n^\sigma) + n + o(n)$ bits. Although there is an online grammar compression algorithm that directly computes the succinct encoding of its output CFG with an $O(\log N \log^* N)$ approximation guarantee, the problem of optimizing its working space has remained open. We propose a fully online algorithm whose working space is asymptotically the fewest bits possible, matching the lower bound, with $O(N \log\log n)$ compression time. In addition, we propose several techniques to boost grammar compression and show their efficiency by computational experiments.
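    For intuition, a grammar in Chomsky normal form in which every variable has exactly one rule derives exactly one string, so the grammar itself is the compressed representation. The toy Python sketch below expands such a grammar; it is illustrative only, and the paper's online algorithm and succinct encoding are far more involved.

    # Toy grammar compression: a straight-line CFG in Chomsky normal form
    # derives a single string deterministically.

    rules = {                 # variable -> (left symbol, right symbol)
        'S': ('A', 'B'),
        'A': ('B', 'B'),
        'B': ('a', 'b'),
    }

    def expand(sym, rules):
        """Derive the unique string of a symbol by recursive expansion."""
        if sym not in rules:              # terminal symbol
            return sym
        left, right = rules[sym]
        return expand(left, rules) + expand(right, rules)

    assert expand('S', rules) == 'ababab'
    # Three variables derive a length-6 string; for repetitive inputs the
    # grammar can be exponentially smaller than the string it derives.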

    GraCT: A Grammar based Compressed representation of Trajectories

    Get PDF
    We present a compressed data structure to store free trajectories of moving objects (ships over the sea, for example) allowing spatio-temporal queries. Our method, GraCT, uses a $k^2$-tree to store the absolute positions of all objects at regular time intervals (snapshots), whereas the positions between snapshots are represented as logs of relative movements compressed with Re-Pair. Our experimental evaluation shows important savings in space and time with respect to a fair baseline.
    Comment: This research has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Actions H2020-MSCA-RISE-2015 BIRDS GA No. 69094
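    A bare-bones Python sketch of the snapshot-plus-log layout described above; the real GraCT stores snapshots in a $k^2$-tree and compresses the logs with Re-Pair, both omitted here, and the names and period D are illustrative.

    # Sketch of GraCT's layout: absolute snapshots every D time steps,
    # relative-movement logs in between. k^2-tree and Re-Pair omitted.

    D = 4  # snapshot period (illustrative)

    def build(trajectory):
        snapshots, logs = {}, []
        for t, (x, y) in enumerate(trajectory):
            if t % D == 0:
                snapshots[t] = (x, y)          # absolute position
            else:
                px, py = trajectory[t - 1]
                logs.append((x - px, y - py))  # small deltas compress well
        return snapshots, logs

    def position(snapshots, logs, t):
        """Decode the position at time t: start from the latest snapshot
        at or before t, then replay the relative moves recorded since."""
        s = t - t % D
        x, y = snapshots[s]
        for k in range(s + 1, t + 1):
            dx, dy = logs[k - 1 - (k - 1) // D]  # skip snapshot steps
            x, y = x + dx, y + dy
        return x, y

    traj = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2), (3, 2), (3, 3)]
    snaps, logs = build(traj)
    assert all(position(snaps, logs, t) == traj[t] for t in range(len(traj)))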