Queries on LZ-Bounded Encodings
We describe a data structure that stores a string in space similar to
that of its Lempel-Ziv encoding and efficiently supports access, rank and
select queries. These queries are fundamental for implementing succinct and
compressed data structures, such as compressed trees and graphs. We show that
our data structure can be built in a scalable manner and is both small and fast
in practice compared to other data structures supporting such queries.
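To make the three query types named above concrete, here is a minimal reference sketch of their semantics on a plain bit string (our own illustration in Python, not the paper's compressed structure; the function names are ours):

```python
def access(bits, i):
    """Return the i-th bit (0-indexed)."""
    return bits[i]

def rank(bits, i):
    """Number of 1-bits in bits[0..i), i.e. strictly before position i."""
    return sum(bits[:i])

def select(bits, k):
    """Position of the k-th 1-bit (1-indexed), or -1 if there are fewer."""
    seen = 0
    for pos, b in enumerate(bits):
        seen += b
        if seen == k:
            return pos
    return -1

bits = [1, 0, 1, 1, 0, 1]
assert access(bits, 2) == 1
assert rank(bits, 4) == 3    # ones at positions 0, 2, 3
assert select(bits, 3) == 3  # the third 1-bit sits at index 3
```

The point of the paper is to support these same operations while storing the string in LZ-like compressed space rather than verbatim.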
Ideograph: A Language for Expressing and Manipulating Structured Data
We introduce Ideograph, a language for expressing and manipulating structured
data. Its types describe kinds of structures, such as natural numbers, lists,
multisets, binary trees, syntax trees with variable binding, directed
multigraphs, and relational databases. Fully normalized terms of a type
correspond exactly to members of the structure, analogous to a Church-encoding.
Moreover, definable operations over these structures are guaranteed to respect
the structures' equivalences. In this paper, we give the syntax and semantics
of the non-polymorphic subset of Ideograph, and we demonstrate how it can
represent and manipulate several interesting structures.
Comment: In Proceedings TERMGRAPH 2022, arXiv:2303.1421
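The Church-encoding analogy can be illustrated outside Ideograph. The sketch below (plain Python, all names ours) shows the classical Church encoding of natural numbers, where each number corresponds exactly to one fully normalized term and operations like addition respect that structure:

```python
# Church numerals: n is the function applying a successor s to a zero z
# exactly n times, so distinct normalized terms <-> distinct numbers.
zero = lambda s, z: z

def succ(n):
    return lambda s, z: s(n(s, z))

def to_int(n):
    """Interpret a Church numeral as a Python int."""
    return n(lambda x: x + 1, 0)

def add(m, n):
    """Addition: apply s a total of m + n times."""
    return lambda s, z: m(s, n(s, z))

three = succ(succ(succ(zero)))
assert to_int(three) == 3
assert to_int(add(three, succ(zero))) == 4
```

Ideograph's types generalize this correspondence from numbers to richer structures such as graphs and syntax trees with binding.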
Treebank annotation schemes and parser evaluation for German
Recent studies have focused on the question of whether less-configurational languages like German are harder to parse than English, or whether the lower parsing scores are an artefact of treebank encoding schemes and data structures, as claimed by Kübler et al. (2006). This claim is based on the assumption that PARSEVAL metrics fully reflect parse quality across treebank encoding schemes. In this paper we present new experiments to test this claim. We use the PARSEVAL metric, the Leaf-Ancestor metric as well as a dependency-based evaluation, and present novel approaches measuring the effect of controlled error insertion on treebank trees and parser output. We also provide extensive post-parsing cross-treebank conversion. The results of the experiments show that, contrary to Kübler et al. (2006), the question of whether or not German is harder to parse than English remains undecided.
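The core of PARSEVAL scoring is labelled bracket matching between gold and predicted constituents. A hedged sketch (our own simplification; real evaluators such as evalb add normalization rules this omits):

```python
from collections import Counter

def parseval(gold, pred):
    """Bracket precision, recall, F1 over (label, start, end) spans."""
    g, p = Counter(gold), Counter(pred)
    matched = sum((g & p).values())   # multiset intersection of brackets
    prec = matched / sum(p.values())
    rec = matched / sum(g.values())
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

gold = [("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5)]
pred = [("S", 0, 5), ("NP", 0, 2), ("VP", 3, 5)]  # VP span is wrong
prec, rec, f1 = parseval(gold, pred)
assert round(f1, 3) == 0.667  # 2 of 3 brackets match on each side
```

The paper's argument is precisely that such span counts can reward or punish annotation-scheme choices independently of true parse quality, which is why it adds Leaf-Ancestor and dependency-based evaluation.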
On the suitability of suffix arrays for Lempel-Ziv data compression
Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
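The key operation such an encoder relies on is finding dictionary matches by binary search over the suffix array. A small sketch under our own simplifications (quadratic construction instead of a linear-time builder, exact-prefix lookup only):

```python
def suffix_array(text):
    """Indices of all suffixes of text, in lexicographic order."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find(text, sa, pattern):
    """Binary-search sa for a suffix starting with pattern; -1 if none."""
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(pattern)] < pattern:
            lo = mid + 1
        else:
            hi = mid
    if lo < len(sa) and text[sa[lo]:sa[lo] + len(pattern)] == pattern:
        return sa[lo]
    return -1

text = "abracadabra"
sa = suffix_array(text)
assert find(text, sa, "cad") == 4   # "cadabra" starts at position 4
assert find(text, sa, "zzz") == -1
```

Because the array is just n integers, its memory footprint is fixed in advance, which is the property the abstract emphasizes against suffix trees.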
Binding and Normalization of Binary Sparse Distributed Representations by Context-Dependent Thinning
Distributed representations were often criticized as inappropriate for encoding data with a complex structure. However, Plate's Holographic Reduced Representations and Kanerva's Binary Spatter Codes are recent schemes that allow on-the-fly encoding of nested compositional structures by real-valued or dense binary vectors of fixed dimensionality.
In this paper we consider procedures of the Context-Dependent Thinning which were developed for representation of complex hierarchical items in the architecture of Associative-Projective Neural Networks. These procedures provide binding of items represented by sparse binary codevectors (with low probability of 1s). Such an encoding is biologically plausible and allows a high storage capacity of distributed associative memory where the codevectors may be stored.
In contrast to known binding procedures, Context-Dependent Thinning preserves the same low density (or sparseness) of the bound codevector for a varied number of component codevectors. Moreover, a bound codevector is not only similar to another one with similar component codevectors (as in other schemes), but is also similar to the component codevectors themselves. This allows the similarity of structures to be estimated simply from the overlap of their codevectors, without retrieval of the component codevectors. This also allows easy retrieval of the component codevectors.
Examples of algorithmic and neural-network implementations of the thinning procedures are considered. We also present representation examples for various types of nested structured data (propositions using role-filler and predicate-arguments representation schemes, trees, directed acyclic graphs) using sparse codevectors of fixed dimension. Such representations may provide a fruitful alternative to the symbolic representations of traditional AI, as well as to the localist and microfeature-based connectionist representations.
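A much-simplified sketch of the binding-by-thinning idea (our own reduction of one CDT variant, with sparse binary vectors as Python index sets and arbitrarily chosen dimensionality and density; the paper's actual procedures differ in detail):

```python
import random

N = 1000                     # dimensionality (assumption for the demo)
random.seed(0)
# A fixed set of random permutations, shared by all bindings.
PERMS = [random.sample(range(N), N) for _ in range(2)]

def sparse_vec(density=0.02):
    """Random sparse binary vector, stored as the set of 1-positions."""
    return {i for i in range(N) if random.random() < density}

def thin_bind(vectors):
    """Superpose by OR, then thin: keep only the 1s of the superposition
    that also appear in one of its permuted copies."""
    z = set().union(*vectors)
    permuted = set()
    for p in PERMS:
        permuted |= {p[i] for i in z}
    return z & permuted

a, b = sparse_vec(), sparse_vec()
bound = thin_bind([a, b])
assert bound <= (a | b)            # thinned 1s come from the components,
assert len(bound) <= len(a | b)    # so the result overlaps them directly
```

The subset property is what gives the similarity behaviour the abstract describes: the bound vector shares 1-positions with its own components, not just with other bound vectors.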
LRM-Trees: Compressed Indices, Adaptive Sorting, and Compressed Permutations
LRM-Trees are an elegant way to partition a sequence of values into sorted
consecutive blocks, and to express the relative position of the first element
of each block within a previous block. They were used to encode ordinal trees
and to index integer arrays in order to support range minimum queries on them.
We describe how they yield many other convenient results in a variety of areas,
from data structures to algorithms: some compressed succinct indices for range
minimum queries; a new adaptive sorting algorithm; and a compressed succinct
data structure for permutations, supporting direct and inverse application in time that decreases as the permutation becomes more compressible.
Comment: 13 pages, 1 figure
On the Use of Suffix Arrays for Memory-Efficient Lempel-Ziv Data Compression
Much research has been devoted to optimizing algorithms of the Lempel-Ziv
(LZ) 77 family, both in terms of speed and memory requirements. Binary search
trees and suffix trees (ST) are data structures that have been often used for
this purpose, as they allow fast searches at the expense of memory usage.
In recent years, there has been interest in suffix arrays (SA), due to their
simplicity and low memory requirements. One key issue is that an SA can solve
the sub-string problem almost as efficiently as an ST, using less memory. This
paper proposes two new SA-based algorithms for LZ encoding, which require no
modifications on the decoder side. Experimental results on standard benchmarks
show that our algorithms, though not faster, use 3 to 5 times less memory than
the ST counterparts. Another important feature of our SA-based algorithms is
that the amount of memory is independent of the text to search, thus the memory
that has to be allocated can be defined a priori. These features of low and
predictable memory requirements are of the utmost importance in several
scenarios, such as embedded systems, where memory is at a premium and speed is
not critical. Finally, we point out that the new algorithms are general, in the
sense that they are adequate for applications other than LZ compression, such
as text retrieval and forward/backward sub-string search.
Comment: 10 pages, submitted to IEEE - Data Compression Conference 200
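The parsing these encoders compute can be shown end to end with a naive longest-previous-match search standing in for the paper's suffix-array search (same output factors, no attempt at their speed or memory bounds; all names ours):

```python
def lz77(text):
    """LZ77 factors as (distance, length, next_char) triples."""
    factors, i = [], 0
    while i < len(text):
        best_len, best_pos = 0, 0
        for j in range(i):                       # candidate earlier start
            l = 0
            while i + l < len(text) and text[j + l] == text[i + l]:
                l += 1                           # overlap with i is allowed
            if l > best_len:
                best_len, best_pos = l, j
        if best_len == 0:
            factors.append((0, 0, text[i]))      # literal character
            i += 1
        else:
            nxt = text[i + best_len] if i + best_len < len(text) else ""
            factors.append((i - best_pos, best_len, nxt))
            i += best_len + 1
    return factors

def unlz77(factors):
    """Decoder: copy length chars from distance back, then the literal."""
    out = ""
    for dist, length, ch in factors:
        start = len(out) - dist
        for k in range(length):
            out += out[start + k]                # char-by-char handles overlap
        out += ch
    return out

s = "abracadabra_abracadabra"
assert unlz77(lz77(s)) == s
```

Note the decoder needs nothing beyond the factors themselves, which is why the paper's SA-based encoders require no changes on the decoder side.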
Combined Data Structure for Previous- and Next-Smaller-Values
Let A be a static array storing n elements from a totally ordered set. We present a data structure of optimal size that allows us to answer the following queries on A in constant time, without accessing A: (1) previous smaller value queries, where given an index i, we wish to find the rightmost index to the left of i where the value of A is strictly smaller than at i, and (2) next smaller value queries, which search to the right of i. As an additional bonus, our data structure also allows us to answer a third kind of query: given indices i and j, find the position of the minimum in A[i..j]. Our data structure has direct consequences for the space-efficient storage of suffix trees.
Comment: to appear in Theoretical Computer Science
On Tree-Based Neural Sentence Modeling
Neural networks with tree-based sentence encoders have shown better results
on many downstream tasks. Most of existing tree-based encoders adopt syntactic
parsing trees as the explicit structure prior. To study the effectiveness of
different tree structures, we replace the parsing trees with trivial trees
(i.e., binary balanced tree, left-branching tree and right-branching tree) in
the encoders. Though trivial trees contain no syntactic information, those
encoders get competitive or even better results on all of the ten downstream
tasks we investigated. This surprising result indicates that explicit syntax
guidance may not be the main contributor to the superior performances of
tree-based neural sentence modeling. Further analysis shows that tree modeling
gives better results when crucial words are closer to the final representation.
Additional experiments give more clues on how to design an effective tree-based
encoder. Our code is open-source and available at
https://github.com/ExplorerFreda/TreeEnc.
Comment: To appear at EMNLP 201
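The "trivial trees" the study substitutes for parse trees are easy to construct. A sketch of two of them (our own illustration; composition is shown as plain tuple nesting, where a real encoder would apply e.g. a TreeLSTM cell at each internal node):

```python
def balanced_tree(tokens):
    """Binary balanced tree: split the span as evenly as possible."""
    if len(tokens) == 1:
        return tokens[0]
    mid = (len(tokens) + 1) // 2
    return (balanced_tree(tokens[:mid]), balanced_tree(tokens[mid:]))

def left_branching(tokens):
    """Left-branching tree: fold tokens in from the left."""
    tree = tokens[0]
    for tok in tokens[1:]:
        tree = (tree, tok)
    return tree

toks = ["the", "cat", "sat", "down"]
assert balanced_tree(toks) == (("the", "cat"), ("sat", "down"))
assert left_branching(toks) == ((("the", "cat"), "sat"), "down")
```

Neither structure encodes any syntax, which is what makes their competitive downstream results surprising; the balanced tree also keeps every word close to the root, matching the paper's observation about crucial words and the final representation.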