Finite Automata for the Sub- and Superword Closure of CFLs: Descriptional and Computational Complexity
We answer two open questions by (Gruber, Holzer, Kutrib, 2009) on the
state-complexity of representing sub- or superword closures of context-free
grammars (CFGs): (1) We prove a (tight) upper bound of $2^{\mathcal{O}(n)}$
on the size of nondeterministic finite automata (NFAs) representing the
subword closure of a CFG of size $n$. (2) We present a family of CFGs for
which the minimal deterministic finite automata representing their subword
closure matches the upper bound of $2^{2^{\mathcal{O}(n)}}$ following from (1).
Furthermore, we prove that the inequivalence problem for NFAs representing sub-
or superword-closed languages is only NP-complete as opposed to PSPACE-complete
for general NFAs. Finally, we extend our results into an approximation method
to attack inequivalence problems for CFGs.
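As a plain illustration of the closure operation itself (not of the paper's
NFA construction), the subword closure of a language is the set of all
scattered subwords, i.e. subsequences, of its words. A minimal brute-force
sketch in Python, assuming only a finite sample language; the function name is
invented for illustration:

    # Brute-force subword (scattered-subword) closure of a finite language.
    # Purely illustrative: the paper bounds NFA constructions for the
    # closure of an entire context-free language, which this does not do.
    from itertools import combinations

    def subword_closure(language):
        """All scattered subwords (subsequences) of a finite language."""
        closed = set()
        for word in language:
            n = len(word)
            for k in range(n + 1):
                for positions in combinations(range(n), k):
                    closed.add("".join(word[i] for i in positions))
        return closed

    print(sorted(subword_closure({"abc"})))
    # ['', 'a', 'ab', 'abc', 'ac', 'b', 'bc', 'c']

For an NFA the closure is cheap: allowing every letter transition to also be
taken without consuming input yields an automaton for the subword closure with
the same states; the difficulty the paper addresses is the blow-up when the
starting point is a CFG.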
DNA Analysis Using Grammatical Inference
An accurate language definition capable of distinguishing between coding and non-coding DNA has important applications and analytical significance in computational biology. The method proposed here uses positive-sample grammatical inference and statistical information to infer languages for coding DNA.
An algorithm is proposed for searching for an optimal subset of input sequences for the inference of regular grammars, optimizing a relevant accuracy metric. The algorithm does not guarantee finding the optimal subset; however, testing shows improvements in accuracy and performance over the base algorithm.
Testing shows that the languages inferred for components of DNA are consistently accurate. Using the proposed algorithm, languages are inferred for coding DNA with an average conditional probability of over 80%. This reveals that languages for components of DNA can be inferred and are useful independently of the process that created them. These languages can then be analyzed or used for other tasks in computational biology.
To illustrate potential applications of regular grammars for DNA components, an inferred language for exon sequences is applied as a post-processing step to Hidden Markov model exon prediction, reducing the number of wrong exons detected and significantly improving the specificity of the model.
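The abstract above does not spell out an algorithm, so the following is only a
generic, hypothetical sketch of positive-sample regular inference: building a
prefix tree acceptor (PTA) from positive DNA samples, the usual starting point
for state-merging inference methods such as RPNI. Function names and the toy
sequences are invented:

    # Build a prefix tree acceptor (PTA) from positive samples. A PTA is the
    # common seed automaton for positive-sample regular inference; it is NOT
    # the specific optimized-subset algorithm proposed in the work above.

    def build_pta(samples):
        """Return (transitions, accepting) for a prefix tree acceptor."""
        transitions = {}          # (state, symbol) -> state
        accepting = set()
        next_state = 1            # state 0 is the root
        for seq in samples:
            state = 0
            for symbol in seq:
                if (state, symbol) not in transitions:
                    transitions[(state, symbol)] = next_state
                    next_state += 1
                state = transitions[(state, symbol)]
            accepting.add(state)  # each full sample ends in an accepting state
        return transitions, accepting

    def accepts(transitions, accepting, seq):
        state = 0
        for symbol in seq:
            if (state, symbol) not in transitions:
                return False
            state = transitions[(state, symbol)]
        return state in accepting

    # Toy coding-DNA fragments:
    trans, acc = build_pta(["ATG", "ATGAAA", "ATGCCC"])
    print(accepts(trans, acc, "ATGAAA"))  # True
    print(accepts(trans, acc, "TTT"))     # False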
Hypothesis Testing based Intrinsic Evaluation of Word Embeddings
We introduce the cross-match test, an exact, distribution-free,
high-dimensional hypothesis test, as an intrinsic evaluation metric for word
embeddings. We show that cross-match is an effective means of measuring
distributional similarity between different vector representations and of
evaluating the statistical significance of different vector embedding models.
Additionally, we find that cross-match can be used to provide a quantitative
measure of linguistic similarity for selecting bridge languages for machine
translation. We demonstrate that the results of the hypothesis test align with
our expectations and note that the framework of two-sample hypothesis testing
is not limited to word embeddings and can be extended to all vector
representations.
Comment: Accepted to RepEval 2017: The Second Workshop on Evaluating Vector
Space Representations for NLP
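A hypothetical sketch of the cross-match statistic (Rosenbaum's exact,
distribution-free two-sample test) applied to embedding vectors: pool the two
samples, compute a minimum-weight perfect matching on pairwise distances, and
count matched pairs that cross the samples (few cross-matches suggest the two
distributions differ). It assumes numpy and networkx, omits the exact
null-distribution p-value the full test requires, and is not the authors'
implementation:

    import numpy as np
    import networkx as nx

    def cross_match_count(X, Y):
        """Count cross-sample pairs in a min-weight perfect matching."""
        pooled = np.vstack([X, Y])   # rows 0..len(X)-1 come from sample X
        n = len(pooled)
        assert n % 2 == 0, "a perfect matching needs an even pooled sample"
        G = nx.Graph()
        for i in range(n):
            for j in range(i + 1, n):
                d = np.linalg.norm(pooled[i] - pooled[j])
                # negate distances: max-weight matching == min-weight matching
                G.add_edge(i, j, weight=-d)
        matching = nx.max_weight_matching(G, maxcardinality=True)
        return sum((i < len(X)) != (j < len(X)) for i, j in matching)

    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(10, 5))  # toy "embeddings" from model A
    Y = rng.normal(0.5, 1.0, size=(10, 5))  # toy "embeddings" from model B
    print(cross_match_count(X, Y))  # low count hints the samples differ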
Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability
Previously referred to as 'miraculous' in the scientific literature because
of its powerful properties and its wide application as the optimal solution to
the problem of induction/inference, (approximations to) Algorithmic
Probability (AP) and the associated Universal Distribution are (or should be)
of the greatest importance in science. Here we investigate the emergence, the
rates of emergence and convergence, and the Coding-theorem-like behaviour of
AP in
Turing-subuniversal models of computation. We investigate empirical
distributions of computing models in the Chomsky hierarchy. We introduce
measures of algorithmic probability and algorithmic complexity based upon
resource-bounded computation, in contrast to previously thoroughly investigated
distributions produced from the output distribution of Turing machines. This
approach allows for numerical approximations to algorithmic
(Kolmogorov-Chaitin) complexity-based estimations at each of the levels of a
computational hierarchy. We demonstrate that all these estimations are
correlated in rank and that they converge both in rank and values as a function
of computational power, despite fundamental differences between computational
models. In the context of natural processes that operate below the Turing
universal level because of finite resources and physical degradation, the
investigation of natural biases stemming from algorithmic rules may shed light
on the distribution of outcomes. We show that up to 60% of the
simplicity/complexity bias in distributions produced even by the weakest of
the computational models can be accounted for by Algorithmic Probability in
its approximation to the Universal Distribution.
Comment: 27 pages main text, 39 pages including supplement. Online complexity
calculator: http://complexitycalculator.com
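To make the resource-bounded estimation concrete, here is a toy sketch in the
spirit of the paper, but using an invented three-instruction language rather
than the Chomsky-hierarchy models the authors actually enumerate: run every
short program under a resource bound, tally outputs into an empirical
distribution m, and read off a Coding-theorem-style complexity estimate
K(x) ~ -log2 m(x):

    from collections import Counter
    from itertools import product
    from math import log2

    # Toy language: '0'/'1' append a bit, 'd' duplicates the output so far.
    def run(program, max_output=32):
        out = ""
        for op in program:
            out = out + out if op == "d" else out + op
            if len(out) > max_output:   # resource bound on output size
                return None
        return out

    def empirical_distribution(max_len=8):
        counts = Counter()
        for length in range(1, max_len + 1):
            for program in product("01d", repeat=length):
                result = run(program)
                if result:              # skip empty or over-budget outputs
                    counts[result] += 1
        total = sum(counts.values())
        return {s: c / total for s, c in counts.items()}

    m = empirical_distribution()
    for s in ["0101", "0000", "0110"]:
        # CTM-style estimate: simpler strings get shorter codes
        print(s, round(-log2(m[s]), 2) if s in m else "not produced")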
One-Tape Turing Machine Variants and Language Recognition
We present two restricted versions of one-tape Turing machines. Both
characterize the class of context-free languages. In the first version,
proposed by Hibbard in 1967 and called limited automata, each tape cell can
be rewritten only in the first $d$ visits, for a fixed constant $d \geq 2$.
Furthermore, for $d = 2$, deterministic limited automata are equivalent to
deterministic pushdown automata, namely they characterize deterministic
context-free languages. Further restricting the possible operations, we
consider strongly limited automata. These models still characterize
context-free languages. However, the deterministic version is less powerful
than the deterministic version of limited automata. In fact, there exist
deterministic context-free languages that are not accepted by any deterministic
strongly limited automaton.
Comment: 20 pages. This article will appear in the Complexity Theory Column of
the September 2015 issue of SIGACT News
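As a concrete (and entirely hypothetical) rendering of the restriction, a
simulator can track per-cell visit counts and refuse any rewrite after a
cell's first d visits. The demo machine below is 1-limited, rewriting each
cell only on its first visit, so consistent with the article it accepts only a
regular language, here "even number of a's":

    def run_limited(transitions, word, d, start, accept, max_steps=10_000):
        """Simulate a deterministic d-limited one-tape machine."""
        tape = list(word) + ["#"]              # '#' marks the right end
        visits = [0] * len(tape)
        state, pos = start, 0
        for _ in range(max_steps):
            if state == accept:
                return True
            visits[pos] += 1
            if (state, tape[pos]) not in transitions:
                return False                   # no move defined: reject
            state, write, move = transitions[(state, tape[pos])]
            if write != tape[pos] and visits[pos] > d:
                raise ValueError("cell rewritten after its first %d visits" % d)
            tape[pos] = write
            pos = min(max(pos + move, 0), len(tape) - 1)
        return False

    # Even number of 'a's, blanking each cell on its first (and only) visit.
    T = {("qe", "a"): ("qo", "X", 1), ("qe", "b"): ("qe", "X", 1),
         ("qo", "a"): ("qe", "X", 1), ("qo", "b"): ("qo", "X", 1),
         ("qe", "#"): ("qacc", "#", 0)}
    print(run_limited(T, "aab", d=1, start="qe", accept="qacc"))  # True
    print(run_limited(T, "ab", d=1, start="qe", accept="qacc"))   # False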
Minimally-Supervised Morphological Segmentation using Adaptor Grammars
This paper explores the use of Adaptor Grammars, a nonparametric Bayesian modelling framework, for minimally supervised morphological segmentation. We compare three training methods: unsupervised training, semi-supervised training, and a novel model selection method. In the model selection method, we train unsupervised Adaptor Grammars using an over-articulated metagrammar, then use a small labelled data set to select which potential morph boundaries identified by the metagrammar should be returned in the final output. We evaluate on five languages and show that semi-supervised training provides a boost over unsupervised training, while the model selection method yields the best average results over all languages and is competitive with state-of-the-art semi-supervised systems. Moreover, this method provides the potential to tune performance according to different evaluation metrics or downstream tasks.
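A rough, hypothetical sketch of the model-selection step described above
(standing in for, not reproducing, the metagrammar machinery): let the
unsupervised model score candidate morph boundaries, then use a small labelled
set to pick the score threshold that maximizes boundary F1:

    def boundary_f1(predicted, gold):
        """F1 over boundary positions for one word."""
        if not predicted or not gold:
            return 0.0
        tp = len(predicted & gold)
        p, r = tp / len(predicted), tp / len(gold)
        return 2 * p * r / (p + r) if p + r else 0.0

    def select_threshold(candidates, gold_boundaries):
        """candidates: {word: {boundary_pos: score}}; gold: {word: set(pos)}."""
        scores = sorted({s for c in candidates.values() for s in c.values()})
        best_t, best_f1 = None, -1.0
        for t in scores:
            f1s = [boundary_f1({p for p, s in cand.items() if s >= t},
                               gold_boundaries[word])
                   for word, cand in candidates.items()]
            mean_f1 = sum(f1s) / len(f1s)
            if mean_f1 > best_f1:
                best_t, best_f1 = t, mean_f1
        return best_t, best_f1

    # Toy labelled set: walk|ing (boundary at 4), jump|ed (boundary at 4).
    candidates = {"walking": {4: 0.9, 2: 0.3}, "jumped": {4: 0.8, 1: 0.2}}
    gold = {"walking": {4}, "jumped": {4}}
    print(select_threshold(candidates, gold))  # (0.8, 1.0)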