ALMA: Automata Learner using Modulo 2 Multiplicity Automata
We present ALMA (Automata Learner using modulo 2 Multiplicity Automata), a
Java-based tool that can learn any automaton accepting regular languages of
finite or infinite words with an implementable membership query function. Users
can either pass as input their own membership query function, or use the
predefined membership query functions for modulo 2 multiplicity automata and
non-deterministic B\"uchi automata. While learning, ALMA can output the state
of the observation table after every equivalence query, and upon termination,
it can output the dimension, transition matrices, and final vector of the
learned modulo 2 multiplicity automaton. Users can test whether a word is
accepted by performing a membership query on the learned automaton.
ALMA follows the polynomial-time learning algorithm of Beimel et al. (Learning functions represented as multiplicity automata. J. ACM 47(3), 2000), which uses membership and equivalence queries and represents hypotheses using modulo 2 multiplicity automata. ALMA also implements a polynomial-time learning algorithm for strongly unambiguous B\"uchi automata by Angluin et al. (Strongly unambiguous B\"uchi automata are polynomially predictable with membership queries. CSL 2020), and a minimization algorithm for modulo 2 multiplicity automata by Sakarovitch (Elements of Automata Theory. 2009).
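For intuition about what such a learned automaton looks like, here is a minimal sketch, not ALMA's actual API: a modulo 2 multiplicity automaton of dimension d is given by an initial vector, one d x d transition matrix per letter, and a final vector, all over GF(2), and a word is accepted exactly when the corresponding matrix product evaluates to 1. The alphabet, the example automaton, and all names below are hypothetical.

    # Sketch only: answering a membership query on a modulo 2 multiplicity
    # automaton given by an initial vector, per-letter transition matrices,
    # and a final vector over GF(2). Not ALMA's actual interface.
    import numpy as np

    def membership(initial, transitions, final, word):
        state = initial.copy()
        for letter in word:
            state = (state @ transitions[letter]) % 2  # stay in GF(2)
        return int(state @ final) % 2 == 1             # accepted iff value is 1

    # Hypothetical 2-dimensional automaton over {a, b}: accepts words with an
    # odd number of a's.
    init = np.array([1, 0])
    trans = {"a": np.array([[0, 1], [1, 0]]),   # flip parity
             "b": np.array([[1, 0], [0, 1]])}   # keep parity
    fin = np.array([0, 1])
    assert membership(init, trans, fin, "ab")        # one a: accepted
    assert not membership(init, trans, fin, "aab")   # two a's: rejected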
Complexity of Equivalence and Learning for Multiplicity Tree Automata
We consider the complexity of equivalence and learning for multiplicity tree
automata, i.e., weighted tree automata over a field. We first show that the
equivalence problem is logspace equivalent to polynomial identity testing, the
complexity of which is a longstanding open problem. Secondly, we derive lower
bounds on the number of queries needed to learn multiplicity tree automata in
Angluin's exact learning model, over both arbitrary and fixed fields.
Habrard and Oncina (2006) give an exact learning algorithm for multiplicity
tree automata, in which the number of queries is proportional to the size of
the target automaton and the size of a largest counterexample, represented as a
tree, that is returned by the Teacher. However, the smallest
tree-counterexample may be exponential in the size of the target automaton.
Thus the above algorithm does not run in time polynomial in the size of the
target automaton, and has query complexity exponential in the lower bound.
Assuming a Teacher that returns minimal DAG representations of
counterexamples, we give a new exact learning algorithm whose query complexity
is quadratic in the target automaton size, almost matching the lower bound, and
improving the best previously known algorithm by an exponential factor.
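To see why DAG-represented counterexamples can make an exponential difference, note that a full binary tree of depth n has 2^{n+1} - 1 nodes, while its minimal DAG has only n + 1 nodes, since all subtrees at the same depth coincide. A tiny illustrative sketch (not from the paper, with symbol names chosen here):

    # A full binary tree of depth n over a binary symbol f and a constant a:
    #   depth 0: a, depth 1: f(a, a), depth 2: f(f(a, a), f(a, a)), ...
    # As a tree it has 2**(n+1) - 1 nodes; as a DAG, one shared node per level.
    def tree_size(n):
        return 2 ** (n + 1) - 1

    def dag_size(n):
        return n + 1

    for n in (3, 10, 30):
        print(n, tree_size(n), dag_size(n))
    # At depth 30: over two billion tree nodes versus 31 DAG nodes.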
Planning in POMDPs Using Multiplicity Automata
Planning and learning in Partially Observable MDPs (POMDPs) are among the
most challenging tasks in both the AI and Operations Research communities.
Although solutions to these problems are intractable in general, there might be
special cases, such as structured POMDPs, which can be solved efficiently. A
natural and possibly efficient way to represent a POMDP is through the
predictive state representation (PSR), a representation which has recently been receiving increasing attention. In this work, we relate POMDPs to multiplicity automata, showing that POMDPs can be represented by multiplicity
automata with no increase in the representation size. Furthermore, we show that
the size of the multiplicity automaton is equal to the rank of the predictive
state representation. Therefore, we relate both the predictive state
representation and POMDPs to the well-founded multiplicity automata literature.
Based on the multiplicity automata representation, we provide a planning
algorithm which is exponential only in the multiplicity automata rank rather
than the number of states of the POMDP. As a result, whenever the predictive
state representation is logarithmic in the standard POMDP representation, our
planning algorithm is efficient.
Comment: Appears in Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (UAI 2005).
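A sketch of the connection being described, in notation chosen here rather than taken from the paper: a POMDP with initial belief b_0 can be equipped, for each action a and observation o, with a matrix T_{a,o} whose (i,j) entry is the probability of moving from state i to state j and observing o when action a is taken. Then Pr(o_1 ... o_n | a_1 ... a_n) = b_0 T_{a_1,o_1} ... T_{a_n,o_n} \mathbf{1}, so reading action-observation pairs as letters gives a multiplicity automaton over the reals whose dimension is at most the number of POMDP states; the rank of the associated Hankel (system-dynamics) matrix is the PSR rank referred to above.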
Minimization via duality
We show how to use duality theory to construct minimized versions of a wide class of automata. We work out three cases in detail: (a variant of) ordinary automata, weighted automata and probabilistic automata. The basic idea is that, instead of constructing a maximal quotient, we go to the dual, look for a minimal subalgebra, and then return to the original category. Duality ensures that the minimal subobject becomes the maximally quotiented object.
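For weighted automata, one familiar way to make this concrete (a rough illustration, not necessarily the paper's exact construction): if A = (\alpha, (M_a)_{a \in \Sigma}, \eta) computes f(w) = \alpha M_{w_1} ... M_{w_n} \eta, its dual (reverse) automaton (\eta^T, (M_a^T)_{a \in \Sigma}, \alpha^T) computes the reversal of f. Restricting A to the span of the reachable vectors \alpha M_w gives a minimal subobject on the forward side; doing the same on the dual and dualizing back yields the minimal weighted automaton for f, with the subobject on the dual side playing the role of the quotient on the original side.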
Rational stochastic languages
The goal of the present paper is to provide a systematic and comprehensive study of rational stochastic languages over a semiring K \in {Q, Q+, R, R+}. A rational stochastic language is a probability distribution over a free monoid \Sigma^* which is rational over K, that is, which can be generated by a multiplicity automaton with parameters in K. We study the relations between the classes S^{rat}_K(\Sigma) of rational stochastic languages. We define the notion of residual of a stochastic language and use it to investigate properties of several subclasses of rational stochastic languages. Lastly, we study the representation of rational stochastic languages by means of multiplicity automata.
Comment: 35 pages.
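Spelled out (a standard unpacking of the definition, with notation chosen here): p : \Sigma^* -> K is a K-rational stochastic language if p(w) \ge 0 for all w, \sum_{w \in \Sigma^*} p(w) = 1, and there exist n, a row vector \alpha \in K^{1 \times n}, matrices M_x \in K^{n \times n} for each x \in \Sigma, and a column vector \eta \in K^{n \times 1} such that p(w_1 ... w_k) = \alpha M_{w_1} ... M_{w_k} \eta for every word w_1 ... w_k \in \Sigma^*.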
On the exact learnability of graph parameters: The case of partition functions
We study the exact learnability of real valued graph parameters f which are known to be representable as partition functions which count the number of weighted homomorphisms into a graph H with vertex weights and edge weights. M. Freedman, L. Lov\'asz and A. Schrijver have given a characterization of these graph parameters in terms of the k-connection matrices of f. Our model of learnability is based on D. Angluin's model of exact learning using membership and equivalence queries. Given such a graph parameter f, the learner can ask for the values of f for graphs of their choice, and they can formulate hypotheses in terms of the connection matrices of f. The teacher can accept the hypothesis as correct, or provide a counterexample consisting of a graph. Our main result shows that in this scenario, a very large class of partition functions, the rigid partition functions, can be learned in time polynomial in the size of H and the size of the largest counterexample, in the Blum-Shub-Smale model of computation over the reals with unit cost.
Comment: 14 pages, full version of the MFCS 2016 conference paper.
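Concretely, the partition functions in question have the form (with symbols \alpha, \beta chosen here): for a target graph H with vertex weights \alpha_i and edge weights \beta_{ij}, f(G) = \sum_{h : V(G) \to V(H)} \prod_{v \in V(G)} \alpha_{h(v)} \prod_{uv \in E(G)} \beta_{h(u) h(v)}, i.e. a weighted count of the homomorphisms from G into H.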
Learning probability distributions generated by finite-state machines
We review methods for inference of probability distributions generated by probabilistic automata and related models for sequence generation. We focus on methods that can be proved to learn in the inference-in-the-limit and PAC formal models. The methods we review are state merging and state splitting methods for probabilistic deterministic automata and the recently developed spectral method for nondeterministic probabilistic automata. In both cases, we derive them from a high-level algorithm described in terms of the Hankel matrix of the distribution to be learned, given as an oracle, and then describe how to adapt that algorithm to account for the error introduced by a finite sample.
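As a small illustration of the Hankel-matrix view (a sketch under assumptions of our own, not the surveyed algorithm itself): the Hankel matrix of a distribution p has entries H[u, v] = p(uv) for prefixes u and suffixes v, and its rank equals the number of states of a minimal weighted automaton generating p. With only a finite sample one can estimate a finite sub-block and inspect its singular values; the spectral method then builds the automaton from a low-rank factorization of such blocks. All names below are hypothetical.

    # Sketch: estimate a finite sub-block of the Hankel matrix from an i.i.d.
    # sample of strings, then look at its singular values to guess the rank.
    from collections import Counter
    import numpy as np

    def empirical_hankel(sample, prefixes, suffixes):
        counts = Counter(sample)
        total = len(sample)
        H = np.zeros((len(prefixes), len(suffixes)))
        for i, u in enumerate(prefixes):
            for j, v in enumerate(suffixes):
                H[i, j] = counts[u + v] / total   # estimate of p(uv)
        return H

    # Hypothetical usage on strings over {a, b}:
    sample = ["ab", "ab", "aab", "b", "ab", "aab", "b", "b"]
    H = empirical_hankel(sample, prefixes=["", "a", "aa"], suffixes=["", "b", "ab"])
    print(np.linalg.svd(H, compute_uv=False))   # singular values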
Learning Multiplicity Tree Automata
In this paper, we present a theoretical approach to the problem of learning multiplicity tree automata. These automata allow one to define functions which compute a number for each tree. They can be seen as a strict generalization of stochastic tree automata, since they allow one to define functions over any field K. A multiplicity automaton admits a support which is a non-deterministic automaton. From a grammatical inference point of view, this paper presents a contribution which is original due to the combination of two important aspects: as far as we know, this is the first time that a learning method focuses on non-deterministic tree automata which compute functions over a field. The algorithm proposed in this paper is set in Angluin's exact learning model, where a learner is allowed to use membership and equivalence queries. We show that this algorithm runs in time polynomial in the size of the representation.
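As a toy example of the kind of function such automata compute (our own illustration, not from the paper): a 2-dimensional multiplicity tree automaton over Q that maps every tree over a constant a and a binary symbol f to its number of leaves, by attaching to each subtree the vector (number of leaves, 1) and reading the result off with a final vector.

    # Toy multiplicity tree automaton of dimension 2: the vector of a subtree
    # is (number of leaves, 1); 'a' is a constant, 'f' a binary symbol whose
    # map is bilinear in the vectors of its two children.
    def vec(tree):
        if tree == "a":
            return (1, 1)                 # a single leaf
        _, left, right = tree             # tree = ("f", left, right)
        u, v = vec(left), vec(right)
        return (u[0] * v[1] + u[1] * v[0], u[1] * v[1])   # bilinear map for f

    FINAL = (1, 0)

    def value(tree):
        u = vec(tree)
        return FINAL[0] * u[0] + FINAL[1] * u[1]

    print(value(("f", ("f", "a", "a"), "a")))   # 3 leaves -> prints 3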
Game Theory: The Language of Social Science?
The present paper tries, in a largely non-technical way, to discuss the aim, the basic notions and methods, as well as the limits of game theory, with respect to its role as a general modelling method or language for the social sciences.