282 research outputs found
Tree transducers, L systems, and two-way machines
A relationship between parallel rewriting systems and two-way machines is investigated. Restrictions on the "copying power" of these devices endow them with rich structuring and give insight into the issues of determinism, parallelism, and copying. Among the parallel rewriting systems considered are the top-down tree transducer, the generalized syntax-directed translation scheme, and the ETOL system; among the two-way machines are the tree-walking automaton, the two-way finite-state transducer, and (generalizations of) the one-way checking stack automaton. The relationship of these devices to macro grammars is also considered. An effort is made to provide a systematic survey of a number of existing results.
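To make the "copying power" notion concrete, here is a minimal sketch (not from the paper) of a top-down tree transducer with a single copying rule: it maps the monadic tree a(a(...a(e))) of height n to a full binary tree with 2^n leaves.

```python
# Hedged sketch of a top-down tree transducer. Trees are tuples
# (label, child1, child2, ...). The transducer has one state q and rules
#   q(a(x)) -> b(q(x), q(x))     (the subtree x is copied twice)
#   q(e)    -> e

def transduce(tree):
    """Apply the copying transducer top-down."""
    label, *children = tree
    if label == "e":
        return ("e",)
    sub = transduce(children[0])   # translate the subtree once...
    return ("b", sub, sub)         # ...then copy the result twice

def leaves(tree):
    label, *children = tree
    return 1 if not children else sum(leaves(c) for c in children)

# build a(a(a(e))), a monadic tree of height 3
t = ("e",)
for _ in range(3):
    t = ("a", t)

out = transduce(t)
print(leaves(out))  # 2**3 = 8 leaves: exponential size increase by copying
```

The exponential blow-up from a single copying rule is exactly the kind of behaviour that restrictions on copying power are meant to tame.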
The formal power of one-visit attribute grammars
An attribute grammar is one-visit if the attributes can be evaluated by walking through the derivation tree in such a way that each subtree is visited at most once. One-visit (1V) attribute grammars are compared with one-pass left-to-right (L) attribute grammars and with attribute grammars having only one synthesized attribute (1S).

Every 1S attribute grammar can be made one-visit. One-visit attribute grammars are simply permutations of L attribute grammars; thus the classes of output sets of 1V and L attribute grammars coincide, and similarly for 1S and L-1S attribute grammars. In case all attribute values are trees, the translation realized by a 1V attribute grammar is the composition of the translation realized by a 1S attribute grammar with a deterministic top-down tree transduction, and vice versa; thus, using a result of Duske et al., the class of output languages of 1V (or L) attribute grammars is the image of the class of IO macro tree languages under all deterministic top-down tree transductions.
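As a hedged illustration (not from the paper), here is a purely synthesized attribute grammar, in the spirit of the 1S class, for binary numerals: each subtree is visited exactly once in a single left-to-right walk, so the evaluation is trivially one-visit.

```python
# Hedged sketch: a synthesized-attributes-only grammar for binary numerals.
# Productions: N -> N B | B,  B -> 0 | 1. Each node synthesizes two
# attributes (value, length); no inherited attributes are needed, and the
# single recursive walk visits every subtree exactly once.

def value_length(node):
    """Return the synthesized attributes (value, length) of a subtree.

    Leaves are the strings "0"/"1"; internal nodes are pairs (left, bit).
    """
    if node in ("0", "1"):           # production B -> 0 | 1
        return int(node), 1
    left, bit = node                 # production N -> N B
    lv, ll = value_length(left)      # the one and only visit of the left subtree
    bv, _ = value_length(bit)
    return lv * 2 + bv, ll + 1

# derivation tree of the numeral "101"
tree = (("1", "0"), "1")
print(value_length(tree))  # (5, 3): value 5, three digits
```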
Formally justified and modular Bayesian inference for probabilistic programs
Probabilistic modelling offers a simple and coherent framework to describe the
real world in the face of uncertainty. Furthermore, by applying Bayes' rule
it is possible to use probabilistic models to make inferences about the state of
the world from partial observations. While traditionally probabilistic models
were constructed on paper, more recently the approach of probabilistic
programming enables users to write the models in executable languages resembling
computer programs and to freely mix them with deterministic code.
It has long been recognised that the semantics of programming languages is
complicated and the intuitive understanding that programmers have is often
inaccurate, resulting in difficult-to-understand bugs and unexpected program
behaviours. Programming languages are therefore studied in a rigorous way using
formal languages with mathematically defined semantics. Traditionally, formal
semantics of probabilistic programs are defined using exact inference results,
but in practice exact Bayesian inference is not tractable and approximate
methods are used instead, posing the question of how the results of these
algorithms relate to the exact results. Correctness of such approximate methods
is usually argued somewhat less rigorously, without reference to a formal
semantics.
In this dissertation we formally develop denotational semantics for
probabilistic programs that correspond to popular sampling algorithms often used
in practice. The semantics is defined for an expressive typed lambda calculus
with higher-order functions and inductive types, extended with probabilistic
effects for sampling and conditioning, allowing continuous distributions and
unbounded likelihoods. It makes crucial use of the recently developed formalism
of quasi-Borel spaces to bring all these elements together. We provide semantics
corresponding to several variants of Markov chain Monte Carlo and Sequential
Monte Carlo methods and formally prove a notion of correctness for these
algorithms in the context of probabilistic programming.
We also show that the semantic construction can be directly mapped to an
implementation using established functional programming abstractions called
monad transformers. We develop a compact Haskell library for probabilistic
programming closely corresponding to the semantic construction, giving users a
high level of assurance in the correctness of the implementation. We also
demonstrate on a collection of benchmarks that the library offers performance
competitive with existing systems of similar scope.
An important property of our construction, both the semantics and the
implementation, is the high degree of modularity it offers. All the inference
algorithms are constructed by combining small building blocks in a setup where
the type system ensures correctness of compositions. We show that with basic
building blocks corresponding to vanilla Metropolis-Hastings and Sequential
Monte Carlo we can implement more advanced algorithms known in the literature,
such as Resample-Move Sequential Monte Carlo, Particle Marginal
Metropolis-Hastings, and Sequential Monte Carlo squared. These implementations
are very concise, reducing the effort required to produce them and the scope for
bugs. On top of that, our modular construction enables in some cases
deterministic testing of randomised inference algorithms, further increasing
reliability of the implementation.

Engineering and Physical Sciences Research Council, Cambridge Trust, Cambridge-Tuebingen programme
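The dissertation's library is written in Haskell using monad transformers; as a hedged, language-neutral sketch (not the dissertation's API), here is the kind of small, reusable building block it composes: a vanilla Metropolis-Hastings kernel.

```python
import math
import random

# Hedged sketch of a vanilla Metropolis-Hastings sampler as a modular
# building block. All names here are illustrative, not the library's API.

def metropolis_hastings(log_density, init, steps, scale=1.0, seed=0):
    """Sample from the unnormalised density exp(log_density) using a
    Gaussian random-walk proposal (symmetric, so no correction term)."""
    rng = random.Random(seed)
    x, lp = init, log_density(init)
    samples = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)          # propose a move
        lp_prop = log_density(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept with ratio min(1, p'/p)
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# target: standard normal, via its unnormalised log-density -x^2/2
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(chain) / len(chain)
print("posterior mean estimate:", mean)  # should be close to 0
```

More advanced algorithms such as Particle Marginal Metropolis-Hastings arise, in the modular setup the abstract describes, by composing kernels like this one with a particle filter rather than by writing them from scratch.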
Shortcut fusion rules for the derivation of circular and higher-order programs
Functional programs often combine separate parts using intermediate data structures for communicating results. Programs so defined are modular, easier to understand and maintain, but suffer from inefficiencies due to the generation of those gluing data structures. To eliminate such redundant data structures, some program transformation techniques have been proposed. One such technique is shortcut fusion, which has been studied in the context of both pure and monadic functional programs. In this paper, we study several shortcut fusion extensions, so that, alternatively, circular or higher-order programs are derived. These extensions are provided for both effect-free programs and monadic ones. Our work results in a set of generic calculation rules that are widely applicable and whose correctness is formally established.

Fundação para a Ciência e a Tecnologia
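The classic instance of shortcut fusion is the foldr/build law, foldr c n (build g) = g c n; the following hedged illustration (transcribed into Python, not the paper's monadic rules) shows how the fused form computes the same result without ever allocating the intermediate list.

```python
# Hedged illustration of foldr/build shortcut fusion.
# `build g` materialises an intermediate list from a producer g that is
# abstracted over the list constructors; the fused form g c n plugs the
# consumer's operations straight into the producer.

def foldr(c, n, xs):
    """Right fold: foldr c n [x1, x2, ...] = c(x1, c(x2, ... n))."""
    acc = n
    for x in reversed(xs):
        acc = c(x, acc)
    return acc

def build(g):
    """Instantiate the abstract producer with the real cons/nil."""
    return g(lambda x, xs: [x] + xs, [])

def upto5(cons, nil):
    """Producer of [1..5], abstracted over the constructors."""
    acc = nil
    for i in range(5, 0, -1):
        acc = cons(i, acc)
    return acc

consumed = foldr(lambda x, a: x + a, 0, build(upto5))  # via an intermediate list
fused    = upto5(lambda x, a: x + a, 0)                # fused: no list is built
print(consumed, fused)  # 15 15
```

The paper's contribution, per the abstract, is extending laws of this shape so that the derived program is circular or higher-order, and proving the extended rules correct for both pure and monadic programs.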
Complexity Hierarchies Beyond Elementary
We introduce a hierarchy of fast-growing complexity classes and show its suitability for completeness statements of many non-elementary problems. This hierarchy allows the classification of many decision problems with a non-elementary complexity, which occur naturally in logic, combinatorics, formal languages, verification, etc., with complexities ranging from simple towers of exponentials to Ackermannian and beyond.

Comment: Version 3 is the published version in TOCT 8(1:3), 2016. I will keep updating the catalogue of problems from Section 6 in future revisions.
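To give a feel for the growth rates involved, here is a hedged illustration (not from the paper) of the two landmarks the abstract names: the tower function, tower(n) = 2^2^...^2 with n twos, and the vastly faster-growing Ackermann function.

```python
# Hedged illustration of non-elementary growth rates.

def tower(n):
    """tower(n) = 2^2^...^2 (n twos): a simple tower of exponentials."""
    return 1 if n == 0 else 2 ** tower(n - 1)

def ackermann(m, n):
    """The Ackermann function, which outgrows every tower of fixed height."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print([tower(k) for k in range(5)])  # [1, 2, 4, 16, 65536]
print(ackermann(2, 3))               # 9 (already infeasible for m >= 4)
```

A problem whose complexity is bounded by no fixed tower(k) is non-elementary; classes calibrated by such functions are what the hierarchy organises.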