A Strong Distillery
Abstract machines for the strong evaluation of lambda-terms (that is, under
abstractions) are a mostly neglected topic, despite their use in the
implementation of proof assistants and higher-order logic programming
languages. This paper introduces a machine for the simplest form of strong
evaluation, leftmost-outermost (call-by-name) evaluation to normal form,
proving it correct, complete, and bounding its overhead. Such a machine, deemed
Strong Milner Abstract Machine, is a variant of the KAM computing normal forms
and using just one global environment. Its properties are studied via a special
form of decoding, called a distillation, into the Linear Substitution Calculus,
neatly reformulating the machine as a standard micro-step strategy for explicit
substitutions, namely linear leftmost-outermost reduction, i.e., the extension
to normal form of linear head reduction. Additionally, the overhead of the
machine is shown to be linear both in the number of steps and in the size of
the initial term, validating its design. The study highlights two distinguished
features of strong machines, namely backtracking phases and their interactions
with abstractions and environments. Comment: Accepted at APLAS 201
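To make the strategy concrete, here is a minimal sketch of leftmost-outermost (normal-order) reduction to normal form on de Bruijn-indexed terms. It illustrates the strong strategy the machine implements, not the Strong Milner Abstract Machine itself; the term representation and function names are assumptions of this sketch:

```python
# Terms: ("var", i) with de Bruijn index i, ("lam", body), ("app", f, a)

def shift(t, d, cutoff=0):
    """Add d to every free index of t at or above cutoff."""
    tag = t[0]
    if tag == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if tag == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Capture-avoiding substitution of s for index j in t."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == j else t
    if tag == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def step(t):
    """One leftmost-outermost beta step, or None if t is normal."""
    if t[0] == "app":
        f, a = t[1], t[2]
        if f[0] == "lam":                      # the outermost redex fires first
            return shift(subst(f[1], 0, shift(a, 1)), -1)
        r = step(f)                            # then leftmost: inside the function
        if r is not None:
            return ("app", r, a)
        r = step(a)                            # then inside the argument
        return ("app", f, r) if r is not None else None
    if t[0] == "lam":                          # strong evaluation: go under the binder
        r = step(t[1])
        return ("lam", r) if r is not None else None
    return None

def normalize(t, fuel=1000):
    """Iterate step to the normal form (bounded, since beta may diverge)."""
    while fuel:
        r = step(t)
        if r is None:
            return t
        t, fuel = r, fuel - 1
    raise RuntimeError("no normal form within fuel")
```

Note the `lam` case of `step`: reducing under abstractions is exactly what distinguishes strong evaluation from the weak evaluation of ordinary functional-language machines.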
Classical logic, continuation semantics and abstract machines
One of the goals of this paper is to demonstrate that denotational semantics is useful for operational issues like implementation of functional languages by abstract machines. This is exemplified in a tutorial way by studying the case of extensional untyped call-by-name λ-calculus with Felleisen's control operator 𝒞. We derive the transition rules for an abstract machine from a continuation semantics which appears as a generalization of the ¬¬-translation known from logic. The resulting abstract machine appears as an extension of Krivine's machine implementing head reduction. Though the result, namely Krivine's machine, is well known, our method of deriving it from continuation semantics is new and applicable to other languages (e.g. call-by-value variants). Further new results are that Scott's D∞-models are all instances of continuation models. Moreover, we extend our continuation semantics to Parigot's λμ-calculus, from which we derive an extension of Krivine's machine for λμ-calculus. The relation between continuation semantics and the abstract machines is made precise by proving computational adequacy results employing an elegant method introduced by Pitts.
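For reference, Krivine's machine for weak head reduction can be sketched in a few lines: a state is a code, an environment of closures, and a stack of argument closures. The de Bruijn representation below is an assumption of this sketch, not taken from the paper:

```python
# Terms: ("var", i), ("lam", body), ("app", f, a) with de Bruijn indices.
# A closure is a pair (term, env); env is a list of closures.

def kam(term, fuel=1000):
    """Run Krivine's machine to weak head normal form (bounded)."""
    code, env, stack = term, [], []
    while fuel:
        fuel -= 1
        tag = code[0]
        if tag == "app":                  # push the argument as a closure
            stack.append((code[2], env))
            code = code[1]
        elif tag == "lam" and stack:      # pop: bind the argument in the environment
            code, env = code[1], [stack.pop()] + env
        elif tag == "var":                # jump to the closure for this index
            code, env = env[code[1]]
        else:
            return code, env, stack       # weak head normal form reached
    raise RuntimeError("out of fuel")
```

The three transitions mirror head reduction: applications only push, abstractions only pop, and variables only look up, which is why each machine step is cheap.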
An Intensional Concurrent Faithful Encoding of Turing Machines
The benchmark for computation is typically given as Turing computability; the
ability for a computation to be performed by a Turing Machine. Many languages
exploit (indirect) encodings of Turing Machines to demonstrate their ability to
support arbitrary computation. However, these encodings are usually by
simulating the entire Turing Machine within the language, or by encoding a
language that does an encoding or simulation itself. This second category is
typical for process calculi that show an encoding of lambda-calculus (often
with restrictions) that in turn simulates a Turing Machine. Such approaches
lead to indirect encodings of Turing Machines that are complex, unclear, and
only weakly equivalent after computation. This paper presents an approach to
encoding Turing Machines into intensional process calculi that is faithful,
reduction preserving, and structurally equivalent. The encoding is demonstrated
in a simple asymmetric concurrent pattern calculus before being generalised to
simplify infinite terms, and to show encodings into Concurrent Pattern Calculus
and Psi Calculi. Comment: In Proceedings ICE 2014, arXiv:1410.701
CS 3200/5200: Theoretical Foundations of Computing
CS 3200/5200 is an introduction to (a) formal language and automata theory and (b) computability. For (a), we will examine mechanisms for defining the syntax of languages and devices for recognizing languages. Along with the fundamentals of these two topics, the course will investigate the relationships between language definition mechanisms and language recognition devices. For (b), we will study decision problems, the Church-Turing thesis, the undecidability of the Halting Problem, and problem reduction and undecidability. The text will be the third edition of Languages and Machines: An Introduction to the Theory of Computer Science, by Thomas Sudkamp.
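A language-recognition device of the kind covered in topic (a) can be sketched as a table-driven DFA; the encoding below is a generic illustration, not taken from the textbook:

```python
def dfa_accepts(delta, start, finals, w):
    """Simulate a DFA given as a total transition dict (state, symbol) -> state."""
    q = start
    for c in w:
        q = delta[(q, c)]                 # one deterministic move per input symbol
    return q in finals

# Example: binary strings with an even number of 1s
even_ones = {("e", "0"): "e", ("e", "1"): "o",
             ("o", "0"): "o", ("o", "1"): "e"}
```

The two states track the parity of the 1s seen so far; the start state "e" is also the only accepting state.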
The Simplest Non-Regular Deterministic Context-Free Language
We introduce a new notion of C-simple problems for a class C of decision problems (i.e. languages), w.r.t. a particular reduction. A problem is C-simple if it can be reduced to each problem in C. This can be viewed as a conceptual counterpart to C-hard problems, to which all problems in C reduce. Our concrete example is the class of non-regular deterministic context-free languages (DCFL′), with a truth-table reduction by Mealy machines. The main technical result is a proof that the DCFL′ language L_# = {0^n 1^n | n ≥ 1} is DCFL′-simple, and can thus be viewed as one of the simplest languages in the class DCFL′, in a precise sense. The notion of DCFL′-simple languages is nontrivial: e.g., the language L_R = {w c w^R | w ∈ {a,b}^*} is not DCFL′-simple.
By describing an application in the area of neural networks (elaborated in another paper), we demonstrate that C-simple problems under suitable reductions can provide a tool for extending the lower-bound results known for single problems to whole classes of problems.
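The membership test for L_# shows why it is intuitively simple: a single counter suffices, i.e. a deterministic one-counter machine recognizes it. The code below is a plain illustrative sketch of that test, not the Mealy-machine truth-table reduction from the paper:

```python
def in_L_hash(w):
    """Deterministic one-counter check for L_# = {0^n 1^n | n >= 1}."""
    count, i = 0, 0
    while i < len(w) and w[i] == "0":     # count the leading 0s
        count += 1
        i += 1
    if count == 0:                        # n >= 1 is required
        return False
    while i < len(w) and w[i] == "1":     # cancel each 1 against a counted 0
        count -= 1
        i += 1
    return i == len(w) and count == 0     # input exhausted, counter back to zero
```

The single pass with one counter mirrors the one stack symbol a deterministic pushdown automaton needs for this language.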
On Computational Small Steps and Big Steps: Refocusing for Outermost Reduction
We study the relationship between small-step semantics, big-step semantics, and abstract machines for programming languages that employ an outermost reduction strategy, i.e., languages where reductions near the root of the abstract syntax tree are performed before reductions near the leaves. In particular, we investigate how Biernacka and Danvy's syntactic correspondence and Reynolds's functional correspondence can be applied to inter-derive semantic specifications for such languages. The main contribution of this dissertation is threefold. First, we identify that backward-overlapping reduction rules in the small-step semantics cause the refocusing step of the syntactic correspondence to be inapplicable. Second, we propose two solutions to overcome this inapplicability: backtracking and rule generalization. Third, we show how these solutions affect the other transformations of the two correspondences. Other contributions include the application of the syntactic and functional correspondences to Boolean normalization. In particular, we show how to systematically derive a spectrum of normalization functions for negational and conjunctive normalization.
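The negational normalization mentioned above can be sketched as a big-step function that pushes negations down to the variables using De Morgan's laws; the formula representation is an assumption of this sketch, not the dissertation's:

```python
# Formulas: ("var", name), ("not", f), ("and", f, g), ("or", f, g)

def nnf(f, neg=False):
    """Big-step negational normalization: neg tracks the pending polarity."""
    tag = f[0]
    if tag == "var":
        return ("not", f) if neg else f   # negation stops at a literal
    if tag == "not":
        return nnf(f[1], not neg)         # flip polarity, drop the negation node
    a, b = nnf(f[1], neg), nnf(f[2], neg)
    if neg:                               # De Morgan: the connective swaps
        tag = "or" if tag == "and" else "and"
    return (tag, a, b)
```

Each recursive call returns a fully normalized subformula in one step, which is what makes this a big-step (rather than small-step) specification.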
Peephole optimization of asynchronous macromodule networks
Most high-level synthesis tools for asynchronous circuits take descriptions in concurrent hardware description languages and generate networks of macromodules or handshake components. In this paper we describe a peephole optimizer for such macromodule networks that often effects area and/or time improvements. Our optimizer first deduces an equivalent black-box behavior for the given network of macromodules using Dill's trace-theoretic parallel composition operator. It then applies a new procedure called burst-mode reduction to obtain burst-mode machines, which can be synthesized into gate networks using available tools. Since burst-mode reduction can be applied to any macromodule network that is delay-insensitive as well as deterministic, our optimizer covers a significant number of asynchronous circuits, especially those generated by asynchronous high-level synthesis tools.
Peephole optimization of asynchronous macromodule networks
Most high-level synthesis tools for asynchronous circuits take descriptions in concurrent hardware description languages and generate networks of macromodules or handshake components. In this paper, we propose a peephole optimizer for these networks. Our peephole optimizer first deduces an equivalent black-box behavior for the network using Dill's trace-theoretic parallel composition operator. It then applies a new procedure called burst-mode reduction to obtain burst-mode machines from the deduced behavior. In a significant number of examples, our optimizer achieves gate-count improvements by a factor of five, and speed (cycle-time) improvements by a factor of two. Burst-mode reduction can be applied to any macromodule network that is delay-insensitive as well as deterministic. A significant number of asynchronous circuits, especially those generated by asynchronous high-level synthesis tools, fall into this class, thus making our procedure widely applicable.
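The general idea of peephole optimization, rewriting a small local window of a design by known equivalences, can be sketched on instruction sequences. This toy is only an analogy for the network-level optimizer described above, and the instruction names are invented for illustration:

```python
def peephole(prog):
    """One left-to-right pass cancelling known redundant instruction pairs."""
    patterns = {
        ("push", "pop"),                  # push then pop cancels out
        ("not", "not"),                   # double negation cancels out
        ("swap", "swap"),                 # double swap cancels out
    }
    out = []
    for ins in prog:
        if out and (out[-1], ins) in patterns:
            out.pop()                     # drop both halves of the pair
        else:
            out.append(ins)
    return out
```

Because cancellations can expose new adjacent pairs, the single pass with a growing output list handles cascades without re-scanning the whole program.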
Highly Undecidable Problems For Infinite Computations
We show that many classical decision problems about 1-counter omega-languages, context-free omega-languages, or infinitary rational relations are Π^1_2-complete, hence located at the second level of the analytical hierarchy, and "highly undecidable". In particular, the universality problem, the inclusion problem, the equivalence problem, the determinizability problem, the complementability problem, and the unambiguity problem are all Π^1_2-complete for context-free omega-languages or for infinitary rational relations. Topological and arithmetical properties of 1-counter omega-languages, context-free omega-languages, or infinitary rational relations are also highly undecidable. These very surprising results provide the first examples of highly undecidable problems about the behaviour of very simple finite machines like 1-counter automata or 2-tape automata. Comment: to appear in RAIRO - Theoretical Informatics and Applications