Facilitating modular property-preserving extensions of programming languages
We explore an approach to modular programming language descriptions and extensions in a denotational style. Starting from a language core, language features are added stepwise to the core. Language features can be described separately from each other in a self-contained, orthogonal way. We present an extension semantics framework consisting of mechanisms that adapt the semantics of a basic language to the new structural requirements of an extended language while preserving the behaviour of programs of the basic language. Common extension templates are provided. These can be collected in extension libraries that are accessible to and extendible by language designers, and mechanisms to extend these libraries are provided. Finally, a notation for describing language features that embeds these semantics extensions is presented.
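The extension idea above can be illustrated with a minimal sketch (in Python, with hypothetical names; the paper itself works in a denotational setting): the core semantics is a table of semantic clauses, and an extension adds new clauses without touching the core ones, so core programs keep their meaning.

```python
def make_semantics(clauses):
    """Build an evaluator from a table of semantic clauses."""
    def evaluate(term, env):
        tag, *args = term
        return clauses[tag](evaluate, env, *args)
    return evaluate

# Core language: numerals, variables, addition.
CORE = {
    "num": lambda ev, env, n: n,
    "var": lambda ev, env, x: env[x],
    "add": lambda ev, env, a, b: ev(a, env) + ev(b, env),
}

# Extension: a 'let' binder, described separately from the core and
# layered on top of it; the core clauses are reused unchanged.
LET_EXT = {
    **CORE,
    "let": lambda ev, env, x, e, body: ev(body, {**env, x: ev(e, env)}),
}

core_eval = make_semantics(CORE)
ext_eval = make_semantics(LET_EXT)

prog = ("add", ("num", 1), ("num", 2))                 # a core program
assert core_eval(prog, {}) == ext_eval(prog, {}) == 3  # behaviour preserved

ext_prog = ("let", "x", ("num", 4), ("add", ("var", "x"), ("num", 1)))
print(ext_eval(ext_prog, {}))  # 5
```

Because the extension only adds clauses, every core program evaluates identically under both semantics, which is the property-preservation requirement in miniature.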
Regularity Preserving but not Reflecting Encodings
Encodings, that is, injective functions from words to words, have been
studied extensively in several settings. In computability theory the notion of
encoding is crucial for defining computability on arbitrary domains, as well as
for comparing the power of models of computation. In language theory much
attention has been devoted to regularity preserving functions.
A natural question arising in these contexts is: Is there a bijective
encoding such that its image function preserves regularity of languages, but
its pre-image function does not? Our main result answers this question in the
affirmative: For every countable class C of languages there exists a bijective
encoding f such that for every language L in C its image f[L] is regular.
Our construction of such encodings has several noteworthy consequences.
Firstly, anomalies arise when models of computation are compared with respect
to a known concept of implementation that is based on encodings which are not
required to be computable: Every countable decision model can be implemented,
in this sense, by finite-state automata, even via bijective encodings. Hence
deterministic finite-state automata would be equally powerful as Turing machine
deciders.
A second consequence concerns the recognizability of sets of natural numbers
via number representations and finite automata. A set of numbers is said to be
recognizable with respect to a representation if an automaton accepts the
language of representations. Our result entails that there is one number
representation with respect to which every recursive set is recognizable.
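The notion of recognizability with respect to a representation can be made concrete with a standard textbook example (not from the paper): for the usual binary, most-significant-bit-first representation, a 3-state DFA tracking the value modulo 3 accepts exactly the representations of multiples of 3.

```python
def accepts_multiple_of_3(bits):
    """DFA over {0,1}: state = value-so-far mod 3; accept iff it ends at 0."""
    state = 0
    for b in bits:
        state = (2 * state + int(b)) % 3  # transition for reading one more bit
    return state == 0

# The set {n : 3 divides n} is recognizable w.r.t. the binary representation.
for n in range(50):
    assert accepts_multiple_of_3(format(n, "b")) == (n % 3 == 0)
print("3N is recognizable in base 2")
```

The paper's point is far stronger: by choosing the representation itself (a non-computable bijective encoding), even every recursive set becomes recognizable by a finite automaton.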
Observation of implicit complexity by non confluence
We propose to consider non-confluence with respect to implicit complexity. We
revisit some well-known classes of first-order functional programs for which a
characterization of their intensional properties is available, namely the
class of cons-free programs, the class of programs with an interpretation, and
the class of programs with a quasi-interpretation together with a termination
proof by the product path ordering. They all correspond to PTIME. We prove that
adding non-confluence to the rules leads to PTIME, NPTIME and PSPACE,
respectively. Our thesis is that the separation of the classes is actually a
witness of the intensional properties of the initial classes of programs.
Strategic polymorphism requires just two combinators!
In previous work, we introduced the notion of functional strategies:
first-class generic functions that can traverse terms of any type while mixing
uniform and type-specific behaviour. Functional strategies transpose the notion
of term rewriting strategies (with coverage of traversal) to the functional
programming paradigm. Meanwhile, a number of Haskell-based models and
combinator suites were proposed to support generic programming with functional
strategies.
In the present paper, we provide a compact and mature reconstruction of
functional strategies. We capture strategic polymorphism with just two
primitive combinators, without commitment to a specific functional language.
We analyse the design space for implementational models of functional
strategies. For completeness, we also provide an operational reference model
for implementing functional strategies (in Haskell). We demonstrate the
generality of our approach by reconstructing representative fragments of the
Strafunski library for functional strategies.

Comment: A preliminary version of this paper was presented at IFL 2002, and
included in the informal preproceedings of the workshop.
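A rough analogue of the strategic style (in Python rather than the paper's Haskell setting; the names below are illustrative, not the paper's combinators) uses two primitives, one mixing a type-specific case into a uniform function and one applying a strategy to all immediate subterms, from which full traversals are derived rather than primitive.

```python
def adhoc(default, ty, special):
    """Mix a type-specific case into an otherwise uniform function."""
    return lambda x: special(x) if isinstance(x, ty) else default(x)

def one_layer(strategy):
    """Apply a strategy to every immediate subterm (here: list/tuple children)."""
    def go(x):
        if isinstance(x, list):
            return [strategy(c) for c in x]
        if isinstance(x, tuple):
            return tuple(strategy(c) for c in x)
        return x
    return go

def everywhere(strategy):
    """Derived, not primitive: full bottom-up traversal."""
    def go(x):
        return strategy(one_layer(go)(x))
    return go

inc_ints = adhoc(lambda x: x, int, lambda n: n + 1)  # uniform identity + int case
term = [1, ("a", 2), [3, "b"]]
print(everywhere(inc_ints)(term))  # [2, ('a', 3), [4, 'b']]
```

The design point mirrors the abstract: once type-specific adaptation and one-layer traversal are available, the whole zoo of traversal schemes (top-down, bottom-up, stop-at-success) is definable on top.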
A connection between concurrency and language theory
We show that three fixed point structures equipped with (sequential)
composition, a sum operation, and a fixed point operation share the same valid
equations. These are the theories of (context-free) languages, (regular) tree
languages, and simulation equivalence classes of (regular) synchronization
trees (or processes). The results reveal a close relationship between classical
language theory and process algebra.
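One of the three structures can be sketched concretely (a standard fact, not the paper's proof): context-free languages arise as least fixed points of equations built from sequential composition and sum (union). Here the equation S = {ab} ∪ a·S·b is solved by fixed-point iteration, truncated at words of length 8 so the iteration terminates.

```python
MAX = 8

def compose(L1, L2):
    """Sequential composition of languages, truncated at length MAX."""
    return {u + v for u in L1 for v in L2 if len(u + v) <= MAX}

def step(S):
    """One application of the body of the equation S = {ab} + a.S.b."""
    return {"ab"} | compose({"a"}, compose(S, {"b"}))

S = set()
while step(S) != S:   # Kleene iteration up to the least fixed point
    S = step(S)

print(sorted(S))  # ['aaaabbbb', 'aaabbb', 'aabb', 'ab']
```

Replacing languages by regular tree languages or synchronization trees changes the carrier but, by the paper's result, not the valid equations of the fixed-point theory.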
Modular Composition of Language Features through Extensions of Semantic Language Models
Today, programming or specification languages are often extended in order to customize them for a particular application domain or to refine the language definition. The extension of a semantic model is often at the centre of such an extension. We will present a framework for linking basic and extended models. The example which we are going to
use is the RSL concurrency model. The RAISE specification language RSL is a formal wide-spectrum specification
language which integrates different features, such as state-basedness, concurrency and modules. The concurrency
features of RSL are based on a refinement of a classical denotational model for process algebras. A modification was
necessary to integrate state-based features into the basic model in order to meet requirements in the design of RSL.
We will investigate this integration, formalising the relationship between the basic model and the adapted version in a rigorous way. The result will be a modular composition of the basic process model and new language features, such as state-based features or input/output. We will show general mechanisms for integrating new features into a language by extending language models in a structured, modular way. In particular, we will concentrate on the preservation of properties of the basic model under these extensions.
Varieties of Languages in a Category
Eilenberg's variety theorem, a centerpiece of algebraic automata theory,
establishes a bijective correspondence between varieties of languages and
pseudovarieties of monoids. In the present paper this result is generalized to
an abstract pair of algebraic categories: we introduce varieties of languages
in a category C, and prove that they correspond to pseudovarieties of monoids
in a closed monoidal category D, provided that C and D are dual on the level of
finite objects. By suitable choices of these categories our result uniformly
covers Eilenberg's theorem and three variants due to Pin, Polak and Reutenauer,
respectively, and yields new Eilenberg-type correspondences.
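Behind Eilenberg's correspondence sits the syntactic monoid of a regular language. A small sketch (a standard construction, not the paper's categorical machinery): the transition monoid of a minimal DFA, obtained by closing the letter-induced state maps under composition. The DFA below recognizes words over {a, b} with an even number of a's.

```python
states = ("even", "odd")
delta = {("even", "a"): "odd", ("odd", "a"): "even",
         ("even", "b"): "even", ("odd", "b"): "odd"}

def transition(letter):
    """The state map a letter induces, as a tuple indexed by 'states'."""
    return tuple(delta[(q, letter)] for q in states)

def compose(f, g):
    """State maps composed left to right: first f, then g."""
    idx = {q: i for i, q in enumerate(states)}
    return tuple(g[idx[q]] for q in f)

identity = states                     # the empty word acts as the identity
monoid = {identity}
frontier = {transition(c) for c in "ab"}
while not frontier <= monoid:         # close the generators under composition
    monoid |= frontier
    frontier = {compose(f, transition(c)) for f in monoid for c in "ab"}

print(len(monoid))  # 2: the syntactic monoid here is Z/2Z
```

Varieties of languages are exactly the classes whose syntactic monoids, computed this way, fall into a fixed pseudovariety; the paper lifts both sides of this correspondence to dual pairs of categories.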