636 research outputs found
Language generating alphabetic flat splicing P systems
An operation on strings, called flat splicing, was introduced, inspired by a splicing operation on circular strings considered in the study of modelling the recombinant behaviour of DNA molecules. A simple kind of flat splicing, called alphabetic flat splicing, allows insertion of a word with a specified start symbol and/or a specified end symbol between two pre-determined symbols in a given
word. In this work, we consider a P system with only alphabetic flat splicing rules as the evolution rules and strings of symbols as objects in its regions. We examine the language generative power of the resulting alphabetic flat splicing P systems (AFS P systems, for short). In particular, we show that AFS P systems with two membranes have greater generative power than AFS P systems with a single membrane. We also construct AFS P systems with at most three membranes to generate languages that do not belong to certain other language classes, and show an application to the generation of chain code pictures.
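The insertion operation described above can be sketched concretely. The following is an illustrative reading of the abstract, not the paper's formal definition: a rule permits inserting a word between an adjacent pair of symbols in a host word, provided the inserted word begins and/or ends with required symbols. The function name and rule shape are assumptions for illustration.

```python
def apply_afs_rule(host, a, b, word, start=None, end=None):
    """Return all words obtained by inserting `word` between each
    adjacent occurrence of symbols (a, b) in `host`, provided `word`
    satisfies the rule's start/end symbol constraints (a hypothetical
    encoding of an alphabetic flat splicing rule)."""
    if start is not None and not word.startswith(start):
        return []
    if end is not None and not word.endswith(end):
        return []
    results = []
    for i in range(len(host) - 1):
        if host[i] == a and host[i + 1] == b:
            # Insert the word between positions i and i + 1.
            results.append(host[: i + 1] + word + host[i + 1 :])
    return results

# Insert "cd" (which must start with 'c') between an adjacent pair ('a','b'):
print(apply_afs_rule("aab", "a", "b", "cd", start="c"))  # ['aacdb']
```

In a P system, such rules would be applied to the string objects in each membrane region, with the results communicated between regions.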
A Bio-inspired Model of Picture Array Generating P System with Restricted Insertion Rules
In the bio-inspired area of membrane computing, a novel computing model with the generic name of P system was introduced around the year 2000. Among its several variants, string or array language generating P systems involving rewriting rules have been considered. A new model of array generating P system, with a restricted type of picture insertion rules and picture array objects in its regions, is introduced here. The generative power of such a system is investigated by comparing it with the generative power of certain related picture array grammar models introduced and studied in two-dimensional picture language theory. It is shown that this new model of array P system can generate picture array languages which cannot be generated by many other array grammar models. The theoretical model developed is for handling the application problem of generating patterns encoded as picture arrays over a finite set of symbols. As an application, certain floor-design patterns are generated using such an array P system.
Stream Fusion, to Completeness
Stream processing is mainstream (again): Widely-used stream libraries are now
available for virtually all modern OO and functional languages, from Java to C#
to Scala to OCaml to Haskell. Yet expressivity and performance are still
lacking. For instance, the popular, well-optimized Java 8 streams do not
support the zip operator and are still an order of magnitude slower than
hand-written loops. We present the first approach that represents the full
generality of stream processing and eliminates overheads, via the use of
staging. It is based on an unusually rich semantic model of stream interaction.
We support any combination of zipping, nesting (or flat-mapping), sub-ranging,
filtering, and mapping, of finite or infinite streams. Our model captures
idiosyncrasies that a programmer uses in optimizing stream pipelines, such as
rate differences and the choice of a "for" vs. a "while" loop. Our approach
delivers hand-written-like code, but automatically. It explicitly avoids the
reliance on black-box optimizers and sufficiently-smart compilers, offering
highest, guaranteed and portable performance. Our approach relies on high-level
concepts that are then readily mapped into an implementation. Accordingly, we
have two distinct implementations: an OCaml stream library, staged via
MetaOCaml, and a Scala library for the JVM, staged via LMS. In both cases, we
derive libraries richer and simultaneously many tens of times faster than past
work. We greatly exceed in performance the standard stream libraries available
in Java, Scala and OCaml, including the well-optimized Java 8 streams.
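The overhead the abstract targets can be pictured by contrasting an operator-by-operator pipeline with the single fused loop a staged library generates. This is an illustrative Python sketch of the fusion idea only, not the paper's OCaml/Scala API; all names are invented for the example.

```python
def naive_pipeline(xs):
    """Operator-at-a-time evaluation: each stage materializes an
    intermediate list, the overhead that fusion removes."""
    mapped = [x * x for x in xs]                   # intermediate list 1
    filtered = [y for y in mapped if y % 2 == 0]   # intermediate list 2
    return sum(filtered)

def fused_pipeline(xs):
    """The shape of code a staged stream library would generate:
    one hand-written-style loop, no intermediate collections."""
    acc = 0
    for x in xs:
        y = x * x
        if y % 2 == 0:
            acc += y
    return acc

data = range(1, 6)
print(naive_pipeline(data), fused_pipeline(data))  # 20 20  (4 + 16)
```

In the paper's setting, the fused loop is produced automatically by staging (MetaOCaml or LMS) rather than written by hand, which is what makes the "hand-written-like code, but automatically" claim possible.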
Effective Quotation: Relating Approaches to Language-integrated Query
Language-integrated query techniques have been explored in a number of
different language designs. We consider two different, type-safe approaches
employed by Links and F#. Both approaches provide rich dynamic query generation
capabilities, and thus amount to a form of heterogeneous staged computation,
but to date there has been no formal investigation of their relative
expressiveness. We present two core calculi Eff and Quot, respectively
capturing the essential aspects of language-integrated querying using effects
in Links and quotation in LINQ. We show via translations from Eff to Quot and
back that the two approaches are equivalent in expressiveness. Based on the
translation from Eff to Quot, we extend a simple Links compiler to handle
queries.
Comment: Proceedings of the ACM SIGPLAN 2014 Workshop on Partial Evaluation
and Program Manipulation, January 20-21, 2014, San Diego, CA, USA. Copyright
is held by the owner/author(s). Publication rights licensed to ACM.
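The quotation side of the comparison can be pictured with a toy example. This is a hedged Python sketch of the general LINQ-style idea, not the Links or F# calculi from the paper: a query is first built as an ordinary data structure (the "quoted" form) and only later translated to SQL text, which is what enables dynamic query generation. All class and function names are invented for illustration.

```python
from dataclasses import dataclass

# A tiny quoted-query representation: the query is a value, not a string.
@dataclass
class Field:
    name: str

@dataclass
class Gt:          # a comparison predicate: field > value
    field: Field
    value: int

@dataclass
class Select:      # SELECT * FROM table WHERE predicate
    table: str
    where: Gt

def compile_query(q: Select) -> str:
    """Translate the quoted query into SQL text at runtime."""
    return (f"SELECT * FROM {q.table} "
            f"WHERE {q.where.field.name} > {q.where.value}")

# Because the predicate is an ordinary value, queries compose dynamically:
q = Select("people", Gt(Field("age"), 30))
print(compile_query(q))  # SELECT * FROM people WHERE age > 30
```

The effect-based approach in Links reaches the same expressiveness by a different route, which is what the Eff/Quot translations in the paper make precise.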
Staged Compilation with Two-Level Type Theory
The aim of staged compilation is to enable metaprogramming in a way such that
we have guarantees about the well-formedness of code output, and we can also
mix together object-level and meta-level code in a concise and convenient
manner. In this work, we observe that two-level type theory (2LTT), a system
originally devised for the purpose of developing synthetic homotopy theory,
also serves as a system for staged compilation with dependent types. 2LTT has
numerous good properties for this use case: it has a concise specification,
well-behaved model theory, and it supports a wide range of language features
both at the object and the meta level. First, we give an overview of 2LTT's
features and applications in staging. Then, we present a staging algorithm and
prove its correctness. Our algorithm is "staging-by-evaluation", analogously to
the technique of normalization-by-evaluation, in that staging is given by the
evaluation of 2LTT syntax in a semantic domain. The staging algorithm together
with its correctness constitutes a proof of strong conservativity of 2LTT over
the object theory. To our knowledge, this is the first description of staged
compilation which supports full dependent types and unrestricted staging for
types.
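The staging discipline the abstract describes can be illustrated with the classic power example. This is an untyped Python stand-in for the two-level idea, not 2LTT itself: the meta level runs at staging time and emits object-level code, so the recursion over the exponent disappears from the generated program. The function names are assumptions for illustration.

```python
def stage_power(n: int) -> str:
    """Meta-level program: unroll x**n at staging time, emitting
    object-level code (here, Python source text) with no loop left."""
    expr = "1"
    for _ in range(n):
        expr = f"({expr} * x)"       # splice the accumulated code
    return f"lambda x: {expr}"

code = stage_power(3)
print(code)            # lambda x: (((1 * x) * x) * x)
cube = eval(code)      # run the generated object-level program
print(cube(2))         # 8
```

What 2LTT adds over such an untyped sketch is exactly the guarantee this example lacks: the type theory ensures the generated object-level code is well-formed by construction, including when the staged values are types themselves.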
Evolution from the ground up with Amee – From basic concepts to explorative modeling
Evolutionary theory has been the foundation of biological research for about a century
now, yet over the past few decades, new discoveries and theoretical advances have rapidly
transformed our understanding of the evolutionary process. Foremost among them are
evolutionary developmental biology, epigenetic inheritance, and various forms of
evolutionarily relevant phenotypic plasticity, as well as cultural evolution, which ultimately led
to the conceptualization of an extended evolutionary synthesis. Starting from abstract
principles rooted in complexity theory, this thesis aims to provide a unified conceptual
understanding of any kind of evolution, biological or otherwise. This is used in the second
part to develop Amee, an agent-based model that unifies development, niche construction,
and phenotypic plasticity with natural selection based on a simulated ecology. Amee
is implemented in Utopia, which allows performant, integrated implementation and
simulation of arbitrary agent-based models. A phenomenological overview of Amee's
capabilities is provided, ranging from the evolution of ecospecies down to the evolution
of metabolic networks and up to beyond-species-level biological organization, all of
which emerges autonomously from the basic dynamics. The interaction of development,
plasticity, and niche construction has been investigated, and it has been shown that while
expected natural phenomena can, in principle, arise, the accessible simulation time and
system size are too small to produce natural evo-devo phenomena and structures. Amee can thus be used to simulate the evolution of a wide variety of processes.
Higher-Order, Data-Parallel Structured Deduction
State-of-the-art Datalog engines include expressive features such as ADTs
(structured heap values), stratified aggregation and negation, various
primitive operations, and the opportunity for further extension using FFIs.
Current parallelization approaches for state-of-the-art Datalog engines target
shared-memory locking data structures using conventional multi-threading, or
use the map-reduce model for distributed computing. Furthermore, current
state-of-the-art approaches cannot scale to formal systems which pervasively
manipulate structured data, due to their lack of indexing for structured data
stored in the heap.
In this paper, we describe a new approach to data-parallel structured
deduction that involves a key semantic extension of Datalog to permit
first-class facts and higher-order relations via defunctionalization, an
implementation approach that enables parallelism uniformly both across sets of
disjoint facts and over individual facts with nested structure. We detail a
core language whose key invariant (subfact closure) ensures that each
subfact is materialized as a first-class fact. We extend this core language to Slog, a
fully-featured language whose forms facilitate leveraging subfact closure to
rapidly implement expressive, high-performance formal systems. We demonstrate
Slog by building a family of control-flow analyses from abstract machines,
systematically, along with several implementations of classical type systems
(such as STLC and LF). We performed experiments on EC2, Azure, and ALCF's Theta
at up to 1000 threads, showing orders-of-magnitude scalability improvements
versus competing state-of-the-art systems.