Further Results on the Power of Generating APCol Systems
In this paper we continue our investigations into APCol systems (Automaton-like
P colonies), variants of P colonies where the environment of the agents is given by a
string and the functioning of the system resembles that of a standard finite
automaton. We first deal with the concept of determinism in these systems and compare
deterministic APCol systems with deterministic register machines. Then we focus on
generating non-deterministic APCol systems with only one agent. We show that these
systems are as powerful as type-0 grammars, i.e., they generate any recursively
enumerable language. If the APCol system is non-erasing, then any context-sensitive
language can be generated by a non-deterministic APCol system with only one agent.
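The automaton-like behaviour of a single agent reading a string environment can be illustrated with a small sketch. This is a deliberate simplification, not the formal APCol definition: the agent here simply applies programs (state transitions) to one environment symbol at a time, the way a finite automaton would.

```python
def run_agent(programs, start, finals, environment):
    """programs maps (state, symbol) -> next state; the agent consumes the
    string environment one symbol at a time, like a finite automaton."""
    state = start
    for symbol in environment:
        if (state, symbol) not in programs:
            return False  # no applicable program: the computation halts without accepting
        state = programs[(state, symbol)]
    return state in finals

# Two-state agent accepting strings over {a, b} with an even number of b's.
EVEN_B = {
    ("even", "a"): "even", ("even", "b"): "odd",
    ("odd", "a"): "odd",   ("odd", "b"): "even",
}
```

Determinism in this sketch corresponds to `programs` being a function of (state, symbol); a non-deterministic variant would map each key to a set of possible successor states.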
Space-Efficient Re-Pair Compression
Re-Pair is an effective grammar-based compression scheme achieving strong
compression rates in practice. Let n, σ, and d be the text length,
alphabet size, and dictionary size of the final grammar, respectively. In their
original paper, the authors show how to compute the Re-Pair grammar in expected
linear time and 5n + 4σ² + 4d + √n words of working space on top
of the text. In this work, we propose two algorithms improving on the space of
their original solution. Our model assumes a memory word of ⌈log₂ n⌉ bits and a
re-writable input text composed of such words. Our
first algorithm runs in expected O(n/ε) time and uses
(1+ε)n + √n words of space on top of the text for any parameter
0 < ε ≤ 1 chosen in advance. Our second algorithm runs in expected
O(n log n) time and improves the space to n + √n words
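The Re-Pair scheme itself is simple to state: repeatedly replace the most frequent pair of adjacent symbols with a fresh nonterminal, recording one grammar rule per replacement. The following naive sketch is for illustration only; it runs in quadratic time and makes no attempt at the small working space that is the paper's actual contribution.

```python
from collections import Counter

def repair(text):
    """Naive Re-Pair: replace the most frequent adjacent pair with a fresh
    symbol until no pair occurs twice. Integers serve as nonterminals,
    characters as terminals."""
    seq = list(text)
    rules = {}          # nonterminal -> (left, right)
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        rules[next_id] = pair
        out, i = [], 0
        while i < len(seq):             # left-to-right replacement pass
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_id)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_id += 1
    return seq, rules

def expand(sym, rules):
    """Decompress one symbol back to the terminal string it derives."""
    if sym not in rules:
        return sym
    left, right = rules[sym]
    return expand(left, rules) + expand(right, rules)
```

Decompression expands the final sequence rule by rule, so compression is lossless by construction.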
Ciliate Gene Unscrambling with Fewer Templates
One of the theoretical models proposed for the mechanism of gene unscrambling
in some species of ciliates is the template-guided recombination (TGR) system
by Prescott, Ehrenfeucht and Rozenberg which has been generalized by Daley and
McQuillan from a formal language theory perspective. In this paper, we propose
a refinement of this model that generates regular languages using the iterated
TGR system with a finite initial language and a finite set of templates, using
fewer templates and a smaller alphabet compared to that of the Daley-McQuillan
model. To achieve Turing completeness using only finite components, i.e., a
finite initial language and a finite set of templates, we also propose an
extension of the contextual template-guided recombination system (CTGR system)
by Daley and McQuillan, by adding an extra control called permitting contexts
on the usage of templates.
Comment: In Proceedings DCFS 2010, arXiv:1008.127
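A single template-guided recombination step can be sketched as follows. This is a strong simplification of the formal TGR model: the split positions of the template are given explicitly, the function name is hypothetical, and the minimum-length constraints on the template parts are ignored.

```python
def tgr_step(x, y, template, a_len, b_len):
    """One simplified TGR step. The template splits as alpha+beta+gamma;
    if x contains alpha+beta and y contains beta+gamma, the prefix of x
    through alpha+beta is spliced to the suffix of y from gamma onward."""
    alpha = template[:a_len]
    beta = template[a_len:a_len + b_len]
    gamma = template[a_len + b_len:]
    i = x.find(alpha + beta)   # x must contain alpha·beta
    j = y.find(beta + gamma)   # y must contain beta·gamma
    if i < 0 or j < 0:
        return None            # the template does not guide this pair
    return x[:i + a_len + b_len] + y[j + b_len:]
```

Iterating such steps over an initial language, as in the abstract, means closing the language under this splicing operation for a fixed finite set of templates.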
Adding HL7 version 3 data types to PostgreSQL
The HL7 standard is widely used to exchange medical information
electronically. As a part of the standard, HL7 defines scalar communication
data types like physical quantity, point in time and concept descriptor but
also complex types such as interval types, collection types and probabilistic
types. Typical HL7 applications will store their communications in a database,
resulting in a translation from HL7 concepts and types into database types.
Since the data types were not designed to be implemented in a relational
database server, this transition is cumbersome and fraught with programmer
error. The purpose of this paper is twofold. First, we analyze the HL7 version
3 data type definitions and define a number of conditions that must be met for
the data type to be suitable for implementation in a relational database. As a
result of this analysis we describe a number of possible improvements to the
HL7 specification. Second, we describe an implementation in the PostgreSQL
database server and show that the database server can effectively execute
scientific calculations with units of measure, supports a large number of
operations on time points and intervals, and can perform operations that are
akin to a medical terminology server. Experiments on synthetic data show that
the user defined types perform better than an implementation that uses only
standard data types from the database server.
Comment: 12 pages, 9 figures, 6 tables
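The kind of unit-aware arithmetic that HL7's physical-quantity (PQ) type requires can be sketched in a few lines. This is an illustration of the required semantics in Python, not the paper's PostgreSQL implementation, and the class is a toy: real PQ values also need unit conversion, not just unit equality.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PQ:
    """Toy physical quantity: a value with a unit-of-measure string.
    Addition demands identical units; multiplication combines them."""
    value: float
    unit: str

    def __add__(self, other):
        if self.unit != other.unit:
            raise ValueError(f"incompatible units: {self.unit} vs {other.unit}")
        return PQ(self.value + other.value, self.unit)

    def __mul__(self, other):
        # combine units textually; no simplification or conversion attempted
        return PQ(self.value * other.value, f"{self.unit}.{other.unit}")
```

Rejecting `2 mg + 1 ml` at the type level, rather than silently adding the numbers, is exactly the class of programmer error the paper's user-defined database types guard against.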
Towards MKM in the Large: Modular Representation and Scalable Software Architecture
MKM has been defined as the quest for technologies to manage mathematical
knowledge. MKM "in the small" is well-studied, so the real problem is to scale
up to large, highly interconnected corpora: "MKM in the large". We contend that
advances in two areas are needed to reach this goal. We need representation
languages that support incremental processing of all primitive MKM operations,
and we need software architectures and implementations that implement these
operations scalably on large knowledge bases.
We present instances of both in this paper: the MMT framework for modular
theory-graphs that integrates meta-logical foundations, which forms the base of
the next OMDoc version; and TNTBase, a versioned storage system for XML-based
document formats. TNTBase becomes an MMT database by instantiating it with
special MKM operations for MMT.
Comment: To appear in the 9th International Conference on Mathematical Knowledge Management: MKM 201
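The modular theory-graph idea can be illustrated, very loosely, by flattening a theory along its imports. All theory names here are hypothetical and the sketch ignores MMT's meta-logical foundations, views, and named morphisms; it only shows why modularity pays off, since each symbol is declared once and reached through imports.

```python
# Toy theory graph: theories declare symbols and import other theories.
THEORIES = {
    "Magma":  {"symbols": {"op"},   "imports": []},
    "Monoid": {"symbols": {"unit"}, "imports": ["Magma"]},
    "Group":  {"symbols": {"inv"},  "imports": ["Monoid"]},
}

def flatten(name, theories=THEORIES):
    """Transitively collect every symbol visible in a theory."""
    seen, symbols = set(), set()

    def visit(t):
        if t in seen:          # each theory is processed once, even in a DAG
            return
        seen.add(t)
        for imp in theories[t]["imports"]:
            visit(imp)
        symbols.update(theories[t]["symbols"])

    visit(name)
    return symbols
```

Flattening is one of the "primitive MKM operations" that benefits from incremental processing: when a base theory changes, only the theories that import it need reflattening.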
Training an adaptive dialogue policy for interactive learning of visually grounded word meanings
We present a multi-modal dialogue system for interactive learning of
perceptually grounded word meanings from a human tutor. The system integrates
an incremental, semantic parsing/generation framework - Dynamic Syntax and Type
Theory with Records (DS-TTR) - with a set of visual classifiers that are
learned throughout the interaction and which ground the meaning representations
that it produces. We use this system in interaction with a simulated human
tutor to study the effects of different dialogue policies and capabilities on
the accuracy of learned meanings, learning rates, and efforts/costs to the
tutor. We show that the overall performance of the learning agent is affected
by (1) who takes initiative in the dialogues; (2) the ability to express/use
their confidence level about visual attributes; and (3) the ability to process
elliptical and incrementally constructed dialogue turns. Ultimately, we train
an adaptive dialogue policy which optimises the trade-off between classifier
accuracy and tutoring costs.
Comment: 11 pages, SIGDIAL 2016 Conference
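The trade-off between classifier accuracy and tutoring cost can be made concrete with a simple threshold policy. The cost values and function names here are hypothetical; the paper's policy is learned adaptively rather than hand-set, but the quantity being balanced is the same.

```python
def choose_action(confidence, threshold):
    """Trust the visual classifier when it is confident enough;
    otherwise pay the cost of asking the tutor."""
    return "predict" if confidence >= threshold else "ask_tutor"

def episode_cost(confidences, correct, threshold,
                 tutor_cost=1.0, error_cost=5.0):
    """Total cost of a dialogue episode: each tutor query costs tutor_cost,
    each wrong unassisted prediction costs error_cost (values hypothetical)."""
    total = 0.0
    for conf, ok in zip(confidences, correct):
        if choose_action(conf, threshold) == "ask_tutor":
            total += tutor_cost
        elif not ok:
            total += error_cost
    return total
```

A very low threshold minimizes tutor effort but pays for classifier mistakes; a very high one does the reverse. An adaptive policy effectively tunes this trade-off from interaction data.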
Specifying and Placing Chains of Virtual Network Functions
Network appliances perform different functions on network flows and
constitute an important part of an operator's network. Normally, a set of
chained network functions process network flows. Following the trend of
virtualization of networks, virtualization of the network functions has also
become a topic of interest. We define a model for formalizing the chaining of
network functions using a context-free language. We process deployment requests
and construct virtual network function graphs that can be mapped to the
network. We describe the mapping as a Mixed Integer Quadratically Constrained
Program (MIQCP) for finding the placement of the network functions and chaining
them together considering the limited network resources and requirements of the
functions. We have performed a Pareto set analysis to investigate the possible
trade-offs between different optimization objectives.
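A context-free description of function chains can be checked with a short recursive-descent parser. The grammar and function set below are hypothetical (the abstract does not spell out the paper's grammar); a branching construct is included because plain linear chains alone would already be regular.

```python
FUNCTIONS = {"firewall", "nat", "dpi", "lb"}

def parse_chain(tokens, i=0):
    """Chain -> Unit | Unit '->' Chain ; Unit -> NF | '(' Chain '|' Chain ')'.
    Returns the index just past a well-formed chain, or -1 on failure."""
    if i >= len(tokens):
        return -1
    if tokens[i] == "(":                       # branching: ( Chain | Chain )
        j = parse_chain(tokens, i + 1)
        if j < 0 or j >= len(tokens) or tokens[j] != "|":
            return -1
        k = parse_chain(tokens, j + 1)
        if k < 0 or k >= len(tokens) or tokens[k] != ")":
            return -1
        i = k + 1
    elif tokens[i] in FUNCTIONS:               # a single network function
        i += 1
    else:
        return -1
    if i < len(tokens) and tokens[i] == "->":  # chained continuation
        return parse_chain(tokens, i + 1)
    return i

def is_valid_chain(request):
    # tokens must be whitespace-separated, e.g. "( firewall | nat ) -> dpi"
    tokens = request.split()
    return parse_chain(tokens) == len(tokens)
```

In the paper's pipeline, a request accepted by such a grammar would then be turned into a virtual network function graph and handed to the MIQCP for placement.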
On the Degree of Team Cooperation in CD Grammar Systems
In this paper, we introduce a dynamical complexity measure, namely the degree of team cooperation, with the aim of investigating "how much" the components of a grammar system cooperate when forming a team in the process of generating terminal words. We present several results which strongly suggest that this measure is trivial, in the sense that the degree of team cooperation of any language is bounded by a constant. Finally, we prove that the degree of team cooperation of a given cooperating/distributed grammar system cannot be algorithmically computed, and we discuss a related decision problem.