Restarting Automata with Auxiliary Symbols and Small Lookahead
We present a study on lookahead hierarchies for restarting automata with
auxiliary symbols and small lookahead. In particular, we show that there are
just two different classes of languages recognised by RRWW automata under the
restriction of lookahead size. We also show that the respective (left-)
monotone restarting automaton models characterise the context-free languages
and that the respective right-left-monotone restarting automata characterise
the linear languages, both with lookahead of length just 2.
Comment: Full version of the paper accepted to LATA 201
On determinism versus nondeterminism for restarting automata
Abstract: A restarting automaton processes a given word by executing a sequence of local simplifications until a simple word is obtained that the automaton then accepts. Such a computation is expressed as a sequence of cycles. A nondeterministic restarting automaton M is called correctness preserving if, for each cycle u ⊢_M^c v, the string v belongs to the characteristic language L_C(M) of M whenever the string u does. Our first result states that for each type of restarting automaton X ∈ {R, RW, RWW, RL, RLW, RLWW}, if M is a nondeterministic X-automaton that is correctness preserving, then there exists a deterministic X-automaton M1 such that the characteristic languages L_C(M1) and L_C(M) coincide. When a restarting automaton M executes a cycle that transforms a string from the language L_C(M) into a string not belonging to L_C(M), this can be interpreted as an error of M. By counting the number of cycles it may take M to detect this error, we obtain a measure for the influence that errors have on computations; accordingly, this measure is called the error detection distance. It turns out, however, that an X-automaton with bounded error detection distance is equivalent to a correctness-preserving X-automaton, and hence to a deterministic X-automaton. This means that nondeterminism increases the expressive power of X-automata only in combination with an unbounded error detection distance.
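The cycle-based mode of operation described above can be illustrated with a minimal sketch (not from the paper): each cycle applies one local simplification to the word, and the automaton restarts on the shortened word until no rule fires, at which point the remaining simple word is tested. The rewrite rule below, which recognises { a^n b^n : n >= 0 }, is a hypothetical example chosen for illustration.

```python
def run_cycles(word, rewrite, accept_simple):
    """Apply rewrite cycles until no rule fires, then test the simple word."""
    trace = [word]
    while True:
        shorter = rewrite(word)
        if shorter is None:          # no simplification applies: word is simple
            return accept_simple(word), trace
        word = shorter               # restart on the shortened word
        trace.append(word)

# Illustrative cycle rule for { a^n b^n }: delete a factor "ab" sitting
# exactly at the a/b boundary of the word.
def delete_ab(word):
    i = word.find("ab")
    if i >= 0 and ("b" not in word[:i]) and ("a" not in word[i + 2:]):
        return word[:i] + word[i + 2:]
    return None

accepted, trace = run_cycles("aaabbb", delete_ab, lambda w: w == "")
# trace records the cycles: aaabbb -> aabb -> ab -> (empty), then accept
```

Here correctness preservation holds by construction: every cycle maps a word of the language to a shorter word of the language.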
Random growth models with polygonal shapes
We consider discrete-time random perturbations of monotone cellular automata
(CA) in two dimensions. Under general conditions, we prove the existence of
half-space velocities, and then establish the validity of the Wulff
construction for asymptotic shapes arising from finite initial seeds. Such a
shape converges to the polygonal invariant shape of the corresponding
deterministic model as the perturbation decreases. In many cases, exact
stability is observed. That is, for small perturbations, the shapes of the
deterministic and random processes agree exactly. We give a complete
characterization of such cases, and show that they are prevalent among
threshold growth CA with box neighborhood. We also design a nontrivial family
of CA in which the shape is exactly computable for all values of its
probability parameter.
Comment: Published at http://dx.doi.org/10.1214/009117905000000512 in the
Annals of Probability (http://www.imstat.org/aop/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
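As a concrete instance of the deterministic dynamics the abstract perturbs, here is a minimal sketch (parameters illustrative, not from the paper) of threshold growth with a box neighborhood: a vacant site becomes occupied once at least `theta` sites in its (2r+1) x (2r+1) box are occupied. A random perturbation of the kind studied would occupy each eligible site only with some probability instead of deterministically.

```python
def step(occupied, r=1, theta=3):
    """One synchronous update of threshold growth on Z^2 (box neighborhood)."""
    # Candidate sites: everything within the box of some occupied site.
    frontier = set()
    for (x, y) in occupied:
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                frontier.add((x + dx, y + dy))
    new = set(occupied)              # monotone: occupied sites stay occupied
    for site in frontier:
        if site in occupied:
            continue
        x, y = site
        count = sum((x + dx, y + dy) in occupied
                    for dx in range(-r, r + 1)
                    for dy in range(-r, r + 1))
        if count >= theta:
            new.add(site)
    return new

# Grow from a finite seed; for monotone rules like this one the rescaled
# occupied set approaches a polygonal invariant shape.
cells = {(x, y) for x in range(3) for y in range(3)}   # 3x3 seed
for _ in range(10):
    cells = step(cells)
```

With these illustrative parameters the seed grows indefinitely, which is the regime in which the asymptotic-shape questions of the paper arise.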
Clearing Restarting Automata
Restarting automata were introduced as a model for analysis by reduction, a linguistically motivated method for checking the correctness of a sentence. The goal of this thesis is to study more restricted models of restarting automata which, based only on a limited local context, can either delete a substring of the current tape content or replace a substring by a special auxiliary symbol, which cannot be overwritten anymore but can be deleted later. Such restarting automata are called clearing restarting automata. The thesis investigates the closure properties of clearing restarting automata, their relation to the Chomsky hierarchy, and possibilities for learning such automata from positive and negative samples.
Department of Software and Computer Science Education, Faculty of Mathematics and Physics
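The deletion-by-local-context idea can be sketched as follows (an illustrative simplification, not the thesis's formal definition): an instruction (left, w, right) lets the automaton erase an occurrence of w whose immediate context matches left and right, with sentinels marking the word boundaries. The instruction set below, recognising { a^n b^n }, is a hypothetical example.

```python
SENTINEL_L, SENTINEL_R = "<", ">"    # word-boundary markers

def clear_once(word, instructions):
    """Apply the first applicable clearing instruction, or return None."""
    padded = SENTINEL_L + word + SENTINEL_R
    for left, w, right in instructions:
        i = padded.find(left + w + right)
        if i >= 0:
            j = i + len(left)
            # Delete only w itself; its context stays on the tape.
            padded = padded[:j] + padded[j + len(w):]
            return padded[1:-1]
    return None

def accepts(word, instructions):
    while word:
        word = clear_once(word, instructions)
        if word is None:
            return False
    return True                      # word was reduced to the empty word

# Illustrative instructions for { a^n b^n }: "ab" may be cleared only when
# flanked by a and b, or when it is the whole word.
rules = [("a", "ab", "b"), (SENTINEL_L, "ab", SENTINEL_R)]
```

For example, accepts("aaabbb", rules) reduces aaabbb -> aabb -> ab -> (empty) and accepts, while a word like aab gets stuck and is rejected.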
Sampling Correctors
In many situations, sample data is obtained from a noisy or imperfect source.
In order to address such corruptions, this paper introduces the concept of a
sampling corrector. Such algorithms use structure that the distribution is
purported to have, in order to allow one to make "on-the-fly" corrections to
samples drawn from probability distributions. These algorithms then act as
filters between the noisy data and the end user.
We show connections between sampling correctors, distribution learning
algorithms, and distribution property testing algorithms. We show that these
connections can be utilized to expand the applicability of known distribution
learning and property testing algorithms as well as to achieve improved
algorithms for those tasks.
As a first step, we show how to design sampling correctors using proper
learning algorithms. We then focus on the question of whether algorithms for
sampling correctors can be more efficient in terms of sample complexity than
learning algorithms for the analogous families of distributions. When
correcting monotonicity, we show that this is indeed the case when also granted
query access to the cumulative distribution function. We also obtain sampling
correctors for monotonicity without this stronger type of access, provided that
the distribution is originally very close to monotone. In addition, we consider
a restricted error model
that aims at capturing "missing data" corruptions. In this model, we show that
distributions that are close to monotone have sampling correctors that are
significantly more efficient than achievable by the learning approach.
We also consider the question of whether an additional source of independent
random bits is required by sampling correctors to implement the correction
process.
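The filter role of a sampling corrector can be illustrated with a minimal sketch (an illustrative toy, not the paper's construction): the purported structure is "uniform over range(n)", the corruption is of the "missing data" kind (one unknown symbol never appears in the source), and the corrector uses a batch of samples to learn the missing symbol plus extra random bits to re-insert its lost mass.

```python
import random

def missing_data_corrector(sample_source, n, rng=random.random):
    """Yield samples uniform over range(n), assuming the noisy source is
    uniform over range(n) with one unknown symbol deleted."""
    # Learning phase: with high probability a batch reveals every symbol
    # the source can emit, so the missing one is the unseen one.
    seen = {next(sample_source) for _ in range(50 * n)}
    missing = (set(range(n)) - seen).pop()
    # Correction phase: with probability 1/n, spend extra random bits to
    # emit the missing symbol; otherwise pass a source sample through.
    while True:
        if rng() < 1.0 / n:
            yield missing
        else:
            yield next(sample_source)

# A hypothetical noisy source: uniform over range(8) with symbol 3 deleted.
n, gone = 8, 3
def source():
    while True:
        x = random.randrange(n - 1)
        yield x if x < gone else x + 1

corrected = missing_data_corrector(source(), n)
draws = [next(corrected) for _ in range(10000)]   # now includes symbol 3
```

Each surviving symbol is passed through with probability (n-1)/n times 1/(n-1) = 1/n, and the missing symbol is emitted with probability 1/n, so the output is exactly uniform; note that the correction step genuinely consumes additional independent random bits, the resource the abstract's last question concerns.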
A Hybrid Analysis for Security Protocols with State
Cryptographic protocols rely on message-passing to coordinate activity among
principals. Each principal maintains local state in individual local sessions
only as needed to complete that session. However, in some protocols a principal
also uses state to coordinate its different local sessions. Sometimes the
non-local, mutable state is used as a means to an end, for example with smart
cards or Trusted Platform Modules. Sometimes maintaining it is itself the
purpose of running the protocol, for example in commercial transactions.
Many richly developed tools and techniques, based on well-understood
foundations, are available for design and analysis of pure message-passing
protocols. But the presence of cross-session state poses difficulties for these
techniques.
In this paper we provide a framework for modeling stateful protocols. We
define a hybrid analysis method. It leverages theorem-proving---in this
instance, the PVS prover---for reasoning about computations over state. It
combines that with an "enrich-by-need" approach---embodied by CPSA---that
focuses on the message-passing part. As a case study we give a full analysis of
the Envelope Protocol, due to Mark Ryan.
LIPIcs
Fault-tolerant distributed algorithms play an important role in many critical, high-availability applications. These algorithms are notoriously difficult to implement correctly, due to asynchronous communication and the occurrence of faults, such as the network dropping messages or computers crashing. Nonetheless, there is surprisingly little language and verification support for building distributed systems based on fault-tolerant algorithms. In this paper, we present some of the challenges that a designer has to overcome to implement a fault-tolerant distributed system. We then review different models that have been proposed for reasoning about distributed algorithms, and sketch how such a model can form the basis of a domain-specific programming language. Adopting a high-level programming model can simplify the programmer's life and make the code amenable to automated verification, while still compiling to efficiently executable code. We conclude by summarizing the current status of an ongoing language design and implementation project based on this idea.