Timed Soft Concurrent Constraint Programs: An Interleaved and a Parallel Approach
We propose a timed and soft extension of Concurrent Constraint Programming.
The time extension is based on the hypothesis of bounded asynchrony: the
computation takes a bounded period of time and is measured by a discrete global
clock. Action prefixing is then considered as the syntactic marker which
distinguishes a time instant from the next one. Supported by soft constraints
instead of crisp ones, tell and ask agents are now equipped with a preference
(or consistency) threshold which is used to determine their success or
suspension. In the paper we provide a language to describe the agents' behavior,
together with its operational and denotational semantics, for which we also
prove the compositionality and correctness properties. After presenting a
semantics using maximal parallelism of actions, we also describe a version for
their interleaving on a single processor (with maximal parallelism for time
elapsing). Coordinating agents that need to take decisions both on preference
values and time events may benefit from this language. To appear in Theory and
Practice of Logic Programming (TPLP)
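As a rough, hypothetical illustration of the tell/ask mechanism described in this abstract (not code from the paper), the sketch below models soft constraints over one finite-domain variable in the fuzzy semiring ([0,1], max, min): a tell suspends rather than let the store's consistency fall below its threshold, and an ask succeeds only when the store already entails the constraint at the required level. All names (combine, consistency, tell, ask) are our own simplifications.

```python
def combine(c1, c2):
    """Pointwise min: constraint conjunction in the fuzzy semiring ([0,1], max, min)."""
    return {v: min(c1.get(v, 0.0), c2.get(v, 0.0)) for v in set(c1) | set(c2)}

def consistency(c):
    """Best preference level any assignment still attains (the store's blevel)."""
    return max(c.values(), default=0.0)

def tell(store, c, threshold):
    """Add c to the store unless doing so drops consistency below the threshold."""
    new = combine(store, c)
    if consistency(new) >= threshold:
        return "success", new
    return "suspend", store

def ask(store, c, threshold):
    """Succeed if the store already entails c (adding c changes nothing)
    and the store's consistency meets the preference threshold."""
    entailed = all(level <= c.get(v, 0.0) for v, level in store.items())
    return entailed and consistency(store) >= threshold
```

With an unconstrained store over {0, 1, 2}, telling {0: 0.9, 1: 0.5} at threshold 0.7 succeeds (consistency 0.9), after which an ask at threshold 0.95 suspends while one at 0.5 succeeds.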
A Provenance Tracking Model for Data Updates
For data-centric systems, provenance tracking is particularly important when
the system is open and decentralised, such as the Web of Linked Data. In this
paper, a concise but expressive calculus which models data updates is
presented. The calculus is used to provide an operational semantics for a
system where data and updates interact concurrently. The operational semantics
of the calculus also tracks the provenance of data with respect to updates.
This provides a new formal semantics extending provenance diagrams which takes
into account the execution of processes in a concurrent setting. Moreover, a
sound and complete model for the calculus based on ideals of series-parallel
DAGs is provided. The notion of provenance introduced can be used as a
subjective indicator of the quality of data in concurrent interacting systems.
Comment: In Proceedings FOCLASA 2012, arXiv:1208.432
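As a toy sketch (our own construction, not the paper's calculus) of how series-parallel structure captures ordering between updates: represent provenance terms with series and parallel composition, and ask whether the term forces one update to precede another.

```python
def seq(a, b):
    """Series composition: every update in a happens before every update in b."""
    return ("seq", a, b)

def par(a, b):
    """Parallel composition: no ordering is imposed between a and b."""
    return ("par", a, b)

def events(t):
    """The set of update identifiers occurring in a term (leaves are strings)."""
    if isinstance(t, str):
        return {t}
    _, a, b = t
    return events(a) | events(b)

def before(t, x, y):
    """Does the series-parallel term force update x to precede update y?"""
    if isinstance(t, str):
        return False
    tag, a, b = t
    if before(a, x, y) or before(b, x, y):
        return True
    return tag == "seq" and x in events(a) and y in events(b)
```

For seq(par(u1, u2), u3), both u1 and u2 must precede u3, but u1 and u2 remain unordered, matching the intuition behind provenance diagrams of concurrent updates.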
A Model of Cooperative Threads
We develop a model of concurrent imperative programming with threads. We
focus on a small imperative language with cooperative threads which execute
without interruption until they terminate or explicitly yield control. We
define and study a trace-based denotational semantics for this language; this
semantics is fully abstract but mathematically elementary. We also give an
equational theory for the computational effects that underlie the language,
including thread spawning. We then analyze threads in terms of the free algebra
monad for this theory.
Comment: 39 pages, 5 figures
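Cooperative threads of the kind this abstract describes can be sketched with Python generators: each thread runs without interruption until it terminates or explicitly yields, and a round-robin scheduler collects the resulting trace. This is a minimal illustration of the execution model, not the paper's denotational semantics; the action vocabulary ("log", "spawn") is our own.

```python
from collections import deque

def run(main):
    """Round-robin scheduler for cooperative threads modeled as generators."""
    ready = deque([main])
    trace = []
    while ready:
        t = ready.popleft()
        try:
            action = next(t)            # run t until its next explicit yield
        except StopIteration:
            continue                    # t terminated; drop it
        kind, payload = action
        if kind == "spawn":
            ready.append(payload)       # the spawned thread joins the ready queue
        elif kind == "log":
            trace.append(payload)       # an observable step recorded in the trace
        ready.append(t)                 # t yielded control; requeue it
    return trace

def worker(name, n):
    for i in range(n):
        yield ("log", f"{name}:{i}")    # yield control after each step

def main():
    yield ("spawn", worker("a", 2))     # thread spawning as a computational effect
    yield ("log", "main done")
```

Running run(main()) interleaves the spawned worker with the main thread at the yield points, producing the trace ["a:0", "main done", "a:1"].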
Distributed measurement-based quantum computation
We develop a formal model for distributed measurement-based quantum
computations, adopting an agent-based view, such that computations are
described locally where possible. Because the network quantum state is in
general entangled, we need to model it as a global structure, reminiscent of
global memory in classical agent systems. Local quantum computations are
described as measurement patterns. Since measurement-based quantum computation
is inherently distributed, this allows us to extend naturally several concepts
of the measurement calculus, a formal model for such computations. Our goal is
to define an assembly language, i.e. we assume that computations are
well-defined and we do not concern ourselves with verification techniques. The
operational semantics for systems of agents is given by a probabilistic
transition system, and we define operational equivalence so that it
corresponds to the notion of bisimilarity. With this in place, we prove that
teleportation is bisimilar to a direct quantum channel, also within the
context of larger networks.
Comment: 17 pages
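The teleportation-versus-direct-channel equivalence mentioned above can be checked concretely on state vectors. The following is a plain state-vector simulation of the standard teleportation protocol (our own sketch, not the paper's agent formalism or measurement-calculus notation): whatever the two measurement outcomes, the classically controlled corrections leave qubit 2 in the input state, just as a direct channel would.

```python
import math
import random

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)], [1 / math.sqrt(2), -1 / math.sqrt(2)]]
X = [[0.0, 1.0], [1.0, 0.0]]
Z = [[1.0, 0.0], [0.0, -1.0]]

def apply1(state, u, q):
    """Apply a 1-qubit gate u to qubit q (qubit 0 is the least significant bit)."""
    new = state[:]
    for i in range(len(state)):
        if not (i >> q) & 1:
            j = i | (1 << q)
            a, b = state[i], state[j]
            new[i] = u[0][0] * a + u[0][1] * b
            new[j] = u[1][0] * a + u[1][1] * b
    return new

def cnot(state, ctrl, tgt):
    """Flip qubit tgt wherever qubit ctrl is 1."""
    return [state[i ^ (1 << tgt)] if (i >> ctrl) & 1 else state[i]
            for i in range(len(state))]

def measure(state, q, rng):
    """Projective measurement of qubit q; returns (outcome, collapsed state)."""
    p1 = sum(abs(state[i]) ** 2 for i in range(len(state)) if (i >> q) & 1)
    outcome = 1 if rng.random() < p1 else 0
    norm = math.sqrt(p1 if outcome else 1 - p1)
    return outcome, [state[i] / norm if ((i >> q) & 1) == outcome else 0.0
                     for i in range(len(state))]

def teleport(a, b, rng):
    """Teleport a|0>+b|1> from qubit 0 to qubit 2 via a Bell pair on qubits 1, 2."""
    state = [0.0] * 8
    state[0], state[1] = a, b                 # qubit 0 carries the input state
    state = cnot(apply1(state, H, 1), 1, 2)   # entangle qubits 1 and 2
    state = apply1(cnot(state, 0, 1), H, 0)   # Bell-basis measurement circuit
    m0, state = measure(state, 0, rng)
    m1, state = measure(state, 1, rng)
    if m1:
        state = apply1(state, X, 2)           # classically controlled corrections
    if m0:
        state = apply1(state, Z, 2)
    base = m0 | (m1 << 1)                     # the measured branch of qubits 0, 1
    return state[base], state[base | 4]       # qubit 2's two amplitudes
```

Repeating the protocol with random measurement outcomes always returns the input amplitudes on qubit 2, which is the observable behavior of a direct channel.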
Observational Equivalence and Full Abstraction in the Symmetric Interaction Combinators
The symmetric interaction combinators are an equally expressive variant of
Lafont's interaction combinators. They are a graph-rewriting model of
deterministic computation. We define two notions of observational equivalence
for them, analogous to normal form and head normal form equivalence in the
lambda-calculus. Then, we prove a full abstraction result for each of the two
equivalences. This is obtained by interpreting nets as certain subsets of the
Cantor space, called edifices, which play the same role as Boehm trees in the
theory of the lambda-calculus.
Analyzing logic programs with dynamic scheduling
Traditional logic programming languages, such as Prolog, use a fixed
left-to-right atom scheduling rule. Recent logic programming languages,
however, usually provide more flexible scheduling in which computation
generally proceeds left-to-right but in which some calls are dynamically
"delayed" until their arguments are sufficiently instantiated to allow the
call to run efficiently. Such dynamic scheduling has a significant cost. We
give a framework for the global analysis of logic programming languages with
dynamic scheduling and show that program analysis based on this framework
supports optimizations which remove much of the overhead of dynamic
scheduling.
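A minimal sketch of the delay mechanism this abstract analyzes (our own toy model, not the paper's framework): goals are tried left-to-right, a goal whose required variables are not yet bound is delayed, and a binding made by a later goal wakes the delayed ones.

```python
from collections import deque

def solve(goals):
    """Run goals left-to-right, delaying any goal whose required variables
    are not yet bound -- a toy model of dynamic scheduling with delays."""
    binds = {}
    delayed = []
    queue = deque(goals)
    while queue:
        required, action = queue.popleft()
        if all(v in binds for v in required):
            action(binds)                     # sufficiently instantiated: run it
            # wake any delayed goals whose arguments are now bound
            woken = [g for g in delayed if all(v in binds for v in g[0])]
            for g in woken:
                delayed.remove(g)
                queue.appendleft(g)
        else:
            delayed.append((required, action))  # delay until instantiated
    return binds, delayed
```

For example, a goal that consumes variable x is delayed past a later goal that binds x, then woken and run, mimicking a delay declaration's runtime behavior (and hinting at the bookkeeping cost that static analysis can remove).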
Mechanized semantics
The goal of this lecture is to show how modern theorem provers---in this
case, the Coq proof assistant---can be used to mechanize the specification of
programming languages and their semantics, and to reason over individual
programs and over generic program transformations, as typically found in
compilers. The topics covered include: operational semantics (small-step,
big-step, definitional interpreters); a simple form of denotational semantics;
axiomatic semantics and Hoare logic; generation of verification conditions,
with application to program proof; compilation to virtual machine code and its
proof of correctness; an example of an optimizing program transformation (dead
code elimination) and its proof of correctness
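The small-step/big-step distinction covered in the lecture can be illustrated outside Coq with a two-constructor expression language (a hedged sketch in Python, not the lecture's Coq development): `step` performs one leftmost reduction, and the two evaluators are provably equivalent on terminating terms.

```python
def step(e):
    """One small-step reduction for ('lit', n) / ('add', e1, e2) terms."""
    if e[0] == "add":
        _, l, r = e
        if l[0] != "lit":
            return ("add", step(l), r)   # reduce the leftmost redex first
        if r[0] != "lit":
            return ("add", l, step(r))
        return ("lit", l[1] + r[1])      # both sides are values: add them
    raise ValueError("literals do not step")

def eval_small(e):
    """Small-step evaluation: iterate step until a value is reached."""
    while e[0] != "lit":
        e = step(e)
    return e[1]

def eval_big(e):
    """Big-step evaluation: relate each term directly to its final value."""
    if e[0] == "lit":
        return e[1]
    _, l, r = e
    return eval_big(l) + eval_big(r)
```

Agreement of the two evaluators on sample terms is exactly the kind of statement one proves once and for all, by induction, in a proof assistant.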
Unifying type systems for mobile processes
We present a unifying framework for type systems for process calculi. The
core of the system provides an accurate correspondence between essentially
functional processes and linear logic proofs; fragments of this system
correspond to previously known connections between proofs and processes. We
show how the addition of extra logical axioms can widen the class of typeable
processes in exchange for the loss of some computational properties like
lock-freeness or termination, allowing us to see various well-studied systems
(like i/o types, linearity, control) as instances of a general pattern. This
suggests unified methods for extending existing type systems with new features
while staying in a well structured environment and constitutes a step towards
the study of denotational semantics of processes using proof-theoretical
methods.
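As a loose illustration of the kind of discipline such type systems enforce (a toy stand-in of our own, far simpler than the logical system in the paper): a linear usage check requiring every channel to occur exactly once as input and once as output.

```python
from collections import Counter

def linear_ok(actions):
    """Check that every channel occurs exactly once as input and once as
    output -- a toy stand-in for a linear typing discipline on processes."""
    ins = Counter(ch for pol, ch in actions if pol == "in")
    outs = Counter(ch for pol, ch in actions if pol == "out")
    chans = set(ins) | set(outs)
    return all(ins[ch] == 1 and outs[ch] == 1 for ch in chans)
```

A process trace that reuses or drops a channel fails the check, mirroring how relaxing such restrictions (as the abstract discusses) widens the set of typeable processes at the cost of properties like lock-freeness.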