8,686 research outputs found
Combining and Relating Control Effects and their Semantics
Combining local exceptions and first-class continuations leads to programs
with complex control flow, as well as the possibility of expressing powerful
constructs such as resumable exceptions. We describe and compare games models
for a programming language which includes these features, as well as
higher-order references. They are obtained by contrasting methodologies: by
annotating sequences of moves with "control pointers" indicating where
exceptions are thrown and caught, and by composing the exceptions and
continuations monads.
The former approach allows an explicit representation of control flow in
games for exceptions, and hence a straightforward proof of definability (full
abstraction) by factorization, as well as offering the possibility of a
semantic approach to control flow analysis of exception-handling. However,
establishing soundness of such a concrete and complex model is a non-trivial
problem. It may be resolved by establishing a correspondence with the monad
semantics, based on erasing explicit exception moves and replacing them with
control pointers.
Comment: In Proceedings COS 2013, arXiv:1309.092
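A loose sketch of the monadic methodology, in Haskell, with mtl's ExceptT and ContT standing in for the exceptions and continuations monads (the concrete stack and its ordering are our assumptions for illustration, not the paper's model):

```haskell
import Control.Monad.Cont   (ContT, runContT, callCC)
import Control.Monad.Except (ExceptT, runExceptT, throwError, catchError)

-- Exceptions layered over continuations (one of several possible orderings).
type Ctl r = ExceptT String (ContT r IO)

-- An ordinary exception is intercepted by its handler.
caught :: Ctl r Int
caught = (throwError "boom" >> pure 0) `catchError` \_ -> pure 42

-- A first-class continuation jump: with this layering it sails past the
-- exception handler, so neither the handler nor the (+ 10) ever runs.
jumped :: Ctl r Int
jumped = callCC $ \escape -> do
  r <- (escape 1 >> pure 0) `catchError` \_ -> pure 2
  pure (r + 10)

main :: IO ()
main = do
  runContT (runExceptT caught) print   -- Right 42
  runContT (runExceptT jumped) print   -- Right 1
```

Swapping the two layers changes which effect wins such encounters, one sense in which combining the effects yields the complex control flow the abstract mentions.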
Nominal Game Semantics
Tutorial notes presenting nominal game semantics.
Chaotic Compilation for Encrypted Computing: Obfuscation but Not in Name
An 'obfuscation' for encrypted computing is quantified exactly here, leading
to an argument that security against polynomial-time attacks has been achieved
for user data via the deliberately 'chaotic' compilation required for security
properties in that environment. Encrypted computing is the emerging science and
technology of processors that take encrypted inputs to encrypted outputs via
encrypted intermediate values (at nearly conventional speeds). The aim is to
make user data in general-purpose computing secure against the operator and
operating system as potential adversaries. A stumbling block has always been
that memory addresses are data, and good encryption means the encrypted value
varies randomly, which makes hitting any target in memory problematic
without address decryption; yet decryption anywhere on the memory path would
open up many easily exploitable vulnerabilities. This paper 'solves (chaotic)
compilation' for processors without address decryption, covering all of ANSI C
while satisfying the required security properties and opening up the field for
the standard software tool-chain and infrastructure. That produces the argument
referred to above, which may also hold without encryption.
Comment: 31 pages. Version update adds "Chaotic" in title and throughout paper, and recasts abstract and Intro and other sections of the text for better access by cryptologists. To the same end it introduces the polynomial-time defense argument explicitly in the final section, having now set that denouement out in the abstract and intro.
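A toy illustration (ours, not the paper's cipher or compiler) of the addressing obstacle: under randomised encryption the same address yields different ciphertexts on each encryption, so ciphertexts cannot serve directly as physical memory indices.

```haskell
import System.Random (randomRIO)

-- Hypothetical stand-in for a randomised cipher: each call folds in a
-- fresh random pad, so equal addresses yield unequal ciphertexts.
encryptAddr :: Int -> IO (Int, Int)   -- (pad, masked address)
encryptAddr a = do
  pad <- randomRIO (0, 2 ^ 30)
  pure (pad, a + pad)

main :: IO ()
main = do
  c1 <- encryptAddr 42
  c2 <- encryptAddr 42
  -- Almost surely False: two encryptions of the same address differ,
  -- so neither can be used as a stable index into memory without
  -- decryption somewhere on the memory path.
  print (c1 == c2)
```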
Scalar and Vectorial mu-calculus with Atoms
We study an extension of the modal μ-calculus to sets with atoms and
investigate its basic properties. Model checking is decidable on orbit-finite
structures, and a correspondence to parity games holds. On the other hand,
satisfiability becomes undecidable. We also show expressive limitations of
atom-enriched μ-calculi, and explain how their expressive power depends on the
structure of atoms used and on the choice between basic or vectorial syntax.
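For orientation (our standard examples, not taken from the paper): the basic, scalar syntax binds one fixpoint variable per binder, while the vectorial syntax binds a simultaneous fixpoint of a whole system of formulas.

```latex
% Scalar (basic) syntax: the classical formula asserting that some path
% visits p-states infinitely often.
\nu X.\, \mu Y.\, \Diamond\bigl((p \wedge X) \vee Y\bigr)

% Vectorial syntax: a mutual greatest fixpoint of a two-component system.
\nu (X_1, X_2).\, \bigl(\, p \wedge \Diamond X_2,\; q \wedge \Diamond X_1 \,\bigr)
```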
Full Abstraction for Nominal General References
Published in Logical Methods in Computer Science, Vol. 5 (3:8) 2009, pp. 1–69.
Program Equivalence with Names
The nu-calculus of Pitts and Stark was introduced as a paradigmatic
functional language with a very basic local-state effect: references of unit
type. These were called names, and the motto of the new language went as
follows:
"Names are created with local scope, can be tested for equality, and are
passed around via function application, but that is all."
Because of this limited framework, the hope was that fully abstract models
and complete proof techniques could be obtained. However, it was soon
realised that the behaviour of nu-calculus programs is quite intricate, and
program equivalence in particular is surprisingly difficult to capture. Here we
shall focus on the following "hard" equivalence.
new x, y in λf. (f x = f y)  ==  λf. true
We shall examine attempts and proofs of the above, explain the advantages
and disadvantages of the proof methods, and discuss why program
equivalence in this simple language remains to date a mystery.
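A rough Haskell analogue of the two sides, modelling names as IORef () and name equality as reference equality (our encoding: in full Haskell IO an effectful f can break the equivalence, whereas the nu-calculus's restricted functions cannot see the private names x and y):

```haskell
import Data.IORef (IORef, newIORef)

type Name = IORef ()

-- Left-hand side: two private fresh names, then the comparison test.
lhs :: IO ((Name -> IO Bool) -> IO Bool)
lhs = do
  x <- newIORef ()
  y <- newIORef ()
  pure (\f -> (==) <$> f x <*> f y)

-- Right-hand side: ignore the argument and answer True.
rhs :: IO ((Name -> IO Bool) -> IO Bool)
rhs = pure (\_ -> pure True)

main :: IO ()
main = do
  test  <- lhs
  test' <- rhs
  z <- newIORef ()                 -- an observer's own fresh name
  b <- test (\n -> pure (n == z))  -- n == z is False for both x and y ...
  print b                          -- ... so the comparison yields True
  b' <- test' (\n -> pure (n == z))
  print b'                         -- True by construction
```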
On the nature of the lexicon: the status of rich lexical meanings
The main goal of this paper is to show that there are many phenomena that pertain to the construction of truth-conditional compounds, that follow characteristic patterns, and whose explanation requires appealing to knowledge structures organized in specific ways. We review a number of phenomena, ranging from non-homogeneous modification and privative modification to polysemy and co-predication, which indicate that knowledge structures do play a role in obtaining truth-conditions. After that, we show that several extant accounts that invoke rich lexical meanings to explain such phenomena face problems related to inflexibility and lack of predictive power. We review different ways in which one might react to such problems as regards lexical meanings: go richer, go moderately richer, go thinner, and go moderately thinner. On the face of it, moderate positions look unstable, given the apparent lack of a clear cutoff point between the semantic and the conceptual, but also given that a very thin view and a very rich view may turn out to be indistinguishable in the long run. As far as we can see, the most pressing open questions concern this last issue: can there be a principled semantic/world-knowledge distinction? Where could it be drawn: at some upper level (e.g. enriched qualia structures) or at some basic level (e.g. constraints)? How do parsimony considerations affect these two different approaches? A thin-meanings approach postulates intermediate representations whose role in the interpretive process is not clear, while a rich-meanings approach seems to duplicate representations: the same representations that are stored in the lexicon would form part of conceptual representations. Both types of parsimony problems would be solved by assuming a direct relation between word forms and (parts of) conceptual or world knowledge, leading to a view that has been attributed to Chomsky (e.g. by Katz 1980) in which there is just syntax and encyclopedic knowledge.