Changing a semantics: opportunism or courage?
The generalized models for higher-order logics introduced by Leon Henkin, and
their multiple offspring over the years, have become a standard tool in many
areas of logic. Even so, discussion has persisted about their technical status,
and perhaps even their conceptual legitimacy. This paper gives a systematic
view of generalized model techniques, discusses what they mean in mathematical
and philosophical terms, and presents a few technical themes and results about
their role in algebraic representation, calibrating provability, lowering
complexity, understanding fixed-point logics, and achieving set-theoretic
absoluteness. We also show how thinking about Henkin's approach to semantics of
logical systems in this generality can yield new results, dispelling the
impression of adhocness. This paper is dedicated to Leon Henkin, a deep
logician who has changed the way we all work, while also being an always open,
modest, and encouraging colleague and friend.

Comment: 27 pages. To appear in: The life and work of Leon Henkin: Essays on his contributions (Studies in Universal Logic), eds. Manzano, M., Sain, I. and Alonso, E., 201
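As a one-line reminder of the construction at issue (stated here for monadic second-order logic only; the paper treats the general higher-order case), a general (Henkin) model is a pair

    \[ \mathfrak{M} = (M, \mathcal{D}), \qquad \mathcal{D} \subseteq \mathcal{P}(M), \]

with the second-order quantifier relativized to \(\mathcal{D}\):

    \[ \mathfrak{M} \models \forall X\,\varphi(X) \iff \mathfrak{M} \models \varphi(A) \ \text{for every } A \in \mathcal{D}. \]

Standard ("full") semantics is the special case \(\mathcal{D} = \mathcal{P}(M)\); completeness of the usual axiomatic calculus is recovered over the general models whose \(\mathcal{D}\) satisfies suitable comprehension closure.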
Program schemes with deep pushdown storage.
Inspired by recent work of Meduna on deep pushdown automata, we consider the computational power of a class of basic program schemes, TeX, based around assignments, while-loops and non-deterministic guessing, but with access to a deep pushdown stack which, apart from having the usual push and pop instructions, also has deep-push instructions that allow elements to be pushed to locations deep within the stack. We syntactically define sub-classes of TeX by restricting the occurrences of pops, pushes and deep-pushes, and capture the complexity classes NP and PSPACE. Furthermore, we show that all problems accepted by program schemes of TeX are in EXPTIME.
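As a concrete (if toy) illustration of the storage discipline described above, here is a minimal Python sketch of a deep pushdown store; the class name and indexing convention are our own assumptions, not Meduna's or the paper's definitions.

    # A toy deep pushdown store: ordinary push/pop at the top, plus a
    # deep-push that inserts an element k positions below the top.
    class DeepPushdown:
        def __init__(self):
            self._items = []              # top of stack = end of list

        def push(self, a):
            self._items.append(a)

        def pop(self):
            return self._items.pop()

        def deep_push(self, k, a):
            # k = 0 behaves like an ordinary push; larger k pushes the
            # element deeper inside the stack.
            self._items.insert(len(self._items) - k, a)

For example, after push('x'), push('y'), deep_push(1, 'z'), the store reads ['x', 'z', 'y'] bottom-to-top: the deep-push slid 'z' in beneath the top element.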
Backward Reachability of Array-based Systems by SMT solving: Termination and Invariant Synthesis
The safety of infinite state systems can be checked by a backward
reachability procedure. For certain classes of systems, it is possible to prove
the termination of the procedure and hence conclude the decidability of the
safety problem. Although backward reachability is property-directed, it can
unnecessarily explore (large) portions of the state space of a system which are
not required to verify the safety property under consideration. To avoid this,
invariants can be used to dramatically prune the search space. The problem,
then, is how to guess appropriate invariants. In this paper, we present a
fully declarative and symbolic approach to the mechanization of backward
reachability of infinite state systems manipulating arrays by Satisfiability
Modulo Theories solving. Theories are used to specify the topology and the data
manipulated by the system. We identify sufficient conditions on the theories to
ensure the termination of backward reachability and we show the completeness of
a method for invariant synthesis (obtained as the dual of backward
reachability), again, under suitable hypotheses on the theories. We also
present a pragmatic approach that interleaves invariant synthesis and backward
reachability, so that a fix-point for the set of backward reachable states is
more easily obtained. Finally, we discuss heuristics that allow us to derive an
implementation of the techniques in the model checker MCMT, showing remarkable
speed-ups on a significant set of safety problems extracted from a variety of
sources.

Comment: Accepted for publication in Logical Methods in Computer Science.
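To make the loop structure concrete, the following Python sketch runs backward reachability on a one-variable toy system using the Z3 SMT solver. This is our own minimal illustration, not the MCMT implementation; the array theories and invariant synthesis of the paper are not modelled.

    # Backward reachability on a toy counter system, via SMT (Z3).
    # Init: x = 0; transition: x' = x + 1; bad states: x = 5.
    from z3 import Int, Or, Not, Solver, substitute, sat, unsat

    x = Int('x')
    init = (x == 0)
    bad = (x == 5)

    def pre(states):
        # Pre-image under the functional transition x' = x + 1:
        # x can reach `states` in one step iff states[x := x + 1] holds.
        return substitute(states, (x, x + 1))

    frontier, reached = bad, bad
    while True:
        s = Solver()
        s.add(init, frontier)
        if s.check() == sat:          # a bad state is backward-reachable
            print("unsafe")
            break
        new = pre(frontier)
        s = Solver()
        s.add(new, Not(reached))      # any genuinely new states?
        if s.check() == unsat:        # fix-point reached: system is safe
            print("safe")
            break
        frontier, reached = new, Or(reached, new)

On this system the loop reports "unsafe" after five pre-image steps, when the frontier meets x = 0; replacing the bad states by an unreachable set (e.g. x == -1) makes the fix-point test fire instead.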
A Computational Model for Quantum Measurement
Is the dynamical evolution of physical systems objectively a manifestation of
information processing by the universe? We find that an affirmative answer has
important consequences for the measurement problem. In particular, we calculate
the amount of quantum information processing involved in the evolution of
physical systems, assuming a finite degree of fine-graining of Hilbert space.
This assumption is shown to imply that there is a finite capacity to sustain
the immense entanglement that measurement entails. When this capacity is
overwhelmed, the system's unitary evolution becomes computationally unstable
and the system suffers an information transition (`collapse'). Classical
behaviour arises from the rapid cycles of unitary evolution and information
transitions.
Thus, the fine-graining of Hilbert space determines the location of the
`Heisenberg cut', the mesoscopic threshold separating the microscopic, quantum
system from the macroscopic, classical environment. The model can be viewed as
a probabilistic complement to decoherence that completes the measurement
process by turning decohered improper mixtures of states into proper mixtures.
It is shown to provide a natural resolution to the measurement problem and the
basis problem.

Comment: 24 pages; REVTeX4; published version.
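For orientation, the improper/proper distinction invoked above is the textbook one: decoherence entangles the system with its environment and leaves the reduced state diagonal but only improperly mixed,

    \[ |\Psi\rangle = \sum_i c_i\,|i\rangle_S |E_i\rangle_E \;\longrightarrow\; \rho_S = \mathrm{Tr}_E\,|\Psi\rangle\langle\Psi| \approx \sum_i |c_i|^2\,|i\rangle\langle i| . \]

The paper's information transition is what licenses reading \(\rho_S\) as a proper mixture, with the system actually in some \(|i\rangle\) with Born probability \(p_i = |c_i|^2\). (This display is standard decoherence notation, not the paper's specific calculation.)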
What's Decidable About Sequences?
We present a first-order theory of sequences with integer elements,
Presburger arithmetic, and regular constraints, which can model significant
properties of data structures such as arrays and lists. We give a decision
procedure for the quantifier-free fragment, based on an encoding into the
first-order theory of concatenation; the procedure has PSPACE complexity. The
quantifier-free fragment of the theory of sequences can express properties such
as sortedness and injectivity, as well as Boolean combinations of periodic and
arithmetic facts relating the elements of the sequence and their positions
(e.g., "for all even i's, the element at position i has value i+3 or 2i"). The
resulting expressive power is orthogonal to that of the most expressive
decidable logics for arrays. Some examples demonstrate that the fragment is
also suitable to reason about sequence-manipulating programs within the
standard framework of axiomatic semantics.

Comment: Fixed a few lapses in the Mergesort example.
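To illustrate the flavour of properties the fragment covers, here is a concrete Python check of the abstract's own example property on a given sequence; the decision procedure itself (via the encoding into the theory of concatenation) is, of course, symbolic rather than per-instance.

    # Concrete check of the example property: every even position i
    # holds either i + 3 or 2*i. (Illustrative only; the paper decides
    # such properties symbolically, for all sequences at once.)
    def example_property(seq):
        return all(seq[i] in (i + 3, 2 * i) for i in range(0, len(seq), 2))

    print(example_property([3, 7, 4, 9, 7]))   # True: 3 = 0+3, 4 = 2*2, 7 = 4+3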
A Logical Framework for Reputation Systems
Reputation systems are meta systems that record, aggregate and distribute information about the past behaviour of principals in an application. Typically, these applications are large-scale open distributed systems where principals are virtually anonymous and, a priori, have no knowledge about each other's trustworthiness. Reputation systems serve two primary purposes: helping principals decide whom to trust, and providing an incentive for principals to behave well. A logical policy-based framework for reputation systems is presented. In the framework, principals specify policies which state precise requirements on the past behaviour of other principals that must be fulfilled in order for interaction to take place. The framework consists of a formal model of behaviour, based on event structures; a declarative logical language for specifying properties of past behaviour; and efficient dynamic algorithms for checking whether a particular behaviour satisfies a property from the language. It is shown how the framework can be extended in several ways, most notably to encompass parameterized events and quantification over parameters. In an extended application, it is illustrated how the framework can be applied to dynamic history-based access control for safe execution of unknown and untrusted programs.
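As a schematic example of dynamic, history-based policy checking in this spirit (our own toy encoding, far simpler than the paper's event-structure model and logic), consider a policy requiring that every delivery be preceded by an unmatched payment:

    # Toy history-based policy check: interaction is allowed only if the
    # principal's past behaviour satisfies the policy. The event alphabet
    # and the policy itself are illustrative assumptions.
    def pays_before_delivering(history):
        unmatched_payments = 0
        for event in history:
            if event == "pay":
                unmatched_payments += 1
            elif event == "deliver":
                if unmatched_payments == 0:
                    return False          # delivery with no prior payment
                unmatched_payments -= 1
        return True

    print(pays_before_delivering(["pay", "deliver", "pay", "deliver"]))  # True
    print(pays_before_delivering(["deliver", "pay"]))                    # False

A single left-to-right pass suffices here, which mirrors the paper's emphasis on efficient dynamic algorithms: the check can be maintained incrementally as new events are appended to the history.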