9,575 research outputs found
Implementation methodology for using concurrent and collaborative approaches for theorem provers, with case studies of SAT and LCF style provers
Theorem provers are faced with the challenges of size and complexity, fuelled by the increasing range
of applications. The use of concurrent/distributed programming paradigms to engineer better theorem
provers merits serious investigation, as it provides more processing power and opportunities for
implementing novel approaches to theorem proving tasks hitherto infeasible in a sequential setting.
The core focus of this thesis is the investigation of these opportunities for two diverse theorem
prover settings, with an emphasis on desirable implementation criteria.
Concurrent programming is notoriously error-prone and hard to debug and evaluate. Thus, implementation
approaches which promote easy prototyping, portability, incremental development and effective isolation
of design and implementation can greatly aid the enterprise of experimentation with the application
of concurrent techniques to address specific theorem proving tasks. In this thesis, we have explored one
such approach by using Alice ML, a functional programming language with support for concurrency
and distribution, to implement the prototypes and have used programming abstractions to encapsulate
the implementations of the concurrent techniques used. The utility of this approach is illustrated via
proof-of-concept prototypes of concurrent systems for two diverse case studies of theorem proving: the
propositional satisfiability problem (SAT) and LCF style (first-order) theorem proving, addressing some
previously unexplored parallelisation opportunities for each, as follows.
SAT: We have developed a novel hybrid approach for SAT and implemented it in a prototype,
DPLL-Stalmarck. It combines two complementary algorithms for SAT, DPLL and Stalmarck's. The two
solvers run asynchronously and co-operate through dynamic information exchange. The interaction
of the solvers has been encapsulated as a programming abstraction. Compared to the standalone
DPLL solver, DPLL-Stalmarck shows significant performance gains for two of the three problem classes
considered, and comparable behaviour otherwise. As an exploratory research effort, we have developed a
novel algorithm, Concurrent Stalmarck, by applying concurrent techniques to the Stalmarck algorithm,
and implemented a proof-of-concept prototype. The parallel implementation of the saturation
technique of the Stalmarck algorithm, as realised in Concurrent Stalmarck, has been encapsulated
as a programming abstraction.
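The interplay of the two procedures can be sketched in miniature. The following is a hedged Python sketch, not the thesis's Alice ML implementation: a Stalmarck-style branch/merge saturation first learns the literals forced under both branches of each variable, and a plain DPLL search is then seeded with those learned units. (The thesis runs the two solvers asynchronously with dynamic exchange; this sketch sequentialises them for clarity, and all function names are illustrative.)

```python
def unit_propagate(clauses, assignment):
    """Repeatedly apply the unit rule; return the extended assignment, or None on conflict."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                var, val = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == val:
                        satisfied = True
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return None                      # clause falsified: conflict
            if len(unassigned) == 1:             # unit clause: force the literal
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

def stalmarck_learned_units(clauses, variables):
    """Branch/merge saturation: literals forced under both v=True and v=False are learned."""
    learned = {}
    for v in variables:
        pos = unit_propagate(clauses, {**learned, v: True})
        neg = unit_propagate(clauses, {**learned, v: False})
        if pos is None and neg is None:
            return None                          # both branches conflict: unsatisfiable
        if pos is None:
            learned.update(neg)                  # v must be False
        elif neg is None:
            learned.update(pos)                  # v must be True
        else:
            for u in pos:                        # merge: keep facts common to both branches
                if u in neg and pos[u] == neg[u]:
                    learned[u] = pos[u]
    return learned

def dpll(clauses, assignment):
    """Plain DPLL: propagate, then branch on the smallest unassigned variable."""
    assignment = unit_propagate(clauses, assignment)
    if assignment is None:
        return None
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment
    v = min(free)
    for val in (True, False):
        result = dpll(clauses, {**assignment, v: val})
        if result is not None:
            return result
    return None

def solve(clauses):
    """DPLL seeded with units learned by the Stalmarck-style saturation pass."""
    variables = sorted({abs(l) for c in clauses for l in c})
    seed = stalmarck_learned_units(clauses, variables)
    if seed is None:
        return None
    return dpll(clauses, seed)
```

On formulas where the saturation already forces most variables, the seeded DPLL search starts from a much smaller residual problem, which is the intuition behind the co-operative gains reported above.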
LCF: Provision of programmable concurrent primitives enables customisation of concurrent techniques
to specific theorem proving scenarios. In this case study, we have developed a multilayered approach to
support programmable, sound extensions for an LCF prover: use programming abstractions to implement
the concurrent techniques; use these to develop novel tacticals (control structures to apply tactics),
incorporating concurrent techniques; and use these to develop novel proof search procedures. This
approach has been implemented in a prototypical LCF style first-order prover, using Alice ML. New
tacticals developed are: fastest-first; distributed composition; crossTalk, a novel tactic
which uses dynamic, collaborative information exchange to handle unification across multiple
sub-goals with shared meta-variables; and a tactic performing simultaneous proof and refutation
attempts on propositional (sub-)goals by invoking an external SAT solver (from the SAT case
study) as a counter-example finder. Examples of concrete theorem proving scenarios are provided,
demonstrating the utility of these extensions. The synthesis of a variety of automatic proof
search procedures has been demonstrated, illustrating the scope of programmability and
customisation enabled by our multilayered approach.
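The fastest-first tactical can be illustrated with a small racing combinator. The following is a hedged Python sketch, not the thesis's Alice ML tacticals: each tactic is modelled as a function that returns a list of sub-goals on success or raises on failure, every tactic runs in its own thread, and the first success wins. All names here are illustrative.

```python
import queue
import threading

def fastest_first(tactics, goal, timeout=5.0):
    """Apply all tactics to the goal concurrently; return (tactic name, sub-goals)
    for the first tactic to succeed, or raise if every tactic fails."""
    results = queue.Queue()

    def worker(tactic):
        try:
            results.put(('ok', tactic.__name__, tactic(goal)))
        except Exception:
            results.put(('fail', tactic.__name__, None))

    for t in tactics:
        threading.Thread(target=worker, args=(t,), daemon=True).start()

    failures = 0
    while failures < len(tactics):
        status, name, subgoals = results.get(timeout=timeout)
        if status == 'ok':
            return name, subgoals                # first success wins the race
        failures += 1
    raise RuntimeError('all tactics failed')
```

Because losing threads are daemons, their results are simply discarded; in an LCF setting the kernel's abstract theorem type keeps such racing sound, since every returned result was still produced through the trusted inference rules.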
Liveness of Randomised Parameterised Systems under Arbitrary Schedulers (Technical Report)
We consider the problem of verifying liveness for systems with a finite, but
unbounded, number of processes, commonly known as parameterised systems.
Typical examples of such systems include distributed protocols (e.g. for the
dining philosopher problem). Unlike the case of verifying safety, proving
liveness is still considered extremely challenging, especially in the presence
of randomness in the system. In this paper we consider liveness under arbitrary
(including unfair) schedulers, which is often considered a desirable property
in the literature of self-stabilising systems. We introduce an automatic method
of proving liveness for randomised parameterised systems under arbitrary
schedulers. Viewing liveness as a two-player reachability game (between
Scheduler and Process), our method is a CEGAR approach that synthesises a
progress relation for Process that can be symbolically represented as a
finite-state automaton. The method is incremental and exploits both
Angluin-style L*-learning and SAT-solvers. Our experiments show that our
algorithm is able to prove liveness automatically for well-known randomised
distributed protocols, including Lehmann-Rabin Randomised Dining Philosopher
Protocol and randomised self-stabilising protocols (such as the Israeli-Jalfon
Protocol). To the best of our knowledge, this is the first fully-automatic
method that can prove liveness for randomised protocols.
Comment: Full version of CAV'16 paper.
Applying Formal Methods to Networking: Theory, Techniques and Applications
Despite its great importance, modern network infrastructure is remarkable for
the lack of rigor in its engineering. The Internet which began as a research
experiment was never designed to handle the users and applications it hosts
today. The lack of formalization of the Internet architecture meant limited
abstractions and modularity, especially for the control and management planes,
thus requiring for every new need a new protocol built from scratch. This led
to an unwieldy ossified Internet architecture resistant to any attempts at
formal verification, and an Internet culture where expediency and pragmatism
are favored over formal correctness. Fortunately, recent work in the space of
clean slate Internet design---especially, the software defined networking (SDN)
paradigm---offers the Internet community another chance to develop the right
kind of architecture and abstractions. This has also led to a great resurgence
in interest of applying formal methods to specification, verification, and
synthesis of networking protocols and applications. In this paper, we present a
self-contained tutorial of the formidable amount of work that has been done in
formal methods, and present a survey of their applications to networking.
Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials.
Parameterized Synthesis
We study the synthesis problem for distributed architectures with a
parametric number of finite-state components. Parameterized specifications
arise naturally in a synthesis setting, but thus far it was unclear how to
detect realizability and how to perform synthesis in a parameterized setting.
Using a classical result from verification, we show that for a class of
specifications in indexed LTL\X, parameterized synthesis in token ring networks
is equivalent to distributed synthesis in a network consisting of a few copies
of a single process. Adapting a well-known result from distributed synthesis,
we show that the latter problem is undecidable. We describe a semi-decision
procedure for the parameterized synthesis problem in token rings, based on
bounded synthesis. We extend the approach to parameterized synthesis in
token-passing networks with arbitrary topologies, and show applicability on a
simple case study. Finally, we sketch a general framework for parameterized
synthesis based on cutoffs and other parameterized verification techniques.
Comment: Extended version of TACAS 2012 paper, 29 pages.
Learning to Prove Safety over Parameterised Concurrent Systems (Full Version)
We revisit the classic problem of proving safety over parameterised
concurrent systems, i.e., an infinite family of finite-state concurrent systems
that are represented by some finite (symbolic) means. An example of such an
infinite family is a dining philosopher protocol with any number n of processes
(n being the parameter that defines the infinite family). Regular model
checking is a well-known generic framework for modelling parameterised
concurrent systems, where an infinite set of configurations (resp. transitions)
is represented by a regular set (resp. regular transducer). Although verifying
safety properties in the regular model checking framework is undecidable in
general, many sophisticated semi-algorithms have been developed in the past
fifteen years that can successfully prove safety in many practical instances.
In this paper, we propose a simple solution to synthesise regular inductive
invariants that makes use of Angluin's classic L* algorithm (and its variants).
We provide a termination guarantee when the set of configurations reachable
from a given set of initial configurations is regular. We have tested our L*-based
algorithm on standard (as well as new) examples in regular model checking,
including the dining philosopher protocol, the dining cryptographer protocol,
and several mutual exclusion protocols (e.g. Bakery, Burns, Szymanski, and
German). Our experiments show that, despite the simplicity of our solution, it
can perform at least as well as existing semi-algorithms.
Comment: Full version of FMCAD'17 paper.
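A regular inductive invariant must satisfy three conditions: it contains the initial configurations, it excludes the bad configurations, and it is closed under the transition relation. These conditions can be checked concretely on bounded instances. The following is a minimal Python sketch for a hypothetical token-passing array (not one of the paper's benchmarks), and it only checks a hand-written candidate invariant explicitly rather than learning one with L* or representing it as an automaton.

```python
from itertools import product

# Configurations are words over {'t', 'n'} (token / no token);
# the transition moves a token one position to the right.
def initial(n):
    return {'t' + 'n' * (n - 1)}

def post(configs):
    """One-step successors of a set of configurations."""
    succs = set()
    for w in configs:
        for i in range(len(w) - 1):
            if w[i] == 't' and w[i + 1] == 'n':
                succs.add(w[:i] + 'nt' + w[i + 2:])
    return succs

def invariant(w):
    """Candidate regular invariant: exactly one token (the language n*tn*)."""
    return w.count('t') == 1

def bad(w):
    """Safety violation: two or more tokens."""
    return w.count('t') >= 2

def check_inductive(max_n):
    """Check initiation, safety, and consecution on all instances up to max_n."""
    for n in range(1, max_n + 1):
        words = {''.join(p) for p in product('tn', repeat=n)}
        inv = {w for w in words if invariant(w)}
        assert initial(n) <= inv                 # initiation: Init is inside the invariant
        assert all(not bad(w) for w in inv)      # safety: invariant avoids bad states
        assert post(inv) <= inv                  # consecution: invariant is closed under post
    return True
```

The paper's contribution is to synthesise such an invariant automatically, as a finite automaton, by posing exactly these three conditions as queries to an L*-style learner.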
A Complexity-Based Hierarchy for Multiprocessor Synchronization
For many years, Herlihy's elegant computability-based Consensus Hierarchy has
been our best explanation of the relative power of various types of
multiprocessor synchronization objects when used in deterministic algorithms.
However, key to this hierarchy is treating synchronization instructions as
distinct objects, an approach that is far from the real world, where
multiprocessor programs apply synchronization instructions to collections of
arbitrary memory locations. We were surprised to realize that, when considering
instructions applied to memory locations, the computability-based hierarchy
collapses. This leaves open the question of how to better capture the power of
various synchronization instructions.
In this paper, we provide an approach to answering this question. We present
a hierarchy of synchronization instructions, classified by their space
complexity in solving obstruction-free consensus. Our hierarchy provides a
classification of combinations of known instructions that seems to fit with our
intuition of how useful some are in practice, while questioning the
effectiveness of others. We prove an essentially tight characterization of the
power of buffered read and write instructions. Interestingly, we show a similar
result for multi-location atomic assignments.
Approximate Convex Optimization by Online Game Playing
Lagrangian relaxation and approximate optimization algorithms have received
much attention in the last two decades. Typically, the running time of these
methods to obtain an ε-approximate solution is proportional to 1/ε².
Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for
fractional packing linear programs which runs in O(1/ε) iterations. The latter
algorithm requires solving a convex quadratic program at every iteration, an
optimization subroutine which
dominates the theoretical running time.
We give an algorithm for convex programs with strictly convex constraints
which runs in time proportional to 1/ε. The algorithm does not require
solving any quadratic program, but uses gradient steps and elementary
operations only. Problems which have strictly convex constraints include
maximum entropy frequency estimation, portfolio optimization with loss risk
constraints, and various computational problems in signal processing.
As a side product, we also obtain a simpler version of Bienstock and
Iyengar's result for general linear programming, with similar running time.
We derive these algorithms using a new framework for deriving convex
optimization algorithms from online game playing algorithms, which may be of
independent interest.
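The game-playing derivation can be illustrated on its simplest instance. The sketch below is a generic multiplicative-weights construction in the spirit of the framework, not the paper's algorithm for strictly convex constraints: a constraint player maintains weights over the rows of A, a primal player best-responds over the columns, and the averaged play approximately minimises max_i (Ax)_i over the probability simplex. All names are illustrative.

```python
import math

def approx_minimax(A, iterations=2000, eta=0.05):
    """Approximately minimise max_i (Ax)_i over the probability simplex by
    playing multiplicative weights (rows) against best response (columns)."""
    m, n = len(A), len(A[0])
    weights = [1.0] * m
    x_avg = [0.0] * n
    for _ in range(iterations):
        total = sum(weights)
        p = [w / total for w in weights]         # constraint player's mixed strategy
        # Primal best response: the column minimising the weighted constraint value.
        costs = [sum(p[i] * A[i][j] for i in range(m)) for j in range(n)]
        j = min(range(n), key=costs.__getitem__)
        x_avg[j] += 1.0 / iterations             # accumulate the averaged primal play
        # Constraint player: multiplicatively boost rows with large value at e_j.
        for i in range(m):
            weights[i] *= math.exp(eta * A[i][j])
    return x_avg
```

The regret bound of the multiplicative-weights player is what converts this repeated game into an approximation guarantee; the paper's contribution is obtaining the faster 1/ε rate for strictly convex constraints within the same online-game-playing template.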
Artificial Science – a simulation test-bed for studying the social processes of science
It is likely that there are many different social processes occurring in different parts of science and at different times, and that these processes will impact upon the nature, quality and quantity of the knowledge that is produced in a multitude of ways and to different extents. It seems clear to me that sometimes the social processes act to increase the reliability of knowledge (such as when there is a tradition of independently reproducing experiments), but sometimes they do the opposite (as when a closed clique acts to perpetuate itself by reducing the opportunity for criticism). Simulation can perform a valuable role here by providing and refining possible linkages between these kinds of social processes and their results in terms of knowledge. Earlier simulations of this sort include Gilbert et al. [10]. The simulation described herein aims to progress this work with a more structural and descriptive approach that relates what is done by individuals and journals to what collectively results in terms of the overall process.