Automating Resolution is NP-Hard
We show that the problem of finding a Resolution refutation that is at most
polynomially longer than a shortest one is NP-hard. In the parlance of proof
complexity, Resolution is not automatizable unless P = NP. Indeed, we show it
is NP-hard to distinguish between formulas that have Resolution refutations of
polynomial length and those that do not have subexponential length refutations.
This also implies that Resolution is not automatizable in subexponential time
or quasi-polynomial time unless NP is included in SUBEXP or QP, respectively.
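As a reminder of the proof system in question: the resolution rule derives, from clauses C ∨ x and D ∨ ¬x, the resolvent C ∨ D, and a refutation is a derivation of the empty clause. The following minimal Python sketch (an illustration of the rule itself, not the paper's hardness construction) searches for a refutation by brute-force saturation:

```python
from itertools import combinations

def resolve(c1, c2):
    """Yield all resolvents of two clauses; clauses are frozensets of
    non-zero integer literals, with -v denoting the negation of v."""
    for lit in c1:
        if -lit in c2:
            yield frozenset((c1 - {lit}) | (c2 - {-lit}))

def refute(clauses):
    """Saturate the clause set under resolution; return True iff the
    empty clause is derivable, i.e. a Resolution refutation exists."""
    derived = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for a, b in combinations(derived, 2):
            for r in resolve(a, b):
                if not r:
                    return True
                if r not in derived:
                    new.add(r)
        if not new:
            return False
        derived |= new

# (x) AND (not x OR y) AND (not y) is unsatisfiable, so a refutation exists:
print(refute([{1}, {-1, 2}, {-2}]))  # True
```

Saturation visits exponentially many clauses in the worst case; the paper's point is that even approximating the *shortest* refutation is NP-hard.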
Complexity in Automation of SOS Proofs: An Illustrative Example
We present a case study in proving invariance
for a chaotic dynamical system, the logistic map, based on
Positivstellensatz refutations, with the aim of studying the
problems associated with developing a completely automated
proof system. We derive the refutation using two different forms
of the Positivstellensatz and compare the results to illustrate the
challenges in defining and classifying the ‘complexity’ of such
a proof. The results show the flexibility of the SOS framework
in converting a dynamics problem into a semialgebraic one as
well as in choosing the form of the proof. Yet it is this very
flexibility that complicates the process of automating the proof
system and classifying proof ‘complexity’.
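A genuine Positivstellensatz certificate is produced with a sum-of-squares/SDP toolchain; as a lightweight companion, the sketch below only checks numerically the invariance property at stake, with r = 4.0 assumed as the classic chaotic parameter (the abstract does not fix r):

```python
def logistic(x, r=4.0):
    """The logistic map x -> r*x*(1-x); r = 4.0 is an assumed choice in
    the classic chaotic regime (the abstract does not fix r)."""
    return r * x * (1.0 - x)

# Forward invariance of [0, 1]: for r <= 4 the maximum of r*x*(1-x) on
# [0, 1] is r/4 <= 1, attained at x = 0.5, so the interval maps into
# itself. Sample the interval as a numerical sanity check:
assert all(0.0 <= logistic(i / 10_000) <= 1.0 for i in range(10_001))
print("invariance of [0, 1] holds on all sample points")
```

The semialgebraic reformulation in the paper turns exactly this kind of interval-invariance claim into a polynomial nonnegativity statement that an SOS certificate can witness.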
Applying Formal Methods to Networking: Theory, Techniques and Applications
Despite its great importance, modern network infrastructure is remarkable for
the lack of rigor in its engineering. The Internet which began as a research
experiment was never designed to handle the users and applications it hosts
today. The lack of formalization of the Internet architecture meant limited
abstractions and modularity, especially for the control and management planes,
so that every new need required a new protocol built from scratch. This led
to an unwieldy, ossified Internet architecture resistant to any attempts at
formal verification, and to an Internet culture where expediency and pragmatism
are favored over formal correctness. Fortunately, recent work in the space of
clean slate Internet design---especially, the software defined networking (SDN)
paradigm---offers the Internet community another chance to develop the right
kind of architecture and abstractions. This has also led to a great resurgence
of interest in applying formal methods to the specification, verification, and
synthesis of networking protocols and applications. In this paper, we present a
self-contained tutorial on the formidable body of work that has been done in
formal methods, and a survey of their applications to networking.
Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
Trace Equivalence Decision: Negative Tests and Non-determinism
We consider security properties of cryptographic protocols that can be modeled using the notion of trace equivalence. The notion of equivalence is crucial when specifying privacy-type properties, like anonymity, vote-privacy, and unlinkability.
In this paper, we give a calculus that is close to the applied pi calculus and that allows one to capture most existing protocols that rely on classical cryptographic primitives. First, we propose a symbolic semantics for our calculus relying on constraint systems to represent infinite sets of possible traces, and we reduce the decidability of trace equivalence to deciding a notion of symbolic equivalence between sets of constraint systems. Second, we develop an algorithm allowing us to decide whether two sets of constraint systems are in symbolic equivalence or not. Altogether, this yields the first decidability result for trace equivalence for a general class of processes that may involve else branches and/or private channels (for a bounded number of sessions).
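For intuition, in the fully concrete, finite case trace equivalence simply asks whether two labelled transition systems exhibit the same set of observable traces; the paper's contribution is deciding this symbolically, over infinitely many traces, via constraint systems. A toy Python sketch of the concrete check (illustrative only, not the paper's algorithm), using the classic pair of non-deterministic processes that are trace-equivalent but resolve their choices at different points:

```python
def traces(lts, state):
    """Enumerate the maximal traces of a finite *acyclic* LTS given as
    {state: [(action, next_state), ...]}."""
    succs = lts.get(state, [])
    if not succs:
        yield ()
        return
    for action, nxt in succs:
        for rest in traces(lts, nxt):
            yield (action,) + rest

def trace_equivalent(lts1, s1, lts2, s2):
    """Concrete trace equivalence: equality of the sets of maximal traces."""
    return set(traces(lts1, s1)) == set(traces(lts2, s2))

# Both processes have traces {ab, ac}, but q commits to b-vs-c already
# at its first (non-deterministic) a-step:
p = {"p0": [("a", "p1")], "p1": [("b", "p2"), ("c", "p3")]}
q = {"q0": [("a", "q1"), ("a", "q2")],
     "q1": [("b", "q3")], "q2": [("c", "q4")]}
print(trace_equivalent(p, "p0", q, "q0"))  # True
```

In the cryptographic setting the traces also carry messages subject to attacker deduction, which is why the decision procedure works with constraint systems rather than explicit trace sets.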
The Power of Negative Reasoning
Semialgebraic proof systems have been studied extensively in proof complexity since the late 1990s to understand the power of Gröbner basis computations, linear and semidefinite programming hierarchies, and other methods. Such proof systems are defined alternately with only the original variables of the problem or with special formal variables for positive and negative literals, but there seems to have been no study of how these different definitions affect the power of the proof systems. We show for Nullstellensatz, polynomial calculus, Sherali-Adams, and sums-of-squares that adding formal variables for negative literals makes the proof systems exponentially stronger, with respect to the number of terms in the proofs. These separations are witnessed by CNF formulas that are easy for resolution, which establishes that polynomial calculus, Sherali-Adams, and sums-of-squares cannot efficiently simulate resolution without having access to variables for negative literals.
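The term-count effect being measured is visible already at the level of clause encodings: without twin variables, the wide clause x1 ∨ … ∨ xk must be encoded as the product (1 − x1)⋯(1 − xk) = 0, which expands into 2^k monomials, whereas with a twin variable y_i for each negated literal (plus side axioms x_i + y_i = 1) the same clause is the single monomial y1⋯yk. A small counting sketch (illustrative arithmetic, not the paper's separation):

```python
def poly_mul(p, q):
    """Multiply multilinear polynomials over {0,1}-valued variables,
    represented as dicts mapping frozensets of variable names to
    coefficients (x*x = x, so a monomial is just a set of variables)."""
    out = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = m1 | m2
            out[m] = out.get(m, 0) + c1 * c2
            if out[m] == 0:
                del out[m]
    return out

k = 6
# Without twin variables: expand (1 - x1) * ... * (1 - xk).
no_twins = {frozenset(): 1}
for i in range(1, k + 1):
    no_twins = poly_mul(no_twins, {frozenset(): 1, frozenset({f"x{i}"}): -1})

# With a twin variable y_i for each negated literal (side axioms
# x_i + y_i - 1 = 0 not expanded here), the clause is one monomial:
twins = {frozenset(f"y{i}" for i in range(1, k + 1)): 1}

print(len(no_twins), len(twins))  # 64 1
```

This is only the encoding-size gap for a single clause; the paper's exponential separations concern the total number of terms in entire proofs.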
A bibliography on formal methods for system specification, design and validation
Literature on the specification, design, verification, testing, and evaluation of avionics systems was surveyed, providing 655 citations. Journal papers, conference papers, and technical reports are included. Manual and computer-based methods were employed. Keywords used in the online search are listed.
Algebraic classifications for fragments of first-order logic and beyond
Complexity and decidability of logics form a major research area involving a
huge range of different logical systems. This calls for a unified and
systematic approach for the field. We introduce a research program based on an
algebraic approach to complexity classifications of fragments of first-order
logic (FO) and beyond. Our base system GRA, or general relation algebra, is
equiexpressive with FO. It resembles cylindric algebra but employs a finite
signature with only seven different operators. We provide a comprehensive
classification of the decidability and complexity of the systems obtained by
limiting the allowed sets of operators. We also give algebraic
characterizations of the best known decidable fragments of FO. Furthermore, to
move beyond FO, we introduce the notion of a generalized operator and briefly
study related systems.
Comment: Significantly updates the first version; the principal set of operations has changed.
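The abstract does not name GRA's seven operators, so the sketch below uses generic relation-algebra-style operations on relations over a finite domain (complement, intersection, existential projection in the spirit of cylindrification) purely to illustrate the kind of algebra of relations involved; it is not the paper's actual signature:

```python
from itertools import product

DOM = {0, 1}  # a tiny two-element domain

def complement(rel, arity):
    """Complement of a relation relative to DOM**arity."""
    return set(product(DOM, repeat=arity)) - rel

def intersect(rel1, rel2):
    """Intersection of two relations of the same arity."""
    return rel1 & rel2

def project_last(rel):
    """Existentially project away the last coordinate, the algebraic
    counterpart of first-order existential quantification."""
    return {t[:-1] for t in rel}

edge = {(0, 1), (1, 0)}             # a symmetric binary relation
has_neighbour = project_last(edge)  # == {(0,), (1,)}
non_edges = complement(edge, 2)     # == {(0, 0), (1, 1)}
```

Restricting which such operators are allowed yields the fragments whose decidability and complexity the paper classifies.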