Understanding Space in Proof Complexity: Separations and Trade-offs via Substitutions
For current state-of-the-art DPLL SAT-solvers, the two main bottlenecks are
the amounts of time and memory used. In proof complexity, these resources
correspond to the length and space of resolution proofs. There has been a long
line of research investigating these proof complexity measures, but while
strong results have been established for length, our understanding of space and
how it relates to length has remained quite poor. In particular, the question
whether resolution proofs can be optimized for length and space simultaneously,
or whether there are trade-offs between these two measures, has remained
essentially open.
In this paper, we remedy this situation by proving a host of length-space
trade-off results for resolution. Our collection of trade-offs covers almost the
whole range of values for the space complexity of formulas, and most of the
trade-offs are superpolynomial or even exponential and essentially tight. Using
similar techniques, we show that these trade-offs in fact extend to the
exponentially stronger k-DNF resolution proof systems, which operate with
formulas in disjunctive normal form with terms of bounded arity k. We also
answer, in the affirmative, the open question of whether the k-DNF resolution
systems form a strict hierarchy with respect to space.
Our key technical contribution is the following, somewhat surprising,
theorem: Any CNF formula F can be transformed by simple variable substitution
into a new formula F' such that if F has the right properties, F' can be proven
in essentially the same length as F, whereas on the other hand the minimal
number of lines one needs to keep in memory simultaneously in any proof of F'
is lower-bounded by the minimal number of variables needed simultaneously in
any proof of F. Applying this theorem to so-called pebbling formulas defined in
terms of pebble games on directed acyclic graphs, we obtain our results.

Comment: This paper is a merged and updated version of the two ECCC technical reports TR09-034 and TR09-047, and it hence subsumes these two reports.
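A minimal sketch may help make the substitution step concrete. The abstract does not fix the substitution function, so the following Python sketch assumes the binary XOR substitution that is standard in this line of work: each variable x of F is replaced by x_1 XOR x_2 over fresh variables, and the substituted clauses are expanded back into CNF.

```python
from itertools import product

def xor_substitute(cnf):
    """Substitute each variable x with x1 XOR x2 and re-expand to CNF.

    A clause is a tuple of signed integers, e.g. (1, -2) means x1 OR NOT x2.
    Variable v maps to the fresh pair (2*v - 1, 2*v).
    """
    def lit_as_xor_clauses(lit):
        v = abs(lit)
        a, b = 2 * v - 1, 2 * v           # fresh variable pair for v
        if lit > 0:                       # x true  <=>  a XOR b = 1
            return [(a, b), (-a, -b)]     # CNF of (a XOR b)
        else:                             # x false <=>  a XOR b = 0
            return [(a, -b), (-a, b)]     # CNF of NOT(a XOR b)

    new_cnf = []
    for clause in cnf:
        # The substituted clause is a disjunction of XOR constraints;
        # expand by distributing: pick one CNF clause per original literal.
        parts = [lit_as_xor_clauses(l) for l in clause]
        for choice in product(*parts):
            new_cnf.append(tuple(sorted(set(l for cl in choice for l in cl))))
    return new_cnf

# Example: the clause (x1 OR NOT x2) expands into 4 clauses over 4 variables.
print(xor_substitute([(1, -2)]))
```

Note the blow-up is only constant per clause, which is why length is essentially preserved while space can jump from counting variables to counting clauses.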
Trading inference effort versus size in CNF Knowledge Compilation
Knowledge Compilation (KC) studies the compilation of boolean functions f into
some formalism F, which allows answering all queries of a certain kind in
polynomial time. Due to its relevance for SAT solving, we concentrate on the
query type "clausal entailment" (CE), i.e., whether a clause C follows from f
or not, and we consider subclasses of CNF, i.e., clause-sets F with special
properties. In this report we do not allow auxiliary variables (except in the
Outlook), and thus F needs to be equivalent to f.
We consider the hierarchies UC_k <= WC_k, which were introduced by the
authors in 2012. Each level allows CE queries. The first two levels are
well-known classes for KC. Namely UC_0 = WC_0 is the same as PI as studied in
KC, that is, f is represented by the set of all prime implicates, while UC_1 =
WC_1 is the same as UC, the class of unit-refutation complete clause-sets
introduced by del Val 1994. We show that for each k there are (sequences of)
boolean functions with polysize representations in UC_{k+1}, but with an
exponential lower bound on representations in WC_k. Such a separation was
previously known only for k=0. We also consider PC < UC, the class of
propagation-complete clause-sets. We show that there are (sequences of) boolean
functions with polysize representations in UC, while there is an exponential
lower bound for representations in PC. These separations are steps towards a
general conjecture determining the representation power of the hierarchies PC_k
< UC_k <= WC_k. The strong form of this conjecture also allows auxiliary
variables, as discussed in depth in the Outlook.

Comment: 43 pages, second version with literature updates. Proceeds with the separation results from the discontinued arXiv:1302.442
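To make the clausal-entailment query concrete: a clause-set F is unit-refutation complete (the class UC = UC_1, due to del Val) when every clause C implied by F can be certified by adding the negation of C and running unit propagation to a conflict. The following Python sketch (illustrative; the clause encoding is our own) implements that one-call CE test.

```python
def unit_propagate(clauses):
    """Run unit propagation; return False iff a conflict (empty clause) arises."""
    clauses = [set(c) for c in clauses]
    assignment = set()
    changed = True
    while changed:
        changed = False
        for c in clauses:
            live = {l for l in c if -l not in assignment}
            if not live:
                return False            # all literals falsified: refuted
            if len(live) == 1:
                (unit,) = live
                if unit not in assignment:
                    assignment.add(unit)
                    changed = True
    return True

def entails(F, C):
    """Clausal-entailment query: does F imply clause C?

    For F in UC (= UC_1) this single unit-propagation call is complete:
    F |= C iff propagating F together with NOT C hits a conflict.
    """
    negated = [(-l,) for l in C]        # NOT C as unit clauses
    return not unit_propagate(list(F) + negated)

# (x1 OR x2) AND (NOT x1 OR x2) entails x2, and propagation detects it.
print(entails([(1, 2), (-1, 2)], (2,)))
```

The hierarchies UC_k and WC_k relax this to k nested levels of propagation; the separations quoted above say that climbing one level can shrink representations exponentially.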
Minimization for Generalized Boolean Formulas
The minimization problem for propositional formulas is an important
optimization problem in the second level of the polynomial hierarchy. In
general, the problem is Sigma-2-complete under Turing reductions, but
restricted versions are tractable. We study the complexity of minimization for
formulas in two established frameworks for restricted propositional logic: The
Post framework, allowing arbitrarily nested formulas over a set of Boolean
connectors, and the constraint setting, allowing generalizations of CNF
formulas. In the Post case, we obtain a dichotomy result: Minimization is
solvable in polynomial time or coNP-hard. This result also applies to Boolean
circuits. For CNF formulas, we obtain new minimization algorithms for a large
class of formulas, and give strong evidence that we have covered all
polynomial-time cases.
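For illustration (this is not one of the paper's algorithms), the sketch below shows the two ingredients behind the Sigma-2 upper bound: a naive 2^n equivalence check, which is the coNP verification step for a guessed smaller formula, and a weak polynomial search that merely strips redundant clauses from a CNF.

```python
from itertools import product

def evaluate(cnf, assignment):
    """Evaluate a clause-set under a dict {var: bool}."""
    return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in cnf)

def equivalent(F, G, variables):
    """Naive 2^n equivalence test: the coNP verification step behind
    'guess a smaller formula, check equivalence' (hence Sigma-2 membership)."""
    for bits in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, bits))
        if evaluate(F, a) != evaluate(G, a):
            return False
    return True

def drop_redundant_clauses(F, variables):
    """A weak search space: keep only clauses whose removal changes the
    function. This yields an irredundant subset, not a global minimum."""
    G = list(F)
    for c in list(G):
        trial = [d for d in G if d != c]
        if equivalent(G, trial, variables):
            G = trial
    return G

F = [(1, 2), (1,), (-2, 1)]                # the unit clause x1 subsumes the rest
print(drop_redundant_clauses(F, [1, 2]))   # -> [(1,)]
```

The dichotomy results say when this gap between the easy irredundancy check and true minimization can be closed in polynomial time, and when it provably (under standard assumptions) cannot.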
Rule-based Machine Learning Methods for Functional Prediction
We describe a machine learning method for predicting the value of a
real-valued function, given the values of multiple input variables. The method
induces solutions from samples in the form of ordered disjunctive normal form
(DNF) decision rules. A central objective of the method and representation is
the induction of compact, easily interpretable solutions. This rule-based
decision model can be extended to search efficiently for similar cases prior to
approximating function values. Experimental results on real-world data
demonstrate that the new techniques are competitive with existing machine
learning and statistical methods and can sometimes yield superior regression
performance.

Comment: See http://www.jair.org/ for any accompanying files.
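A minimal sketch of the representation (not the induction algorithm itself): an ordered list of conjunctive rules is tried top to bottom, and the first rule whose conditions all hold supplies the predicted value. The rule encoding below is our own illustrative choice.

```python
class RuleListRegressor:
    """Ordered rule list for regression, as a toy model of ordered DNF
    decision rules: each rule is (conditions, value); the first rule whose
    conditions all hold predicts its value, else a default applies.

    A condition is (feature_index, op, threshold) with op in {'<=', '>'}.
    """
    def __init__(self, rules, default):
        self.rules = rules
        self.default = default

    def predict_one(self, x):
        for conditions, value in self.rules:      # ordered: first match wins
            if all((x[i] <= t) if op == '<=' else (x[i] > t)
                   for i, op, t in conditions):
                return value
        return self.default

rules = [
    ([(0, '<=', 2.0), (1, '>', 5.0)], 10.0),   # IF x0<=2 AND x1>5 THEN 10
    ([(0, '>', 2.0)], 3.5),                    # ELSE IF x0>2 THEN 3.5
]
model = RuleListRegressor(rules, default=0.0)
print(model.predict_one([1.0, 7.0]))           # -> 10.0
print(model.predict_one([4.0, 1.0]))           # -> 3.5
```

The appeal of this representation, as the abstract notes, is interpretability: each prediction is justified by one short, human-readable conjunction.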
ZETA - Zero-Trust Authentication: Relying on Innate Human Ability, not Technology
Reliable authentication requires the devices and channels involved in the process to be trustworthy; otherwise, authentication secrets can easily be compromised. Given the unceasing efforts of attackers worldwide, such trustworthiness is increasingly not a given. A variety of technical solutions, such as utilising multiple devices/channels and verification protocols, has the potential to mitigate the threat of untrusted communications to a certain extent. Yet such technical solutions make two assumptions: (1) users have access to multiple devices, and (2) attackers will not resort to hacking the human using social engineering techniques. In this paper, we propose and explore the potential of using human-based computation instead of solely technical solutions to mitigate the threat of untrusted devices and channels. ZeTA (Zero Trust Authentication on untrusted channels) has the potential to allow people to authenticate despite compromised channels or communications and easily observed usage. Our contributions are threefold: (1) We propose the ZeTA protocol, with a formal definition and security analysis, that utilises semantics and human-based computation to ameliorate the problem of untrusted devices and channels. (2) We outline a security analysis to assess the envisaged performance of the proposed authentication protocol. (3) We report on a usability study that explores the viability of relying on human computation in this context.
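The abstract does not detail the protocol, but a hypothetical toy version conveys the idea of human-based computation over semantics: the user memorises a secret concept and answers randomly chosen yes/no attribute challenges in their head, so an observer of one session learns only a few semantic bits rather than a reusable secret. Everything in the sketch below (the attribute table, the challenge format) is our assumption, not the paper's construction.

```python
import random

# Hypothetical toy model of a ZeTA-style session (the actual protocol
# differs): the verifier issues semantic yes/no challenges that the user
# answers mentally, so no full reusable secret crosses the channel.
SEMANTICS = {                       # assumed shared background knowledge
    'lemon':  {'edible': True,  'yellow': True,  'animal': False},
    'canary': {'edible': False, 'yellow': True,  'animal': True},
    'steel':  {'edible': False, 'yellow': False, 'animal': False},
}

def run_session(secret, answer_fn, rounds=2):
    """Verifier side: ask `rounds` random attribute challenges and accept
    only if every human-computed answer matches the stored profile."""
    attributes = list(next(iter(SEMANTICS.values())))
    for _ in range(rounds):
        attr = random.choice(attributes)
        if answer_fn(attr) != SEMANTICS[secret][attr]:
            return False
    return True

# An honest user who memorised the secret 'lemon' answers from memory:
honest = lambda attr: SEMANTICS['lemon'][attr]
print(run_session('lemon', honest))   # -> True
```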
A logic-based analysis of Dempster-Shafer theory
Dempster-Shafer (DS) theory is formulated in terms of propositional logic, using the implicit notion of provability underlying DS theory. Dempster-Shafer theory can be modeled in terms of propositional logic by the tuple (Σ, ϱ), where Σ is a set of propositional clauses and ϱ is an assignment of mass to each clause Σ_i ∈ Σ. It is shown that the disjunction of minimal support clauses for a clause Σ_i with respect to a set Σ of propositional clauses, ξ(Σ_i, Σ), when represented in terms of symbols for the ϱ_i's, corresponds to a symbolic representation of the Dempster-Shafer belief function for δ_i. The combination of belief functions using Dempster's rule of combination corresponds to a combination of the corresponding support clauses. The disjointness of the Boolean formulas representing DS belief functions is shown to be necessary. Methods of computing disjoint formulas using network reliability techniques are discussed.

In addition, the computational complexity of deriving DS belief functions, including that of the logic-based methods which are the focus of this paper, is explored. Because of intractability even for moderately sized problem instances, efficient approximation methods are proposed for such computations. Finally, implementations of DS theory based on domain restrictions of DS theory, hypertree embeddings, and the ATMS are examined.
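Dempster's rule of combination, which the paper maps onto combination of support clauses, is standard and easy to state in code: masses of intersecting focal elements multiply, and mass landing on the empty set (the conflict K) is renormalised away. The sketch below is a direct implementation of the rule, not of the paper's clause-based machinery.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Each mass function maps frozenset focal elements to masses summing to 1.
    Mass assigned to empty intersections (the conflict K) is renormalised away.
    """
    combined, conflict = {}, 0.0
    for (B, b_mass), (C, c_mass) in product(m1.items(), m2.items()):
        A = B & C
        if A:
            combined[A] = combined.get(A, 0.0) + b_mass * c_mass
        else:
            conflict += b_mass * c_mass
    if conflict >= 1.0:
        raise ValueError("total conflict: rule undefined")
    return {A: mass / (1.0 - conflict) for A, mass in combined.items()}

# Two sources of evidence over the frame {a, b}:
m1 = {frozenset('a'): 0.6, frozenset('ab'): 0.4}
m2 = {frozenset('b'): 0.5, frozenset('ab'): 0.5}
print(dempster_combine(m1, m2))   # masses on {a}, {b}, and {a, b}
```

The quadratic product over focal elements also hints at the intractability the paper discusses: the number of focal elements can grow exponentially with the frame.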
Representational information: a new general notion and measure of information
In what follows, we introduce the notion of representational information (information conveyed by sets of dimensionally defined objects about their superset of origin) as well as an original deterministic mathematical framework for its analysis and measurement. The framework, based in part on categorical invariance theory [Vigo, 2009], unifies three key constructs of universal science: invariance, complexity, and information. From this unification we define the amount of information that a well-defined set of objects R carries about its finite superset of origin S as the rate of change in the structural complexity of S (as determined by its degree of categorical invariance) whenever the objects in R are removed from the set S. The measure captures deterministically the significant role that context and category structure play in determining the relative quantity and quality of subjective information conveyed by particular objects in multi-object stimuli.
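On one schematic reading (the paper's exact normalisation may differ), the definition can be written as a relative change in structural complexity; here Ψ denotes the invariance-based complexity measure of [Vigo, 2009], which we treat as given.

```latex
% Schematic reading, assuming a structural-complexity functional \Psi:
% the representational information a subset R carries about its finite
% superset of origin S is the relative change in complexity when the
% objects of R are removed from S.
\[
  I(R, S) \;=\; \frac{\Psi(S) - \Psi(S \setminus R)}{\Psi(S)},
\]
% where \Psi(S) is determined by the degree of categorical invariance of S.
```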