On variables with few occurrences in conjunctive normal forms
We consider the question of the existence of variables with few occurrences
in boolean conjunctive normal forms (clause-sets). Let mvd(F) for a clause-set
F denote the minimal variable-degree, the minimum of the number of occurrences
of variables. Our main result is the upper bound mvd(F) <= nM(surp(F)) <=
surp(F) + 1 + log_2(surp(F)) for lean clause-sets F, depending on the
surplus surp(F).
- Lean clause-sets, defined as having no non-trivial autarkies, generalise
minimally unsatisfiable clause-sets.
- For the surplus we have surp(F) <= delta(F) = c(F) - n(F), where the
deficiency delta(F) is the difference between the number of clauses c(F)
and the number of variables n(F).
- nM(k) is the k-th "non-Mersenne" number, skipping in the sequence of
natural numbers all numbers of the form 2^n - 1.
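For illustration, the non-Mersenne numbers can be enumerated directly; a minimal sketch (the 1-based indexing, matching nM(1) = 2 so that the bound holds for surplus 1, is my reading of the definition):

```python
def nM(k):
    """The k-th non-Mersenne number (1-based): enumerate the natural
    numbers 1, 2, 3, ... and skip every number of the form 2^n - 1
    (i.e. 1, 3, 7, 15, ...); the sequence starts 2, 4, 5, 6, 8, ..."""
    count, m = 0, 0
    while count < k:
        m += 1
        if (m + 1) & m:        # nonzero iff m is NOT of the form 2^n - 1
            count += 1
    return m
```

For small k one can check the stated estimate nM(k) <= k + 1 + log_2(k) directly, e.g. nM(2) = 4 = 2 + 1 + 1.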
We conjecture that this bound is nearly precise for minimally unsatisfiable
clause-sets.
As an application of the upper bound we obtain that (arbitrary!) clause-sets
F with mvd(F) > nM(surp(F)) must have a non-trivial autarky (so clauses can be
removed satisfiability-equivalently by an assignment satisfying some clauses
and not touching the other clauses). It is open whether such an autarky can be
found in polynomial time.
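The autarky condition itself is easy to check for a given partial assignment; a minimal sketch, with literals encoded as nonzero integers (a convention of mine, not the paper's notation):

```python
def is_autarky(assignment, clauses):
    """Check whether a partial assignment (dict: variable -> bool) is an
    autarky for a clause-set: every clause it touches (i.e. containing an
    assigned variable, in either polarity) must be satisfied by it.
    Literals are nonzero ints, with -v denoting the negation of v."""
    for clause in clauses:
        touched = any(abs(lit) in assignment for lit in clause)
        satisfied = any(
            abs(lit) in assignment and assignment[abs(lit)] == (lit > 0)
            for lit in clause
        )
        if touched and not satisfied:
            return False
    return True
```

For example, {x1 -> true, x3 -> true} is an autarky for {{x1, x2}, {-x1, x3}, {x4}}: it satisfies every clause it touches and leaves {x4} untouched.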
As a future application we discuss the classification of minimally
unsatisfiable clause-sets depending on the deficiency.
Comment: 14 pages. Revision contains more explanations, and more information regarding the sharpness of the bound.
Hardness measures and resolution lower bounds
Various "hardness" measures have been studied for resolution, providing
theoretical insight into the proof complexity of resolution and its fragments,
as well as explanations for the hardness of instances in SAT solving. In this
report we aim at a unified view of a number of hardness measures, including
different measures of width, space and size of resolution proofs. We also
extend these measures to all clause-sets (possibly satisfiable).
Comment: 43 pages, preliminary version (yet the application part is only sketched, with proofs missing).
Trading inference effort versus size in CNF Knowledge Compilation
Knowledge Compilation (KC) studies compilation of boolean functions f into
some formalism F, which allows answering all queries of a certain kind in
polynomial time. Due to its relevance for SAT solving, we concentrate on the
query type "clausal entailment" (CE), i.e., whether a clause C follows from f
or not, and we consider subclasses of CNF, i.e., clause-sets F with special
properties. In this report we do not allow auxiliary variables (except in the
Outlook), and thus F needs to be equivalent to f.
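A CE query reduces to unsatisfiability of F together with the negated literals of C; a minimal sketch of deciding this by unit propagation alone (which is exactly what unit-refutation completeness guarantees to suffice), with literals encoded as nonzero integers:

```python
def unit_refutes(clauses):
    """True iff iterated unit propagation derives the empty clause."""
    clauses = [frozenset(c) for c in clauses]
    assignment = set()                       # literals currently set true
    changed = True
    while changed:
        changed = False
        remaining = []
        for c in clauses:
            if any(l in assignment for l in c):
                continue                     # clause already satisfied
            c = frozenset(l for l in c if -l not in assignment)
            if not c:
                return True                  # empty clause derived
            if len(c) == 1:
                assignment.add(next(iter(c)))
                changed = True
            else:
                remaining.append(c)
        clauses = remaining
    return False

def clausal_entailment(F, C):
    """F |= C iff F plus the unit clauses negating C is unsatisfiable;
    for unit-refutation complete F (the class UC), unit propagation
    alone decides this."""
    return unit_refutes(list(F) + [[-l] for l in C])
```

For general clause-sets a False answer here only means "no unit refutation"; membership in UC is what turns this into a complete CE decision procedure.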
We consider the hierarchies UC_k <= WC_k, which were introduced by the
authors in 2012. Each level allows CE queries. The first two levels are
well-known classes for KC. Namely UC_0 = WC_0 is the same as PI as studied in
KC, that is, f is represented by the set of all prime implicates, while UC_1 =
WC_1 is the same as UC, the class of unit-refutation complete clause-sets
introduced by del Val 1994. We show that for each k there are (sequences of)
boolean functions with polysize representations in UC_{k+1}, but with an
exponential lower bound on representations in WC_k. Such a separation was
previously only known for k=0. We also consider PC < UC, the class of
propagation-complete clause-sets. We show that there are (sequences of) boolean
functions with polysize representations in UC, while there is an exponential
lower bound for representations in PC. These separations are steps towards a
general conjecture determining the representation power of the hierarchies PC_k
< UC_k <= WC_k. The strong form of this conjecture also allows auxiliary
variables, as discussed in depth in the Outlook.
Comment: 43 pages, second version with literature updates. Proceeds with the separation results from the discontinued arXiv:1302.442
Labor Contracts and the Taft-Hartley Act
The amount of information stored on the internet grows daily, and naturally the requirements on the systems used to search for and analyse information increase. As a step towards meeting these raised requirements, this study investigates whether it is possible for an automated text-analysis system to distinguish certain groups and categories of words in a text, and more specifically whether it is possible to distinguish words with a high information value from words with a low information value. This is important for enabling optimisation of systems for global surveillance and information retrieval. The study is carried out using word spaces, which are often used in text analysis to model language. The distributional character of certain categories of words is examined by studying the intrinsic dimensionality of the space, locally around different words. Based on the results from the study of the intrinsic dimensionality, where there seem to be differences in the distributional character between categories of words, an algorithm is implemented for classifying words based on the dimensionality data. The classification algorithm is tested on different categories. The result strengthens the thesis that there could exist useful differences between the distributional character of different categories of words.
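One common way to estimate intrinsic dimensionality locally is the Levina-Bickel maximum-likelihood estimator over nearest-neighbour distances; a minimal sketch (the choice of estimator and of the parameter k are my illustrative assumptions, not necessarily what the thesis uses):

```python
import math

def local_intrinsic_dim(points, i, k=10):
    """Levina-Bickel MLE of the intrinsic dimensionality around
    points[i], from the distances to its k nearest neighbours.
    `points` is a list of equal-length tuples (word-space vectors)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # k nearest-neighbour distances, skipping the zero distance to itself
    d = sorted(dist(points[i], p) for p in points)[1:k + 1]
    logs = [math.log(d[-1] / dj) for dj in d[:-1]]
    return 1.0 / (sum(logs) / len(logs))
```

Points sampled from a low-dimensional structure embedded in a high-dimensional word space should yield a correspondingly low local estimate.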
Experimental evidence for mixed reality states
Recently researchers at the University of Illinois coupled a real pendulum to
its virtual counterpart. They observed that the two pendulums suddenly start to
move in synchrony if their lengths are sufficiently close. In this synchronized
state, the boundary between the real system and the virtual system is blurred,
that is, the pendulums are in a mixed reality state. An instantaneous,
bidirectional coupling is a prerequisite for mixed reality states. In this
article we explore the implications of mixed reality states in the context of
controlling real-world systems.
Comment: 2 pages, 2 figures.
On the van der Waerden numbers w(2;3,t)
We present results and conjectures on the van der Waerden numbers w(2;3,t)
and on the new palindromic van der Waerden numbers pdw(2;3,t). We have computed
the new number w(2;3,19) = 349, and we provide lower bounds for 20 <= t <= 39,
where for t <= 30 we conjecture these lower bounds to be exact. The lower
bounds for 24 <= t <= 30 refute the conjecture that w(2;3,t) <= t^2, and we
present an improved conjecture. We also investigate regularities in the good
partitions (certificates) to better understand the lower bounds.
Motivated by such regularities, we introduce *palindromic van der Waerden
numbers* pdw(k; t_0,...,t_{k-1}), defined as ordinary van der Waerden numbers
w(k; t_0,...,t_{k-1}), however only allowing palindromic solutions (good
partitions), defined as reading the same from both ends. In contrast to the
situation for ordinary van der Waerden numbers, these "numbers" actually need
to be pairs of numbers. We compute pdw(2;3,t) for 3 <= t <= 27, and we provide
lower bounds, which we conjecture to be exact, for t <= 35.
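Certificates of both kinds can be verified mechanically; a minimal sketch, with a 2-colouring of {1,...,n} given as a 0/1 tuple (the encoding conventions are my own):

```python
def has_ap(block, length):
    """True iff the set of integers `block` contains an arithmetic
    progression of the given length."""
    block = set(block)
    if len(block) < length:
        return False
    top = max(block)
    for a in block:                      # first element of the progression
        for d in range(1, top):          # common difference
            if all(a + j * d in block for j in range(length)):
                return True
    return False

def is_good_partition(coloring, t):
    """coloring: 0/1 tuple over {1,...,n}.  A good partition
    (certificate for w(2;3,t) > n): no 3-term arithmetic progression
    in colour 0 and no t-term progression in colour 1."""
    zeros = {i + 1 for i, c in enumerate(coloring) if c == 0}
    ones = {i + 1 for i, c in enumerate(coloring) if c == 1}
    return not has_ap(zeros, 3) and not has_ap(ones, t)

def is_palindromic(coloring):
    """Palindromic solutions read the same from both ends."""
    return tuple(coloring) == tuple(reversed(coloring))
```

For example, colouring {1,2,5,6} with 0 and {3,4,7,8} with 1 certifies w(2;3,3) > 8.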
All computations are based on SAT solving, and we discuss the various
relations between SAT solving and Ramsey theory. In particular, we introduce a
novel (open-source) SAT solver, the tawSolver, which performs best on the SAT
instances studied here, and which is in fact the original DLL solver, but with
an efficient implementation and a modern heuristic typical for look-ahead
solvers (applying the theory developed in the SAT handbook article of the
second author).
Comment: Second version 25 pages, updates of numerical data, improved formulations, and extended discussions on SAT. Third version 42 pages, with SAT solver data (especially for the new SAT solver) and improved representation. Fourth version 47 pages, with updates and added explanations.
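The DLL procedure underlying such solvers fits in a few lines; a minimal sketch (the branching heuristic here, picking a literal from a shortest clause, is a placeholder and not the tawSolver heuristic):

```python
def dpll(clauses):
    """Basic DLL/DPLL: unit propagation plus splitting on a variable.
    Returns True iff the clause-set (literals as nonzero ints, -v
    negating v) is satisfiable."""
    clauses = [frozenset(c) for c in clauses]
    # unit propagation to a fixed point
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        (l,) = unit
        remaining = []
        for c in clauses:
            if l in c:
                continue                  # clause satisfied
            c = c - {-l}
            if not c:
                return False              # empty clause: conflict
            remaining.append(c)
        clauses = remaining
    if not clauses:
        return True                       # all clauses satisfied
    # split: pick a literal from a shortest clause (placeholder heuristic)
    l = next(iter(min(clauses, key=len)))
    return dpll(clauses + [frozenset([l])]) or dpll(clauses + [frozenset([-l])])
```

The look-ahead refinements mentioned above concern how the splitting literal is chosen, not this basic recursion.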