The Potential of Restarts for ProbSAT
This work analyses the potential of restarts for probSAT, a quite successful
algorithm for k-SAT, by estimating its runtime distributions on random 3-SAT
instances that are close to the phase transition. We estimate an optimal
restart time from empirical data, reaching a potential speedup factor of 1.39.
Calculating restart times from fitted probability distributions reduces this
factor to a maximum of 1.30. A spin-off result is that the Weibull distribution
closely approximates the runtime distribution on over 93% of the instances used.
To exploit this potential, a machine learning pipeline is presented that computes
a restart time for a fixed-cutoff strategy. The main components of the
pipeline are a random forest for determining the distribution type and a neural
network for estimating the distribution's parameters. With the presented
approach, probSAT performs statistically significantly better than with Luby's
restart strategy or with no restarts at all. The approach is particularly
advantageous on hard problems.
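For a runtime distribution with CDF F and survival function S = 1 - F, the fixed-cutoff strategy that restarts every tau seconds has expected total runtime E[T_tau] = (integral of S from 0 to tau) / F(tau); an estimated optimal restart time minimizes this quantity. The sketch below shows how such a cutoff could be computed from a fitted Weibull distribution; the parameter values and the SciPy-based setup are illustrative assumptions, not the paper's pipeline.

```python
# Hedged sketch: optimal fixed-cutoff restart time from a fitted Weibull
# runtime distribution. Parameter values are assumptions, not the paper's.
import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def expected_runtime_with_restart(tau, dist):
    """E[T_tau] = E[min(T, tau)] / P(T <= tau) for a fixed cutoff tau.

    E[min(T, tau)] equals the integral of the survival function on [0, tau].
    """
    mass, _ = quad(dist.sf, 0, tau)   # integral of the survival function
    p_finish = dist.cdf(tau)          # probability a run finishes by tau
    return np.inf if p_finish == 0 else mass / p_finish

# Example: a Weibull fit with shape < 1 (heavy tail), where restarts pay off.
runtimes = stats.weibull_min(c=0.5, scale=100.0)  # assumed fitted parameters

res = minimize_scalar(expected_runtime_with_restart, args=(runtimes,),
                      bounds=(1e-3, 1e4), method="bounded")
print(f"optimal cutoff ~ {res.x:.1f}: expected runtime {res.fun:.1f} "
      f"vs no-restart mean {runtimes.mean():.1f}")
```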
The Configurable SAT Solver Challenge (CSSC)
It is well known that different solution strategies work well for different
types of instances of hard combinatorial problems. As a consequence, most
solvers for the propositional satisfiability problem (SAT) expose parameters
that allow them to be customized to a particular family of instances. In the
international SAT competition series, these parameters are ignored: solvers are
run using a single default parameter setting (supplied by the authors) for all
benchmark instances in a given track. While this competition format rewards
solvers with robust default settings, it does not reflect the situation faced
by a practitioner who only cares about performance on one particular
application and can invest some time into tuning solver parameters for this
application. The new Configurable SAT Solver Challenge (CSSC) compares
solvers in this latter setting, scoring each solver by the performance it
achieved after a fully automated configuration step. This article describes the
CSSC in more detail, and reports the results obtained in its two instantiations
so far, CSSC 2013 and CSSC 2014.
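In other words, the setting replaces "run the default" with "configure, then score". A minimal sketch of that protocol follows; the configurator interface and the PAR10-style scoring are generic assumptions, not the CSSC's actual tooling.

```python
# Hedged sketch of a CSSC-style evaluation loop. The configurator interface,
# budget handling, and PAR10 scoring are illustrative assumptions.
from typing import Callable, Dict, List

Config = Dict[str, float]

def evaluate(solver_run: Callable[[Config, str, float], float],
             config: Config, instances: List[str], cutoff: float) -> float:
    """Mean PAR10 score: timeouts count as 10x the cutoff."""
    scores = []
    for inst in instances:
        t = solver_run(config, inst, cutoff)  # runtime, or cutoff on timeout
        scores.append(t if t < cutoff else 10 * cutoff)
    return sum(scores) / len(scores)

def cssc_score(solver_run, default: Config, configurator,
               train: List[str], test: List[str],
               budget_s: float, cutoff: float) -> float:
    # 1) fully automated configuration on the training split
    tuned = configurator(solver_run, default, train, budget_s, cutoff)
    # 2) score the solver with the tuned configuration on unseen instances
    return evaluate(solver_run, tuned, test, cutoff)
```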
Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates
The optimization of algorithm (hyper-)parameters is crucial for achieving
peak performance across a wide range of domains, ranging from deep neural
networks to solvers for hard combinatorial problems. The resulting algorithm
configuration (AC) problem has attracted much attention from the machine
learning community. However, the proper evaluation of new AC procedures is
hindered by two key hurdles. First, AC benchmarks are hard to set up. Second,
and even more significantly, they are computationally expensive: a single run
of an AC procedure involves many costly runs of the target algorithm whose
performance is to be optimized in a given AC benchmark scenario. One common
workaround is to optimize cheap-to-evaluate artificial benchmark functions
(e.g., Branin) instead of actual algorithms; however, these have different
properties than realistic AC problems. Here, we propose an alternative
benchmarking approach that is similarly cheap to evaluate but much closer to
the original AC problem: replacing expensive benchmarks by surrogate benchmarks
constructed from AC benchmarks. These surrogate benchmarks approximate the
response surface corresponding to true target algorithm performance using a
regression model, and the original and surrogate benchmark share the same
(hyper-)parameter space. In our experiments, we construct and evaluate
surrogate benchmarks for hyperparameter optimization as well as for AC problems
that involve performance optimization of solvers for hard combinatorial
problems, drawing training data from the runs of existing AC procedures. We
show that our surrogate benchmarks capture the important overall characteristics
of the AC scenarios from which they were derived, such as high- and
low-performing regions, while being much easier to use and orders of magnitude
cheaper to evaluate.
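The core construction is simple: fit a regression model to logged (configuration, performance) pairs and then call the model instead of the target algorithm. A minimal sketch is below; the random-forest choice and the log-runtime transform are plausible assumptions, not the paper's exact setup.

```python
# Hedged sketch of a model-based surrogate benchmark: a regression model
# trained on logged (configuration, performance) pairs stands in for the
# expensive target algorithm.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class SurrogateBenchmark:
    def __init__(self, configs: np.ndarray, runtimes: np.ndarray):
        # fit on log runtimes, which typically vary over orders of magnitude
        self.model = RandomForestRegressor(n_estimators=100, random_state=0)
        self.model.fit(configs, np.log10(runtimes))

    def __call__(self, config: np.ndarray) -> float:
        """Predicted runtime for one configuration; cheap to evaluate."""
        return float(10 ** self.model.predict(config.reshape(1, -1))[0])

# Training data would come from logged runs of existing AC procedures:
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))               # 4 numeric parameters (toy data)
y = 10 ** (1 + X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(500))
bench = SurrogateBenchmark(X, y)
print(bench(np.array([0.2, 0.9, 0.5, 0.5])))  # surrogate "run", no solver needed
```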
A Linear Weight Transfer Rule for Local Search
The Divide and Distribute Fixed Weights algorithm (ddfw) is a dynamic local
search SAT-solving algorithm that transfers weight from satisfied to falsified
clauses in local minima. ddfw is remarkably effective on several hard
combinatorial instances. Yet, despite its success, it has received little study
since its debut in 2005. In this paper, we propose three modifications to the
base algorithm: a linear weight transfer method that moves a dynamic amount of
weight between clauses in local minima, an adjustment to how satisfied clauses
are chosen in local minima to give weight, and a weighted-random method of
selecting variables to flip. We implemented our modifications to ddfw on top of
the solver yalsat. Our experiments show that our modifications boost the
performance compared to the original ddfw algorithm on multiple benchmarks,
including those from the past three years of SAT competitions. Moreover, our
improved solver exclusively solves hard combinatorial instances that refute a
conjecture of Ahmed et al. (2014) on the lower bounds of two van der Waerden
numbers, and it performs well on a hard graph-coloring instance that has been
open for over three decades.
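The move being modified is the weight transfer performed in a local minimum. The sketch below shows a linear transfer rule in a ddfw-style solver; the coefficients, initial weight, and donor-selection heuristic are illustrative assumptions rather than the paper's exact choices.

```python
# Hedged sketch of a linear weight transfer step in a ddfw-style local
# search: in a local minimum, each falsified clause takes a dynamic,
# linearly determined amount of weight from a satisfied neighbour.
import random

def linear_weight_transfer(weights, falsified, satisfied_neighbours,
                           a=0.1, c=1.0, init_weight=8.0):
    """weights: clause index -> float weight.
    falsified: indices of currently falsified clauses (local minimum).
    satisfied_neighbours: clause index -> satisfied clauses sharing a variable.
    """
    for clause in falsified:
        donors = satisfied_neighbours[clause]
        if not donors:
            continue
        # prefer a donor that has accumulated more than its initial weight;
        # otherwise fall back to a random satisfied neighbour
        heavy = [d for d in donors if weights[d] > init_weight]
        donor = random.choice(heavy if heavy else donors)
        # linear rule: the transferred amount scales with the donor's weight
        amount = min(a * weights[donor] + c, weights[donor])
        weights[donor] -= amount
        weights[clause] += amount
```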
Universal performance bounds of restart
As has long been known to computer scientists, the performance of
probabilistic algorithms characterized by relatively large runtime fluctuations
can be improved by applying a restart, i.e., episodic interruption of a
randomized computational procedure followed by the initialization of a new,
statistically independent realization. A similar effect of restart-induced
process acceleration could potentially be possible in the context of enzymatic
reactions, where dissociation of the enzyme-substrate intermediate corresponds
to restarting the catalytic step of the reaction. To date, a significant number
of analytical results have been obtained in physics and computer science
regarding the effect of restart on the completion time statistics in various
model problems; however, the fundamental limits of restart efficiency remain
unknown. Here we derive a range of universal statistical inequalities that
offer constraints on the effect that restart could impose on the completion
time of a generic stochastic process. The corresponding bounds are expressed
via simple statistical metrics of the original process, such as its harmonic
mean, median, and mode, and are thus remarkably practical. We test our
analytical predictions with multiple numerical examples, discuss the
implications arising from them, and outline important avenues for future work.
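For context, the quantity such bounds constrain can be stated via the standard fixed-restart identity from the restart literature (background only, not one of the paper's new inequalities): for a process with completion time T and survival function S(t) = Pr(T > t), restarting with a fixed period tau gives

```latex
% Standard identity for restart with a fixed period \tau (background only):
\[
  \langle T_\tau \rangle
    = \frac{\int_0^{\tau} S(t)\,\mathrm{d}t}{1 - S(\tau)},
  \qquad
  \langle T \rangle = \int_0^{\infty} S(t)\,\mathrm{d}t ,
\]
% so restart at period \tau accelerates completion whenever
% \langle T_\tau \rangle < \langle T \rangle.
```

Bounds of the kind the abstract describes relate the best achievable restarted completion time to simple statistics of T such as its harmonic mean, median, and mode.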
Preprocessing and Stochastic Local Search in Maximum Satisfiability
Problems that ask for an optimal solution to their instances are called optimization problems. The maximum satisfiability (MaxSAT) problem is a well-studied combinatorial optimization problem with many applications in domains such as cancer therapy design, electronic markets, hardware debugging and routing. Many problems, including the aforementioned ones, can be encoded in MaxSAT. MaxSAT thus serves as a general optimization paradigm, and advances in MaxSAT algorithms translate to advances in solving other problems.
In this thesis, we analyze the effects of MaxSAT preprocessing, the process of reformulating the input instance prior to solving, on the perceived costs of solutions during search. We show that, after preprocessing, most MaxSAT solvers may misinterpret the costs of non-optimal solutions. Many MaxSAT algorithms use the non-optimal solutions they find to guide the search, so the misinterpretation of costs may misguide it.
Towards remedying this issue, we introduce and study the concept of locally minimal solutions. We show that for some of the central preprocessing techniques for MaxSAT, the perceived cost of a locally minimal solution to a preprocessed instance equals the cost of the corresponding reconstructed solution to the original instance.
We develop a stochastic local search algorithm for MaxSAT, called LMS-SLS, which is prepended with a preprocessor and searches over locally minimal solutions. We implement LMS-SLS and analyze the performance of its components, particularly the effects of preprocessing and of computing locally minimal solutions, and compare LMS-SLS with SATLike, a state-of-the-art SLS solver for MaxSAT.
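For background on the family of algorithms LMS-SLS belongs to, the sketch below shows a minimal SLS loop for weighted MaxSAT. It is generic background, not the thesis's algorithm: the flip heuristic, noise parameter, and cost bookkeeping are all assumptions.

```python
# Hedged sketch of a generic SLS loop for weighted MaxSAT: repeatedly pick
# a falsified clause and flip one of its variables, mixing random-walk and
# greedy steps, while tracking the best assignment seen.
import random

def sls_maxsat(clauses, weights, n_vars, max_flips=100_000, noise=0.1):
    """clauses: list of clauses, each a list of nonzero ints (DIMACS-style
    literals); weights: per-clause weights. Returns the best assignment found.
    """
    assign = [random.choice([False, True]) for _ in range(n_vars + 1)]
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    cost = lambda: sum(w for c, w in zip(clauses, weights)
                       if not any(sat(l) for l in c))
    best, best_cost = assign[:], cost()
    for _ in range(max_flips):
        falsified = [c for c in clauses if not any(sat(l) for l in c)]
        if not falsified:
            break                            # all clauses satisfied
        clause = random.choice(falsified)    # focus on a falsified clause
        if random.random() < noise:          # random-walk step
            var = abs(random.choice(clause))
        else:                                # greedy step: best resulting cost
            def cost_if_flipped(v):
                assign[v] = not assign[v]
                c = cost()
                assign[v] = not assign[v]
                return c
            var = min((abs(l) for l in clause), key=cost_if_flipped)
        assign[var] = not assign[var]
        if cost() < best_cost:
            best, best_cost = assign[:], cost()
    return best, best_cost
```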