Algorithms for Weighted Boolean Optimization
The Pseudo-Boolean Optimization (PBO) and Maximum Satisfiability (MaxSAT)
problems are natural optimization extensions of Boolean Satisfiability (SAT).
In the recent past, different algorithms have been proposed for PBO and for
MaxSAT, despite the existence of straightforward mappings from PBO to MaxSAT
and vice-versa. This paper proposes Weighted Boolean Optimization (WBO), a new
unified framework that aggregates and extends PBO and MaxSAT. In addition, the
paper proposes a new unsatisfiability-based algorithm for WBO, based on recent
unsatisfiability-based algorithms for MaxSAT. Besides standard MaxSAT, the new
algorithm can also be used to solve weighted MaxSAT and PBO, handling
pseudo-Boolean constraints either natively or by translation to clausal form.
Experimental results illustrate that unsatisfiability-based algorithms for
MaxSAT can be orders of magnitude more efficient than existing dedicated
algorithms. Finally, the paper illustrates how other algorithms for either PBO
or MaxSAT can be extended to WBO.
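To make the WBO formulation concrete, here is a minimal brute-force sketch of the problem it solves (not the paper's unsatisfiability-based algorithm): hard clauses must all be satisfied, and the total weight of falsified soft clauses is minimized. All names are illustrative.

```python
from itertools import product

def wbo_brute_force(n_vars, hard, soft):
    """Solve a tiny WBO instance by enumeration.

    hard: list of clauses; soft: list of (clause, weight) pairs.
    A clause is a list of non-zero ints: v means var v is True,
    -v means var v is False (1-indexed, DIMACS-style).
    """
    def satisfied(clause, assign):
        return any((lit > 0) == assign[abs(lit) - 1] for lit in clause)

    best_cost, best_assign = None, None
    for assign in product([False, True], repeat=n_vars):
        if not all(satisfied(c, assign) for c in hard):
            continue  # hard clauses are mandatory
        cost = sum(w for c, w in soft if not satisfied(c, assign))
        if best_cost is None or cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_cost, best_assign

# (x1 or x2) is hard; prefer x1 False (weight 3) and x2 False (weight 1).
print(wbo_brute_force(2, hard=[[1, 2]], soft=[([-1], 3), ([-2], 1)]))
# -> (1, (False, True)): falsifying the cheaper soft clause is optimal.
```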
Boolean lexicographic optimization: algorithms & applications
Multi-Objective Combinatorial Optimization (MOCO) problems find a
wide range of practical applications, some of which involve Boolean
variables and constraints. This paper develops and evaluates algorithms for solving
MOCO problems, defined on Boolean domains, and where the optimality criterion
is lexicographic. The proposed algorithms build on existing algorithms for either
Maximum Satisfiability (MaxSAT), Pseudo-Boolean Optimization (PBO), or Integer
Linear Programming (ILP). Experimental results, obtained on problem instances
from haplotyping with pedigrees and software package dependencies, show that
the proposed algorithms can provide significant performance gains over
state-of-the-art MaxSAT, PBO, and ILP algorithms. Finally, the paper also shows that
lexicographic optimization conditions are observed in the majority of the problem
instances from the MaxSAT evaluations, motivating the development of dedicated
algorithms that can exploit lexicographic optimization conditions in general MaxSAT
problem instances.This work was partially funded by SFI PI Grant 09/IN.1/I2618, EU grants FP7-ICT-217069 and FP7-ICT-214898, FCT grant ATTEST (CMU-PT/ELE/0009/2009), FCT PhD grant SFRH/BD/ 28599/2006, CICYT Projects TIN2009-14704-C03-01 and TIN2010-20967-C04-03, and by INESC-ID multiannual funding from the PIDDAC program funds
borealis - A generalized global update algorithm for Boolean optimization problems
Optimization problems with Boolean variables that fall into the
nondeterministic polynomial (NP) class are of fundamental importance in
computer science, mathematics, physics and industrial applications. Most
notably, solving constraint-satisfaction problems, which are related to
spin-glass-like Hamiltonians in physics, remains a difficult numerical task. As
such, there has been great interest in designing efficient heuristics to solve
these computationally difficult problems. Inspired by parallel tempering Monte
Carlo in conjunction with the rejection-free isoenergetic cluster algorithm
developed for Ising spin glasses, we present a generalized global update
optimization heuristic that can be applied to different NP-complete problems
with Boolean variables. The global cluster updates allow for a widespread
sampling of phase space, thus considerably speeding up optimization. By
carefully tuning the pseudo-temperature (needed to randomize the
configurations) of the problem, we show that the method can efficiently tackle
optimization problems with over-constraints or on topologies with a large
site-percolation threshold. We illustrate the efficiency of the heuristic on
paradigmatic optimization problems, such as the maximum satisfiability problem
and the vertex cover problem.
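A minimal parallel-tempering skeleton for a Boolean cost function is sketched below. It is not the borealis heuristic itself (which adds rejection-free isoenergetic cluster updates on top), but it shows the replica-exchange mechanism the paper builds on; the temperatures, the toy cost, and all names are assumptions.

```python
import math, random

def parallel_tempering(cost, n_vars, betas, sweeps=200, seed=0):
    """Metropolis single-spin flips plus replica exchange between
    neighboring inverse temperatures (betas, sorted ascending)."""
    rng = random.Random(seed)
    states = [[rng.randint(0, 1) for _ in range(n_vars)] for _ in betas]
    energies = [cost(s) for s in states]
    for _ in range(sweeps):
        for r, beta in enumerate(betas):      # local Metropolis updates
            i = rng.randrange(n_vars)
            states[r][i] ^= 1
            e_new = cost(states[r])
            d_e = e_new - energies[r]
            if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
                energies[r] = e_new
            else:
                states[r][i] ^= 1             # reject: undo the flip
        for r in range(len(betas) - 1):       # replica-exchange moves
            delta = (betas[r + 1] - betas[r]) * (energies[r + 1] - energies[r])
            if delta >= 0 or rng.random() < math.exp(delta):
                states[r], states[r + 1] = states[r + 1], states[r]
                energies[r], energies[r + 1] = energies[r + 1], energies[r]
    best = min(range(len(betas)), key=lambda r: energies[r])
    return energies[best], states[best]

# Toy MAX-SAT-like cost: number of falsified clauses in a tiny CNF.
cnf = [[1, -2], [2, 3], [-1, -3], [1, 2, 3]]
unsat = lambda s: sum(not any((l > 0) == bool(s[abs(l) - 1]) for l in c)
                      for c in cnf)
print(parallel_tempering(unsat, 3, betas=[0.1, 0.5, 2.0]))
```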
SAT-based Compressive Sensing
We propose to reduce the original well-posed problem of compressive sensing
to weighted-MAX-SAT. Compressive sensing is a novel randomized data acquisition
approach that linearly samples sparse or compressible signals at a rate much
below the Nyquist-Shannon sampling rate. The original problem of compressive
sensing in sparse recovery is NP-hard; therefore, in addition to restrictions
for the uniqueness of the sparse solution, the coding matrix has also to
satisfy additional stringent constraints, usually the restricted isometry
property (RIP), so that it can be handled by convex or nonconvex relaxations. In
practice, such constraints are not only intractable to be verified but also
invalid in broad applications. We first divide the well-posed problem of
compressive sensing into relaxed sub-problems and represent them as separate
SAT instances in conjunctive normal form (CNF). After merging the resulting
sub-problems, we assign weights to all clauses in such a way that the
aggregated weighted-MAX-SAT can guarantee successful recovery of the original
signal. The only requirement in our approach is the solution uniqueness of the
associated problems, which is notably looser. As a proof of concept, we
demonstrate the applicability of our approach in tackling the original problem
of binary compressive sensing with binary design matrices. Experimental results
demonstrate the superiority of the proposed SAT-based compressive sensing over
ℓ1-minimization in the robust recovery of sparse binary signals.
SAT-based compressive sensing on average requires 8.3% fewer measurements for
exact recovery of highly sparse binary signals; at lower sparsity levels,
ℓ1-minimization on average requires 22.2% more measurements for exact
reconstruction of the binary signals. Thus, the proposed SAT-based compressive
sensing is less sensitive to the sparsity of the original signals.
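The underlying problem the reduction targets is ℓ0-minimization over binary signals: find the sparsest binary x consistent with the measurements y = Ax. The sketch below solves a toy instance by exhaustive search (the NP-hard baseline, not the paper's weighted-MAX-SAT reduction); the matrix and names are illustrative.

```python
from itertools import product
import numpy as np

def sparsest_binary_solution(A, y):
    """Exhaustive l0-minimization: the sparsest binary x with A @ x == y."""
    m, n = A.shape
    best = None
    for bits in product([0, 1], repeat=n):
        x = np.array(bits)
        if np.array_equal(A @ x, y) and (best is None or x.sum() < best.sum()):
            best = x
    return best

# Binary design matrix and measurements of a 1-sparse ground truth.
A = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0],
              [1, 1, 0, 1]])
x_true = np.array([0, 0, 1, 0])
print(sparsest_binary_solution(A, A @ x_true))  # recovers [0 0 1 0]
```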
Learning and Optimization with Submodular Functions
In many naturally occurring optimization problems one needs to ensure that
the definition of the optimization problem lends itself to solutions that are
tractable to compute. In cases where exact solutions cannot be computed
tractably, it is beneficial to have strong guarantees on the tractable
approximate solutions. In order to operate under these criteria, most optimization
problems are cast under the umbrella of convexity or submodularity. In this
report we will study design and optimization over a common class of functions
called submodular functions. Set functions, and specifically submodular set
functions, characterize a wide variety of naturally occurring optimization
problems, and the property of submodularity of set functions has deep
theoretical consequences with wide ranging applications. Informally, the
property of submodularity of set functions concerns the intuitive "principle of
diminishing returns". This property states that adding an element to a smaller
set has more value than adding it to a larger set. Common examples of
submodular monotone functions are entropies, concave functions of cardinality,
and matroid rank functions; non-monotone examples include graph cuts, network
flows, and mutual information.
In this paper we will review the formal definition of submodularity; the
optimization of submodular functions, both maximization and minimization; and
finally discuss some applications in relation to learning and reasoning using
submodular functions.
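To make the diminishing-returns property concrete, the sketch below checks submodularity of a small set function by brute force and runs the classic greedy algorithm, which for monotone submodular maximization under a cardinality constraint achieves the well-known (1 - 1/e) guarantee. The coverage function used here is an illustrative example, not one from the report.

```python
from itertools import combinations

def is_submodular(f, ground):
    """Brute-force check: f(A+{x}) - f(A) >= f(B+{x}) - f(B) for all A <= B."""
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    for A in subsets:
        for B in subsets:
            if not A <= B:
                continue
            for x in ground - B:
                if f(A | {x}) - f(A) < f(B | {x}) - f(B):
                    return False
    return True

def greedy_max(f, ground, k):
    """Greedy maximization: repeatedly add the element of largest marginal gain."""
    S = frozenset()
    for _ in range(k):
        x = max(ground - S, key=lambda e: f(S | {e}) - f(S))
        S = S | {x}
    return S

# Coverage functions are monotone submodular: f(S) = |union of covered items|.
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}, 4: {"d"}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
ground = frozenset(cover)
print(is_submodular(f, ground))   # True
print(greedy_max(f, ground, 2))   # e.g. frozenset({1, 4}), covering a, b, d
```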
On the representation of Boolean and real functions as Hamiltonians for quantum computing
Mapping functions on bits to Hamiltonians acting on qubits has many
applications in quantum computing. In particular, Hamiltonians representing
Boolean functions are required for applications of quantum annealing or the
quantum approximate optimization algorithm to combinatorial optimization
problems. We show how such functions are naturally represented by Hamiltonians
given as sums of Pauli Z operators (Ising spin operators), with the terms of
the sum corresponding to the function's Fourier expansion. For many classes of
functions which are given by a compact description, such as a Boolean formula
in conjunctive normal form that gives an instance of the satisfiability
problem, it is #P-hard to compute the Hamiltonian representation. On the other
hand, no such difficulty exists generally for constructing Hamiltonians
representing a real function such as a sum of local Boolean clauses. We give
composition rules for explicitly constructing Hamiltonians representing a wide
variety of Boolean and real functions by combining Hamiltonians representing
simpler clauses as building blocks. We apply our results to the construction of
controlled-unitary operators, and to the special case of operators that compute
function values in an ancilla qubit register. Finally, we outline several
additional applications and extensions of our results.
A primary goal of this paper is to provide a toolkit which may be utilized by
experts and practitioners alike in the construction and analysis of new quantum
algorithms, and at the same time to demystify the various constructions
appearing in the literature.
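The composition rules are easy to state for diagonal Hamiltonians: with the encoding x_j -> (I - Z_j)/2, building blocks combine via H_{f AND g} = H_f H_g, H_{NOT f} = I - H_f, and H_{f OR g} = H_f + H_g - H_f H_g. Below is a small numpy sketch (an illustrative encoding, not code from the paper) that represents diagonal operators as vectors over the 2^n computational basis states.

```python
import numpy as np

n = 3  # qubits / Boolean variables

def var(j):
    """Diagonal of (I - Z_j)/2: the value of bit j in each basis state."""
    states = np.arange(2 ** n)
    return (states >> (n - 1 - j)) & 1  # bit j of each basis index

# Composition rules for diagonal Hamiltonians (pointwise on the diagonal):
AND = lambda f, g: f * g          # H_{f AND g} = H_f H_g
NOT = lambda f: 1 - f             # H_{NOT f}  = I - H_f
OR  = lambda f, g: f + g - f * g  # H_{f OR g} = H_f + H_g - H_f H_g

# Hamiltonian for the clause (x0 OR NOT x1 OR x2): eigenvalue 1 on
# satisfying basis states, 0 otherwise.
H = OR(var(0), OR(NOT(var(1)), var(2)))
for s in range(2 ** n):
    bits = [(s >> (n - 1 - j)) & 1 for j in range(n)]
    assert H[s] == int(bool(bits[0] or (not bits[1]) or bits[2]))
print(H)  # diagonal entries, one per computational basis state
```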
Algorithms for Weighted Sums of Squares Decomposition of Non-negative Univariate Polynomials
It is well-known that every non-negative univariate real polynomial can be
written as the sum of two polynomial squares with real coefficients. When one
allows a weighted sum of finitely many squares instead of a sum of two squares,
then one can choose all coefficients in the representation to lie in the field
generated by the coefficients of the polynomial.
In this article, we describe, analyze, and compare, both from the theoretical
and practical points of view, two algorithms computing such a weighted sum of
squares decomposition for univariate polynomials with rational coefficients.
The first algorithm, due to the third author, relies on real root isolation,
quadratic approximations of positive polynomials, and square-free decomposition,
but its complexity was not analyzed. We provide bit complexity estimates, both
on runtime and output size of this algorithm. They are exponential in the
degree of the input univariate polynomial and linear in the maximum bitsize of
its coefficients. This analysis is obtained using quantifier elimination and root
isolation bounds.
The second algorithm, due to Chevillard, Harrison, Joldes and Lauter, relies
on complex root isolation and square-free decomposition and has been introduced
for certifying positiveness of polynomials in the context of computer
arithmetics. Again, its complexity was not analyzed. We provide bit complexity
estimates, both on runtime and output size of this algorithm, which are
polynomial in the degree of the input polynomial and linear in the maximum
bitsize of its coefficients. This analysis is obtained using Vieta's formulas and
root isolation bounds.
Finally, we report on our implementations of both algorithms. While the
second algorithm is, as expected from the complexity result, more efficient on
most examples, we exhibit families of non-negative polynomials for which the
first algorithm is better.
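The complex-root route to such a decomposition can be sketched numerically: for a polynomial p with positive leading coefficient c and no real roots, collecting one root from each conjugate pair gives a polynomial q such that p = c times q times its coefficient-wise conjugate, and splitting q into real and imaginary parts yields p = c(u^2 + v^2). The floating-point sketch below is illustrative only, unlike the certified rational arithmetic the paper analyzes.

```python
import numpy as np

def sum_of_two_squares(p):
    """Approximate p = c*(u^2 + v^2) for a strictly positive polynomial p.

    p is a coefficient array (highest degree first) with no real roots.
    """
    c = p[0]
    roots = np.roots(p)
    q = np.poly(roots[roots.imag > 0])  # one root per conjugate pair
    u, v = q.real, q.imag               # q = u + i*v with real u, v
    return c, u, v

# p(x) = x^4 + 2x^2 + 2 > 0 on the real line.
p = np.array([1.0, 0.0, 2.0, 0.0, 2.0])
c, u, v = sum_of_two_squares(p)
recon = c * (np.polymul(u, u) + np.polymul(v, v))
print(np.max(np.abs(recon - p)))  # ~1e-15: p == c*(u^2 + v^2) numerically
```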
Solving SAT and MaxSAT with a Quantum Annealer: Foundations, Encodings, and Preliminary Results
Quantum annealers (QAs) are specialized quantum computers that minimize
objective functions over discrete variables by physically exploiting quantum
effects. Current QA platforms allow for the optimization of quadratic
objectives defined over binary variables (qubits), also known as Ising
problems. In the last decade, QA systems as implemented by D-Wave have scaled
with Moore-like growth. Current architectures provide 2048 sparsely-connected
qubits, and continued exponential growth is anticipated, together with
increased connectivity. We explore the feasibility of such architectures for
solving SAT and MaxSAT problems as QA systems scale. We develop techniques for
effectively encoding SAT (and, with some limitations, MaxSAT) into Ising
problems compatible with sparse QA architectures. We provide the theoretical
foundations for this mapping, and present encoding techniques that combine
offline Satisfiability and Optimization Modulo Theories with on-the-fly
placement and routing. Preliminary empirical tests on a current generation
2048-qubit D-Wave system support the feasibility of the approach for certain
SAT and MaxSAT problems.
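A flavor of such encodings: a 2-literal clause maps directly to a quadratic penalty that is zero exactly on satisfying assignments, e.g. (x OR y) -> (1 - x)(1 - y) over 0/1 variables; clauses with three or more literals need ancilla variables. The brute-force check below is an illustrative sketch, not the paper's placement-and-routing-aware encoding.

```python
from itertools import product

def clause2_penalty(lit_a, lit_b):
    """QUBO penalty for a 2-literal clause: 0 iff satisfied, 1 otherwise.

    A literal is (index, positive). For (x OR y) the penalty is
    (1-x)(1-y); a negated literal just swaps v for 1-v.
    """
    def val(lit, assign):
        i, positive = lit
        return assign[i] if positive else 1 - assign[i]
    return lambda assign: (1 - val(lit_a, assign)) * (1 - val(lit_b, assign))

# MAX-2-SAT instance over x0, x1: (x0 or x1), (not x0 or x1), (x0 or not x1)
penalties = [clause2_penalty((0, True), (1, True)),
             clause2_penalty((0, False), (1, True)),
             clause2_penalty((0, True), (1, False))]
for assign in product([0, 1], repeat=2):
    print(assign, sum(p(assign) for p in penalties))  # (1, 1) reaches cost 0
```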
Advanced Datapath Synthesis using Graph Isomorphism
This paper presents an advanced DAG-based algorithm for datapath synthesis
that targets area minimization using logic-level resource sharing. The problem
of identifying common specification logic is formulated as an unweighted graph
isomorphism problem, in contrast to a weighted graph isomorphism using AIGs. In
the context of gate-level datapath circuits, our algorithm solves the
unweighted graph isomorphism problem in linear time. The experiments are
conducted within an industrial synthesis flow that includes the complete
high-level synthesis, logic synthesis and placement and route procedures.
Experimental results show significant runtime improvements compared to
existing datapath synthesis algorithms.
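One simple way to detect structurally identical DAG logic is bottom-up canonical hashing: each node's signature is derived from its operation and its children's signatures, so isomorphic subgraphs hash alike in a single linear pass. This generic sketch is illustrative and not the paper's algorithm.

```python
def canonical_signature(dag, node, memo=None):
    """Bottom-up structural signature of a DAG node.

    dag maps node -> (op, [child nodes]); commutative ops get
    sorted child signatures so input order does not matter.
    """
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    op, children = dag[node]
    child_sigs = tuple(canonical_signature(dag, c, memo) for c in children)
    if op in {"and", "or", "xor", "add", "mul"}:  # commutative operations
        child_sigs = tuple(sorted(child_sigs))
    memo[node] = (op, child_sigs)
    return memo[node]

# Two small netlists computing (a & b) | c with inputs in different order.
g1 = {"a": ("in_a", []), "b": ("in_b", []), "c": ("in_c", []),
      "n1": ("and", ["a", "b"]), "out": ("or", ["n1", "c"])}
g2 = {"a": ("in_a", []), "b": ("in_b", []), "c": ("in_c", []),
      "n1": ("and", ["b", "a"]), "out": ("or", ["c", "n1"])}
print(canonical_signature(g1, "out") == canonical_signature(g2, "out"))  # True
```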
Hinge-Loss Markov Random Fields and Probabilistic Soft Logic
A fundamental challenge in developing high-impact machine learning
technologies is balancing the need to model rich, structured domains with the
ability to scale to big data. Many important problem areas are both richly
structured and large scale, from social and biological networks, to knowledge
graphs and the Web, to images, video, and natural language. In this paper, we
introduce two new formalisms for modeling structured data, and show that they
can both capture rich structure and scale to big data. The first, hinge-loss
Markov random fields (HL-MRFs), is a new kind of probabilistic graphical model
that generalizes different approaches to convex inference. We unite three
approaches from the randomized algorithms, probabilistic graphical models, and
fuzzy logic communities, showing that all three lead to the same inference
objective. We then define HL-MRFs by generalizing this unified objective. The
second new formalism, probabilistic soft logic (PSL), is a probabilistic
programming language that makes HL-MRFs easy to define using a syntax based on
first-order logic. We introduce an algorithm for inferring most-probable
variable assignments (MAP inference) that is much more scalable than
general-purpose convex optimization methods, because it uses message passing to
take advantage of sparse dependency structures. We then show how to learn the
parameters of HL-MRFs. The learned HL-MRFs are as accurate as analogous
discrete models, but much more scalable. Together, these algorithms enable
HL-MRFs and PSL to model rich, structured data at scales not previously
possible.
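MAP inference in an HL-MRF minimizes a weighted sum of hinge-loss potentials, each of the form max(0, linear function of the variables) with variables relaxed to [0, 1], which is a convex problem. The projected-subgradient sketch below illustrates that objective on a toy rule set; it is not PSL's message-passing (ADMM-style) inference, and all names are assumptions.

```python
import numpy as np

def hlmrf_map(potentials, n_vars, steps=2000, lr=0.05):
    """Projected subgradient descent on sum_k w_k * max(0, a_k @ y + b_k).

    potentials: list of (w, a, b) hinge-loss terms; y is kept in [0, 1]^n.
    """
    y = np.full(n_vars, 0.5)
    for t in range(steps):
        grad = np.zeros(n_vars)
        for w, a, b in potentials:
            if a @ y + b > 0:          # hinge is active: subgradient w * a
                grad += w * a
        y = np.clip(y - lr / (1 + 0.01 * t) * grad, 0.0, 1.0)
    return y

# Toy rules over y = [friends_ab, smokes_a, smokes_b]:
#   smokes_a AND friends_ab -> smokes_b, as the hinge
#   max(0, smokes_a + friends_ab - smokes_b - 1), plus weighted
#   evidence hinges pushing smokes_a and friends_ab toward 1.
potentials = [
    (2.0, np.array([1.0, 1.0, -1.0]), -1.0),   # the implication rule
    (5.0, np.array([0.0, -1.0, 0.0]), 1.0),    # max(0, 1 - smokes_a)
    (5.0, np.array([-1.0, 0.0, 0.0]), 1.0),    # max(0, 1 - friends_ab)
]
print(np.round(hlmrf_map(potentials, 3), 2))   # smokes_b is pulled toward 1
```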