An exact solution to the minimum size test pattern problem
This article addresses the problem of test pattern generation for single stuck-at faults in combinational circuits, under the additional constraint that the number of specified primary input assignments is minimized. This problem has several applications in testing, including the identification of “don't care” conditions to be used in the synthesis of Built-In Self-Test (BIST) logic. The proposed solution is based on an integer linear programming (ILP) formulation which builds on an existing Propositional Satisfiability (SAT) model for test pattern generation. The resulting ILP formulation is linear in the size of the original SAT model for test generation, which in turn is linear in the size of the circuit. Nevertheless, the resulting ILP instances represent complex optimization problems that require dedicated ILP algorithms. Preliminary results on benchmark circuits validate the practical applicability of the test pattern minimization model and the associated ILP algorithm.
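For intuition, the optimization described above can be sketched as follows: attach a 0-1 indicator s_i to each primary input x_i, recording whether x_i is specified, and minimize the number of specified inputs subject to the CNF detection constraints of the underlying SAT model. The symbols below are illustrative, not taken from the paper:

    \min \sum_{i=1}^{n} s_i
    \quad \text{s.t.} \quad \mathrm{Detect}(x_1,\dots,x_n) = 1,
    \qquad s_i = 0 \;\Rightarrow\; x_i \text{ is left unspecified},
    \qquad s_i \in \{0,1\}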
Model enumeration in propositional circumscription via unsatisfiable core analysis
Many practical problems are characterized by a preference relation over
admissible solutions, where preferred solutions are minimal in some sense. For
example, a preferred diagnosis usually comprises a minimal set of reasons that
is sufficient to cause the observed anomaly. Alternatively, a minimal
correction subset comprises a minimal set of reasons whose deletion is
sufficient to eliminate the observed anomaly. Circumscription formalizes such
preference relations by associating propositional theories with minimal models.
The resulting enumeration problem is addressed here by means of a new algorithm
taking advantage of unsatisfiable core analysis. Empirical evidence of the
efficiency of the algorithm is given by comparing the performance of the
resulting solver, CIRCUMSCRIPTINO, with HCLASP, CAMUS MCS, LBX and MCSLS on the
enumeration of minimal models for problems originating from practical
applications.
This paper is under consideration for acceptance in TPLP.
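To make the enumeration task concrete, here is a schematic minimal-model enumerator built around a hypothetical SAT oracle solve(clauses) that returns a model as a set of true atoms, or None when unsatisfiable. It is only a sketch of the problem being solved, not CIRCUMSCRIPTINO's core-guided algorithm:

    from itertools import chain, combinations

    def enumerate_minimal_models(clauses, atoms, solve):
        """Yield all subset-minimal models of a CNF (atoms are positive ints)."""
        blocked = []
        while (m := solve(clauses + blocked)) is not None:
            while True:  # shrink m to a model that is minimal w.r.t. set inclusion
                below = [[-a] for a in atoms - m] + [[-a for a in m]]
                smaller = solve(clauses + below)
                if smaller is None:
                    break
                m = smaller
            yield m
            blocked.append([-a for a in m])  # block m together with all its supersets

    def brute_solve(clauses, atoms):
        """Tiny stand-in solver: try candidate models by increasing size."""
        for c in chain.from_iterable(combinations(sorted(atoms), r)
                                     for r in range(len(atoms) + 1)):
            m = set(c)
            if all(any(l > 0 and l in m or l < 0 and -l not in m for l in cl)
                   for cl in clauses):
                return m
        return None

    # The CNF (a or b) has two minimal models: {a} and {b}.
    print(list(enumerate_minimal_models([[1, 2]], {1, 2},
                                        lambda cls: brute_solve(cls, {1, 2}))))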
SAT-based Compressive Sensing
We propose to reduce the original well-posed problem of compressive sensing
to weighted-MAX-SAT. Compressive sensing is a novel randomized data acquisition
approach that linearly samples sparse or compressible signals at a rate much
below the Nyquist-Shannon sampling rate. The original problem of compressive
sensing in sparse recovery is NP-hard; therefore, in addition to restrictions ensuring the uniqueness of the sparse solution, the coding matrix must also satisfy stringent constraints (usually the restricted isometry property, RIP) so that the problem can be handled through its convex or nonconvex relaxations. In practice, such constraints are not only intractable to verify but also invalid in broad applications. We first divide the well-posed problem of
compressive sensing into relaxed sub-problems and represent them as separate
SAT instances in conjunctive normal form (CNF). After merging the resulting
sub-problems, we assign weights to all clauses in such a way that the
aggregated weighted-MAX-SAT can guarantee successful recovery of the original
signal. The only requirement in our approach is the uniqueness of the solution of the associated problems, a notably weaker condition. As a proof of concept, we
demonstrate the applicability of our approach in tackling the original problem
of binary compressive sensing with binary design matrices. Experimental results
demonstrate the superiority of the proposed SAT-based compressive sensing over ℓ1-minimization in the robust recovery of sparse binary signals. SAT-based compressive sensing on average requires 8.3% fewer measurements for exact recovery of highly sparse binary signals; for less sparse signals, ℓ1-minimization on average requires 22.2% more measurements for exact reconstruction of the binary signals. Thus, the proposed SAT-based compressive sensing is less sensitive to the sparsity of the original signals.
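For background, the sparse-recovery problem underlying compressive sensing and its standard convex relaxation (basis pursuit) are, in the usual notation with measurement matrix A and measurements y:

    \min_x \|x\|_0 \ \text{ s.t. } \ Ax = y
    \qquad\leadsto\qquad
    \min_x \|x\|_1 \ \text{ s.t. } \ Ax = y

The RIP-style conditions mentioned above are what guarantee that the relaxation on the right recovers the solution of the NP-hard problem on the left; the SAT-based route sidesteps them.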
Logic Synthesis for Quantum Computing
We present a synthesis framework to map logic networks into quantum circuits
for quantum computing. The synthesis framework is based on LUT networks
(lookup-table networks), which play a key role in conventional logic synthesis.
Establishing a connection between LUTs in a LUT network and reversible
single-target gates in a reversible network allows us to bridge conventional
logic synthesis with logic synthesis for quantum computing, despite several
fundamental differences. We call our synthesis framework LUT-based Hierarchical
Reversible Logic Synthesis (LHRS). Input to LHRS is a classical logic network;
output is a quantum network (realized in terms of Clifford+T gates). The
framework allows one to trade off the number of qubits against the number of quantum gates. In a first step, an initial network is derived that consists only of
single-target gates and already completely determines the number of qubits in
the final quantum network. Different methods are then used to map each
single-target gate into Clifford+T gates, while aiming at optimally using
available resources. We demonstrate the effectiveness of our method in
automatically synthesizing IEEE-compliant floating point networks up to double
precision. As many quantum algorithms target scientific simulation
applications, they can make rich use of floating point arithmetic components.
But due to the lack of quantum circuit descriptions for those components, it
can be difficult to find a realistic cost estimation for the algorithms. Our
synthesized benchmarks enable quantum algorithm designers to derive the first complete cost estimates for a host of quantum
algorithms. Thus, the benchmarks and, more generally, the LHRS framework are an
essential step towards the goal of understanding which quantum algorithms will
be practical in the first generations of quantum computers.
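The central object in this pipeline, the reversible single-target gate, is easy to state classically: it flips one target line exactly when a Boolean control function of the other lines is true, which is what makes a LUT directly expressible as one such gate. A minimal simulation sketch with illustrative names, not LHRS code:

    from typing import Callable, List

    def apply_single_target_gate(state: List[int], controls: List[int],
                                 target: int, f: Callable[..., int]) -> None:
        """Flip state[target] iff the control function f is true on the controls."""
        if f(*(state[i] for i in controls)):
            state[target] ^= 1

    # A 2-input AND LUT becomes a Toffoli gate on lines (0, 1) -> 2.
    bits = [1, 1, 0]
    apply_single_target_gate(bits, controls=[0, 1], target=2, f=lambda a, b: a and b)
    assert bits == [1, 1, 1]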
Computing Minimal Sets on Propositional Formulae I: Problems & Reductions
Boolean Satisfiability (SAT) is arguably the archetypical NP-complete
decision problem. Progress in SAT solving algorithms has motivated an ever
increasing number of practical applications in recent years. However, many
practical uses of SAT involve solving function problems as opposed to decision problems.
Concrete examples include computing minimal unsatisfiable subsets, minimal
correction subsets, prime implicates and implicants, minimal models, backbone
literals, and autarkies, among several others. In most cases, solving a
function problem requires a number of adaptive or non-adaptive calls to a SAT
solver. Given the computational complexity of SAT, it is therefore important to
develop algorithms that either require the smallest possible number of calls to
the SAT solver, or that involve simpler instances. This paper addresses a
number of representative function problems defined on Boolean formulas, and
shows that all these function problems can be reduced to a generic problem of
computing a minimal set subject to a monotone predicate. This problem is
referred to as the Minimal Set over Monotone Predicate (MSMP) problem. This
exercise provides new ways for solving well-known function problems, including
prime implicates, minimal correction subsets, backbone literals, independent
variables and autarkies, among several others. Moreover, this exercise
motivates the development of more efficient algorithms for the MSMP problem.
Finally, the paper outlines a number of areas of future research related to extensions of the MSMP problem.
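The unifying MSMP formulation also suggests why few oracle calls can suffice: for a monotone predicate p, a single deletion pass over the candidate set already yields a subset-minimal solution. A generic sketch, not one of the paper's optimized algorithms:

    def minimal_set(elements, p):
        """Deletion-based MSMP: return a subset-minimal S with p(S) true,
        assuming p is monotone and p(elements) holds. Uses len(elements) calls."""
        s = set(elements)
        for e in sorted(s):
            if p(s - {e}):      # e is redundant: drop it
                s.discard(e)
        return s

    # Toy monotone predicate: the chosen sets jointly cover {1, 2, 3}.
    sets = {"a": {1, 2}, "b": {2, 3}, "c": {1, 2, 3}}
    covers = lambda names: set().union(*(sets[n] for n in names)) >= {1, 2, 3}
    print(minimal_set(sets, covers))   # {'c'} here; {'a', 'b'} is also minimal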
Differentiable Satisfiability and Differentiable Answer Set Programming for Sampling-Based Multi-Model Optimization
We propose Differentiable Satisfiability and Differentiable Answer Set
Programming (Differentiable SAT/ASP) for multi-model optimization. Models
(answer sets or satisfying truth assignments) are sampled using a novel SAT/ASP
solving approach which uses a gradient descent-based branching mechanism.
Sampling proceeds until the value of a user-defined multi-model cost function
reaches a given threshold. As major use cases for our approach we propose
distribution-aware model sampling and expressive yet scalable probabilistic
logic programming. As our main algorithmic approach to Differentiable SAT/ASP,
we introduce an enhancement of the state-of-the-art CDNL/CDCL algorithm for
SAT/ASP solving. Additionally, we present alternative algorithms which use an
unmodified ASP solver (Clingo/clasp) and map the optimization task to
conventional answer set optimization or use so-called propagators. We also
report on the open source software DelSAT, a recent prototype implementation of
our main algorithm, and on initial experimental results which indicate that
DelSAT's performance is, when applied to the use case of probabilistic logic
inference, on par with Markov Logic Network (MLN) inference performance,
despite having advantageous properties compared to MLNs, such as the ability to
express inductive definitions and to work with probabilities as weights
directly in all cases. Our experiments also indicate that our main algorithm significantly outperforms the presented alternative approaches, which reduce a common instance of the general problem to regular SAT/ASP.
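The branching idea can be illustrated with a toy stand-in: relax each Boolean variable to a probability, estimate the gradient of the user-defined multi-model cost numerically, and branch on the variable with the steepest slope, choosing the polarity that decreases the cost. This is a caricature under illustrative assumptions, not DelSAT's CDNL/CDCL enhancement:

    def pick_branching_literal(cost, probs, eps=1e-4):
        """Branch on the variable whose probability most affects the cost."""
        grads = []
        for i in range(len(probs)):
            up, down = probs.copy(), probs.copy()
            up[i] += eps
            down[i] -= eps
            grads.append((cost(up) - cost(down)) / (2 * eps))
        i = max(range(len(probs)), key=lambda j: abs(grads[j]))
        return i, grads[i] < 0   # polarity True if raising p_i lowers the cost

    # Example: steer variable 0 towards appearing in ~80% of sampled models.
    cost = lambda p: (p[0] - 0.8) ** 2
    print(pick_branching_literal(cost, [0.5, 0.5]))   # -> (0, True)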
Cover Combinatorial Filters and their Minimization Problem (Extended Version)
Recent research has examined algorithms to minimize robots' resource
footprints. The class of combinatorial filters (discrete variants of
widely-used probabilistic estimators) has been studied and methods for reducing
their space requirements introduced. This paper extends existing combinatorial
filters by introducing a natural generalization that we dub cover combinatorial
filters. In addressing the new -- but still NP-complete -- problem of
minimization of cover filters, this paper shows that several properties previously conjectured, claimed, or assumed to hold for combinatorial filters are in fact false. For instance,
minimization does not induce an equivalence relation. We give an exact
algorithm for the cover filter minimization problem. Unlike prior work (based
on graph coloring) we consider a type of clique-cover problem, involving a new
conditional constraint, from which we can find more general relations. In
addition to solving the more general problem, the algorithm also corrects flaws
present in all prior filter reduction methods. In employing SAT, the algorithm
provides a promising basis for future practical development.
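Since the exact algorithm is SAT-based, the underlying clique-cover encoding is easy to sketch: with k classes, one variable per (vertex, class) pair, every vertex must take a class, and two non-adjacent vertices may never share one. The paper's additional conditional constraint is omitted in this generic DIMACS-style sketch:

    def clique_cover_cnf(n, edges, k):
        """CNF clauses (DIMACS-style integer literals) asserting that the
        n vertices can be covered by k cliques of the given edge set."""
        var = lambda v, c: v * k + c + 1                 # "vertex v is in class c"
        clauses = [[var(v, c) for c in range(k)] for v in range(n)]
        adjacent = set(edges) | {(v, u) for (u, v) in edges}
        for u in range(n):
            for v in range(u + 1, n):
                if (u, v) not in adjacent:               # non-adjacent: separate classes
                    clauses += [[-var(u, c), -var(v, c)] for c in range(k)]
        return clauses

    # Triangle {0, 1, 2} plus pendant vertex 3: coverable by 2 cliques.
    print(clique_cover_cnf(4, [(0, 1), (1, 2), (0, 2), (2, 3)], 2))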
Solving SAT and MaxSAT with a Quantum Annealer: Foundations, Encodings, and Preliminary Results
Quantum annealers (QAs) are specialized quantum computers that minimize
objective functions over discrete variables by physically exploiting quantum
effects. Current QA platforms allow for the optimization of quadratic
objectives defined over binary variables (qubits), also known as Ising
problems. In the last decade, QA systems as implemented by D-Wave have scaled
with Moore-like growth. Current architectures provide 2048 sparsely-connected
qubits, and continued exponential growth is anticipated, together with
increased connectivity. We explore the feasibility of such architectures for
solving SAT and MaxSAT problems as QA systems scale. We develop techniques for
effectively encoding SAT (and, with some limitations, MaxSAT) into Ising
problems compatible with sparse QA architectures. We provide the theoretical
foundations for this mapping, and present encoding techniques that combine
offline Satisfiability and Optimization Modulo Theories with on-the-fly
placement and routing. Preliminary empirical tests on a current generation
2048-qubit D-Wave system support the feasibility of the approach for certain
SAT and MaxSAT problems.
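As standard background for such encodings (the paper's actual mapping additionally handles placement and routing for sparse hardware graphs), a clause becomes a quadratic penalty over 0-1 variables that vanishes exactly on satisfying assignments, for example:

    P_{x \lor y} = 1 - x - y + xy, \qquad
    P_{x \lor \lnot y} = y - xy, \qquad
    P_{\lnot x \lor \lnot y} = xy

Summing the clause penalties yields a QUBO (equivalently, after the substitution x = (1+s)/2 with s in {-1, +1}, an Ising objective) whose minima maximize the number of satisfied clauses; clauses with three or more literals additionally require ancilla variables.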
Design Space Exploration as Quantified Satisfaction
We present novel algorithms for design and design space exploration. The
designs discovered by these algorithms are compositions of function types
specified in component libraries. Our algorithms reduce the design problem to
quantified satisfiability and use advanced solvers to find solutions that
represent useful systems.
The algorithms we present in this paper are sound and complete and are
guaranteed to discover correct designs of optimal size, if they exist. We apply
our method to the design of Boolean systems and discover new and smaller
classical digital and quantum circuits for common arithmetic functions such as
addition and multiplication.
The performance of our algorithms is evaluated through extensive
experimentation. We created a benchmark consisting of specifications of
scalable synthetic digital circuits and real-world microchips. We have generated
multiple circuits functionally equivalent to the ones in the benchmark. The
quantified satisfiability method shows more than four orders of magnitude speed-up compared to a generate-and-test method that enumerates all non-isomorphic circuit topologies.
Our approach generalizes circuit optimization. It uses arbitrary component
libraries and has applications to areas such as digital circuit design,
diagnostics, abductive reasoning, test vector generation, and combinatorial
optimization.
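Schematically, the reduction has the classic exists-forall shape of quantified satisfiability; with illustrative symbols (d ranging over compositions of library components, x over inputs, and k a size bound):

    \exists d \;\forall x \;\; \big( \mathrm{impl}(d, x) \leftrightarrow \mathrm{spec}(x) \big) \;\wedge\; \mathrm{size}(d) \le k

Solving this query for increasing k is one standard way to obtain the optimal-size guarantee claimed above.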
Approximating minimum representations of key Horn functions
Horn functions form a subclass of Boolean functions and appear in many
different areas of computer science and mathematics as a general tool to
describe implications and dependencies. Finding minimum sized representations
for such functions with respect to most commonly used measures is a
computationally hard problem that remains hard even for the important subclass
of key Horn functions. In this paper we provide logarithmic factor
approximation algorithms for key Horn functions with respect to all measures
studied in the literature for which the problem is known to be hard.
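Logarithmic-factor guarantees of this kind typically come from greedy set-cover arguments. As generic background (not the paper's specific algorithms), the classic greedy procedure achieves an H(n) ≈ ln n approximation factor:

    def greedy_set_cover(universe, subsets):
        """Classic greedy set cover: repeatedly pick the subset covering the
        most still-uncovered elements; approximation factor H(|universe|)."""
        uncovered, cover = set(universe), []
        while uncovered:
            best = max(subsets, key=lambda name: len(uncovered & subsets[name]))
            if not uncovered & subsets[best]:
                raise ValueError("universe not coverable")
            cover.append(best)
            uncovered -= subsets[best]
        return cover

    print(greedy_set_cover({1, 2, 3, 4},
                           {"A": {1, 2}, "B": {2, 3}, "C": {3, 4}}))   # ['A', 'C']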