FourierSAT: A Fourier Expansion-Based Algebraic Framework for Solving Hybrid Boolean Constraints
The Boolean SATisfiability problem (SAT) is of central importance in computer
science. Although SAT is known to be NP-complete, progress on the engineering
side, especially that of Conflict-Driven Clause Learning (CDCL) and Local
Search SAT solvers, has been remarkable. Yet, while SAT solvers aimed at
solving industrial-scale benchmarks in Conjunctive Normal Form (CNF) have
become quite mature, SAT solvers that are effective on other types of
constraints, e.g., cardinality constraints and XORs, are less well studied; a
general approach to handling non-CNF constraints is still lacking. In addition,
previous work indicated that for specific classes of benchmarks, the running
time of extant SAT solvers depends heavily on properties of the formula and
details of encoding, instead of the scale of the benchmarks, which adds
uncertainty to expectations of running time.
To address the issues above, we design FourierSAT, an incomplete SAT solver
based on Fourier analysis of Boolean functions, a technique to represent
Boolean functions by multilinear polynomials. By such a reduction to continuous
optimization, we propose an algebraic framework for solving systems consisting
of different types of constraints. The idea is to leverage gradient information
to guide the search process in the direction of local improvements. Empirical
results demonstrate that FourierSAT is more robust than other solvers on
certain classes of benchmarks.
Comment: The paper was accepted by the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020). V2 (Feb 24): Typos corrected.
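The reduction the abstract describes can be made concrete with a toy sketch. The penalty polynomial, step size, and projection below are illustrative assumptions, not the authors' FourierSAT implementation: each XOR constraint over ±1-valued variables (with -1 read as True) is represented by the multilinear polynomial (1 + ∏ xᵢ)/2, which is 0 exactly when the constraint holds, and projected gradient descent on the sum of these penalties follows local improvements.

```python
import numpy as np

def xor_penalty(x, idxs):
    # Multilinear (Fourier) form of an XOR constraint over x in {-1, +1}:
    # (1 + prod x_i) / 2 is 0 when an odd number of the chosen
    # variables equal -1 (True), and 1 otherwise.
    return (1 + np.prod(x[idxs])) / 2

def grad(x, constraints):
    # Gradient of the summed penalties; d/dx_i of (1 + prod)/2 is
    # half the product of the *other* variables in the constraint.
    g = np.zeros_like(x)
    for idxs in constraints:
        for i in idxs:
            others = [j for j in idxs if j != i]
            g[i] += np.prod(x[others]) / 2
    return g

def solve(constraints, n, steps=2000, lr=0.1, seed=0):
    # Projected gradient descent on the continuous relaxation [-1, 1]^n,
    # rounded to a Boolean assignment by taking signs.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-0.5, 0.5, n)
    for _ in range(steps):
        x = np.clip(x - lr * grad(x, constraints), -1, 1)
    return np.sign(x)
```

For the two-constraint chain x0 ⊕ x1 and x1 ⊕ x2, the relaxation converges to an alternating assignment, which satisfies both constraints.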
Chrisimos: A useful Proof-of-Work for finding Minimal Dominating Set of a graph
Hash-based Proof-of-Work (PoW) used in the Bitcoin Blockchain leads to high
energy consumption and resource wastage. In this paper, we aim to re-purpose
the energy by replacing the hash function with real-life problems having
commercial utility. We propose Chrisimos, a useful Proof-of-Work where miners
are required to find a minimal dominating set for real-life graph instances. A
miner who is able to output the smallest dominating set for the given graph
within the block interval time wins the mining game. We also propose a new
chain selection rule that ensures the security of the scheme. Thus our protocol
also realizes a decentralized minimal dominating set solver for any graph
instance. We provide formal proof of correctness and show via experimental
results that the block interval time is within feasible bounds of hash-based
PoW.
Comment: 20 pages, 3 figures. An abridged version of the paper was accepted at The International Symposium on Intelligent and Trustworthy Computing, Communications, and Networking (ITCCN-2023), held in conjunction with the 22nd IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom-2023).
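For intuition about the underlying puzzle, a miner could seed its search with the classical greedy heuristic for dominating sets. This is a generic textbook baseline, not the Chrisimos mining procedure itself: repeatedly pick the vertex that dominates the most not-yet-dominated vertices.

```python
def greedy_dominating_set(adj):
    # adj: dict mapping each vertex to its set of neighbours.
    # Greedily pick the vertex covering the most undominated vertices;
    # the result is a small (not necessarily minimum) dominating set.
    undominated = set(adj)
    dom = set()
    while undominated:
        best = max(adj, key=lambda v: len((adj[v] | {v}) & undominated))
        dom.add(best)
        undominated -= adj[best] | {best}
    return dom
```

On a star graph the heuristic returns just the centre; a winning miner would still need to search for orderings or refinements that shrink the set further within the block interval.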
Robust Principal Component Analysis?
This paper is about a curious phenomenon. Suppose we have a data matrix,
which is the superposition of a low-rank component and a sparse component. Can
we recover each component individually? We prove that under some suitable
assumptions, it is possible to recover both the low-rank and the sparse
components exactly by solving a very convenient convex program called Principal
Component Pursuit; among all feasible decompositions, simply minimize a
weighted combination of the nuclear norm and of the L1 norm. This suggests the
possibility of a principled approach to robust principal component analysis
since our methodology and results assert that one can recover the principal
components of a data matrix even though a positive fraction of its entries are
arbitrarily corrupted. This extends to the situation where a fraction of the
entries are missing as well. We discuss an algorithm for solving this
optimization problem, and present applications in the area of video
surveillance, where our methodology allows for the detection of objects in a
cluttered background, and in the area of face recognition, where it offers a
principled way of removing shadows and specularities in images of faces.
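The convex program described here (minimize ||L||_* + λ||S||_1 subject to L + S = M) can be solved with a standard inexact augmented Lagrangian method. The sketch below is not a production implementation: λ = 1/√max(m, n) follows the paper, while the penalty μ, iteration budget, and tolerance are common defaults assumed here.

```python
import numpy as np

def shrink(X, tau):
    # Soft-thresholding: proximal operator of the L1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def pcp(M, lam=None, mu=None, iters=500, tol=1e-7):
    # Inexact augmented Lagrangian method for Principal Component Pursuit:
    # alternately update L (low rank) and S (sparse), then the multiplier Y.
    m, n = M.shape
    lam = lam if lam is not None else 1 / np.sqrt(max(m, n))
    mu = mu if mu is not None else m * n / (4 * np.abs(M).sum())
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S
```

On a synthetic rank-one matrix corrupted by a few large sparse entries, this recovers both components to high accuracy, matching the exact-recovery claim of the abstract.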
Static reliability and resilience in dynamic systems
Two systems are modeled in this thesis. First, we consider a multi-component stochastic monotone binary system, or SMBS for short. The reliability of an SMBS is the probability of correct operation. A statistical approximation of the system reliability, inspired by Monte Carlo methods, is provided for these systems. Then we focus on the diameter-constrained reliability model (DCR), which was originally developed for delay-sensitive applications over the Internet infrastructure. The computational complexity of the DCR is analyzed. Networks with an efficient (i.e., polynomial-time) DCR computation, termed Weak graphs, are identified.

Second, we model the effect of a dynamic epidemic propagation. Our first approach is to develop a SIR-based simulation in which the unrealistic assumptions of the SIR model (infinite, homogeneous, fully mixed population) are discarded. Finally, we formalize a stochastic process that counts infected individuals, and further investigate node-immunization strategies subject to a budget constraint. A combinatorial optimization problem, called the Graph Fragmentation Problem, is introduced here. The impact of a highly virulent epidemic propagation is analyzed, and we mathematically prove that the greedy heuristic is suboptimal.
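The Monte Carlo approximation mentioned for SMBS reliability can be sketched in a few lines. The structure function and parameters below are illustrative choices, not the estimator developed in the thesis: sample independent component states, evaluate the monotone structure function, and average.

```python
import random

def mc_reliability(structure, p, trials=20000, seed=42):
    # Crude Monte Carlo estimate of Pr[system operates]:
    # structure is a monotone Boolean function of the component states,
    # p[i] is the probability that component i works.
    rng = random.Random(seed)
    hits = sum(
        structure([rng.random() < pi for pi in p])
        for _ in range(trials)
    )
    return hits / trials

# Example: a 2-out-of-3 system works when at least two components work;
# its exact reliability at p = 0.9 is 3(0.81)(0.1) + 0.729 = 0.972.
estimate = mc_reliability(lambda s: sum(s) >= 2, [0.9, 0.9, 0.9])
```

The estimator is unbiased, and its standard error shrinks as 1/√trials, which is why such sampling schemes scale to systems whose exact reliability is intractable to compute.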
Proceedings of the 8th Cologne-Twente Workshop on Graphs and Combinatorial Optimization
The Cologne-Twente Workshop (CTW) on Graphs and Combinatorial Optimization started off as a series of workshops organized bi-annually by either Köln University or Twente University. As its importance grew over time, it re-centered its geographical focus by including northern Italy (CTW04 in Menaggio, on Lake Como, and CTW08 in Gargnano, on Lake Garda). This year, CTW (in its eighth edition) will be staged in France for the first time: more precisely in the heart of Paris, at the Conservatoire National des Arts et Métiers (CNAM), between 2nd and 4th June 2009, by a mixed organizing committee with members from LIX, École Polytechnique and CEDRIC, CNAM.
Database query optimisation based on measures of regret
The query optimiser in a database management system (DBMS) is responsible for
finding a good order in which to execute the operators in a given query. However, in
practice the query optimiser does not usually guarantee to find the best plan. This is
often due to the non-availability of precise statistical data or inaccurate assumptions
made by the optimiser. In this thesis we propose a robust approach to logical query
optimisation that takes into account the unreliability in database statistics during
the optimisation process. In particular, we study the ordering problem for selection
operators and for join operators, where selectivities are modelled as intervals rather
than exact values. As a measure of optimality, we use a concept from decision theory
called minmax regret optimisation (MRO).
When using interval selectivities, the decision problem for selection operator ordering
turns out to be NP-hard. After investigating properties of the problem and
identifying special cases which can be solved in polynomial time, we develop a novel
heuristic for solving the general selection ordering problem in polynomial time. Experimental
evaluation of the heuristic using synthetic data, the Star Schema Benchmark
and real-world data sets shows that it outperforms other heuristics (which take
an optimistic, pessimistic or midpoint approach) and also produces plans whose regret
is on average very close to optimal.
The general join ordering problem is known to be NP-hard, even for exact selectivities.
So, for interval selectivities, we restrict our investigation to sets of join
operators which form a chain and to plans that correspond to left-deep join trees.
We investigate properties of the problem and use these, along with ideas from the
selection ordering heuristic and other algorithms in the literature, to develop a
polynomial-time heuristic tailored for the join ordering problem. Experimental evaluation
of the heuristic shows that, once again, it performs better than the optimistic,
pessimistic and midpoint heuristics. In addition, the results show that the heuristic
produces plans whose regret is on average even closer to the optimal than for
selection ordering.
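The minmax regret objective used throughout can be made concrete with an exhaustive sketch. This is exponential-time and for illustration only (the thesis develops polynomial-time heuristics precisely because such enumeration is infeasible); the per-tuple cost model and the restriction to interval-endpoint scenarios are simplifying assumptions of this sketch.

```python
from itertools import permutations, product

def plan_cost(order, sel, cost):
    # Simple per-tuple cost model: operator i costs cost[i] per surviving
    # tuple and reduces the surviving fraction by its selectivity sel[i].
    total, frac = 0.0, 1.0
    for i in order:
        total += frac * cost[i]
        frac *= sel[i]
    return total

def max_regret(order, intervals, cost):
    # Regret of a plan in one scenario: its cost minus the cost of the
    # best plan for that scenario. Take the worst case over scenarios
    # formed from the selectivity-interval endpoints.
    worst = 0.0
    n = len(cost)
    for scenario in product(*intervals):
        best = min(plan_cost(p, scenario, cost)
                   for p in permutations(range(n)))
        worst = max(worst, plan_cost(order, scenario, cost) - best)
    return worst

def mro_plan(intervals, cost):
    # Minmax regret optimisation (MRO): pick the ordering whose
    # worst-case regret over the scenarios is smallest.
    n = len(cost)
    return min(permutations(range(n)),
               key=lambda p: max_regret(p, intervals, cost))
```

With degenerate (point) intervals this reduces to classical selection ordering: the cheaper-to-filter, more selective operator goes first and the optimal plan has zero regret.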