Lower Bounds for QBFs of Bounded Treewidth
The problem of deciding the validity of quantified Boolean formulas (QBFs), known as QSAT, is an active research area in both theory and practice. In the field of parameterized algorithmics, the well-studied graph measure treewidth has proven to be a successful parameter. A well-known result by Chen in parameterized complexity states that QSAT, parameterized by the treewidth of the primal graph of the input formula together with the quantifier depth of the formula, is fixed-parameter tractable. More precisely, the runtime of such an algorithm is polynomial in the formula size and exponential in the treewidth, where the exponential function in the treewidth is a tower whose height is the quantifier depth. A natural question is whether one can significantly improve these results and decrease the height of the tower under the Exponential Time Hypothesis (ETH). In recent years, there has been growing interest in establishing lower bounds under the ETH, mostly in the form of problem-specific lower bounds up to the third level of the polynomial hierarchy. Still, an important question is to settle this as generally as possible and to cover the whole polynomial hierarchy. In this work, we show ETH-based lower bounds for arbitrary QBFs parameterized by treewidth (and quantifier depth). More formally, we establish that under the ETH there cannot be an algorithm that solves QSAT of quantifier depth i in runtime significantly better than i-fold exponential in the treewidth and polynomial in the input size. In doing so, we provide a versatile reduction technique for compressing treewidth that encodes the essence of dynamic programming on arbitrary tree decompositions. Further, we describe a general methodology for a more fine-grained analysis of problems parameterized by treewidth that lie at higher levels of the polynomial hierarchy.
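The shape of the runtime bound can be made concrete with a small sketch: the i-fold exponential tower in the treewidth that the abstract describes. This is only an illustration of the growth rate, not of the algorithm itself; the function name `tower` is a hypothetical helper.

```python
def tower(height, w):
    """i-fold exponential tower in w: 2^(2^(...^w)) with `height` twos.

    Chen's algorithm solves QSAT of quantifier depth i in time roughly
    tower(i, O(treewidth)) * poly(formula size); the lower bound in the
    abstract says ETH rules out anything significantly better.
    """
    value = w
    for _ in range(height):
        value = 2 ** value
    return value

# Even tiny parameters blow up quickly:
print(tower(1, 3))  # 2^3 = 8
print(tower(2, 3))  # 2^(2^3) = 256
```

For quantifier depth 3 and treewidth 3 the bound already exceeds 2^256, which is why lowering the tower height would be a significant improvement.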
Quantum supremacy using a programmable superconducting processor
The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2⁵³ (about 10¹⁶). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times; our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.
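The state-space figure in the abstract is simple arithmetic that can be checked directly:

```python
# Dimension of the computational state space of 53 qubits: each qubit
# doubles the number of basis states, so the dimension is 2^53.
dim = 2 ** 53
print(dim)  # 9007199254740992, i.e. about 9.0 * 10^15 ≈ 10^16
```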
Fast Construction of Relational Features for Machine Learning
Department of Cybernetics
Linear Programs with Conjunctive Queries
In this paper, we study the problem of optimizing a linear program whose variables are the answers to a conjunctive query. For this we propose the language LP(CQ) for specifying linear programs whose constraints and objective functions depend on the answer sets of conjunctive queries. We contribute an efficient algorithm for solving programs in a fragment of LP(CQ). The naive approach constructs a linear program with as many variables as there are elements in the answer set of the queries. Our approach constructs a linear program with the same optimal value but fewer variables. This is done by exploiting the structure of the conjunctive queries, using generalized hypertree decompositions of small width to factorize elements of the answer set together. We illustrate the applications of LP(CQ) programs with three examples: optimizing deliveries of resources, minimizing noise for differential privacy, and computing the s-measure of patterns in graphs as needed for data mining.
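The variable saving can be illustrated with a toy example (not the paper's algorithm): for a product-shaped query Q(x, y) :- R(x), S(y), a naive encoding introduces one LP variable per query answer, while a decomposition into the two independent bags {x} and {y} needs only one variable per bag assignment. The relations `R` and `S` below are hypothetical.

```python
from itertools import product

# Hypothetical relations for the conjunctive query Q(x, y) :- R(x), S(y).
R = [1, 2, 3]
S = ["a", "b"]

# Naive LP(CQ) encoding: one LP variable per element of the answer set.
answers = list(product(R, S))
naive_vars = len(answers)          # |R| * |S| = 6 variables

# The query decomposes into independent bags {x} and {y} (a generalized
# hypertree decomposition of width 1), so a factored encoding needs only
# one variable per bag assignment while preserving the optimal value.
factored_vars = len(R) + len(S)    # |R| + |S| = 5 variables

print(naive_vars, factored_vars)
```

The gap widens quickly: with n tuples per relation, the naive encoding uses n² variables and the factored one only 2n.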