
    On the Satisfiability of Quantum Circuits of Small Treewidth

    It has been known for almost three decades that many $\mathrm{NP}$-hard optimization problems can be solved in polynomial time when restricted to structures of constant treewidth. In this work we provide the first extension of such results to the quantum setting. We show that given a quantum circuit $C$ with $n$ uninitialized inputs, $\mathit{poly}(n)$ gates, and treewidth $t$, one can compute in time $(\frac{n}{\delta})^{\exp(O(t))}$ a classical assignment $y \in \{0,1\}^n$ that maximizes the acceptance probability of $C$ up to a $\delta$ additive factor. In particular, our algorithm runs in polynomial time if $t$ is constant and $1/\mathit{poly}(n) < \delta < 1$. For unrestricted values of $t$, this problem is known to be complete for the complexity class $\mathrm{QCMA}$, a quantum generalization of $\mathrm{MA}$. In contrast, we show that the same problem is $\mathrm{NP}$-complete if $t = O(\log n)$ even when $\delta$ is constant. On the other hand, we show that given an $n$-input quantum circuit $C$ of treewidth $t = O(\log n)$, and a constant $\delta < 1/2$, it is $\mathrm{QMA}$-complete to determine whether there exists a quantum state $|\varphi\rangle \in (\mathbb{C}^d)^{\otimes n}$ such that the acceptance probability of $C|\varphi\rangle$ is greater than $1-\delta$, or whether for every such state $|\varphi\rangle$ the acceptance probability of $C|\varphi\rangle$ is less than $\delta$. As a consequence, under the widely believed assumption that $\mathrm{QMA} \neq \mathrm{NP}$, quantum witnesses are strictly more powerful than classical witnesses with respect to Merlin-Arthur protocols in which the verifier is a quantum circuit of logarithmic treewidth.
    Comment: 30 pages. A preliminary version of this paper appeared at the 10th International Computer Science Symposium in Russia (CSR 2015). This version has been submitted to a journal and is currently under review.

    An Algorithmic Metatheorem for Directed Treewidth

    The notion of directed treewidth was introduced by Johnson, Robertson, Seymour and Thomas [Journal of Combinatorial Theory, Series B, Vol 82, 2001] as a first step towards an algorithmic metatheory for digraphs. They showed that some NP-complete properties such as Hamiltonicity can be decided in polynomial time on digraphs of constant directed treewidth. Nevertheless, despite more than a decade of intensive research, the list of hard combinatorial problems known to be solvable in polynomial time when restricted to digraphs of constant directed treewidth has remained scarce. In this work we enrich this list by providing for the first time an algorithmic metatheorem connecting the monadic second-order logic of graphs to directed treewidth. We show that most of the known positive algorithmic results for digraphs of constant directed treewidth can be reformulated in terms of our metatheorem. Additionally, we show how to use our metatheorem to provide polynomial-time algorithms for two classes of combinatorial problems that have not yet been studied in the context of directed width measures. More precisely, for each fixed $k, w \in \mathbb{N}$, we show how to count in polynomial time, on digraphs of directed treewidth $w$, the number of minimum spanning strong subgraphs that are the union of $k$ directed paths, and the number of maximal subgraphs that are the union of $k$ directed paths and satisfy a given minor-closed property. To prove our metatheorem we devise two technical tools which we believe to be of independent interest. First, we introduce the notion of the tree-zig-zag number of a digraph, a new directed width measure that is at most a constant times directed treewidth. Second, we introduce the notion of a $z$-saturated tree slice language, a new formalism for the specification and manipulation of infinite sets of digraphs.
    Comment: 41 pages, 6 figures. Accepted to Discrete Applied Mathematics.

    Representations of Monotone Boolean Functions by Linear Programs

    We introduce the notion of monotone linear-programming circuits (MLP circuits), a model of computation for partial Boolean functions. Using this model, we prove the following results. 1. MLP circuits are superpolynomially stronger than monotone Boolean circuits. 2. MLP circuits are exponentially stronger than monotone span programs. 3. MLP circuits can be used to provide monotone feasibility interpolation theorems for Lovász-Schrijver proof systems, and for mixed Lovász-Schrijver proof systems. 4. The Lovász-Schrijver proof system cannot be polynomially simulated by the cutting planes proof system. This is the first result showing a separation between these two proof systems. Finally, we discuss connections between the problem of proving lower bounds on the size of MLP circuits and the problem of proving lower bounds on extended formulations of polytopes.

    Size-Treewidth Tradeoffs for Circuits Computing the Element Distinctness Function

    In this work we study the relationship between size and treewidth of circuits computing variants of the element distinctness function. First, we show that for each n, any circuit of treewidth t computing the element distinctness function delta_n:{0,1}^n -> {0,1} must have size at least Omega((n^2)/(2^{O(t)}*log(n))). This result provides a non-trivial generalization of a super-linear lower bound for the size of Boolean formulas (treewidth 1) due to Neciporuk. Subsequently, we turn our attention to read-once circuits, which are circuits where each variable labels at most one input vertex. For each n, we show that any read-once circuit of treewidth t and size s computing a variant tau_n:{0,1}^n -> {0,1} of the element distinctness function must satisfy the inequality t * log(s) >= Omega(n/log(n)). Using this inequality in conjunction with known results in structural graph theory, we show that for each fixed graph H, read-once circuits computing tau_n which exclude H as a minor must have size at least Omega(n^2/log^{4}(n)). For certain well studied functions, such as the triangle-freeness function, this last lower bound can be improved to Omega(n^2/log^2(n)).
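For concreteness, the element distinctness function mentioned in this abstract can be sketched as follows. The block encoding used here (the n input bits read as fixed-width chunks, distinct iff no chunk repeats) is an illustrative assumption, not necessarily the exact encoding of delta_n used in the paper:

```python
def element_distinctness(bits, block):
    """Return 1 iff the consecutive `block`-bit chunks of `bits` are
    pairwise distinct, else 0 (a toy model of delta_n)."""
    assert len(bits) % block == 0
    chunks = [tuple(bits[i:i + block]) for i in range(0, len(bits), block)]
    return 1 if len(set(chunks)) == len(chunks) else 0

# chunks (0,1), (1,0), (1,1) are pairwise distinct:
element_distinctness([0, 1, 1, 0, 1, 1], 2)  # 1
# chunks (0,1), (0,1) collide:
element_distinctness([0, 1, 0, 1], 2)        # 0
```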

    Ground Reachability and Joinability in Linear Term Rewriting Systems are Fixed Parameter Tractable with Respect to Depth

    The ground term reachability problem consists in determining whether a given variable-free term t can be transformed into a given variable-free term t' by the application of rules from a term rewriting system R. The joinability problem, on the other hand, consists in determining whether there exists a variable-free term t'' which is reachable both from t and from t'. Both problems have proven to be of fundamental importance for several subfields of computer science. Nevertheless, these problems are undecidable even when restricted to linear term rewriting systems. In this work, we approach reachability and joinability in linear term rewriting systems from the perspective of parameterized complexity theory, and show that these problems are fixed parameter tractable with respect to the depth of derivations. More precisely, we consider a notion of parallel rewriting, in which an unbounded number of rules can be applied simultaneously to a term as long as these rules do not interfere with each other. A term t_1 can reach a term t_2 in depth d if t_2 can be obtained from t_1 by the application of d parallel rewriting steps. Our main result states that for some function f(R,d), and for any linear term rewriting system R, one can determine in time f(R,d)*|t_1|*|t_2| whether a ground term t_2 can be reached from a ground term t_1 in depth at most d by the application of rules from R. Additionally, one can determine in time f(R,d)^2*|t_1|*|t_2| whether there exists a ground term u, such that u can be reached from both t_1 and t_2 in depth at most d. Our algorithms improve exponentially on exhaustive search, which terminates in time 2^{|t_1|*2^{O(d)}}*|t_2|, and can be applied with regard to any linear term rewriting system, irrespective of whether the rewriting system in question is terminating or confluent.
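The exhaustive-search baseline that the abstract's algorithms improve upon can be illustrated on a simplified model. The sketch below uses string rewriting in place of general ground terms and applies one rule per step rather than the parallel steps defined in the paper, so it is only a toy analogue of depth-bounded reachability:

```python
def reachable_within_depth(start, target, rules, d):
    """Breadth-first search over rewriting steps: True iff `target` can
    be reached from `start` in at most `d` steps, where each step
    replaces one occurrence of a left-hand side by its right-hand side.
    (Strings stand in for ground terms; one rule application per step,
    not the parallel steps used in the paper.)"""
    frontier, seen = {start}, {start}
    for _ in range(d):
        if target in frontier:
            return True
        nxt = set()
        for s in frontier:
            for lhs, rhs in rules:
                i = s.find(lhs)
                while i != -1:  # rewrite at every occurrence of lhs
                    t = s[:i] + rhs + s[i + len(lhs):]
                    if t not in seen:
                        seen.add(t)
                        nxt.add(t)
                    i = s.find(lhs, i + 1)
        frontier = nxt
    return target in frontier

# "ab" -> "bb" -> "cb" -> "cc" under the rules a->b, b->c:
reachable_within_depth("ab", "cc", [("a", "b"), ("b", "c")], 3)  # True
```

Joinability admits the same treatment: search forward from both terms up to depth d and test whether the reachable sets intersect.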

    Brazilian energy prospects from 2005 to 2030 (Perspectivas energéticas brasileiras de 2005 a 2030)

    Undergraduate thesis (TCC) - Universidade Federal de Santa Catarina, Centro Sócio-Econômico, Economics. This work examines the impacts of the changes taking place in the structure of the Brazilian energy matrix as the economy grows, and studies other energy sources with a specific focus on the perspective of sustainable development, starting from a historical view of Brazilian development and its relation to the different forms of energy generation. The study sought to take cultural, economic, and social issues into account, both national and international, in order to understand the reasons behind each government decision over the years in the construction of a sustainable energy policy for Brazil. We have moved from a period in which the dominant sentiment was that growth was worth pursuing at any cost to an ever greater concern with the need for sustainable growth. In this context, the theoretical conceptions of Brazilian and Latin American authors raise important questions that frame the subject, with emphasis on an alternative interpretive line that seeks a humanized and realistic view. One of the documents used as a starting point for the analysis of the energy matrix and its prospects was the Plano Nacional de Energia 2030 (National Energy Plan 2030) of the Ministério de Minas e Energia, which presents a comprehensive analysis of the situation and the prospects for Brazil.

    Revisiting the Parameterized Complexity of Maximum-Duo Preservation String Mapping

    In the Maximum-Duo Preservation String Mapping (Max-Duo PSM) problem, the input consists of two related strings A and B of length n and a nonnegative integer k. The objective is to determine whether there exists a mapping m from the set of positions of A to the set of positions of B that maps only to positions with the same character and preserves at least k duos, which are pairs of adjacent positions. We develop a randomized algorithm that solves Max-Duo PSM in time 4^k * n^{O(1)}, and a deterministic algorithm that solves this problem in time 6.855^k * n^{O(1)}. The previous best known (deterministic) algorithm for this problem has running time (8e)^{2k+o(k)} * n^{O(1)} [Beretta et al., Theor. Comput. Sci. 2016]. We also show that Max-Duo PSM admits a problem kernel of size O(k^3), improving upon the previous best known problem kernel of size O(k^6).
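A brute-force baseline for the duo-counting objective can be sketched as follows. For simplicity it assumes A and B are anagrams, so a full character-preserving bijection between positions exists; this restriction is an illustrative assumption, not part of the problem definition, and the exhaustive search is of course far from the parameterized running times in the abstract:

```python
from itertools import permutations

def max_preserved_duos(A, B):
    """Exhaustive baseline for Max-Duo PSM on anagrams A and B: try every
    bijection m from positions of A to positions of B that preserves
    characters, and return the maximum number of preserved duos, i.e.
    adjacent pairs (i, i+1) with m(i+1) = m(i) + 1.  Exponential time;
    suitable only for toy inputs."""
    n = len(A)
    best = 0
    for perm in permutations(range(n)):
        if any(A[i] != B[perm[i]] for i in range(n)):
            continue  # mapping must send each position to an equal character
        duos = sum(1 for i in range(n - 1) if perm[i + 1] == perm[i] + 1)
        best = max(best, duos)
    return best

max_preserved_duos("ab", "ab")      # 1 (the identity preserves the single duo)
max_preserved_duos("abab", "baba")  # 2
```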

    Exclusive heavy quark-pair production in ultraperipheral collisions

    In this article, we study the fully differential observables of exclusive production of heavy (charm and bottom) quark pairs in high-energy ultraperipheral $pA$ and $AA$ collisions. In these processes, the nucleus $A$ serves as an efficient source of the photon flux, while the QCD interaction of the produced heavy-quark pair with the target ($p$ or $A$) proceeds via an exchange of gluons in a color-singlet state, described by the gluon Wigner distribution. The corresponding predictions for differential cross sections were obtained by using the dipole $S$-matrix in the McLerran--Venugopalan saturation model with impact-parameter dependence for the nucleus target, and its recent generalization for the proton target. Prospects of experimental constraints on the gluon Wigner distribution in this class of reactions are discussed.
    Comment: 21 pages, 18 figures, improved conclusions.