7 research outputs found

    An exponential lower bound for Individualization-Refinement algorithms for Graph Isomorphism

    Full text link
    The individualization-refinement paradigm provides a strong toolbox for testing isomorphism of two graphs, and indeed the currently fastest implementations of isomorphism solvers all follow this approach. While these solvers are fast in practice, from a theoretical point of view no general lower bounds on their worst-case complexity are known. In fact, it is an open question whether individualization-refinement algorithms can achieve upper bounds on the running time similar to the more theoretical techniques based on a group-theoretic approach. In this work we give a negative answer to this question and construct a family of graphs on which algorithms based on the individualization-refinement paradigm require exponential time. In contrast to a previous construction of Miyazaki, which only applies to a specific implementation within the individualization-refinement framework, our construction is immune to changing the cell selector or adding various heuristic invariants to the algorithm. Furthermore, our graphs also provide exponential lower bounds when the k-dimensional Weisfeiler-Leman algorithm is used to replace the standard color refinement operator, and the arguments even work when the entire automorphism group of the inputs is initially provided to the algorithm. Comment: 21 pages
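    The color refinement operator mentioned in this abstract (the 1-dimensional Weisfeiler-Leman algorithm) can be stated independently of any particular solver. The following is a minimal Python sketch of that refinement step alone, assuming an adjacency-dict representation; the function name and data layout are illustrative choices, not the paper's construction or any solver's implementation.

    from collections import defaultdict

    def color_refinement(adj, colors=None):
        """Iteratively refine a vertex coloring of a graph until it is stable.

        adj: dict mapping each vertex to an iterable of its neighbors.
        colors: optional dict mapping vertices to initial colors.
        Returns the stable coloring as a dict vertex -> color id.
        This is 1-dimensional Weisfeiler-Leman, not the k-dimensional variant.
        """
        colors = dict(colors) if colors else {v: 0 for v in adj}
        while True:
            # New color of v = old color plus the multiset of neighbors' old colors.
            signatures = {
                v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj
            }
            # Compress signatures back to small integer color ids.
            palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
            new_colors = {v: palette[signatures[v]] for v in adj}
            if new_colors == colors:   # fixed point reached: coloring is stable
                return colors
            colors = new_colors

    # Example: a 4-cycle is regular, so refinement cannot split its vertices.
    cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(color_refinement(cycle4))  # {0: 0, 1: 0, 2: 0, 3: 0}

    The example also hints at why refinement alone is not enough on regular graphs, which is exactly where individualization steps come in.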

    Polynomial-time normalizers

    No full text
    Special issue in honor of Laci Babai's 60th birthday: Combinatorics, Groups, Algorithms, and Complexity. For an integer constant d > 0, let Γ_d denote the class of finite groups all of whose nonabelian composition factors lie in S_d; in particular, Γ_d includes all solvable groups. Motivated by applications to graph-isomorphism testing, there has been extensive study of the complexity of computation for permutation groups in this class. In particular, the problems of finding set stabilizers, intersections and centralizers have all been shown to be polynomial-time computable. A notable open issue for the class Γ_d has been the question of whether normalizers can be found in polynomial time. We resolve this question in the affirmative. We prove that, given permutation groups G, H ≤ Sym(Ω) such that G ∈ Γ_d, the normalizer of H in G can be found in polynomial time. Among other new procedures, our method includes a key subroutine to solve the problem of finding stabilizers of subspaces in linear representations of permutation groups in Γ_d.
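    For contrast with the polynomial-time result claimed above, the normalizer N_G(H) = {g ∈ G : gHg⁻¹ = H} can always be computed by exhaustive search over the elements of G, which is exponential in general. The sketch below does exactly that for small permutation groups given by generators; the tuple representation and helper names are illustrative assumptions and have nothing to do with the paper's algorithm.

    from itertools import product

    # Permutations on {0, ..., n-1} are tuples p with p[i] = image of i.
    def compose(p, q):
        """Return the permutation 'first apply q, then p'."""
        return tuple(p[q[i]] for i in range(len(p)))

    def inverse(p):
        inv = [0] * len(p)
        for i, image in enumerate(p):
            inv[image] = i
        return tuple(inv)

    def closure(gens):
        """All elements generated by gens (breadth-first; fine for tiny groups)."""
        identity = tuple(range(len(gens[0])))
        elements, frontier = {identity}, [identity]
        while frontier:
            nxt = []
            for g, s in product(frontier, gens):
                e = compose(s, g)
                if e not in elements:
                    elements.add(e)
                    nxt.append(e)
            frontier = nxt
        return elements

    def normalizer(G_gens, H_gens):
        """Brute-force N_G(H): all g in G with g H g^-1 = H (exponential in |G|)."""
        G, H = closure(G_gens), closure(H_gens)
        return {g for g in G
                if {compose(compose(g, h), inverse(g)) for h in H} == H}

    # Example: G = S_4 generated by a transposition and a 4-cycle,
    # H generated by the 4-cycle (0 1 2 3); N_G(H) is dihedral of order 8.
    G_gens = [(1, 0, 2, 3), (1, 2, 3, 0)]
    H_gens = [(1, 2, 3, 0)]
    print(len(normalizer(G_gens, H_gens)))  # 8

    The point of the paper is precisely to avoid this element-by-element enumeration for groups in Γ_d.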

    Canonicalisation of monotone SPARQL queries

    No full text
    Caching in the context of expressive query languages such as SPARQL is complicated by the difficulty of detecting equivalent queries: deciding if two conjunctive queries are equivalent is NP-complete, and adding further query features makes the problem undecidable. Despite this complexity, in this paper we propose an algorithm that performs syntactic canonicalisation of SPARQL queries such that the answers of the canonicalised query do not change with respect to the original. We can guarantee that the canonicalisations of two queries within a core fragment of SPARQL (monotone queries with select, project, join and union) are equal if and only if the two queries are equivalent; we also support other SPARQL features, but with a weaker soundness guarantee: that the (partially) canonicalised query is equivalent to the input query. Despite the fact that canonicalisation must be harder than the equivalence problem, we show the algorithm to be practical for real-world queries taken from SPARQL endpoint logs, and further show that it detects more equivalent queries than purely syntactic methods. We also present the results of experiments over synthetic queries designed to stress-test the canonicalisation method, highlighting difficult cases.
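    To make the canonical-form idea concrete: for a tiny conjunctive pattern (a set of triple patterns), one canonical form is the lexicographically smallest serialisation over all renamings of its variables, so two patterns that differ only in variable names and triple order map to the same form. The brute-force Python sketch below illustrates only that idea; it enumerates every variable bijection, which is factorial in the number of variables, and is not the paper's algorithm.

    from itertools import permutations

    def canonicalise_bgp(triples):
        """Toy canonical form of a basic graph pattern.

        triples: iterable of (s, p, o) strings; variables start with '?'.
        Returns a tuple of triples that is identical for any two patterns
        that differ only by variable renaming and triple order.
        Brute force: tries every bijection from the pattern's variables to
        canonical names ?v0, ?v1, ... and keeps the smallest result.
        """
        triples = list(triples)
        variables = sorted({t for tr in triples for t in tr if t.startswith("?")})
        canonical_names = [f"?v{i}" for i in range(len(variables))]

        best = None
        for perm in permutations(canonical_names):
            rename = dict(zip(variables, perm))
            candidate = tuple(sorted(
                tuple(rename.get(term, term) for term in tr) for tr in triples
            ))
            if best is None or candidate < best:
                best = candidate
        return best

    # Two patterns differing only in variable names and triple order agree.
    q1 = [("?x", ":knows", "?y"), ("?y", ":name", '"Ann"')]
    q2 = [("?b", ":name", '"Ann"'), ("?a", ":knows", "?b")]
    print(canonicalise_bgp(q1) == canonicalise_bgp(q2))  # True

    A practical canonicaliser must of course avoid this factorial enumeration and additionally handle projection, union and the other SPARQL features discussed in the paper.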