57 research outputs found

    Computational complexity of reconstruction and isomorphism testing for designs and line graphs

    Graphs with high symmetry or regularity are the main source of experimentally hard instances of the notoriously difficult graph isomorphism problem. In this paper, we study the computational complexity of isomorphism testing for line graphs of $t$-$(v,k,\lambda)$ designs. For this class of highly regular graphs, we obtain a worst-case running time of $O(v^{\log v + O(1)})$ for bounded parameters $t$, $k$, $\lambda$. In a first step, our approach uses the Babai--Luks algorithm to compute canonical forms of $t$-designs. In a second step, we show that $t$-designs can be reconstructed from their line graphs in polynomial time. The first step is algebraic in nature, the second purely combinatorial; both require profound structural knowledge from design theory. Our results extend earlier complexity results on isomorphism testing of graphs generated from Steiner triple systems and block designs. (12 pages; to appear in Journal of Combinatorial Theory, Series A.)
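    As a toy illustration of the objects involved (a hedged sketch, not the paper's algorithm), the Python fragment below builds the line graph of a design, with one vertex per block and an edge whenever two blocks share a point, for the Fano plane, the unique 2-(7,3,1) design, and compares two such graphs with a naive brute-force isomorphism test. All helper names are ours, and the test is only feasible for very small graphs.

        from itertools import permutations

        def line_graph(blocks):
            """One vertex per block; an edge whenever two blocks intersect."""
            n = len(blocks)
            edges = set()
            for i in range(n):
                for j in range(i + 1, n):
                    if blocks[i] & blocks[j]:
                        edges.add((i, j))
            return n, edges

        def isomorphic(g1, g2):
            """Naive exponential isomorphism test; only sensible for tiny graphs."""
            (n1, e1), (n2, e2) = g1, g2
            if n1 != n2 or len(e1) != len(e2):
                return False
            for p in permutations(range(n1)):
                if {tuple(sorted((p[a], p[b]))) for a, b in e1} == e2:
                    return True
            return False

        # Fano plane: 7 points, 7 blocks of size 3; every pair of points lies in exactly one block.
        fano = [frozenset(b) for b in
                [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5)]]
        print(isomorphic(line_graph(fano), line_graph(fano)))  # True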

    The Minimum Generating Set Problem

    Let $G$ be a finite group. In order to determine the smallest cardinality $d(G)$ of a generating set of $G$, together with a generating set of that cardinality, one must repeatedly test whether a small subset of $G$ generates $G$. We prove that if a chief series of $G$ is known, then the number of these generating tests can be drastically reduced: at most $|G|^{13/5}$ subsets must be tested. This implies that the minimum generating set problem for a finite group $G$ can be solved in polynomial time.
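    As a hedged illustration of the naive approach the abstract improves upon, the sketch below computes $d(G)$ for a tiny group by testing subsets of increasing size until one generates the whole group; the group chosen is $S_3$ represented as permutation tuples, and all names are ours.

        from itertools import permutations, combinations

        def compose(p, q):
            """Permutation composition: (p * q)(x) = p(q(x))."""
            return tuple(p[q[i]] for i in range(len(q)))

        def generated_subgroup(gens, identity):
            """Close a set of permutations under composition (fine for tiny groups)."""
            elems, frontier = {identity}, {identity}
            while frontier:
                new = {compose(a, g) for a in frontier for g in gens} - elems
                elems |= new
                frontier = new
            return elems

        def minimum_generating_set(group, identity):
            """Brute force: test subsets of increasing size until one generates the group."""
            for size in range(1, len(group) + 1):
                for subset in combinations(group, size):
                    if generated_subgroup(subset, identity) == set(group):
                        return subset
            return tuple(group)

        S3 = list(permutations(range(3)))       # all 6 permutations of {0, 1, 2}
        gens = minimum_generating_set(S3, (0, 1, 2))
        print(len(gens), gens)                  # d(S_3) = 2; prints one smallest generating set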

    Subject Index Volumes 1–200


    ON EQUIVALENCY REASONING FOR CONFLICT DRIVEN CLAUSE LEARNING SATISFIABILITY SOLVERS

    The satisfiability problem, or SAT, is the problem of deciding whether a Boolean function evaluates to true for at least one of the assignments in its domain. It was the first problem proved NP-complete; therefore, every problem in NP can be encoded into SAT instances, and many hard real-world problems can be solved when encoded efficiently into SAT. These facts give SAT an important place in both theoretical and practical computer science. In this thesis we address the problem of integrating a special class of equivalency reasoning techniques, strongly connected component (SCC) based reasoning, into the class of conflict-driven clause learning (CDCL) SAT solvers. Because of the complications that arise from integrating equivalency reasoning into CDCL SAT solvers, to our knowledge there has been no CDCL solver that applies SCC-based equivalency reasoning dynamically during the search. We propose a method to overcome these complications. The method is integrated into a prominent satisfiability solver, MiniSat. The equivalency-enhanced MiniSat, Eq-MiniSat, is used to explore the advantages and disadvantages of equivalency reasoning in conflict-driven clause learning satisfiability solvers. Different implementation approaches for Eq-MiniSat are discussed. The experimental results on 16 families of instances show that equivalency reasoning has no noticeable effect on one family, enables Eq-MiniSat to outperform MiniSat on eight families, and leaves MiniSat ahead of Eq-MiniSat on the remaining seven. The experimental results for random instances demonstrate that in almost all cases the number of branchings for Eq-MiniSat is smaller than for MiniSat.
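    The SCC idea behind equivalent-literal reasoning can be illustrated as follows (a minimal sketch, not Eq-MiniSat's implementation): each binary clause (a OR b) contributes the implications (NOT a -> b) and (NOT b -> a), and any literals lying in one strongly connected component of the resulting implication graph must take the same truth value, so they can be merged. All names below are ours.

        from collections import defaultdict

        def implication_graph(binary_clauses):
            """Clause (a OR b) gives the implications NOT a -> b and NOT b -> a."""
            graph = defaultdict(set)
            for a, b in binary_clauses:
                graph[-a].add(b)
                graph[-b].add(a)
            return graph

        def equivalent_literal_classes(binary_clauses):
            """Group literals by the SCCs of the implication graph (Tarjan's algorithm)."""
            graph = implication_graph(binary_clauses)
            index, low, on_stack, stack, sccs = {}, {}, set(), [], []

            def strongconnect(v):
                index[v] = low[v] = len(index)
                stack.append(v)
                on_stack.add(v)
                for w in graph[v]:
                    if w not in index:
                        strongconnect(w)
                        low[v] = min(low[v], low[w])
                    elif w in on_stack:
                        low[v] = min(low[v], index[w])
                if low[v] == index[v]:              # v is the root of an SCC
                    comp = []
                    while True:
                        w = stack.pop()
                        on_stack.discard(w)
                        comp.append(w)
                        if w == v:
                            break
                    sccs.append(comp)

            for v in list(graph):
                if v not in index:
                    strongconnect(v)
            return [c for c in sccs if len(c) > 1]   # non-trivial equivalence classes

        # Literals are nonzero integers in DIMACS style (-x means NOT x).
        # The clauses (x1 OR NOT x2) and (x2 OR NOT x1) force x1 == x2.
        print(equivalent_literal_classes([(1, -2), (2, -1)]))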

    Linear Space Data Structures for Finite Groups with Constant Query-Time


    Automated theory formation in pure mathematics

    The automation of specific mathematical tasks such as theorem proving and algebraic manipulation has been much researched. However, there have been only a few isolated attempts to automate the whole theory formation process. Such a process involves forming new concepts, performing calculations, making conjectures, proving theorems and finding counterexamples. Previous programs which perform theory formation are limited in their functionality and their generality. We introduce the HR program, which implements a new model for theory formation. This model involves a cycle of mathematical activity, whereby concepts are formed, conjectures about the concepts are made and attempts to settle the conjectures are undertaken. HR has seven general production rules for producing a new concept from old ones and employs a best-first search, building new concepts from the most interesting old ones. To enable this, HR has various measures which estimate the interestingness of a concept. During concept formation, HR uses empirical evidence to suggest conjectures and employs the Otter theorem prover to attempt to prove a given conjecture. If this fails, HR invokes the MACE model generator to attempt to disprove the conjecture by finding a counterexample. Information and new knowledge arising from the attempt to settle a conjecture is used to assess the concepts involved in the conjecture, which fuels the heuristic search and closes the cycle. The main aim of the project has been to develop our model of theory formation and to implement it in HR. To describe the project in the thesis, we first motivate the problem of automated theory formation and survey the literature in this area. We then discuss how HR invents concepts, makes and settles conjectures, and how it assesses the concepts and conjectures to facilitate a heuristic search. We present results to evaluate HR in terms of the quality of the theories it produces and the effectiveness of its techniques. A secondary aim of the project has been to apply HR to mathematical discovery, and we discuss how HR has successfully invented new concepts and conjectures in number theory.
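    The cycle described above can be caricatured in a few lines (a deliberately loose toy sketch, not the HR system; empirical re-testing stands in for Otter and MACE): concepts are integer functions, one production rule composes two old concepts into a new one, and a conjecture of equality is suggested by agreement on a small sample and then attacked by searching a larger range for a counterexample.

        EXAMPLES = range(1, 30)         # empirical data used to suggest conjectures
        SEARCH = range(1, 2000)         # wider search used to hunt for counterexamples

        def compose(f, g):
            """Production rule: build a new concept by composing two old ones."""
            return lambda n: f(g(n))

        def conjecture_equal(f, g):
            """Suggest 'f = g' if they agree on all examples, then try to refute it."""
            if not all(f(n) == g(n) for n in EXAMPLES):
                return ("not suggested", None)
            counterexample = next((n for n in SEARCH if f(n) != g(n)), None)
            return ("open", None) if counterexample is None else ("refuted", counterexample)

        def double(n): return 2 * n
        def successor(n): return n + 1
        def half_floor(n): return n // 2

        concepts = [double, successor, half_floor]
        concepts += [compose(f, g) for f in list(concepts) for g in list(concepts) if f is not g]

        # half_floor(double(n)) = n survives the counterexample search and stays open,
        # while double(half_floor(n)) = n already fails on the sample and is never suggested.
        print(conjecture_equal(compose(half_floor, double), lambda n: n))
        print(conjecture_equal(compose(double, half_floor), lambda n: n))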