
    Breaking Instance-Independent Symmetries In Exact Graph Coloring

    Code optimization and high-level synthesis can be posed as constraint satisfaction and optimization problems, such as graph coloring used in register allocation. Graph coloring is also used to model more traditional CSPs relevant to AI, such as planning, time-tabling and scheduling. Provably optimal solutions may be desirable for commercial and defense applications. Additionally, for applications such as register allocation and code optimization, naturally occurring instances of graph coloring are often small and can be solved optimally. A recent wave of improvements in algorithms for Boolean satisfiability (SAT) and 0-1 Integer Linear Programming (ILP) suggests generic problem-reduction methods, rather than problem-specific heuristics, because (1) heuristics may be upset by new constraints, (2) heuristics tend to ignore structure, and (3) many relevant problems are provably inapproximable. Problem reductions often lead to highly symmetric SAT instances, and symmetries are known to slow down SAT solvers. In this work, we compare several avenues for symmetry breaking, in particular when certain kinds of symmetry are present in all generated instances. Our focus on reducing CSPs to SAT allows us to leverage recent dramatic improvements in SAT solvers and to benefit automatically from future progress. We can use a variety of black-box SAT solvers without modifying their source code because our symmetry-breaking techniques are static, i.e., we detect symmetries and add symmetry-breaking predicates (SBPs) during pre-processing. An important result of our work is that among the types of instance-independent SBPs we studied and their combinations, the simplest and least complete constructions are the most effective. Our experiments also clearly indicate that instance-independent symmetries should mostly be processed together with instance-specific symmetries rather than at the specification level, contrary to what has been suggested in the literature.
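    As a concrete illustration of the reduce-then-add-static-SBPs pipeline the abstract describes, the sketch below encodes graph k-coloring as CNF clauses and appends one very simple instance-independent symmetry-breaking clause (pinning the first vertex to the first color). This is a minimal sketch with assumed naming; the paper's actual SBP constructions and tooling are not reproduced here.

```python
from itertools import combinations

def coloring_to_cnf(n_vertices, edges, k, add_sbp=True):
    """Encode k-coloring as CNF clauses over DIMACS-style integer literals.

    Variable x(v, c), true iff vertex v receives color c, gets index v*k + c + 1.
    """
    def var(v, c):
        return v * k + c + 1

    clauses = []
    for v in range(n_vertices):
        # each vertex receives at least one color ...
        clauses.append([var(v, c) for c in range(k)])
        # ... and at most one color
        for c1, c2 in combinations(range(k), 2):
            clauses.append([-var(v, c1), -var(v, c2)])
    for u, w in edges:
        # adjacent vertices receive different colors
        for c in range(k):
            clauses.append([-var(u, c), -var(w, c)])
    if add_sbp:
        # simplest instance-independent SBP: pin vertex 0 to color 0, breaking
        # part of the color-permutation symmetry shared by every generated instance
        clauses.append([var(0, 0)])
    return clauses

# tiny usage example: a triangle requires 3 colors
print(len(coloring_to_cnf(3, [(0, 1), (1, 2), (0, 2)], k=3)), "clauses")
```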

    An extensive English language bibliography on graph theory and its applications, supplement 1

    Graph theory and its applications - bibliography, supplement

    Increasing and Decreasing Sequences of Length Two in 01-Fillings of Moon Polyominoes

    We put recent results on the symmetry of the joint distribution of the numbers of crossings and nestings of two edges over matchings, set partitions and linked partitions into the larger context of the enumeration of increasing and decreasing chains of length 2 in fillings of moon polyominoes. Comment: This is an updated version of a preprint entitled "On the symmetry of ascents and descents over 01-fillings of moon polyominoes". 19 pages.
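    For readers unfamiliar with the statistics involved, the short sketch below counts crossing and nesting pairs of arcs in a matching, the simplest of the three families mentioned above; it is illustrative only and does not implement anything from the paper.

```python
from itertools import combinations

def crossings_and_nestings(matching):
    """Count crossing and nesting pairs of arcs in a matching on {1, ..., 2n}.

    Writing each arc as (i, j) with i < j, two arcs (a, b), (c, d) with a < c
    cross if a < c < b < d and nest if a < c < d < b.
    """
    cr = ne = 0
    arcs = sorted(tuple(sorted(e)) for e in matching)
    for (a, b), (c, d) in combinations(arcs, 2):
        if a < c < b < d:
            cr += 1
        elif a < c < d < b:
            ne += 1
    return cr, ne

# the matching {(1,4), (2,6), (3,5)} has two crossings and one nesting
print(crossings_and_nestings([(1, 4), (2, 6), (3, 5)]))  # (2, 1)
```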

    Quantum Algorithms for Graph Coloring and other Partitioning, Covering, and Packing Problems

    Let U be a universe on n elements, let k be a positive integer, and let F be a family of (implicitly defined) subsets of U. We consider the problems of partitioning U into k sets from F, covering U with k sets from F, and packing k non-intersecting sets from F into U. Classically, these problems can be solved via inclusion-exclusion in O*(2^n) time [BjorklundHK09]. Quantumly, there are faster algorithms for graph coloring with running time O(1.9140^n) [ShimizuM22] and for Set Cover with a small number of sets with running time O(1.7274^n |F|^O(1)) [AmbainisBIKPV19]. In this paper, we give a quantum speedup for Set Partition, Set Cover, and Set Packing whenever there is a classical enumeration algorithm that lends itself to a quadratic quantum speedup, which, for any subinstance on a subset X of U, enumerates at least one member of a k-partition, k-cover, or k-packing (if one exists) restricted to (or projected onto, in the case of k-cover) the set X in O*(c^{|X|}) time with c<2. Our bounded-error quantum algorithm runs in O*((2+c)^(n/2)) time for Set Partition, Set Cover, and Set Packing. When c<=1.147899, our algorithm is slightly faster than O*((2+c)^(n/2)); when c approaches 1, it matches the running time of [AmbainisBIKPV19] for Set Cover when |F| is subexponential in n. For Graph Coloring, we further improve the running time to O(1.7956^n) by leveraging faster algorithms for coloring with a small number of colors to better balance our divide-and-conquer steps. For Domatic Number, we obtain an O((2-\epsilon)^n) running time for some \epsilon>0.
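    The classical O*(2^n) inclusion-exclusion baseline cited above [BjorklundHK09] is easy to state for graph coloring: G is k-colorable iff the signed sum over vertex subsets X of i(X)^k is positive, where i(X) counts independent sets contained in X. The sketch below checks that formula directly; it is a naive illustration only (the inner count is done by brute force rather than by the fast zeta transform) and is not the paper's quantum algorithm.

```python
from itertools import combinations

def is_independent(vertices, edges):
    vs = set(vertices)
    return not any(u in vs and v in vs for u, v in edges)

def num_independent_subsets(X, edges):
    """Number of independent sets (including the empty set) contained in X."""
    return sum(1
               for r in range(len(X) + 1)
               for S in combinations(X, r)
               if is_independent(S, edges))

def k_colorable(n, edges, k):
    """G is k-colorable iff sum over X of (-1)^(n-|X|) * i(X)^k is positive."""
    V = list(range(n))
    total = 0
    for r in range(n + 1):
        for X in combinations(V, r):
            total += (-1) ** (n - r) * num_independent_subsets(X, edges) ** k
    return total > 0

# usage: a 5-cycle is 3-colorable but not 2-colorable
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(k_colorable(5, c5, 2), k_colorable(5, c5, 3))  # False True
```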

    Exact Algorithms via Multivariate Subroutines

    We consider the family of Phi-Subset problems, where the input consists of an instance I of size N over a universe U_I of size n, and the task is to check whether the universe contains a subset with property Phi (e.g., Phi could be the property of being a feedback vertex set of size at most k for the input graph). Our main tool is a simple randomized algorithm which solves Phi-Subset in time (1+b-(1/c))^n N^(O(1)), provided that there is an algorithm for the Phi-Extension problem with running time b^{n-|X|} c^k N^{O(1)}. Here, the input for Phi-Extension is an instance I of size N over a universe U_I of size n, a subset X subseteq U_I, and an integer k, and the task is to check whether there is a set Y with X subseteq Y subseteq U_I and |Y \ X| <= k with property Phi. We derandomize this algorithm at the cost of increasing the running time by a subexponential factor in n, and we adapt it to the enumeration setting, where we need to enumerate all subsets of the universe with property Phi. This generalizes the results of Fomin et al. [STOC 2016], who proved the case where b=1. As case studies, we use these results to design faster deterministic algorithms for:
    - checking whether a graph has a feedback vertex set of size at most k,
    - enumerating all minimal feedback vertex sets,
    - enumerating all minimal vertex covers of size at most k, and
    - enumerating all minimal 3-hitting sets.
    We obtain these results by deriving new b^{n-|X|} c^k N^{O(1)}-time algorithms for the corresponding Phi-Extension problems (or their enumeration variants), in some cases by adapting the analysis of an existing algorithm and in others by designing a new algorithm. Our analyses are based on Measure and Conquer, but the value to minimize, 1+b-(1/c), is unconventional and requires non-convex optimization.
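    The wrapper described above can be pictured, in its simplest (b = 1, monotone local search) form, as random sampling around an extension oracle. The sketch below is a reconstruction under that simplification, with an assumed extend(X, budget) callback standing in for the Phi-Extension algorithm; it omits the multivariate b^{n-|X|} factor and the Measure and Conquer analysis the paper actually relies on.

```python
import math
import random

def monotone_local_search(universe, k, c, extend):
    """Randomized subset search via an extension oracle (b = 1 sketch).

    `universe` is a list of elements; `extend(X, budget)` should look for a
    superset Y of X with |Y \ X| <= budget and property Phi (assumed to run in
    roughly c^budget time) and return Y or None.
    """
    n = len(universe)
    for s in range(k + 1):  # assumed size of the hidden solution
        # choose t to balance sampling cost C(n,t)/C(s,t) against c^(s-t)
        t = min(range(s + 1),
                key=lambda t: math.comb(n, t) / math.comb(s, t) * c ** (s - t))
        trials = 3 * math.ceil(math.comb(n, t) / math.comb(s, t))
        for _ in range(trials):
            # a uniform random t-subset lands inside a size-s solution with
            # probability C(s,t)/C(n,t); then the extension call succeeds
            X = set(random.sample(universe, t))
            Y = extend(X, s - t)
            if Y is not None:
                return Y
    return None
```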

    Development and In Silico Evaluation of Large-Scale Metabolite Identification Methods Using Functional Group Detection for Metabolomics

    Large-scale identification of metabolites is key to elucidating and modeling metabolism at the systems level. Advances in metabolomics technologies, particularly ultra-high resolution mass spectrometry (MS), enable comprehensive and rapid analysis of metabolites. However, a significant barrier to meaningful data interpretation is the identification of a wide range of metabolites, including unknowns, and the determination of their role(s) in various metabolic networks. Chemoselective (CS) probes to tag metabolite functional groups, combined with high mass accuracy, provide additional structural constraints for metabolite identification and quantification. We have developed a novel algorithm, Chemically Aware Substructure Search (CASS), that efficiently detects functional groups within existing metabolite databases, allowing for combined molecular formula and functional group (from CS tagging) queries to aid in metabolite identification without a priori knowledge. Analysis of the isomeric compounds in both the Human Metabolome Database (HMDB) and KEGG Ligand demonstrated a high percentage of isomeric molecular formulae (43 and 28%, respectively), indicating the necessity for techniques such as CS-tagging. Furthermore, these two databases have only moderate overlap in molecular formulae. Thus, it is prudent to use multiple databases in metabolite assignment, since each major metabolite database represents different portions of metabolism within the biosphere. In silico analysis of various CS-tagging strategies under different conditions for adduct formation demonstrates that combined FT-MS-derived molecular formulae and CS-tagging can uniquely identify up to 71% of KEGG and 37% of the combined KEGG/HMDB database, vs. 41 and 17%, respectively, without adduct formation. This difference in isomer disambiguation between databases highlights the strength of CS-tagging for non-lipid metabolite identification. However, unique identification of complex lipids still needs additional information.
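    The combined query that CS-tagging enables can be illustrated with a toy filter over a hand-made metabolite table: candidates must match both the molecular formula and the tagged functional-group counts. The records and field names below are hypothetical, not the actual HMDB/KEGG data or the CASS schema.

```python
# Hypothetical metabolite records: formula plus functional-group counts.
records = [
    {"id": "M1", "formula": "C6H13NO2", "groups": {"amine": 1, "carboxyl": 1}},
    {"id": "M2", "formula": "C6H13NO2", "groups": {"amine": 2, "carboxyl": 0}},
    {"id": "M3", "formula": "C6H12O6",  "groups": {"carbonyl": 1}},
]

def match(formula, required_groups):
    """Return IDs whose formula and functional-group counts both fit the query."""
    return [r["id"]
            for r in records
            if r["formula"] == formula
            and all(r["groups"].get(g, 0) == n for g, n in required_groups.items())]

# Formula alone is ambiguous (M1 vs M2); adding the amine count obtained from a
# chemoselective tag resolves the isomers.
print(match("C6H13NO2", {}))            # ['M1', 'M2']
print(match("C6H13NO2", {"amine": 1}))  # ['M1']
```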

    Solving Hard Computational Problems Efficiently: Asymptotic Parametric Complexity 3-Coloring Algorithm

    Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identifying siblings, or discovery of dysregulated pathways. In almost all of these problems, there is the need to prove a hypothesis about a certain property of an object that can hold only when the object adopts some particular admissible structure (an NP-certificate) or fail to hold (no admissible structure); however, none of the standard approaches can discard the hypothesis when no solution is found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), though parametric. The only requirement is sufficient computational power, which is controlled by the parameter $\alpha \in \mathbb{N}$. Nevertheless, it is proved here that the probability of requiring a value of $\alpha > k$ to obtain a solution for a random graph decreases exponentially: $P(\alpha > k) \leq 2^{-(k+1)}$, making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The experimental results obtained are in accordance with the theoretically expected results. Comment: Working paper.
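    The abstract does not detail the algorithm itself, so for context only, here is the standard backtracking baseline it contrasts against: a positive answer comes with a coloring as certificate, but a negative answer is established only by exhausting the search, with no succinct proof of absence. This sketch is explicitly not the paper's method.

```python
def three_color(n, edges):
    """Plain backtracking 3-coloring: returns a coloring (positive certificate)
    if one exists, else None (non-existence shown only by exhaustive search)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    colors = [None] * n

    def backtrack(v):
        if v == n:
            return True
        for c in range(3):
            if all(colors[w] != c for w in adj[v]):
                colors[v] = c
                if backtrack(v + 1):
                    return True
        colors[v] = None
        return False

    return colors[:] if backtrack(0) else None

# K4 is not 3-colorable; removing one edge makes it 3-colorable
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(three_color(4, k4))        # None
print(three_color(4, k4[:-1]))   # e.g. [0, 1, 2, 2]
```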

    Development of an automated aircraft subsystem architecture generation and analysis tool

    Purpose – The purpose of this paper is to present a new computational framework to address future preliminary design needs for aircraft subsystems. The ability to investigate multiple candidate technologies forming subsystem architectures is enabled by the provision of automated architecture generation, analysis and optimization. The main focus lies with a demonstration of the framework's workings, as well as the optimizer's performance on a typical form of application problem.
    Design/methodology/approach – The core aspects involve a functional decomposition, coupled with a synergistic mission performance analysis on the aircraft, architecture and component levels. This may be followed by a complete enumeration of architectures, combined with a user-defined technology filtering and concept ranking procedure. In addition, a hybrid heuristic optimizer, based on ant systems optimization and a genetic algorithm, is employed to produce optimal architectures in both component composition and design parameters. The optimizer is tested on a generic architecture design problem combined with modified Griewank and parabolic functions for the continuous space.
    Findings – Insights from the generalized application problem show consistent rediscovery of the optimal architectures by the optimizer, as compared to a full problem enumeration. In addition, multi-objective optimization reveals a Pareto front with differences in component composition as well as continuous parameters.
    Research limitations/implications – This paper demonstrates the framework's application on a generalized test problem only. Further publications will consider real engineering design problems.
    Originality/value – The paper addresses the need for future conceptual design methods of complex systems to consider a mixed concept space of both discrete and continuous nature via automated methods.
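    The ant-system/GA hybrid optimizer itself is not reproduced here; as a rough illustration of the kind of mixed discrete-continuous test problem described, the sketch below enumerates a tiny, made-up architecture space and runs a crude random search on a Griewank term for each candidate. All option names and offsets are hypothetical, and the standard Griewank function stands in for the paper's modified variant.

```python
import math
import random
from itertools import product

def griewank(x):
    """Standard Griewank test function."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return 1.0 + s - p

# Hypothetical discrete architecture choices, one technology per subsystem;
# each choice shifts the continuous landscape, mimicking a mixed design space.
options = {"actuation": ["hydraulic", "electric"], "power": ["bleed", "bleedless"]}
offsets = {"hydraulic": 0.0, "electric": 1.5, "bleed": 0.0, "bleedless": -2.0}

def evaluate(architecture, params):
    shifted = [p + offsets[c] for p, c in zip(params, architecture)]
    return griewank(shifted)

best = None
for arch in product(*options.values()):        # full enumeration of the discrete space
    for _ in range(2000):                      # crude random search on the continuous part
        params = [random.uniform(-5, 5) for _ in arch]
        score = evaluate(arch, params)
        if best is None or score < best[0]:
            best = (score, arch, params)
print(best)
```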