
    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    Local geometry of NAE-SAT solutions in the condensation regime

    The local behavior of typical solutions of random constraint satisfaction problems (CSPs) describes many important phenomena, including clustering thresholds, decay of correlations, and the behavior of message passing algorithms. When the constraint density is low, studying the planted model is a powerful technique for determining this local behavior, which in many examples has a simple Markovian structure. Work of Coja-Oghlan, Kapetanopoulos, and Müller (2020) showed that for a wide class of models, this description applies up to the so-called condensation threshold. Understanding the local behavior beyond the condensation threshold is more complex due to long-range correlations. In this work, we revisit the random regular NAE-SAT model in the condensation regime and determine the local weak limit which describes a random solution around a typical variable. This limit exhibits a complicated non-Markovian structure arising from the space of solutions being dominated by a small number of large clusters, a result rigorously verified by Nam, Sly, and Sohn (2021). This is the first characterization of the local weak limit in the condensation regime for any sparse random CSP in the so-called one-step replica symmetry breaking (1RSB) class. Our result is non-asymptotic and characterizes the tight O(n^{-1/2}) fluctuation around the limit. Our proof is based on coupling the local neighborhoods of an infinite spin system, which encodes the structure of the clusters, to a broadcast model on trees whose channel is given by the 1RSB belief-propagation fixed point. We believe that our proof technique has broad applicability to random CSPs in the 1RSB class.
    Comment: 43 pages, 2 figures
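
    The broadcast channel in the actual proof is the 1RSB belief-propagation fixed point, which is substantially more involved; purely as a toy illustration of a broadcast model on a tree, the Python sketch below propagates a root spin down a regular tree, each child independently flipping its parent's spin with a fixed probability (all parameters hypothetical).

        import random

        def broadcast(depth, branching, flip_prob, root_spin=1):
            """Propagate a root spin down a regular tree: each child copies
            its parent's spin, flipped independently with probability
            flip_prob. Returns the spins at the given depth."""
            spins = [root_spin]
            for _ in range(depth):
                spins = [-s if random.random() < flip_prob else s
                         for s in spins for _ in range(branching)]
            return spins

        # Fraction of depth-5 vertices agreeing with the root.
        leaves = broadcast(depth=5, branching=2, flip_prob=0.1)
        print(sum(s == 1 for s in leaves) / len(leaves))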

    Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth

    Dynamic programming on various graph decompositions is one of the most fundamental techniques used in parameterized complexity. Unfortunately, even for concepts as simple as path or tree decompositions, such dynamic programming uses space exponential in the decomposition's width, and there are good reasons to believe that this is necessary. However, it has been shown that in graphs of low treedepth it is possible to design algorithms which achieve polynomial space complexity without requiring worse time complexity than their counterparts working on tree decompositions of bounded width. Here, treedepth is a graph parameter that, intuitively speaking, takes into account both the depth and the width of a tree decomposition of the graph, rather than the width alone. Motivated by the above, we consider graphs that admit clique expressions with bounded depth and label count, or equivalently, graphs of low shrubdepth; shrubdepth is a bounded-depth analogue of cliquewidth, in the same way as treedepth is a bounded-depth analogue of treewidth. We show that also in this setting, bounding the depth of the decomposition is the deciding factor for improving the space complexity. Precisely, we prove that on n-vertex graphs equipped with a tree-model (the decomposition notion underlying shrubdepth) of depth d using k labels, we can solve
    - Independent Set in time 2^O(dk) n^O(1) using O(dk² log n) space;
    - Max Cut in time n^O(dk) using O(dk log n) space; and
    - Dominating Set in time 2^O(dk) n^O(1) using n^O(1) space, via a randomized algorithm.
    We also establish a lower bound, conditional on a certain assumption about the complexity of Longest Common Subsequence, which shows that at least in the case of Independent Set, the exponent of the parametric factor in the time complexity has to grow with d if one wishes to keep the space complexity polynomial.
    Comment: Conference version to appear at the European Symposium on Algorithms (ESA 2023)
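
    The paper's algorithms work on tree-models with labels; as a much simpler illustration of the kind of bottom-up dynamic programming whose space usage is at stake, the sketch below solves Maximum Independent Set on a plain tree with the classic two-state DP. This is the textbook baseline, not the paper's algorithm.

        def max_independent_set_tree(adj, root=0):
            """Two-state DP on a tree: for each vertex v compute
            (best size in v's subtree with v excluded,
             best size in v's subtree with v included)."""
            def dp(v, parent):
                exclude, include = 0, 1
                for u in adj[v]:
                    if u == parent:
                        continue
                    ex_u, in_u = dp(u, v)
                    exclude += max(ex_u, in_u)  # child is free either way
                    include += ex_u             # child must be excluded
                return exclude, include
            return max(dp(root, -1))

        # Path on 5 vertices: the maximum independent set has size 3.
        adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
        print(max_independent_set_tree(adj))  # 3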

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented after the dissemination of the fourth volume in 2015, in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered.
    The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
    Because more applications of DSmT have emerged in the years since the fourth volume appeared in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
    Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negation of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
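
    As a small, self-contained taste of the PCR rules discussed in the first part, here is a minimal Python sketch of PCR5 for two sources whose focal elements are pairwise disjoint singletons; the general rule over arbitrary focal elements (and PCR6 for more than two sources) is more involved, and the volume's Matlab and RUST codes are the authoritative implementations.

        def pcr5(m1, m2):
            """PCR5 combination of two basic belief assignments whose focal
            elements are pairwise disjoint singletons. Agreeing mass is kept
            conjunctively; each partial conflict m1(x)*m2(y), x != y, is
            redistributed back to x and y proportionally to the masses
            involved."""
            out = {h: 0.0 for h in set(m1) | set(m2)}
            for x, a in m1.items():
                for y, b in m2.items():
                    if x == y:
                        out[x] += a * b                # agreement
                    else:                              # conflict a*b
                        out[x] += a * a * b / (a + b)  # x's proportional share
                        out[y] += a * b * b / (a + b)  # y's proportional share
            return out

        m1 = {"A": 0.6, "B": 0.4}
        m2 = {"A": 0.2, "B": 0.8}
        print(pcr5(m1, m2))  # combined masses still sum to 1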

    Finding a Maximum Restricted t-Matching via Boolean Edge-CSP

    The problem of finding a maximum 2-matching without short cycles has received significant attention due to its relevance to the Hamilton cycle problem. This problem generalizes to finding a maximum t-matching which excludes specified complete t-partite subgraphs, where t is a fixed positive integer. The polynomial solvability of this generalized problem remains an open question. In this paper, we present polynomial-time algorithms for two cases of this problem: in the first case, the forbidden complete t-partite subgraphs are edge-disjoint; in the second case, the maximum degree of the input graph is at most 2t-1. Our result for the first case extends the previous work of Nam (1994), which showed the polynomial solvability of finding a maximum 2-matching without cycles of length four when those cycles are vertex-disjoint. The second result expands upon the works of Bérczi and Végh (2010) and Kobayashi and Yin (2012), which focused on graphs with maximum degree at most t+1. Our algorithms are obtained by exploiting the discrete structure of restricted t-matchings and employing an algorithm for the Boolean edge-CSP.
    Comment: 20 pages, 2 figures
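
    For intuition about the objects involved (emphatically not the paper's polynomial-time algorithm), the brute-force sketch below finds a maximum 2-matching, i.e. an edge subset in which every vertex has degree at most 2, avoiding a given list of forbidden subgraphs such as 4-cycles; all names are illustrative.

        from itertools import combinations

        def max_restricted_2matching(n, edges, forbidden):
            """Try edge subsets from largest to smallest; return the first
            one with maximum degree <= 2 that contains no forbidden subgraph
            (each given as a set of frozenset edges). Exponential time."""
            for r in range(len(edges), 0, -1):
                for subset in combinations(edges, r):
                    deg = [0] * n
                    for u, v in subset:
                        deg[u] += 1
                        deg[v] += 1
                    if max(deg) > 2:
                        continue
                    chosen = {frozenset(e) for e in subset}
                    if any(f <= chosen for f in forbidden):
                        continue
                    return list(subset)
            return []

        # On a 4-cycle with the cycle itself forbidden, the answer is a path.
        edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
        c4 = {frozenset(e) for e in edges}
        print(max_restricted_2matching(4, edges, [c4]))  # 3 edges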

    A Dichotomy for Succinct Representations of Homomorphisms


    Crystal structure prediction for multicomponent systems: energy models and structure generation

    Crystalline materials have wide application in the pharmaceutical and agrochemical sectors. The aim of Crystal Structure Prediction (CSP) is to conduct polymorph screening by predicting all possible polymorphs given the chemical diagram of a compound. Various computational programmes have been developed for both academic and industrial use. However, the application to hydrate systems remains challenging. The aim of this thesis is to explore and improve the applicability of CSP to hydrates.
    I first examined the applicability of a current lattice energy model to hydrates. This model combines anisotropic distributed multipole moments, derived from isolated-molecule quantum mechanical calculations, to model the electrostatic interactions with an isotropic atom-atom exp-6 Buckingham potential with empirical parameters to model repulsion and dispersion interactions. It has been shown to be successful in determining the low-energy structures of small organic crystals. Using 107 experimental hydrates extracted from the Cambridge Structural Database as starting points, I found that the energy model is able to reproduce around 95% of the structural geometries across different quantum mechanical levels of theory. The relative stability ordering based on the lattice energy of the computed structures was, however, not always satisfactory, and varies with the level of theory adopted. The energy model also revealed an underestimation of the binding energy for hydrate and hydrogen-bonding systems. The accuracy of our current energy model was insufficient for modelling crystals with complex short-range interactions, especially hydrogen bonds. I postulated that this can be addressed by including an explicit induction energy correction in the model.
    Hence I examined the use of the isolated-molecule assumption and polarisable continuum model (PCM) corrections within hydrate prediction. The electrostatics derived from ab initio molecular charge densities in the gas phase are replaced by simulations within a field of the surrounding molecules represented by point charges. A distributed multipolar representation of the electron density perturbation was applied in the classical polarisation model for the evaluation of the induction energy. This treatment of induction was integrated into a current CSP methodology. The implementation was based on the recently developed lattice energy minimisation programme Crystal Structure Optimizer – Rigid Molecules (CSO-RM) for rigid-body systems, and its companion Crystal Structure Optimizer – Flexible Molecules (CSO-FM), which accounts for conformational flexibility. I assessed the energy rankings of experimental matches before and after induction corrections for three small organic hydrate systems, namely 2,6-diamino-4(3H)-pyrimidinone, gallic acid and theophylline, as well as demonstrating the importance of induction in the carbamazepine and diglycine crystals. The contribution of the explicit induction term to the lattice energy was generally found to favour hydrogen-bonding systems, and resulted in significant improvements in the ranking of polymorphic/computed forms.
    Another aspect of this work focused on improving the global search efficiency of the initial structure generation. I modified the current methodology, which suffers from the frequent occurrence of molecular overlaps. The modification increases the speed of initial structure generation by up to four times while preserving the quality of the structures generated.
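
    As a pointer to the functional form named above, here is a minimal sketch of the isotropic atom-atom Buckingham exp-6 potential; the A, B, C values are illustrative placeholders, not the thesis's fitted parameters, and the full energy model adds distributed-multipole electrostatics (and, in this work, explicit induction) on top.

        import math

        def buckingham_exp6(r, A, B, C):
            """Isotropic atom-atom exp-6 potential
            U(r) = A * exp(-B * r) - C / r**6:
            exponential repulsion plus r^-6 dispersion attraction."""
            return A * math.exp(-B * r) - C / r**6

        # Illustrative (unfitted) parameters; r in angstroms.
        for r in (2.5, 3.0, 3.5, 4.0):
            print(r, buckingham_exp6(r, A=1.2e5, B=3.6, C=2.3e3))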

    Generalising weighted model counting

    Given a formula in propositional or (finite-domain) first-order logic and some non-negative weights, weighted model counting (WMC) is a function problem that asks to compute the sum of the weights of the models of the formula. Originally used as a flexible way of performing probabilistic inference on graphical models, WMC has found many applications across artificial intelligence (AI), machine learning, and other domains. Areas of AI that rely on WMC include explainable AI, neural-symbolic AI, probabilistic programming, and statistical relational AI. WMC also has applications in bioinformatics, data mining, natural language processing, prognostics, and robotics.
    In this work, we are interested in revisiting the foundations of WMC and considering generalisations of some of the key definitions in the interest of conceptual clarity and practical efficiency. We begin by developing a measure-theoretic perspective on WMC, which suggests a new and more general way of defining the weights of an instance. This new representation can be as succinct as standard WMC but can also expand as needed to represent less-structured probability distributions. We demonstrate the performance benefits of the new format by developing a novel WMC encoding for Bayesian networks. We then show how existing WMC encodings for Bayesian networks can be transformed into this more general format and what conditions ensure that the transformation is correct (i.e., preserves the answer). Combining the strengths of the more flexible representation with the tricks used in existing encodings yields further efficiency improvements in Bayesian network probabilistic inference.
    Next, we turn our attention to the first-order setting. Here, we argue that the capabilities of practical model counting algorithms are severely limited by their inability to perform arbitrary recursive computations. To enable arbitrary recursion, we relax the restrictions that typically accompany domain recursion and generalise circuits (used to express a solution to a model counting problem) to graphs that are allowed to have cycles. These improvements enable us to find efficient solutions to counting fundamental structures such as injections and bijections that were previously unsolvable by any available algorithm.
    The second strand of this work is concerned with synthetic data generation. Testing algorithms across a wide range of problem instances is crucial to ensure the validity of any claim about one algorithm's superiority over another. However, benchmarks are often limited and fail to reveal differences among the algorithms. First, we show how random instances of probabilistic logic programs (that typically use WMC algorithms for inference) can be generated using constraint programming. We also introduce a new constraint to control the independence structure of the underlying probability distribution and provide a combinatorial argument for the correctness of the constraint model. This model allows us to, for the first time, experimentally investigate inference algorithms on more than just a handful of instances. Second, we introduce a random model for WMC instances with a parameter that influences primal treewidth, the parameter most commonly used to characterise the difficulty of an instance. We show that the easy-hard-easy pattern with respect to clause density is different for algorithms based on dynamic programming and algebraic decision diagrams than for all other solvers. We also demonstrate that all WMC algorithms scale exponentially with respect to primal treewidth, although at differing rates.
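
    To make the basic object concrete, the sketch below computes WMC by brute-force enumeration of assignments. It uses the standard per-literal weight format (not the thesis's more general representation); real solvers rely on knowledge compilation or dynamic programming rather than enumeration.

        from itertools import product

        def wmc(clauses, weights):
            """Sum, over all satisfying assignments of a CNF formula, of the
            product of per-literal weights. Clauses use DIMACS-style signed
            integers over variables 1..n; weights maps each literal (v and
            -v) to a non-negative weight."""
            n = max(abs(lit) for clause in clauses for lit in clause)
            total = 0.0
            for bits in product((False, True), repeat=n):
                holds = lambda lit: bits[abs(lit) - 1] == (lit > 0)
                if all(any(holds(lit) for lit in clause) for clause in clauses):
                    weight = 1.0
                    for v in range(1, n + 1):
                        weight *= weights[v] if bits[v - 1] else weights[-v]
                    total += weight
            return total

        # (x1 OR x2) with weights encoding P(x1) = 0.3, P(x2) = 0.6:
        print(wmc([[1, 2]], {1: 0.3, -1: 0.7, 2: 0.6, -2: 0.4}))  # 0.72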

    Parameterized Graph Modification Beyond the Natural Parameter


    A Fine-Grained Classification of the Complexity of Evaluating the Tutte Polynomial on Integer Points Parameterized by Treewidth and Cutwidth

    We give a fine-grained classification of evaluating the Tutte polynomial T(G; x, y) on all integer points on graphs with small treewidth and cutwidth. Specifically, we show for any point (x, y) ∈ ℤ² that either
    - T(G; x, y) can be computed in polynomial time,
    - T(G; x, y) can be computed in 2^O(tw) n^O(1) time, but not in 2^o(ctw) n^O(1) time assuming the Exponential Time Hypothesis (ETH), or
    - T(G; x, y) can be computed in 2^O(tw log tw) n^O(1) time, but not in 2^o(ctw log ctw) n^O(1) time assuming the ETH,
    where we assume tree decompositions of treewidth tw and cutwidth decompositions of cutwidth ctw are given as input along with the input graph on n vertices and the point (x, y). To obtain these results, we refine the existing reductions that were instrumental for the seminal dichotomy by Jaeger, Welsh and Vertigan [Math. Proc. Cambridge Philos. Soc. '90]. One of our technical contributions is a new rank bound for a matrix that indicates whether the union of two forests is itself a forest, which we use to show that the number of forests of a graph can be counted in 2^O(tw) n^O(1) time.
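
    To make the object being classified concrete, here is a minimal deletion-contraction evaluator for T(G; x, y). It runs in exponential time; this is precisely the naive baseline that the treewidth- and cutwidth-parameterized algorithms above improve on.

        def tutte(edges, x, y):
            """Deletion-contraction: T = y*T(G-e) for a loop e,
            T = x*T(G/e) for a bridge e, and T(G-e) + T(G/e) otherwise.
            The multigraph is a list of (u, v) pairs."""
            if not edges:
                return 1
            (u, v), rest = edges[0], edges[1:]
            if u == v:                                # loop
                return y * tutte(rest, x, y)

            def connected(a, b, es):                  # is b reachable from a?
                seen, stack = {a}, [a]
                while stack:
                    w = stack.pop()
                    for p, q in es:
                        for s, t in ((p, q), (q, p)):
                            if s == w and t not in seen:
                                seen.add(t)
                                stack.append(t)
                return b in seen

            contracted = [(u if p == v else p, u if q == v else q)
                          for p, q in rest]           # merge v into u
            if not connected(u, v, rest):             # bridge
                return x * tutte(contracted, x, y)
            return tutte(rest, x, y) + tutte(contracted, x, y)

        # T(K3; 1, 1) counts spanning trees of the triangle: 3.
        print(tutte([(0, 1), (1, 2), (2, 0)], 1, 1))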