
    Exactly solvable models of adaptive networks

    A satisfiability (SAT-UNSAT) transition takes place for many optimization problems when the number of constraints, graphically represented by links between variable nodes, is brought above some threshold. If the network of constraints is allowed to adapt by redistributing its links, the SAT-UNSAT transition may be delayed and preceded by an intermediate phase where the structure self-organizes to satisfy the constraints. We present an analytic approach, based on the recently introduced cavity method for large deviations, which exactly describes the two phase transitions delimiting this adaptive intermediate phase. We give explicit results for random bond models subject to the connectivity or rigidity percolation transitions, and compare them with numerical simulations. Comment: 4 pages, 4 figures
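
    The connectivity percolation transition referred to in this abstract can be illustrated by a quick numerical check on Erdős–Rényi random graphs, where a giant connected component appears once the mean degree exceeds one. This is only a toy simulation of the underlying percolation phenomenon, not the cavity-method large-deviation calculation of the paper; the graph size and degree grid below are arbitrary choices.

        # Toy illustration (not the paper's cavity-method calculation): connectivity
        # percolation on Erdos-Renyi random graphs, where the giant component appears
        # around mean degree c = 1. Graph size and degree grid are arbitrary choices.
        import random

        def largest_component_fraction(n, c, rng):
            """Fraction of nodes in the largest component of G(n, p) with p = c/n."""
            parent = list(range(n))

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            def union(a, b):
                ra, rb = find(a), find(b)
                if ra != rb:
                    parent[ra] = rb

            p = c / n
            for i in range(n):
                for j in range(i + 1, n):
                    if rng.random() < p:
                        union(i, j)
            sizes = {}
            for i in range(n):
                r = find(i)
                sizes[r] = sizes.get(r, 0) + 1
            return max(sizes.values()) / n

        if __name__ == "__main__":
            rng = random.Random(0)
            for c in [0.5, 0.8, 1.0, 1.2, 1.5, 2.0]:
                frac = largest_component_fraction(2000, c, rng)
                print(f"mean degree {c:.1f}: largest-component fraction ~ {frac:.3f}")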

    Landscape of solutions in constraint satisfaction problems

    We present a theoretical framework for characterizing the geometrical properties of the space of solutions in constraint satisfaction problems, together with practical algorithms for studying this structure on particular instances. We apply our method to the coloring problem, for which we obtain the total number of solutions and analyze in detail the distribution of distances between solutions. Comment: 4 pages, 4 figures. Replaced with published version
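
    As a concrete, small-scale version of the quantities discussed in this abstract, one can enumerate all proper colorings of a tiny graph and histogram the Hamming distances between them by brute force. This sketch only illustrates what "number of solutions" and "distribution of distances" mean; the paper's cavity-based framework is what makes these quantities accessible on large random instances. The 5-cycle and q = 3 below are arbitrary.

        # Minimal brute-force illustration: the number of proper q-colorings of a
        # small graph and the distribution of Hamming distances between solutions.
        # The example graph and q = 3 are arbitrary; the paper's method targets
        # large random instances where enumeration is impossible.
        from itertools import product
        from collections import Counter

        def proper_colorings(n_nodes, edges, q):
            """Return all assignments where no edge joins two equal colors."""
            solutions = []
            for assignment in product(range(q), repeat=n_nodes):
                if all(assignment[u] != assignment[v] for u, v in edges):
                    solutions.append(assignment)
            return solutions

        def distance_histogram(solutions):
            """Histogram of Hamming distances over all pairs of solutions."""
            hist = Counter()
            for i in range(len(solutions)):
                for j in range(i + 1, len(solutions)):
                    d = sum(a != b for a, b in zip(solutions[i], solutions[j]))
                    hist[d] += 1
            return hist

        if __name__ == "__main__":
            # A 5-cycle: 3-colorable, with 30 proper colorings.
            edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
            sols = proper_colorings(5, edges, q=3)
            print("number of solutions:", len(sols))
            print("distance histogram:", dict(sorted(distance_histogram(sols).items())))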

    Statistical Mechanics of the Hyper Vertex Cover Problem

    We introduce and study a new optimization problem called Hyper Vertex Cover (HVC). This problem is a generalization of the standard vertex cover to hypergraphs: one seeks a configuration of particles with minimal density such that every hyperedge of the hypergraph contains at least one particle. It can also be used in important practical tasks, such as group testing procedures, where one wants to detect defective items in a large group by pool testing. Using a statistical mechanics approach based on the cavity method, we study the phase diagram of the HVC problem in the case of random regular hypergraphs. Depending on the values of the variable and test degrees, different situations can occur: the HVC problem can be either in a replica symmetric phase or in a one-step replica symmetry breaking one. In these two cases, we give explicit results on the minimal density of particles and the structure of the phase space. These problems are thus in some sense simpler than the original vertex cover problem, where the need for full replica symmetry breaking has prevented the derivation of exact results so far. Finally, we show that decimation procedures based on the belief propagation and survey propagation algorithms provide very efficient strategies to solve large individual instances of the hyper vertex cover problem. Comment: Submitted to PR
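
    A brute-force baseline makes the hyper vertex cover definition concrete: find the smallest set of variables that intersects every hyperedge. The toy 3-uniform random hypergraph below is an arbitrary small instance; it is not the random regular ensemble studied in the paper, and exhaustive search obviously does not scale the way the belief and survey propagation decimation procedures do.

        # Brute-force baseline for hyper vertex cover: find the minimum number of
        # particles (covered variables) such that every hyperedge contains at least
        # one of them. The small random 3-uniform hypergraph is an arbitrary toy
        # instance, not the random regular ensemble analyzed in the paper.
        from itertools import combinations
        import random

        def min_hyper_vertex_cover(n_vars, hyperedges):
            """Smallest set of variables hitting every hyperedge (exponential search)."""
            for k in range(n_vars + 1):
                for subset in combinations(range(n_vars), k):
                    chosen = set(subset)
                    if all(chosen & set(e) for e in hyperedges):
                        return chosen
            return set(range(n_vars))

        if __name__ == "__main__":
            rng = random.Random(1)
            n_vars, n_edges = 10, 12
            hyperedges = [tuple(rng.sample(range(n_vars), 3)) for _ in range(n_edges)]
            cover = min_hyper_vertex_cover(n_vars, hyperedges)
            print("minimum cover size:", len(cover), "cover:", sorted(cover))
            print("density:", len(cover) / n_vars)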

    The cavity method for large deviations

    A method is introduced for studying large deviations in the context of statistical physics of disordered systems. The approach, based on an extension of the cavity method to atypical realizations of the quenched disorder, allows us to compute exponentially small probabilities (rate functions) over different classes of random graphs. It is illustrated with two combinatorial optimization problems, the vertex-cover and coloring problems, for which the presence of replica symmetry breaking phases is taken into account. Applications include the analysis of models on adaptive graph structures. Comment: 18 pages, 7 figures
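
    To make the notion of a rate function concrete, here is an elementary large-deviation computation that is much simpler than, and separate from, the cavity calculation of the paper: the edge density of an Erdős–Rényi graph G(n, p) is a rescaled binomial count, and the probability of an atypically high density x > p decays as exp(-N D(x||p)), where N is the number of vertex pairs and D the binary KL divergence. The sketch compares the exact binomial tail with this Cramér/Chernoff estimate; n, p and x are arbitrary.

        # Toy illustration of a rate function, the central object of the
        # large-deviation approach. For the edge density of G(n, p), a Binomial(N, p)
        # count with N = n(n-1)/2 pairs, the rate function is the binary KL
        # divergence D(x||p). This is elementary Cramer/Chernoff, not the cavity
        # computation of the paper, which handles graph observables such as cover size.
        import math

        def log_binomial_tail(N, p, x):
            """log P(Bin(N, p) >= x*N), computed by direct summation."""
            k0 = math.ceil(x * N)
            total = 0.0
            for k in range(k0, N + 1):
                total += math.comb(N, k) * p**k * (1 - p) ** (N - k)
            return math.log(total)

        def kl_rate(x, p):
            """Binary KL divergence D(x || p), the large-deviation rate function."""
            return x * math.log(x / p) + (1 - x) * math.log((1 - x) / (1 - p))

        if __name__ == "__main__":
            n = 40
            N = n * (n - 1) // 2          # number of potential edges
            p, x = 0.1, 0.15              # typical density p, atypical target x > p
            empirical = -log_binomial_tail(N, p, x) / N
            print(f"-(1/N) log P(density >= {x}) = {empirical:.4f}")
            print(f"rate function D(x||p)        = {kl_rate(x, p):.4f}")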

    Statistical mechanics of error exponents for error-correcting codes

    Error exponents characterize the exponential decay, as the message length increases, of the probability of error of many error-correcting codes. To tackle the long-standing problem of computing them exactly, we introduce a general thermodynamic formalism that we illustrate with maximum-likelihood decoding of low-density parity-check (LDPC) codes on the binary erasure channel (BEC) and the binary symmetric channel (BSC). In this formalism, we apply the cavity method for large deviations to derive expressions for both the average and typical error exponents, which differ by the procedure used to select the codes from specified ensembles. When decreasing the noise intensity, we find that two phase transitions take place, at two different levels: a glass to ferromagnetic transition in the space of codewords, and a paramagnetic to glass transition in the space of codes. Comment: 32 pages, 13 figures
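
    For the binary erasure channel, maximum-likelihood decoding has a simple linear-algebra characterization: the erased bits are uniquely recoverable exactly when the columns of the parity-check matrix restricted to the erased positions are linearly independent over GF(2). The sketch below estimates a block error rate under that criterion (counting a non-unique solution as a block error) for a simplified random sparse parity-check construction; it illustrates the decoding setting only, not the code ensembles or the error-exponent calculation of the paper.

        # Hedged sketch of maximum-likelihood decoding on the binary erasure channel:
        # decoding is ambiguous exactly when the columns of the parity-check matrix H
        # restricted to the erased positions are linearly dependent over GF(2).
        # The random column-weight-3 H below is a simplified stand-in for a proper
        # LDPC ensemble; it only illustrates the block-error estimate.
        import random

        def gf2_column_rank(columns):
            """Rank over GF(2) of a set of columns, each encoded as an int bitmask."""
            basis = []
            for col in columns:
                for b in basis:
                    col = min(col, col ^ b)   # reduce by basis vectors
                if col:
                    basis.append(col)
            return len(basis)

        def block_error_rate(n, m, col_weight, eps, trials, rng):
            """Fraction of trials where ML decoding over the BEC is not unique."""
            errors = 0
            for _ in range(trials):
                # Random sparse H: each of the n columns has `col_weight` ones.
                cols = []
                for _ in range(n):
                    rows = rng.sample(range(m), col_weight)
                    cols.append(sum(1 << r for r in rows))
                erased = [j for j in range(n) if rng.random() < eps]
                sub = [cols[j] for j in erased]
                if gf2_column_rank(sub) < len(sub):
                    errors += 1
            return errors / trials

        if __name__ == "__main__":
            rng = random.Random(0)
            for eps in [0.2, 0.3, 0.4, 0.5]:
                ber = block_error_rate(n=120, m=60, col_weight=3, eps=eps,
                                       trials=200, rng=rng)
                print(f"erasure prob {eps:.1f}: block error rate ~ {ber:.3f}")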

    Message passing for vertex covers

    Constructing a minimal vertex cover of a graph can be seen as a prototype for a combinatorial optimization problem under hard constraints. In this paper, we develop and analyze message passing techniques, namely warning and survey propagation, which serve as efficient heuristic algorithms for solving these computationally hard problems. We also show how previously obtained results on the typical-case behavior of vertex covers of random graphs can be recovered starting from the message passing equations, and how they can be extended. Comment: 25 pages, 9 figures - version accepted for publication in PR
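
    Since the abstract names warning propagation explicitly, here is a compact sketch of one common formulation for minimum vertex cover: the message u(i→j) = 1 reads "i intends to stay uncovered, so j must be covered", and is sent when i receives no warning from its other neighbors. Both the update rule and the one-shot assignment at the end follow our reading of warning propagation rather than the paper's exact algorithm; in particular, the paper couples such messages with decimation, whereas the one-shot assignment below yields a valid but not necessarily minimum cover.

        # Sketch of warning propagation for minimum vertex cover. A message
        # u[(i, j)] = 1 means "i intends to stay uncovered, so j must be covered";
        # it is sent when i receives no warning from its other neighbors. This is
        # our reading of the update rule; the result is a heuristic, valid cover,
        # not necessarily a minimum one (the paper adds decimation for that).
        def warning_propagation(n, edges, max_iters=100):
            """Iterate warning messages to a fixed point, then assign a cover."""
            neighbors = {i: set() for i in range(n)}
            for a, b in edges:
                neighbors[a].add(b)
                neighbors[b].add(a)
            u = {(i, j): 0 for i in neighbors for j in neighbors[i]}
            for _ in range(max_iters):
                changed = False
                for (i, j) in list(u):
                    # i warns j when no other neighbor warns i.
                    new = 1 if all(u[(k, i)] == 0 for k in neighbors[i] - {j}) else 0
                    if new != u[(i, j)]:
                        u[(i, j)], changed = new, True
                if not changed:
                    break
            # One-shot assignment: cover every vertex receiving at least one warning.
            return {j for j in neighbors if any(u[(i, j)] for i in neighbors[j])}

        if __name__ == "__main__":
            # A small tree, where the iteration is guaranteed to converge.
            edges = [(0, 1), (1, 2), (1, 3), (3, 4)]
            cover = warning_propagation(5, edges)
            print("cover:", sorted(cover))
            print("valid cover:", all(a in cover or b in cover for a, b in edges))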

    Random multi-index matching problems

    The multi-index matching problem (MIMP) generalizes the well-known matching problem by going from pairs to d-uplets. We use the cavity method from statistical physics to analyze its properties when the costs of the d-uplets are random. At low temperatures we find for d>2 a frozen glassy phase with vanishing entropy. We also investigate some properties of small samples by enumerating the lowest-cost matchings to compare with our theoretical predictions. Comment: 22 pages, 16 figures
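
    In the same spirit as the small-sample enumeration mentioned at the end of the abstract, the sketch below exhaustively searches all matchings of a tiny 3-index instance with i.i.d. uniform costs and reports the lowest-cost one. The instance size and cost distribution are arbitrary choices for illustration; the paper's cavity-method results concern the large-size limit.

        # Small-sample check: for 3-index matching, a matching is a set of N triplets
        # (i, sigma(i), tau(i)) given two permutations sigma and tau; we enumerate
        # all of them to find the lowest total cost. Random uniform costs and N = 5
        # are arbitrary choices; the theory applies to the large-N limit.
        from itertools import permutations
        import random

        def min_cost_3_index_matching(costs):
            """Exhaustive search over the (N!)^2 matchings of a 3-index cost tensor."""
            n = len(costs)
            best_cost, best_match = float("inf"), None
            for sigma in permutations(range(n)):
                for tau in permutations(range(n)):
                    total = sum(costs[i][sigma[i]][tau[i]] for i in range(n))
                    if total < best_cost:
                        best_cost = total
                        best_match = [(i, sigma[i], tau[i]) for i in range(n)]
            return best_cost, best_match

        if __name__ == "__main__":
            rng = random.Random(0)
            n = 5
            costs = [[[rng.random() for _ in range(n)] for _ in range(n)]
                     for _ in range(n)]
            cost, match = min_cost_3_index_matching(costs)
            print(f"minimum cost: {cost:.4f}")
            print("triplets:", match)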

    An algorithm for counting circuits: application to real-world and random graphs

    We introduce an algorithm which estimates the number of circuits in a graph as a function of their length. This approach provides analytical results for the typical entropy of circuits in sparse random graphs. When applied to real-world networks, it allows one to estimate exponentially large numbers of circuits in polynomial time. We illustrate the method by studying a graph of the Internet structure. Comment: 7 pages, 3 figures, minor corrections, accepted version
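
    As a brute-force reference point for the quantity the paper estimates, the sketch below counts simple circuits of each length in a small graph by depth-first search, counting each cycle once from its smallest vertex. This exhaustive count is exponential in general and only feasible on tiny graphs, which is precisely why the estimator introduced in the paper matters for real-world networks; the random example graph is arbitrary.

        # Brute-force reference: count simple circuits (cycles) of each length by
        # exhaustive DFS. Only feasible on small graphs; the paper's estimator is
        # what makes large real-world graphs tractable. The example graph is arbitrary.
        from collections import Counter
        import random

        def count_cycles_by_length(n, edges):
            """Count simple cycles, grouped by length, by DFS from each start vertex.

            Each cycle is counted once by requiring the start vertex to be its
            smallest vertex; both traversal orientations are found, so counts
            are halved at the end.
            """
            adj = {i: set() for i in range(n)}
            for a, b in edges:
                adj[a].add(b)
                adj[b].add(a)
            counts = Counter()

            def dfs(start, current, visited):
                for nxt in adj[current]:
                    if nxt == start and len(visited) >= 3:
                        counts[len(visited)] += 1
                    elif nxt > start and nxt not in visited:
                        dfs(start, nxt, visited | {nxt})

            for start in range(n):
                dfs(start, start, {start})
            return {length: c // 2 for length, c in sorted(counts.items())}

        if __name__ == "__main__":
            rng = random.Random(3)
            n, m = 10, 18
            edges = set()
            while len(edges) < m:
                a, b = rng.sample(range(n), 2)
                edges.add((min(a, b), max(a, b)))
            print("cycles by length:", count_cycles_by_length(n, list(edges)))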

    HAMAP in 2015: updates to the protein family classification and annotation system

    HAMAP (High-quality Automated and Manual Annotation of Proteins—available at http://hamap.expasy.org/) is a system for the automatic classification and annotation of protein sequences. HAMAP provides annotation of the same quality and detail as UniProtKB/Swiss-Prot, using manually curated profiles for protein sequence family classification and expert curated rules for functional annotation of family members. HAMAP data and tools are made available through our website and as part of the UniRule pipeline of UniProt, providing annotation for millions of unreviewed sequences of UniProtKB/TrEMBL. Here we report on the growth of HAMAP and updates to the HAMAP system since our last report in the NAR Database Issue of 2013. We continue to augment HAMAP with new family profiles and annotation rules as new protein families are characterized and annotated in UniProtKB/Swiss-Prot; the latest version of HAMAP (as of 3 September 2014) contains 1983 family classification profiles and 1998 annotation rules (up from 1780 and 1720). We demonstrate how the complex logic of HAMAP rules allows for precise annotation of individual functional variants within large homologous protein families. We also describe improvements to our web-based tool HAMAP-Scan which simplify the classification and annotation of sequences, and the incorporation of an improved sequence-profile search algorithm.
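
    To illustrate the general idea of profile-based family classification, the sketch below builds a toy position-specific scoring matrix from a made-up ungapped alignment and scores query sequences against it. This is emphatically not the HAMAP/pftools profile implementation (real generalized profiles handle insertions, deletions and calibrated score cutoffs); all sequences, the pseudocount and the uniform background are invented for illustration.

        # Toy illustration of profile-based sequence classification. NOT the
        # HAMAP/pftools implementation: real profiles model insertions, deletions and
        # calibrated thresholds. Here a profile is a plain position-specific scoring
        # matrix (log-odds per residue per column) slid along the query.
        import math

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def build_pssm(aligned_sequences, pseudocount=1.0):
            """Log-odds scores per column from an ungapped alignment of family members."""
            length = len(aligned_sequences[0])
            background = 1.0 / len(AMINO_ACIDS)
            pssm = []
            for col in range(length):
                counts = {aa: pseudocount for aa in AMINO_ACIDS}
                for seq in aligned_sequences:
                    counts[seq[col]] += 1
                total = sum(counts.values())
                pssm.append({aa: math.log((counts[aa] / total) / background)
                             for aa in AMINO_ACIDS})
            return pssm

        def best_window_score(pssm, query):
            """Best ungapped match of the profile anywhere along the query sequence."""
            w = len(pssm)
            best = float("-inf")
            for start in range(len(query) - w + 1):
                score = sum(pssm[k][query[start + k]] for k in range(w))
                best = max(best, score)
            return best

        if __name__ == "__main__":
            family_alignment = ["ACDKWL", "ACEKWL", "GCDKWI", "ACDRWL"]  # made-up family
            pssm = build_pssm(family_alignment)
            for name, seq in [("member-like", "MNACDKWLTS"), ("unrelated", "PQRSTGHEDV")]:
                print(f"{name}: best window score = {best_window_score(pssm, seq):.2f}")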

    HAMAP in 2013, new developments in the protein family classification and annotation system

    HAMAP (High-quality Automated and Manual Annotation of Proteins—available at http://hamap.expasy.org/) is a system for the classification and annotation of protein sequences. It consists of a collection of manually curated family profiles for protein classification, and associated annotation rules that specify annotations that apply to family members. HAMAP was originally developed to support the manual curation of UniProtKB/Swiss-Prot records describing microbial proteins. Here we describe new developments in HAMAP, including the extension of HAMAP to eukaryotic proteins, the use of HAMAP in the automated annotation of UniProtKB/TrEMBL, providing high-quality annotation for millions of protein sequences, and the future integration of HAMAP into a unified system for UniProtKB annotation, UniRule. HAMAP is continuously updated by expert curators with new family profiles and annotation rules as new protein families are characterized. The collection of HAMAP family classification profiles and annotation rules can be browsed and viewed on the HAMAP website, which also provides an interface to scan user sequences against HAMAP profiles.