
    Computing a Knot Invariant as a Constraint Satisfaction Problem

    We point out a connection between mathematical knot theory and spin-glass/search problems. In particular, we present a statistical-mechanical formulation of the problem of computing a knot invariant, the p-colorability problem, which provides an algorithm for finding the solution. The method also allows one to gain deeper insight into the structural complexity of knots, which is expected to be related to the landscape structure of constraint satisfaction problems.
    Comment: 6 pages, 3 figures, submitted as a short note to the Journal of the Physical Society of Japan
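    The p-colorability constraint satisfaction problem can be made concrete with a small brute-force check (a minimal sketch, not the paper's statistical-mechanical algorithm): each crossing of a knot diagram imposes 2·over ≡ under1 + under2 (mod p), and the knot is p-colorable iff a non-constant assignment of colors to the arcs exists. The trefoil data below is a standard worked example.

```python
from itertools import product

def p_colorable(crossings, n_arcs, p):
    """Brute force: True iff some non-constant arc coloring mod p satisfies
    2*over - under1 - under2 = 0 (mod p) at every crossing."""
    for colors in product(range(p), repeat=n_arcs):
        if len(set(colors)) == 1:
            continue  # constant colorings always satisfy the constraints
        if all((2 * colors[o] - colors[u1] - colors[u2]) % p == 0
               for o, u1, u2 in crossings):
            return True
    return False

# Trefoil knot: three arcs, three crossings, each listed as (over, under, under).
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(p_colorable(trefoil, 3, 3))  # True: the trefoil is 3-colorable
print(p_colorable(trefoil, 3, 5))  # False: 5 does not divide its determinant
```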

    Modeling the Influence of Data Structure on Learning in Neural Networks: The Hidden Manifold Model

    Understanding the reasons for the success of deep neural networks trained using stochastic gradient-based methods is a key open problem for the nascent theory of deep learning. The types of data on which these networks are most successful, such as images or sequences of speech, are characterized by intricate correlations. Yet, most theoretical work on neural networks either does not explicitly model training data or assumes that the elements of each data sample are drawn independently from some factorized probability distribution. These approaches are thus, by construction, blind to the correlation structure of real-world datasets and its impact on learning in neural networks. Here, we introduce a generative model for structured datasets that we call the hidden manifold model. The idea is to construct high-dimensional inputs that lie on a lower-dimensional manifold, with labels that depend only on their position within this manifold, akin to a single-layer decoder or generator in a generative adversarial network. We demonstrate that learning of the hidden manifold model is amenable to analytical treatment by proving a "Gaussian equivalence property" (GEP), and we use the GEP to show how the dynamics of two-layer neural networks trained using one-pass stochastic gradient descent are captured by a set of integro-differential equations that track the performance of the network at all times. This approach permits us to analyze in detail how a neural network learns functions of increasing complexity during training, how its performance depends on its size, and how it is impacted by parameters such as the learning rate or the dimension of the hidden manifold.
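    A minimal numpy sketch of the hidden manifold model as described above: inputs lie on a d-dimensional manifold embedded nonlinearly in n dimensions, and labels depend only on the latent coordinates. The tanh nonlinearity and the sign teacher are illustrative choices, not necessarily the paper's exact setup.

```python
import numpy as np

def hidden_manifold(num_samples, n, d, rng):
    """Generate (X, y) from a hidden manifold model with latent dimension d."""
    F = rng.standard_normal((d, n))           # fixed embedding of the manifold
    w = rng.standard_normal(d)                # teacher acting on latent coordinates
    C = rng.standard_normal((num_samples, d))  # latent positions on the manifold
    X = np.tanh(C @ F / np.sqrt(d))           # high-dimensional structured inputs
    y = np.sign(C @ w / np.sqrt(d))           # labels depend only on the manifold
    return X, y

rng = np.random.default_rng(0)
X, y = hidden_manifold(200, n=500, d=10, rng=rng)
print(X.shape, y.shape)  # (200, 500) (200,)
```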

    Compressed sensing with l0-norm: statistical physics analysis and algorithms for signal recovery

    Noiseless compressive sensing is a protocol that enables undersampling and later recovery of a signal without loss of information. This compression is possible because the signal is usually sufficiently sparse in a given basis. Currently, the algorithm offering the best tradeoff between compression rate, robustness, and speed for compressive sensing is the LASSO (l1-norm bias) algorithm. However, many studies have pointed out that implementing lp-norm biases, with p smaller than one, could give better performance while sacrificing convexity. In this work, we focus specifically on the extreme case of l0-based reconstruction, a task that is complicated by the discontinuity of the loss. In the first part of the paper, we describe, via statistical physics methods and in particular the replica method, how the solutions to this optimization problem are arranged in a clustered structure. We observe two distinct regimes: one at low compression rate, where the signal can be recovered exactly, and one at high compression rate, where the signal cannot be recovered accurately. In the second part, we present two message-passing algorithms, based on our first results, for the l0-norm optimization problem. The proposed algorithms are able to recover the signal at compression rates higher than those achieved by LASSO while remaining computationally efficient.

    Marvels and Pitfalls of the Langevin Algorithm in Noisy High-Dimensional Inference

    Gradient-descent-based algorithms and their stochastic versions have widespread applications in machine learning and statistical inference. In this work, we carry out an analytic study of the performance of the algorithm most commonly considered in physics, the Langevin algorithm, in the context of noisy high-dimensional inference. We employ the Langevin algorithm to sample the posterior probability measure for the spiked mixed matrix-tensor model. The typical behavior of this algorithm is described by a system of integro-differential equations that we call the Langevin state evolution, whose solution is compared with that of the state evolution of approximate message passing (AMP). Our results show that, remarkably, the algorithmic threshold of the Langevin algorithm is suboptimal with respect to the one given by AMP. This phenomenon is due to the residual glassiness present in that region of parameters. We also present a simple heuristic expression for the transition line, which appears to be in agreement with the numerical results.
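    A minimal sketch of the (unadjusted) Langevin algorithm studied above, here sampling a simple 1D Gaussian target rather than the spiked matrix-tensor posterior: x is updated by a gradient step on the potential U plus Gaussian noise, x ← x − η U′(x) + √(2η) ξ.

```python
import numpy as np

def langevin(grad_U, x0, eta, n_steps, rng):
    """Unadjusted Langevin dynamics; returns the full trajectory of samples."""
    xs = np.empty(n_steps)
    x = x0
    for t in range(n_steps):
        x = x - eta * grad_U(x) + np.sqrt(2 * eta) * rng.standard_normal()
        xs[t] = x
    return xs

rng = np.random.default_rng(0)
# Target: standard Gaussian, U(x) = x^2 / 2, so grad_U(x) = x.
samples = langevin(lambda x: x, x0=0.0, eta=0.01, n_steps=200_000, rng=rng)
print(samples.mean(), samples.var())  # both close to the target's 0 and 1
```

    For small step size η the stationary distribution of the discretized chain is close to the target; the residual bias is O(η).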

    Ground-state configuration space heterogeneity of random finite-connectivity spin glasses and random constraint satisfaction problems

    We demonstrate, through two case studies, one on the p-spin interaction model and the other on the random K-satisfiability problem, that a heterogeneity transition occurs in the ground-state configuration space of a random finite-connectivity spin glass system at a certain critical value of the constraint density. At the transition point, exponentially many configuration communities emerge in the ground-state configuration space, making the entropy density s(q) of configuration pairs a non-concave function of the configuration-pair overlap q. Each configuration community is a collection of relatively similar configurations, and it forms a stable thermodynamic phase in the presence of a suitable external field. We calculate s(q) using the replica-symmetric and first-step replica-symmetry-broken cavity methods, and we show by simulations that the configuration-space heterogeneity leads to dynamical heterogeneity of particle diffusion processes because of the entropic trapping effect of configuration communities. This work clarifies the fine structure of the ground-state configuration space of random spin glass models and also sheds light on the glassy behavior of hard-sphere colloidal systems at relatively high particle volume fractions.
    Comment: 26 pages, 9 figures, submitted to Journal of Statistical Mechanics
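    The configuration-pair overlap q can be illustrated on a toy instance: enumerate all solutions of a small random 3-SAT formula and compute the overlap of every solution pair. The instance below is far too small to show a true heterogeneity transition; it only makes the observable concrete.

```python
from itertools import product, combinations
import random

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT; each clause is a list of (variable, sign) literals."""
    return [[(v, rng.choice([1, -1])) for v in rng.sample(range(n_vars), 3)]
            for _ in range(n_clauses)]

def solutions(clauses, n_vars):
    """Brute-force enumeration of all satisfying +/-1 assignments."""
    return [a for a in product([1, -1], repeat=n_vars)
            if all(any(a[v] == s for v, s in c) for c in clauses)]

rng = random.Random(0)
n = 12
clauses = random_3sat(n, 30, rng)           # clause density alpha = 2.5
sols = solutions(clauses, n)
# Overlap q of a configuration pair: normalized dot product of the assignments.
overlaps = [sum(a * b for a, b in zip(s1, s2)) / n
            for s1, s2 in combinations(sols, 2)]
print(len(sols), len(overlaps))
```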

    On the cavity method for decimated random constraint satisfaction problems and the analysis of belief propagation guided decimation algorithms

    We introduce a version of the cavity method for diluted mean-field spin models that allows the computation of thermodynamic quantities similar to the Franz-Parisi quenched potential in sparse random graph models. The method is developed for the particular case of partially decimated random constraint satisfaction problems. This allows us to develop a theoretical understanding of a class of algorithms for solving constraint satisfaction problems in which elementary degrees of freedom are sequentially assigned according to the results of a message-passing procedure (belief propagation). We compare this theoretical analysis with the results of extensive numerical simulations.
    Comment: 32 pages, 24 figures
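    The decimation loop can be sketched in a few lines. For clarity, the belief-propagation marginal estimates are replaced here by exact marginals computed by brute-force enumeration, which is only feasible on tiny formulas: repeatedly fix the most polarized free variable to its majority value and simplify the formula.

```python
from itertools import product

def marginals(clauses, free_vars):
    """Fraction of satisfying completions with each free variable set to +1."""
    order = sorted(free_vars)
    counts = {v: 0 for v in order}
    total = 0
    for bits in product([1, -1], repeat=len(order)):
        assign = dict(zip(order, bits))
        if all(any(assign[v] == s for v, s in c) for c in clauses):
            total += 1
            for v in order:
                if assign[v] == 1:
                    counts[v] += 1
    return {v: counts[v] / total for v in order} if total else None

def decimate(clauses, n_vars):
    """Marginal-guided decimation; returns a satisfying assignment or None."""
    fixed, free = {}, set(range(n_vars))
    while free:
        marg = marginals(clauses, free)
        if marg is None:
            return None                       # no satisfying completion remains
        v = max(marg, key=lambda u: abs(marg[u] - 0.5))  # most polarized variable
        val = 1 if marg[v] >= 0.5 else -1
        fixed[v] = val
        free.remove(v)
        # Simplify: drop satisfied clauses, remove the falsified literal elsewhere.
        clauses = [[(u, s) for u, s in c if u != v]
                   for c in clauses if not any(u == v and s == val for u, s in c)]
    return fixed

# Small satisfiable instance over variables 0..3, literals written as (var, sign).
f = [[(0, 1), (1, -1), (2, 1)], [(1, 1), (2, -1), (3, 1)], [(0, -1), (2, 1), (3, -1)]]
sol = decimate(f, 4)
print(sol)
```

    With exact marginals the loop never reaches a contradiction on a satisfiable formula; the interest of the paper's analysis is precisely what changes when the marginals are only BP approximations.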

    The T=0 random-field Ising model on a Bethe lattice with large coordination number: hysteresis and metastable states

    In order to elucidate the relationship between rate-independent hysteresis and metastability in disordered systems driven by an external field, we study the Gaussian RFIM at T=0 on regular random graphs (Bethe lattice) of finite connectivity z and compute, to O(1/z) (i.e., beyond mean field), the quenched complexity associated with the one-spin-flip stable states with magnetization m as a function of the magnetic field H. When the saturation hysteresis loop is smooth in the thermodynamic limit, we find that it coincides with the envelope of the typical metastable states (the quenched complexity vanishes exactly along the loop and is positive everywhere inside). On the other hand, the occurrence of a jump discontinuity in the loop (associated with an infinite avalanche) can be traced back to the existence of a gap in the magnetization of the metastable states for a range of applied field, and the envelope of the typical metastable states is then reentrant. These findings confirm and complete earlier analytical and numerical studies.
    Comment: 29 pages, 9 figures
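    The T=0 single-spin-flip dynamics behind the hysteresis loop is easy to simulate directly: Gaussian random fields on a z-regular random graph, spins relaxed greedily as the external field H is swept upward. The graph is built with a simple stub-matching step (occasional self-loops or multi-edges are ignored in this sketch), and the parameters are illustrative.

```python
import random

def regular_graph(n, z, rng):
    """z-regular multigraph via random stub matching (configuration model)."""
    stubs = [i for i in range(n) for _ in range(z)]
    rng.shuffle(stubs)
    adj = [[] for _ in range(n)]
    for a, b in zip(stubs[::2], stubs[1::2]):
        adj[a].append(b)
        adj[b].append(a)
    return adj

def relax(s, adj, h, H):
    """Flip spins misaligned with their local field until one-spin-flip stable."""
    changed = True
    while changed:
        changed = False
        for i in range(len(s)):
            local = sum(s[j] for j in adj[i]) + h[i] + H
            if s[i] * local < 0:
                s[i] = -s[i]
                changed = True

rng = random.Random(0)
n, z = 200, 4
adj = regular_graph(n, z, rng)
h = [rng.gauss(0.0, 2.0) for _ in range(n)]   # Gaussian random fields
s = [-1] * n                                  # start fully magnetized down
ms = []
for step in range(-60, 61):                   # sweep the field upward
    H = step / 10.0
    relax(s, adj, h, H)
    ms.append(sum(s) / n)                     # lower branch m(H) of the loop
print(ms[0], ms[-1])
```

    Because the couplings are ferromagnetic, spins only flip up during the upward sweep, so the recorded branch m(H) is monotone; sweeping H back down from the saturated state traces the other branch of the loop.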

    Threshold Saturation in Spatially Coupled Constraint Satisfaction Problems

    We consider chains of random constraint satisfaction models that are spatially coupled across a finite window along the chain direction. We investigate their phase diagram at zero temperature using the survey propagation formalism and the interpolation method. We prove that the SAT-UNSAT phase transition threshold of an infinite chain is identical to that of the individual standard model, and is therefore not affected by spatial coupling. We compute the survey propagation complexity using population dynamics as well as large-degree approximations, and determine the survey propagation threshold. We find that a clustering phase survives coupling. However, as one increases the range of the coupling window, the survey propagation threshold increases and saturates towards the phase transition threshold. We also briefly discuss other aspects of the problem. Namely, the condensation threshold is not affected by coupling, but the dynamic threshold displays saturation towards the condensation one. All these features may provide a new avenue for obtaining better provable algorithmic lower bounds on phase transition thresholds of the individual standard model.
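    The spatially coupled construction can be sketched as follows: variables live at positions 0..L−1 along a chain, and each clause attached to position t draws its variables only from the window of positions t..t+w−1. The parameters below are illustrative, not the ensemble sizes used in the paper.

```python
import random

def coupled_3sat(L, n_per_pos, alpha, w, rng):
    """Spatially coupled random 3-SAT chain with coupling window of width w."""
    clauses = []
    for t in range(L - w + 1):
        for _ in range(int(alpha * n_per_pos)):
            lits = []
            for _ in range(3):
                pos = rng.randrange(t, t + w)        # position inside the window
                v = pos * n_per_pos + rng.randrange(n_per_pos)
                lits.append((v, rng.choice([1, -1])))
            clauses.append(lits)
    return clauses

rng = random.Random(0)
L, n_per_pos, alpha, w = 10, 50, 3.5, 3
clauses = coupled_3sat(L, n_per_pos, alpha, w, rng)
print(len(clauses))  # (L - w + 1) * int(alpha * n_per_pos) clauses
```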

    Clusters of solutions and replica symmetry breaking in random k-satisfiability

    We study the set of solutions of random k-satisfiability formulae through the cavity method. It is known that, for an interval of the clause-to-variable ratio, this set decomposes into an exponential number of pure states (clusters). We refine this picture substantially by: (i) determining the precise location of the clustering transition; (ii) uncovering a second `condensation' phase transition in the structure of the solution set for k greater than or equal to 4. Both results follow from computing the large-deviation rate of the internal entropy of pure states. From a technical point of view, our main contributions are a simplified version of the cavity formalism for special values of the Parisi replica-symmetry-breaking parameter m (in particular for m=1, via a correspondence with the tree reconstruction problem) and new large-k expansions.
    Comment: 30 pages, 14 figures; typos corrected; discussion of appendix C expanded with a new figure
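    The notion of a solution cluster can be illustrated on a toy instance: enumerate all solutions of a small random 3-SAT formula and group them into connected components under single-variable flips. Real clustering concerns sizes far beyond enumeration; this sketch only makes the definition concrete.

```python
from itertools import product
import random

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT; each clause is a list of (variable, sign) literals."""
    return [[(v, rng.choice([1, -1])) for v in rng.sample(range(n_vars), 3)]
            for _ in range(n_clauses)]

def solution_clusters(clauses, n_vars):
    """Count solutions and their connected components under single flips."""
    sols = [a for a in product([1, -1], repeat=n_vars)
            if all(any(a[v] == s for v, s in c) for c in clauses)]
    sol_set = set(sols)
    seen, n_clusters = set(), 0
    for s in sols:
        if s in seen:
            continue
        n_clusters += 1
        stack = [s]                          # flood fill over one-flip neighbors
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            for i in range(n_vars):
                nb = cur[:i] + (-cur[i],) + cur[i + 1:]
                if nb in sol_set and nb not in seen:
                    stack.append(nb)
    return len(sols), n_clusters

rng = random.Random(1)
n_sols, n_clusters = solution_clusters(random_3sat(12, 40, rng), 12)
print(n_sols, n_clusters)
```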