Inference of the sparse kinetic Ising model using the decimation method
In this paper we study the inference of the kinetic Ising model on sparse
graphs by the decimation method. The decimation method, which was first
proposed in [Phys. Rev. Lett. 112, 070603] for the static inverse Ising
problem, tries to recover the topology of the inferred system by setting the
weakest couplings to zero iteratively. During the decimation process the
likelihood function is maximized over the remaining couplings. Unlike the
ℓ1-optimization-based methods, the decimation method does not use the
Laplace distribution as a heuristic choice of prior to select a sparse
solution. In our case, the whole process can be done automatically without
fixing any parameters by hand. We show that in the dynamical inference problem,
where the task is to reconstruct the couplings of an Ising model given the
data, the decimation process can be incorporated naturally into a maximum-likelihood
optimization algorithm, as opposed to the static case, where the
pseudo-likelihood method needs to be adopted. We also use extensive numerical studies to validate
the accuracy of our methods in dynamical inference problems. Our results
illustrate that, on various topologies and with different distributions of
couplings, the decimation method outperforms the widely used
ℓ1-optimization-based methods.
Comment: 11 pages, 5 figures
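The decimation loop described in the abstract (maximize the likelihood, zero the weakest remaining couplings, refit, repeat) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the logistic transition model, gradient-ascent fit, learning rate, and removal schedule are all assumptions made for the sketch.

```python
import numpy as np

def fit_couplings(S, mask, lr=0.1, steps=200):
    """Maximize the kinetic-Ising log-likelihood over the unmasked couplings.

    S    : (T, N) array of +-1 spins from parallel dynamics, where
           P(s_i(t+1) | s(t)) is proportional to exp(s_i(t+1) * h_i(t))
           with local field h_i(t) = sum_j J_ij s_j(t).
    mask : (N, N) 0/1 array; couplings with mask == 0 stay fixed at zero.
    """
    T, N = S.shape
    J = np.zeros((N, N))
    X, Y = S[:-1], S[1:]                        # predecessor / successor spins
    for _ in range(steps):
        H = X @ J.T                             # h_i(t) for all i, t
        grad = (Y - np.tanh(H)).T @ X / (T - 1) # dL/dJ_ij, averaged over time
        J += lr * grad * mask                   # update active couplings only
    return J

def decimate(S, n_remove=1, rounds=3):
    """Iteratively zero the weakest active couplings and refit the rest."""
    N = S.shape[1]
    mask = np.ones((N, N))
    for _ in range(rounds):
        J = fit_couplings(S, mask)
        # rank the active couplings by magnitude and drop the weakest ones
        active = np.argwhere(mask > 0)
        weakest = active[np.argsort(np.abs(J[mask > 0]))[:n_remove]]
        for i, j in weakest:
            mask[i, j] = 0.0
    return fit_couplings(S, mask), mask
```

In the full method the number of couplings to decimate per round and the stopping point are chosen automatically from the likelihood; here they are fixed by hand for brevity.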
Statistical Mechanics of maximal independent sets
The graph theoretic concept of maximal independent set arises in several
practical problems in computer science as well as in game theory. A maximal
independent set is defined by the set of occupied nodes that satisfy some
packing and covering constraints. It is known that finding minimum and
maximum-density maximal independent sets are hard optimization problems. In
this paper, we use the cavity method of statistical physics and Monte Carlo
simulations to study the corresponding constraint satisfaction problem on
random graphs. We obtain the entropy of maximal independent sets within the
replica symmetric and one-step replica symmetry breaking frameworks, shedding
light on the metric structure of the landscape of solutions and suggesting a
class of possible algorithms. This is of particular relevance for the
application to the study of strategic interactions in social and economic
networks, where maximal independent sets correspond to pure Nash equilibria of
a graphical game of public goods allocation.
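The packing and covering constraints that define a maximal independent set can be checked directly, which also makes brute-force enumeration possible on small graphs. A minimal sketch (the function names and graph encoding are ours, not from the paper):

```python
import itertools

def is_maximal_independent_set(adj, occupied):
    """Check the two constraints defining a maximal independent set.

    adj      : dict mapping each node to the set of its neighbours
    occupied : set of occupied nodes
    """
    # packing: no two occupied nodes may be adjacent
    for v in occupied:
        if adj[v] & occupied:
            return False
    # covering: every empty node needs an occupied neighbour,
    # otherwise it could still be added and the set is not maximal
    for v in adj:
        if v not in occupied and not (adj[v] & occupied):
            return False
    return True

def enumerate_mis(adj):
    """Brute-force enumeration over all subsets (exponential; tiny graphs only)."""
    nodes = list(adj)
    for r in range(len(nodes) + 1):
        for subset in itertools.combinations(nodes, r):
            if is_maximal_independent_set(adj, set(subset)):
                yield set(subset)
```

The cavity method of the paper counts these configurations on large random graphs, where such exhaustive enumeration is hopeless.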
Biased landscapes for random Constraint Satisfaction Problems
The typical complexity of Constraint Satisfaction Problems (CSPs) can be
investigated by means of random ensembles of instances. The latter exhibit many
threshold phenomena besides their satisfiability phase transition, in
particular a clustering or dynamic phase transition (related to the tree
reconstruction problem) at which their typical solutions shatter into
disconnected components. In this paper we study the evolution of this
phenomenon under a bias that breaks the uniformity among solutions of one CSP
instance, concentrating on the bicoloring of k-uniform random hypergraphs. We
show that for small k the clustering transition can be delayed in this way to
higher density of constraints, and that this strategy has a positive impact on
the performance of Simulated Annealing algorithms. We characterize the modest
gain that can be expected in the large k limit from the simple implementation
of the biasing idea studied here. This paper also contains a contribution of a
more methodological nature: a review and extension of the methods used to
determine numerically the discontinuous dynamic transition threshold.
Comment: 32 pages, 16 figures
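Simulated annealing on hypergraph bicoloring minimizes the number of monochromatic hyperedges. The sketch below uses the plain (unbiased) energy; the paper's contribution is to reweight configurations with a bias, which we omit here for brevity. The annealing schedule and parameter values are assumptions of this sketch.

```python
import math
import random

def energy(edges, colors):
    """Number of monochromatic hyperedges, i.e. violated bicoloring constraints."""
    return sum(1 for e in edges if len({colors[v] for v in e}) == 1)

def anneal_bicoloring(n, edges, beta0=0.2, beta1=4.0, sweeps=300, seed=0):
    """Metropolis simulated annealing with a linear inverse-temperature ramp."""
    rng = random.Random(seed)
    colors = [rng.randrange(2) for _ in range(n)]
    e_cur = energy(edges, colors)
    for s in range(sweeps):
        beta = beta0 + (beta1 - beta0) * s / max(1, sweeps - 1)
        for _ in range(n):
            v = rng.randrange(n)
            colors[v] ^= 1                      # propose a single-spin flip
            e_new = energy(edges, colors)
            if e_new > e_cur and rng.random() >= math.exp(-beta * (e_new - e_cur)):
                colors[v] ^= 1                  # reject: restore the old color
            else:
                e_cur = e_new
    return colors, e_cur
```

Recomputing the full energy after every flip is wasteful (a production version would track only the affected hyperedges), but it keeps the Metropolis step transparent.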
Statistical Physics of Hard Optimization Problems
Optimization is fundamental in many areas of science, from computer science
and information theory to engineering and statistical physics, as well as to
biology or social sciences. It typically involves a large number of variables
and a cost function depending on these variables. Optimization problems in the
NP-complete class are particularly difficult: it is believed that the number of
operations required to minimize the cost function is, in the most difficult
cases, exponential in the system size. However, even in an NP-complete problem
the practically arising instances might, in fact, be easy to solve. The
principal question we address in this thesis is: How to recognize if an
NP-complete constraint satisfaction problem is typically hard and what are the
main reasons for this? We adopt approaches from the statistical physics of
disordered systems, in particular the cavity method developed originally to
describe glassy systems. We describe new properties of the space of solutions
in two of the most studied constraint satisfaction problems - random
satisfiability and random graph coloring. We suggest a relation between the
existence of the so-called frozen variables and the algorithmic hardness of a
problem. Based on these insights, we introduce a new class of problems which we
named "locked" constraint satisfaction, where the statistical description is
easily solvable, but from the algorithmic point of view they are even more
challenging than canonical satisfiability.
Comment: PhD thesis
Learning by message-passing in networks of discrete synapses
We show that a message-passing process makes it possible to store, in binary
"material" synapses, a number of random patterns that almost saturates the
information-theoretic bounds. We apply the learning algorithm to networks characterized by
a wide range of different connection topologies and of size comparable with
that of biological systems. The algorithm can be
turned into an on-line --fault tolerant-- learning protocol of potential
interest in modeling aspects of synaptic plasticity and in building
neuromorphic devices.
Comment: 4 pages, 3 figures; references updated and minor corrections;
accepted in PR