Maximizing Activity in Ising Networks via the TAP Approximation
A wide array of complex biological, social, and physical systems have
recently been shown to be quantitatively described by Ising models, which lie
at the intersection of statistical physics and machine learning. Here, we study
the fundamental question of how to optimize the state of a networked Ising
system given a budget of external influence. In the continuous setting where
one can tune the influence applied to each node, we propose a series of
approximate gradient ascent algorithms based on the Plefka expansion, which
generalizes the naïve mean field and TAP approximations. In the discrete
setting where one chooses a small set of influential nodes, the problem is
equivalent to the famous influence maximization problem in social networks with
an additional stochastic noise term. In this case, we provide sufficient
conditions for when the objective is submodular, allowing a greedy algorithm to
achieve an approximation ratio of (1 - 1/e). Additionally, we compare the
Ising-based algorithms with traditional influence maximization algorithms,
demonstrating the practical importance of accurately modeling stochastic
fluctuations in the system.
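The continuous setting above rests on mean-field self-consistency conditions. As an illustrative sketch only (the naive mean field, i.e. first-order Plefka, fixed point m_i = tanh(beta * (h_i + sum_j J_ij m_j)) rather than the paper's TAP-based gradient ascent; the function name and damping scheme are invented here):

```python
import math

def naive_mean_field(J, h, beta, iters=200, damping=0.5):
    """Damped fixed-point iteration for the naive mean-field equations
    m_i = tanh(beta * (h_i + sum_j J[i][j] * m[j])).
    J is a symmetric coupling matrix, h the external fields."""
    n = len(h)
    m = [0.0] * n  # start from zero magnetizations
    for _ in range(iters):
        for i in range(n):
            field = h[i] + sum(J[i][j] * m[j] for j in range(n))
            m_new = math.tanh(beta * field)
            # damping stabilizes the iteration on strongly coupled graphs
            m[i] = (1.0 - damping) * m[i] + damping * m_new
    return m
```

For a single uncoupled spin the iteration converges to m = tanh(beta * h), the exact result; the TAP approximation adds an Onsager reaction term to each local field.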
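In the discrete setting, the approximation guarantee comes from the classic greedy rule for monotone submodular maximization under a cardinality budget. A minimal sketch, with a toy coverage objective standing in for the paper's expected-activity function (the function names and the toy neighborhood map are invented for illustration):

```python
def greedy_max(ground_set, f, k):
    """Greedily pick k elements, each maximizing the marginal gain of f.
    For monotone submodular f this achieves a (1 - 1/e) guarantee."""
    S = set()
    for _ in range(k):
        best = max((x for x in ground_set if x not in S),
                   key=lambda x: f(S | {x}) - f(S), default=None)
        if best is None:
            break
        S.add(best)
    return S

# Toy objective: number of nodes covered by the chosen seeds.
neighbors = {1: {1, 2, 3}, 2: {2, 4}, 3: {3, 5, 6}, 4: {4}}

def cover(S):
    return len(set().union(*(neighbors[x] for x in S))) if S else 0
```

Here `greedy_max([1, 2, 3, 4], cover, 2)` first picks node 1 (gain 3), then node 3 (gain 2), covering five of the six nodes.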
Neural network setups for a precise detection of the many-body localization transition: finite-size scaling and limitations
Determining phase diagrams and phase transitions semi-automatically using
machine learning has received a lot of attention recently, with results in good
agreement with more conventional approaches in most cases. When it comes to
more quantitative predictions, such as the identification of universality class
or precise determination of critical points, the task is more challenging. As
an exacting test-bed, we study the Heisenberg spin-1/2 chain in a random
external field that is known to display a transition from a many-body localized
to a thermalizing regime, whose nature is not entirely characterized. We
introduce different neural network structures and dataset setups to achieve a
finite-size scaling analysis with the least possible physical bias (no assumed
knowledge of the phase transition and directly inputting wave-function
coefficients), using state-of-the-art input data simulating chains of sizes up
to L=24. In particular, we use domain adversarial techniques to ensure that the
network learns scale-invariant features. We find a variability of the output
results with respect to network and training parameters, resulting in
relatively large uncertainties on the final estimates of the critical point and
the correlation-length exponent, which tend to be larger than the values
obtained from conventional approaches. We put the emphasis on interpretability
throughout the paper and discuss what the network appears to learn for the
various architectures used. Our findings show that a quantitative analysis
of phase transitions of unknown nature remains a difficult task with neural
networks when using minimally engineered physical input.
Comment: v2: published version