Statics and dynamics of selfish interactions in distributed service systems
We study a class of games which model the competition among agents to access
some service provided by distributed service units and which exhibit congestion
and frustration phenomena when service units have limited capacity. We propose
a technique, based on the cavity method of statistical physics, to characterize
the full spectrum of Nash equilibria of the game. The analysis reveals a large
variety of equilibria, with very different statistical properties. Natural
selfish dynamics, such as best-response, usually tend to large-utility
equilibria, even though those of smaller utility are exponentially more
numerous. Interestingly, the latter can actually be reached by selecting the
initial conditions of the best-response dynamics close to the saturation limit
of the service unit capacities. We also study a more realistic stochastic
variant of the game by means of a simple and effective approximation of the
average over the random parameters, showing that the properties of the
average-case Nash equilibria are qualitatively similar to the deterministic
ones. Comment: 30 pages, 10 figures.
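As a purely illustrative aside (not taken from the paper), the following minimal Python sketch runs best-response dynamics in a toy capacitated congestion game; the cost rule, capacities, and all names are assumptions chosen here, and a full sweep with no profitable deviation is used as a proxy for a pure Nash equilibrium.

    import random

    # Toy capacitated congestion game: n_agents choose among n_units, each with a
    # soft capacity. Cost rule and parameters are illustrative assumptions only.
    def best_response_dynamics(n_agents=50, n_units=10, capacity=6, seed=0, max_sweeps=1000):
        rng = random.Random(seed)
        choice = [rng.randrange(n_units) for _ in range(n_agents)]  # initial condition

        def cost(u):
            # load unit u would carry if the deviating agent joined it,
            # with a heavy penalty beyond capacity
            load = sum(1 for c in choice if c == u) + 1
            return load + (100 * (load - capacity) if load > capacity else 0)

        for _ in range(max_sweeps):
            moved = False
            for agent in rng.sample(range(n_agents), n_agents):
                current = choice[agent]
                choice[agent] = None              # remove the agent before evaluating moves
                best = min(range(n_units), key=cost)
                choice[agent] = best
                if best != current:
                    moved = True
            if not moved:                         # no profitable deviation left
                break
        return choice

    if __name__ == "__main__":
        eq = best_response_dynamics()
        print("unit loads at the fixed point:", [eq.count(u) for u in range(10)])

Changing the initial condition (for instance, starting from configurations that nearly saturate the capacities instead of uniformly random choices) is the kind of knob the abstract refers to when discussing which equilibria the dynamics reaches.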
Probing the Majorana nature of the neutrino with neutrinoless double beta decay
Neutrinoless double beta decay (NDBD) is the only experiment that could probe
the Majorana nature of the neutrino. Here we study the theoretical implications
of NDBD for models yielding tri-bimaximal lepton mixing like A4 and S4. Comment: Talk given at TAUP09, July 1-5, 2009 (Roma). The proceedings will be published in Journal of Physics: Conference Series (Editors: E. Coccia, L. Pandola, N. Fornengo, R. Aloisio).
Light Higgs bosons from a strongly interacting Higgs sector
The mass and the decay width of a Higgs boson in the minimal standard model
are evaluated by a variational method in the limit of strong self-coupling
interaction. The non-perturbative technique provides an interpolation scheme
between the strong-coupling regime and the weak-coupling limit, where the
standard perturbative results are recovered. In the strong-coupling limit the physical
mass and the decay width of the Higgs boson are found to be very small as a
consequence of mass renormalization. Thus it is argued that the eventual
detection of a light Higgs boson would not rule out the existence of a strongly
interacting Higgs sector. Comment: 2 figures.
The Pion Structure Function in a Constituent Model
Using the recent relatively precise experimental results on the pion
structure function, obtained from Drell--Yan processes, we quantitatively test
an old model where the structure function of any hadron is determined by that
of its constituent quarks. In this model the pion structure function can be
predicted from the known nucleon structure function. We find that the data
support the model, at least as a good first approximation. Comment: 9 pages, 3 figures.
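Only to make the idea concrete (the notation here is generic, not the paper's), such a constituent model expresses a hadron structure function as a convolution of the distribution of constituent quarks in the hadron with a structure function of the constituent quark itself:

    F_2^{h}(x) = \sum_{Q} \int_x^1 dz \, \phi_{Q/h}(z) \, F_2^{Q}(x/z)

With F_2^{Q} assumed universal across hadrons, fitting it to nucleon data allows the pion structure function to be predicted once the constituent distribution \phi_{Q/\pi}(z) is specified.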
Optimizing spread dynamics on graphs by message passing
Cascade processes are responsible for many important phenomena in natural and
social sciences. Simple models of irreversible dynamics on graphs, in which
nodes activate depending on the state of their neighbors, have been
successfully applied to describe cascades in a large variety of contexts. Over
the last decades, many efforts have been devoted to understanding the typical
behaviour of the cascades arising from initial conditions extracted at random
from some given ensemble. However, the problem of optimizing the trajectory of
the system, i.e. of identifying appropriate initial conditions to maximize (or
minimize) the final number of active nodes, is still considered to be
practically intractable, with the only exception of models that satisfy a sort
of diminishing returns property called submodularity. Submodular models can be
approximately solved by means of greedy strategies, but by definition they lack
cooperative characteristics which are fundamental in many real systems. Here we
introduce an efficient algorithm based on statistical physics for the
optimization of trajectories in cascade processes on graphs. We show that for a
wide class of irreversible dynamics, even in the absence of submodularity, the
spread optimization problem can be solved efficiently on large networks.
Analytic and algorithmic results on random graphs are complemented by the
solution of the spread maximization problem on a real-world network (the
Epinions consumer reviews network). Comment: Replacement for "The Spread Optimization Problem".
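For concreteness, here is a small Python sketch of the underlying problem on a toy linear-threshold model: an irreversible cascade in which a node activates once enough of its neighbours are active, together with a greedy seed-selection baseline of the kind the abstract contrasts with its message-passing approach. The graph, thresholds, and budget are illustrative assumptions; this is not the algorithm of the paper.

    import random

    def random_graph(n, avg_deg, rng):
        # simple Erdos-Renyi-style graph, only to have something to run on
        adj = {v: set() for v in range(n)}
        p = avg_deg / (n - 1)
        for u in range(n):
            for v in range(u + 1, n):
                if rng.random() < p:
                    adj[u].add(v); adj[v].add(u)
        return adj

    def cascade(adj, thresholds, seeds):
        # irreversible dynamics: a node activates once the number of its
        # active neighbours reaches its threshold
        active = set(seeds)
        changed = True
        while changed:
            changed = False
            for v in adj:
                if v not in active and sum(1 for u in adj[v] if u in active) >= thresholds[v]:
                    active.add(v); changed = True
        return active

    def greedy_seeds(adj, thresholds, budget):
        # greedy baseline: repeatedly add the seed with the largest marginal spread
        seeds = set()
        for _ in range(budget):
            best = max((v for v in adj if v not in seeds),
                       key=lambda v: len(cascade(adj, thresholds, seeds | {v})))
            seeds.add(best)
        return seeds

    rng = random.Random(1)
    adj = random_graph(200, 4, rng)
    thr = {v: 2 for v in adj}                     # uniform activation threshold
    seeds = greedy_seeds(adj, thr, budget=5)
    print("final number of active nodes:", len(cascade(adj, thr, seeds)))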
Electroweak Precision Tests: A Concise Review
1. Introduction
2. Status of the Data
3. Precision Electroweak Data and the Standard Model
4. A More General Analysis of Electroweak Data
4.1 Basic Definitions and Results
4.2 Experimental Determination of the Epsilon Variables
4.3 Comparing the Data with the Minimal Supersymmetric Standard Model
5. Theoretical Limits on the Higgs Mass
6. Conclusion
Comment: Submitted to Int. Journal of Modern Physics.
Containing epidemic outbreaks by message-passing techniques
The problem of targeted network immunization can be defined as that of
finding a subset of nodes in a network to immunize or vaccinate in order to
minimize a tradeoff between the cost of vaccination and the final (stationary)
expected infection under a given epidemic model. Although computing the
expected infection is a hard computational problem, simple and efficient
mean-field approximations have been put forward in the literature in recent
years. The optimization problem can be recast into a constrained one in which
the constraints enforce local mean-field equations describing the average
stationary state of the epidemic process. For a wide class of epidemic models,
including the susceptible-infected-removed and the
susceptible-infected-susceptible models, we define a message-passing approach
to network immunization that allows us to study the statistical properties of
epidemic outbreaks in the presence of immunized nodes as well as to find
(nearly) optimal immunization sets for a given choice of parameters and costs.
The algorithm scales linearly with the size of the graph and it can be made
efficient even on large networks. We compare its performance with topologically
based heuristics, greedy methods, and simulated annealing.
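As an illustration of the trade-off being optimized (a plain Monte Carlo baseline, not the mean-field message-passing method of the abstract), the following Python sketch estimates the expected SIR outbreak size for a given immunized set and compares random immunization with a highest-degree topological heuristic; the graph and parameters are arbitrary assumptions.

    import random

    def random_graph(n, avg_deg, rng):
        adj = {v: set() for v in range(n)}
        p = avg_deg / (n - 1)
        for u in range(n):
            for v in range(u + 1, n):
                if rng.random() < p:
                    adj[u].add(v); adj[v].add(u)
        return adj

    def sir_outbreak(adj, immune, beta, rng):
        # one SIR run from a random non-immunized seed; immunized nodes never get infected
        susceptible = set(adj) - set(immune)
        seed = rng.choice(sorted(susceptible))
        susceptible.discard(seed)
        infected, removed = {seed}, set()
        while infected:
            new = {u for v in infected for u in adj[v]
                   if u in susceptible and rng.random() < beta}
            susceptible -= new
            removed |= infected
            infected = new
        return len(removed)

    def expected_infection(adj, immune, beta=0.3, runs=300, seed=0):
        rng = random.Random(seed)
        return sum(sir_outbreak(adj, immune, beta, rng) for _ in range(runs)) / runs

    rng = random.Random(1)
    adj = random_graph(300, 4, rng)
    k = 20
    by_degree = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k]
    at_random = rng.sample(sorted(adj), k)
    print("high-degree immunization:", expected_infection(adj, by_degree))
    print("random immunization:     ", expected_infection(adj, at_random))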
Relationship between clustering and algorithmic phase transitions in the random k-XORSAT model and its NP-complete extensions
We study the performance of stochastic heuristic search algorithms on
Uniquely Extendible Constraint Satisfaction Problems with random inputs. We
show that, for any heuristic preserving the Poissonian nature of the underlying
instance, the (heuristic-dependent) largest ratio of constraints per
variable for which a search algorithm is likely to find solutions is smaller
than the critical ratio above which solutions are clustered and
highly correlated. In addition we show that the clustering ratio can be reached
when the number k of variables per constraint goes to infinity by the
so-called Generalized Unit Clause heuristic. Comment: 15 pages, 4 figures, Proceedings of the International Workshop on Statistical-Mechanical Informatics, September 16-19, 2007, Kyoto, Japan; some imprecisions in the previous version have been corrected.
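A simplified Python sketch of a unit-clause-type heuristic on random k-XORSAT, in the spirit of (but not identical to) the Generalized Unit Clause rule mentioned above: forced unit assignments are made whenever possible, otherwise a variable from a shortest clause receives a random value, with no backtracking. Instance sizes and the exact branching rule are assumptions.

    import random

    def random_kxorsat(n_vars, n_clauses, k, rng):
        # a clause is (set of variables, parity): the XOR of the variables must equal the parity
        return [(set(rng.sample(range(n_vars), k)), rng.randrange(2)) for _ in range(n_clauses)]

    def unit_clause_search(clauses, n_vars, rng):
        clauses = [(set(v), p) for v, p in clauses]
        assignment = {}
        while True:
            clauses = [(v, p) for v, p in clauses if v or p]      # drop satisfied clauses
            if any(not v and p == 1 for v, p in clauses):
                return None                                       # contradiction, give up
            if not clauses:
                for x in range(n_vars):                           # free variables: any value
                    assignment.setdefault(x, 0)
                return assignment
            units = [c for c in clauses if len(c[0]) == 1]
            if units:
                vars_, parity = units[0]
                var, val = next(iter(vars_)), parity              # forced step
            else:
                vars_, _ = min(clauses, key=lambda c: len(c[0]))  # pick from a shortest clause
                var, val = rng.choice(sorted(vars_)), rng.randrange(2)
            assignment[var] = val
            clauses = [(v - {var}, p ^ (val if var in v else 0)) for v, p in clauses]

    rng = random.Random(7)
    instance = random_kxorsat(n_vars=300, n_clauses=200, k=3, rng=rng)
    solution = unit_clause_search(instance, 300, rng)
    print("found a solution" if solution is not None else "ran into a contradiction")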
Ultraviolet Completion of Flavour Models
Effective Flavour Models do not address questions related to the nature of
the fundamental renormalisable theory at high energies. We study the
ultraviolet completion of Flavour Models, which in general has the advantage of
improving the predictivity of the effective models. In order to illustrate the
important features we provide minimal completions for two known A4 models. We
discuss the phenomenological implications of the explicit completions, such as
lepton flavour violating contributions that arise through the exchange of
messenger fields. Comment: 18 pages, 8 figures.
Indication for Light Sneutrinos and Gauginos from Precision Electroweak Data
The present Standard Model fit of precision data has a low confidence level,
and is characterized by a few inconsistencies. We look for supersymmetric
effects that could improve the agreement among the electroweak precision
measurements and with the direct lower bound on the Higgs mass. We find that
this is the case particularly if the 3.6 sigma discrepancy between sin^2
theta_eff from leptonic and hadronic asymmetries is finally settled more on the
side of the leptonic ones. After the inclusion of all experimental constraints,
our analysis selects light sneutrinos, with masses in the range 55-80 GeV, and
charged sleptons with masses just above their experimental limit, possibly with
additional effects from light gauginos. The phenomenological implications of
this scenario are discussed. Comment: 17 pages LaTeX, 9 figures, uses epsfig.
