Long ties accelerate noisy threshold-based contagions
Network structure can affect when and how widely new ideas, products, and
behaviors are adopted. In widely-used models of biological contagion,
interventions that randomly rewire edges (generally making them "longer")
accelerate spread. However, there are other models relevant to social
contagion, such as those motivated by myopic best-response in games with
strategic complements, in which an individual's behavior is described by a
threshold number of adopting neighbors above which adoption occurs (i.e.,
complex contagions). Recent work has argued that highly clustered, rather than
random, networks facilitate spread of these complex contagions. Here we show
that minor modifications to this model, which make it more realistic, reverse
this result: we allow very rare below-threshold adoption, i.e., rarely adoption
occurs when there is only one adopting neighbor. To model the trade-off between
long and short edges we consider networks that are the union of cycle-power-k
graphs and random graphs on n nodes. Allowing adoptions below threshold to
occur with probability of order 1/sqrt(n) along some "short" cycle edges is
enough to ensure that random rewiring accelerates spread. Simulations
illustrate the robustness of these results to other commonly-posited models for
noisy best-response behavior. Hypothetical interventions that randomly rewire
existing edges or add random edges (versus adding "short", triad-closing edges)
in hundreds of empirical social networks reduce time to spread. This revised
conclusion suggests that those wanting to increase spread should induce
formation of long ties, rather than triad-closing ties. More generally, this
highlights the importance of noise in game-theoretic analyses of behavior.
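A minimal simulation sketch of the dynamic described above, assuming a synchronous noisy threshold rule. The graph construction (a cycle-power-k ring unioned with an Erdos-Renyi graph), the parameter values, and all function names are illustrative, not the paper's exact specification:

```python
import random

def union_graph(n, k, p, rng):
    """Union of a cycle-power-k graph (each node tied to its k nearest
    neighbors on each side of a ring) and an Erdos-Renyi G(n, p) graph."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k + 1):
            u = (v + d) % n
            adj[v].add(u)
            adj[u].add(v)
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def spread_time(adj, seeds, theta, eps, rng, max_steps=10_000):
    """Synchronous dynamics: a node adopts once it has >= theta adopting
    neighbors, or, with small per-step probability eps, when it has at
    least one adopting neighbor (the rare below-threshold adoption)."""
    adopted = set(seeds)
    for t in range(1, max_steps + 1):
        new = set()
        for v in adj:
            if v in adopted:
                continue
            m = sum(1 for u in adj[v] if u in adopted)
            if m >= theta or (m >= 1 and rng.random() < eps):
                new.add(v)
        adopted |= new
        if len(adopted) == len(adj):
            return t
    return None

rng = random.Random(0)
t_noisy = spread_time(union_graph(200, 2, 0.01, rng), {0, 1},
                      theta=2, eps=0.05, rng=rng)
```

Comparing `t_noisy` across rewired variants of the same graph (holding the edge count fixed) is the kind of experiment the simulations above refer to.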
Random Recursive Hypergraphs
Random recursive hypergraphs grow by adding, at each step, a vertex and an
edge formed by joining the new vertex to a randomly chosen existing edge. The
model is parameter-free, and several characteristics of emerging hypergraphs
admit neat expressions via harmonic numbers, Bernoulli numbers, Eulerian
numbers, and Stirling numbers of the first kind. Natural deformations of random
recursive hypergraphs give rise to fascinating models of growing random
hypergraphs.
Comment: 13 pages, 1 figure; v3: minor updates, references added
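The growth rule is simple enough to state in a few lines of code; a sketch, with a single one-vertex seed edge as an assumed initial condition:

```python
import random

def random_recursive_hypergraph(steps, rng):
    """Grow a random recursive hypergraph: at each step, add a new
    vertex and a new edge formed by joining that vertex to a
    uniformly chosen existing edge."""
    edges = [frozenset({0})]   # seed: one edge on one vertex (assumed)
    for v in range(1, steps + 1):
        edges.append(rng.choice(edges) | {v})
    return edges

edges = random_recursive_hypergraph(10, random.Random(1))
```

Statistics such as the edge-size distribution, averaged over many runs, can then be checked against the exact expressions mentioned above.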
Pattern Recognition and Event Reconstruction in Particle Physics Experiments
This report reviews methods of pattern recognition and event reconstruction
used in modern high energy physics experiments. After a brief introduction into
general concepts of particle detectors and statistical evaluation, different
approaches in global and local methods of track pattern recognition are
reviewed with their typical strengths and shortcomings. The emphasis is then
moved to methods which estimate the particle properties from the signals which
pattern recognition has associated. Finally, the global reconstruction of the
event is briefly addressed.
Comment: 101 pages, 58 figures
Modelling the evolution of biological complexity with a two-dimensional lattice self-assembly process
Self-assembling systems are prevalent across numerous scales of nature, lying at the heart of diverse physical and biological phenomena.
Individual protein subunits self-assembling into complexes is often a vital first step of biological processes.
Errors during protein assembly, due to mutations or misfolds, can have devastating effects and are responsible for an assortment of protein diseases, known as proteopathies.
With proteins exhibiting endless layers of complexity, building any all-encompassing model is unrealistic.
Coarse-grained models, despite not faithfully capturing every detail of the original system, have massive potential to assist in understanding complex phenomena.
The principal actors in self-assembly are the binding interactions between subunits, and so geometric constraints, polarity, kinetic forces, etc. can often be marginalised.
This work explores how self-assembly and its outcomes are inextricably tied to the involved interactions through the use of a two-dimensional lattice polyomino model.
First, this thesis addresses how the interaction characteristics of self-assembly building blocks determine what structures they form.
Specifically, whether the same structures are consistently produced and whether they remain finite in size.
Assembly graphs store subunit interaction information and are used to classify these two properties: determinism and boundedness, respectively.
Arbitrary sets of building blocks are classified without the costly overhead of repeated stochastic assembling, improving both the analysis speed and accuracy.
Furthermore, assembly graphs naturally integrate combinatorial and graph techniques, enabling a wider range of future polyomino studies.
The second part focuses on the implications of nondeterministic assembly for the evolution of interaction strengths.
Generalising subunit binding sites to mutable binary strings introduces such interaction strengths into the polyomino model.
Deterministic assemblies obey analytic expectations.
Conversely, interactions in nondeterministic assemblies rapidly diverge from equilibrium to minimise assembly inconsistency.
Optimal interaction strengths during assembly are also reflected in evolution.
Transitions between certain polyominoes are strongly forbidden when interaction strengths are misaligned.
The third aspect focuses on genetic duplication, an evolutionary event observed in organisms across all taxa.
Through polyomino evolutions, a duplication-heteromerisation pathway emerges as an efficient process.
This pathway exploits the advantages of both self-interactions and pairwise-interactions, and accelerates evolution by avoiding complexity bottlenecks.
Several simulation predictions are successfully validated against a large data set of protein complexes.
These results concern a coarse-grained model rather than quantitative biological detail.
Despite this, they reinforce existing observations of protein complexes, as well as posing several new mechanisms for the evolution of biological complexity.
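To give a flavor of the kind of model involved, here is a minimal sketch (not the thesis's actual implementation) of lattice self-assembly from square tiles: integer edge labels stand in for binding sites, with equal nonzero labels assumed to bind.

```python
from collections import deque

def rotations(tile):
    """The four rotations of a tile given as edge labels (N, E, S, W)."""
    n, e, s, w = tile
    return [(n, e, s, w), (w, n, e, s), (s, w, n, e), (e, s, w, n)]

def assemble(tiles, max_size=64):
    """Greedy lattice assembly: seed tile 0 at the origin, then repeatedly
    attach the first tile (in any rotation) whose edge binds a free
    interface.  Returns the set of occupied sites, or None if growth
    exceeds max_size (treated here as unbounded)."""
    grid = {(0, 0): rotations(tiles[0])[0]}
    frontier = deque([(0, 0)])
    # (offset, own edge index, neighbor's facing edge index)
    sides = [((0, 1), 0, 2), ((1, 0), 1, 3), ((0, -1), 2, 0), ((-1, 0), 3, 1)]
    while frontier:
        x, y = frontier.popleft()
        for (dx, dy), own, opp in sides:
            site = (x + dx, y + dy)
            lbl = grid[(x, y)][own]
            if site in grid or lbl == 0:
                continue
            for rot in (r for t in tiles for r in rotations(t)):
                if rot[opp] == lbl:
                    grid[site] = rot
                    frontier.append(site)
                    break
            if len(grid) > max_size:
                return None
    return set(grid)

# Two tile types whose single interaction closes after one attachment,
# deterministically forming a 2x1 polyomino (a "domino"):
domino = assemble([(0, 1, 0, 0), (0, 0, 0, 1)])
```

Because every interface in this tile set closes, the assembly is both deterministic and bounded, which are exactly the two properties the assembly-graph classification targets.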
Network security
In a variety of settings, some payoff-relevant item spreads along a network of connected individuals. In some cases, the item will benefit those who receive it (for example, a music download, a stock tip, news about a new research funding source, etc.) while in other cases the impact may be negative (for example, viruses, both biological and electronic, financial contagion, and so on). Often, good and bad items may propagate along the same networks, so individuals must weigh the costs and benefits of being more or less connected to the network. The situation becomes more complicated (and more interesting) if individuals can also put effort into security, where security can be thought of as a screening technology that allows an individual to keep getting the benefits of network connectivity while blocking out the bad items. Drawing on the network literatures in economics, epidemiology, and applied math, we formulate a model of network security that can be used to study individual incentives to expand and secure networks and characterize properties of a symmetric equilibrium.
Keywords: social networks; network security; network robustness; contagion; random graphs
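The trade-off can be made concrete with a toy Monte-Carlo sketch. The percolation rule, payoff values, and fixed screening cost below are illustrative assumptions, not the paper's model:

```python
import random

def avg_payoff(secure, n=100, p=0.04, n_items=200, b=1.0, c=1.5,
               screening_cost=20.0, seed=0):
    """Toy percolation sketch: items, good or bad with equal probability,
    originate at a random node of a G(n, p) graph and reach every node in
    the origin's connected component.  A secured node pays a fixed
    screening cost but blocks the bad items it would otherwise absorb.
    All payoff values are illustrative assumptions."""
    rng = random.Random(seed)          # same seed -> same graph and items
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    comp, stack = {0}, [0]             # component of the focal node 0
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in comp:
                comp.add(w)
                stack.append(w)
    total = -screening_cost if secure else 0.0
    for _ in range(n_items):
        origin, good = rng.randrange(n), rng.random() < 0.5
        if origin in comp:             # the item reaches node 0
            if good:
                total += b
            elif not secure:
                total -= c
    return total / n_items

insecure, secured = avg_payoff(secure=False), avg_payoff(secure=True)
```

Fixing the seed makes the two calls face the identical graph and item stream, so the comparison isolates the value of screening; varying p then sketches how the security decision interacts with connectivity.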
Determinantal Point Processes for Coresets
When one is faced with a dataset too large to be used all at once, an obvious solution is to retain only part of it. In practice this takes a wide variety of different forms, but among them "coresets" are especially appealing. A coreset is a (small) weighted sample of the original data that comes with a guarantee: a cost function can be evaluated on the smaller set instead of the larger one, with low relative error. For some classes of problems, and via a careful choice of sampling distribution, iid random sampling has turned out to be one of the most successful methods to build coresets efficiently. However, independent samples are sometimes overly redundant, and one could hope that enforcing diversity would lead to better performance. The difficulty lies in proving coreset properties for non-iid samples. We show that the coreset property holds for samples formed with determinantal point processes (DPPs). DPPs are interesting because they are a rare example of repulsive point processes with tractable theoretical properties, enabling us to construct general coreset theorems. We apply our results to the k-means problem, and give empirical evidence of the superior performance of DPP samples over state-of-the-art methods.
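As an illustration of the sampling side only, below is the standard spectral algorithm for drawing an exact sample from an L-ensemble DPP; the Gaussian similarity kernel and its bandwidth are illustrative choices, and the paper's coreset weighting and guarantees are not reproduced here.

```python
import numpy as np

def sample_dpp(L, rng):
    """Draw one exact sample from the DPP with L-ensemble kernel L
    (symmetric positive semi-definite) via the spectral method:
    first pick eigenvectors independently, then select items one
    at a time, shrinking the retained subspace after each pick."""
    lam, vecs = np.linalg.eigh(L)
    V = vecs[:, rng.random(len(lam)) < lam / (1.0 + lam)]
    sample = []
    while V.shape[1] > 0:
        p = (V ** 2).sum(axis=1)        # selection probabilities
        p /= p.sum()
        i = int(rng.choice(len(p), p=p))
        sample.append(i)
        # remove one column with a nonzero entry at row i, zero out
        # row i in the rest, then re-orthonormalize the basis
        j = int(np.argmax(np.abs(V[i])))
        Vj = V[:, j].copy()
        V = np.delete(V, j, axis=1)
        V -= np.outer(Vj, V[i] / Vj[i])
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)
    return sample

rng = np.random.default_rng(0)
X = rng.random((50, 2))                 # 50 toy points in the unit square
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
S = sample_dpp(np.exp(-D2 / 0.05), rng)  # a repulsive subset of the points
```

Roughly speaking, a coreset then attaches importance weights to the sampled points (correcting for their inclusion probabilities) before evaluating, e.g., the k-means cost on the sample alone.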
Higher-order components dictate higher-order dynamics in hypergraphs
The presence of the giant component is a necessary condition for the
emergence of collective behavior in complex networked systems. Unlike networks,
hypergraphs have an important native feature that components of hypergraphs
might be of higher order, which could be defined in terms of the number of
common nodes shared between hyperedges. Although the extensive higher-order
component (HOC) could be witnessed ubiquitously in real-world hypergraphs, the
role of the giant HOC in collective behavior on hypergraphs has yet to be
elucidated. In this Letter, we demonstrate that the presence of the giant HOC
fundamentally alters the outbreak patterns of higher-order contagion dynamics
on real-world hypergraphs. Most crucially, the giant HOC is required for the
higher-order contagion to invade globally from a single seed. We confirm it by
using synthetic random hypergraphs containing adjustable and analytically
calculable giant HOC.
Comment: Main: 6 pages, 4 figures. Supplementary Material: 7 pages, 7 figures
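The notion of order-m connectivity described above can be made concrete: treat two hyperedges as adjacent when they share at least m common nodes, and take components of that adjacency relation. A small union-find sketch (function and variable names are assumed, not from the paper):

```python
from itertools import combinations

def hoc_components(hyperedges, m):
    """Order-m connected components of a hypergraph: hyperedges are
    adjacent when they share at least m nodes; m = 1 recovers ordinary
    hyperedge connectivity.  Returns lists of hyperedge indices."""
    n = len(hyperedges)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for i, j in combinations(range(n), 2):
        if len(hyperedges[i] & hyperedges[j]) >= m:
            parent[find(i)] = find(j)
    comps = {}
    for i in range(n):
        comps.setdefault(find(i), []).append(i)
    return list(comps.values())

H = [frozenset(e) for e in ({1, 2, 3}, {2, 3, 4}, {4, 5}, {6, 7})]
```

Here `hoc_components(H, 1)` yields two components, but raising m to 2 splits the chain, since edges 1 and 2 share only one node; this is the sense in which the giant HOC can vanish while the ordinary giant component survives.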
Content Sharing in Mobile Networks with Infrastructure: Planning and Management
This thesis focuses on mobile ad-hoc networks (with pedestrian or vehicular mobility) having infrastructure support. We deal with the problems of design, deployment and management of such networks. A first issue to address concerns the infrastructure itself: how pervasive should it be in order for the network to operate both efficiently and in a cost-effective manner? How should the units composing it (e.g., access points) be placed? There are several approaches to such questions in the literature, and this thesis studies and compares them. Furthermore, in order to effectively design the infrastructure, we need to understand how and how much it will be used. As an example, what is the relationship between infrastructure-to-node and node-to-node communication? How far away, in time and space, do data travel before their destination is reached? A common assumption made when dealing with such problems is that perfect knowledge about current and future node mobility is available. In this thesis, we also deal with the problem of assessing the impact that imperfect, limited knowledge has on network performance. As far as the management of the network is concerned, this thesis presents a variant of the paradigm known as publish-and-subscribe. With respect to the original paradigm, our goal is to ensure a high probability of finding the requested content, even in the presence of selfish, uncooperative nodes, or even nodes whose precise goal is to harm the system. Each node is allowed to get from the network an amount of content corresponding to the amount of content it has provided to other nodes. Nodes with caching capabilities are assisted in using their cache so as to improve the amount of offered content.
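The give-to-get rule in the publish-and-subscribe variant can be sketched as a simple credit ledger; the class name, content units, and the one-unit initial grant are illustrative assumptions:

```python
class CreditLedger:
    """Toy sketch of the give-to-get rule: a node may download only as
    much content as it has uploaded to others, plus a small initial
    grant so newcomers can participate (an assumed bootstrap choice)."""

    def __init__(self, initial_credit=1):
        self.initial = initial_credit
        self.credit = {}             # node id -> spendable content units

    def provided(self, node, units=1):
        """Credit a node for serving `units` of content to peers."""
        self.credit[node] = self.credit.get(node, self.initial) + units

    def request(self, node, units=1):
        """Grant a download only if the node has enough credit."""
        bal = self.credit.get(node, self.initial)
        if bal >= units:
            self.credit[node] = bal - units
            return True
        return False

ledger = CreditLedger(initial_credit=1)
ok1 = ledger.request("alice")    # spends the initial grant -> granted
ok2 = ledger.request("alice")    # no credit left -> refused
ledger.provided("alice", 2)      # alice serves two units to peers
ok3 = ledger.request("alice")    # granted again
```

A real system would additionally need the credits to be unforgeable (e.g., receipts countersigned by the receiving peer), since malicious nodes are exactly the concern here.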