Modeling algorithmic bias: simplicial complexes and evolving network topologies
Every day, people inform themselves and form their opinions on social networks. Although these platforms have broadened access to information and eased its dissemination, they may expose readers to manipulative, biased, and disinformative content, all co-causes of polarization and radicalization. Moreover, recommendation algorithms, initially intended to enhance platform usage, are likely to amplify such phenomena, generating the so-called Algorithmic Bias. In this work, we propose two extensions of the Algorithmic Bias model and analyze them on scale-free and Erdős-Rényi random network topologies. Our first extension introduces a link-rewiring mechanism so that the underlying structure co-evolves with the opinion dynamics, yielding the Adaptive Algorithmic Bias model. The second explicitly models a peer-pressure mechanism in which a majority, if there is one, can attract a disagreeing individual, pushing them to conform. As a result, we observe that the co-evolution of opinions and network structure does not significantly impact the final state when the latter is much slower than the former. On the other hand, peer pressure enhances consensus, mitigating the effects of both "close-mindedness" and algorithmic filtering.
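The Algorithmic Bias model extended here is a bounded-confidence (Deffuant-style) opinion model in which interaction partners are chosen with probability decreasing in opinion distance. A minimal sketch of one such biased update, with illustrative parameter names (`epsilon` for the confidence bound, `gamma` for the bias strength, `mu` for the convergence rate):

```python
import random

def step(opinions, neighbors, epsilon=0.3, gamma=1.0, mu=0.5):
    """One update of a biased bounded-confidence (Deffuant-style) model.

    A random agent i picks a neighbor j with probability proportional to
    |x_i - x_j|^(-gamma) (the algorithmic bias: similar opinions are favored);
    if their opinions differ by less than epsilon, both move toward the mean.
    """
    i = random.randrange(len(opinions))
    cand = neighbors[i]
    # Bias weights; a small constant avoids division by zero on equal opinions.
    w = [(abs(opinions[i] - opinions[j]) + 1e-9) ** -gamma for j in cand]
    j = random.choices(cand, weights=w)[0]
    if abs(opinions[i] - opinions[j]) < epsilon:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift
    return opinions

# Usage: 50 agents on a ring, opinions uniform in [0, 1].
random.seed(0)
n = 50
x = [random.random() for _ in range(n)]
nbrs = [[(k - 1) % n, (k + 1) % n] for k in range(n)]
for _ in range(5000):
    x = step(x, nbrs)
```

With gamma = 0 the bias disappears and the dynamics reduce to the classical bounded-confidence model; larger gamma concentrates interactions on like-minded neighbors.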
Information Gathering in Networks via Active Exploration
How should we gather information in a network, where each node's visibility
is limited to its local neighborhood? This problem arises in numerous
real-world applications, such as surveying and task routing in social networks,
team formation in collaborative networks and experimental design with
dependency constraints. Often the informativeness of a set of nodes can be
quantified via a submodular utility function. Existing approaches for
submodular optimization, however, require that the set of all nodes that can be
selected is known ahead of time, which is often unrealistic. In contrast, we
propose a novel model where we start our exploration from an initial node, and
new nodes become visible and available for selection only once one of their
neighbors has been chosen. We then present a general algorithm, NetExp, for
this problem, and provide theoretical bounds on its performance that depend on
structural properties of the underlying network. We evaluate our methodology on
various simulated problem instances as well as on data collected from a social
question-answering system deployed within a large enterprise.
Comment: Longer version of IJCAI'15 paper.
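The setting can be illustrated with a simple adaptive greedy baseline (a sketch, not the NetExp algorithm itself): starting from a seed, only neighbors of already-selected nodes become visible, and each step picks the visible node with the largest marginal gain of a coverage-style submodular utility. The graph, coverage sets, and function names below are illustrative assumptions:

```python
def adaptive_greedy(graph, start, budget, covers):
    """Greedy exploration under local visibility (illustrative baseline).

    graph:  dict node -> list of neighbor nodes
    covers: dict node -> set of items that node covers; the utility
            f(S) = |union of covers[v] for v in S| is monotone submodular.
    Only neighbors of already-selected nodes are visible for selection.
    """
    selected = [start]
    covered = set(covers[start])
    visible = set(graph[start]) - {start}
    for _ in range(budget - 1):
        if not visible:
            break
        # Marginal gain of each visible node under the coverage utility.
        best = max(visible, key=lambda v: len(covers[v] - covered))
        selected.append(best)
        covered |= covers[best]
        visible |= set(graph[best])   # choosing a node reveals its neighbors
        visible -= set(selected)
    return selected, covered

# Usage: a small path graph where farther nodes cover new items.
g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
cov = {0: {"a"}, 1: {"a", "b"}, 2: {"c"}, 3: {"c", "d"}}
sel, cov_items = adaptive_greedy(g, 0, budget=3, covers=cov)
```

Note the key constraint the paper studies: node 3 covers a new item "d" but is never considered at budget 3, because it only becomes visible after node 2 is chosen.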
Local Treewidth of Random and Noisy Graphs with Applications to Stopping Contagion in Networks
We study the notion of local treewidth in sparse random graphs: the maximum treewidth over all k-vertex subgraphs of an n-vertex graph. When k is not too large, we give nearly tight bounds for this local treewidth parameter; we also derive nearly tight bounds for the local treewidth of noisy trees, trees where every non-edge is added independently with small probability. We apply our upper bounds on the local treewidth to obtain fixed-parameter tractable algorithms (on random graphs and noisy trees) for edge-removal problems centered around containing a contagious process evolving over a network. In these problems, our main parameter of study is k, the number of initially "infected" vertices in the network. For the random graph models we consider and a certain range of parameters, the running time of our algorithms on n-vertex graphs is 2^o(k) poly(n), improving upon the 2^Ω(k) poly(n) performance of the best-known algorithms designed for worst-case instances of these edge-deletion problems.
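The contagious process referred to above can be modeled as threshold contagion (bootstrap percolation): a vertex becomes infected once enough of its neighbors are. A minimal sketch, with illustrative names, of the noisy-tree model together with such a spread process:

```python
import itertools
import random

def noisy_tree(n, noise_p, rng):
    """A random labeled tree on n vertices, plus each non-edge added
    independently with probability noise_p (the 'noisy tree' model)."""
    edges = set()
    nodes = list(range(n))
    rng.shuffle(nodes)
    for k in range(1, n):  # random recursive tree: attach to an earlier node
        edges.add(frozenset((nodes[k], rng.choice(nodes[:k]))))
    for u, v in itertools.combinations(range(n), 2):
        if frozenset((u, v)) not in edges and rng.random() < noise_p:
            edges.add(frozenset((u, v)))
    return edges

def spread(n, edges, seeds, threshold):
    """Threshold contagion: a vertex becomes infected once at least
    `threshold` of its neighbors are infected; returns the final infected set."""
    adj = {v: set() for v in range(n)}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    infected = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in range(n):
            if v not in infected and len(adj[v] & infected) >= threshold:
                infected.add(v)
                changed = True
    return infected

# Usage: two opposite seeds on a 4-cycle infect everyone at threshold 2.
square = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
final = spread(4, square, seeds={0, 2}, threshold=2)
```

The edge-removal problems in the paper ask which edges to delete so that a spread like this stays contained; the simulation above only evaluates a given network, it does not solve that optimization.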
An Experimental Study of the Treewidth of Real-World Graph Data
Treewidth is a parameter that measures how tree-like a relational instance is, and whether it can reasonably be decomposed into a tree. Many computation tasks are known to be tractable on databases of small treewidth, but computing the treewidth of a given instance is intractable. This article is the first large-scale experimental study of treewidth and tree decompositions of real-world database instances (25 datasets from 8 different domains, with sizes ranging from a few thousand to a few million vertices). The goal is to determine which data, if any, can benefit from the wealth of algorithms for databases of small treewidth. For each dataset, we obtain upper and lower bound estimates of its treewidth and study the properties of its tree decompositions. We show in particular that, even when treewidth is high, using partial tree decompositions can result in data structures that can assist algorithms.
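Treewidth upper bounds in experimental studies like this one typically come from greedy elimination heuristics. A minimal sketch of the classic min-degree heuristic (one standard upper-bound method; whether this article uses exactly this variant is not stated here):

```python
def min_degree_width(adj):
    """Upper-bound treewidth via the min-degree elimination heuristic.

    adj: dict vertex -> set of neighbors (a working copy is made).
    Repeatedly eliminates a minimum-degree vertex, turning its remaining
    neighborhood into a clique; the largest degree seen at elimination time
    is an upper bound on the treewidth of the graph.
    """
    adj = {v: set(ns) for v, ns in adj.items()}
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= nbrs - {u}  # make the neighborhood a clique
    return width

# Usage: a path (treewidth 1) and a cycle (treewidth 2).
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```

The elimination order recorded along the way also yields a tree decomposition of the reported width, which is what makes such heuristics useful beyond the bound itself.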
Threshold for Steiner triple systems
We prove that with high probability G^(3)(n, n^{-1+o(1)}) contains a spanning
Steiner triple system for n ≡ 1, 3 (mod 6),
establishing the tight exponent for the threshold probability for existence of
a Steiner triple system. We also prove the analogous theorem for Latin squares.
Our result follows from a novel bootstrapping scheme that utilizes iterative
absorption as well as the connection between thresholds and fractional
expectation-thresholds established by Frankston, Kahn, Narayanan, and Park.Comment: 22 pages, 1 figur