Conditional Reliability in Uncertain Graphs
Network reliability is a well-studied problem that requires measuring the
probability that a target node is reachable from a source node in a
probabilistic (or uncertain) graph, i.e., a graph where every edge is assigned
a probability of existence. Many approaches and problem variants have been
considered in the literature, all assuming that edge-existence probabilities
are fixed. Nevertheless, in real-world graphs, edge probabilities typically
depend on external conditions. In metabolic networks a protein can be converted
into another protein with some probability depending on the presence of certain
enzymes. In social influence networks the probability that a tweet of some user
will be re-tweeted by her followers depends on whether the tweet contains
specific hashtags. In transportation networks the probability that a network
segment will work properly or not might depend on external conditions such as
weather or time of the day. In this paper we overcome this limitation and focus
on conditional reliability, that is, assessing reliability when edge-existence
probabilities depend on a set of conditions. In particular, we study the
problem of determining the k conditions that maximize the reliability between
two nodes. We thoroughly characterize our problem and show that, even when employing
polynomial-time reliability-estimation methods, it is NP-hard, does not admit
any PTAS, and the underlying objective function is non-submodular. We then
devise a practical method that targets both accuracy and efficiency. We also
study natural generalizations of the problem with multiple source and target
nodes. An extensive empirical evaluation on several large, real-life graphs
demonstrates the effectiveness and scalability of the proposed methods.
Comment: 14 pages, 13 figures
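The core primitive such methods rely on is Monte Carlo estimation of source-target reliability in an uncertain graph. The sketch below is a minimal illustration, not the paper's estimator: the function name, the per-condition probability dictionaries, and the choice to combine condition-dependent probabilities via a maximum are all assumptions made for the example.

```python
import random
from collections import defaultdict, deque

def estimate_reliability(edges, source, target, active_conditions, num_samples=1000):
    """Monte Carlo estimate of source-target reliability in an uncertain graph.

    `edges` maps (u, v) -> {condition: existence probability}; as a simplifying
    assumption, the probability of an edge under a set of active conditions is
    taken to be the maximum over those conditions.
    """
    hits = 0
    for _ in range(num_samples):
        # Sample one possible world: keep each edge independently
        # with its (condition-dependent) probability.
        adj = defaultdict(list)
        for (u, v), cond_probs in edges.items():
            p = max((cond_probs.get(c, 0.0) for c in active_conditions), default=0.0)
            if random.random() < p:
                adj[u].append(v)
        # BFS to check whether target is reachable from source in this world.
        seen, queue = {source}, deque([source])
        while queue:
            u = queue.popleft()
            if u == target:
                hits += 1
                break
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    return hits / num_samples

# Example: the same two-edge path is far more reliable under "dry" than "rain".
edges = {("s", "a"): {"rain": 0.2, "dry": 0.9},
         ("a", "t"): {"rain": 0.5, "dry": 0.8}}
print(estimate_reliability(edges, "s", "t", {"dry"}))
```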
Core Decomposition in Multilayer Networks: Theory, Algorithms, and Applications
Multilayer networks are a powerful paradigm to model complex systems, where
multiple relations occur between the same entities. Despite the keen interest
in a variety of tasks, algorithms, and analyses in this type of network, the
problem of extracting dense subgraphs has remained largely unexplored so far.
In this work we study the problem of core decomposition of a multilayer
network. The multilayer context is much more challenging, as no total order exists
among multilayer cores; rather, they form a lattice whose size is exponential
in the number of layers. In this setting we devise three algorithms which
differ in the way they visit the core lattice and in their pruning techniques.
We then move a step forward and study the problem of extracting the
inner-most (also known as maximal) cores, i.e., the cores that are not
dominated by any other core in terms of their core index in all the layers.
Inner-most cores are typically orders of magnitude fewer than the total number of cores.
Motivated by this, we devise an algorithm that effectively exploits the
maximality property and extracts inner-most cores directly, without first
computing a complete decomposition.
Finally, we showcase the multilayer core-decomposition tool in a variety of
scenarios and problems. We start by considering the problem of densest-subgraph
extraction in multilayer networks. We introduce a definition of multilayer
densest subgraph that trades off high density against the number of layers in
which the high density holds, and exploit multilayer core decomposition to
approximate this problem with quality guarantees. As further applications, we
show how to utilize multilayer core decomposition to speed-up the extraction of
frequent cross-graph quasi-cliques and to generalize the community-search
problem to the multilayer setting.
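As background, the single-layer building block generalized here is the classic k-core peeling procedure. The following minimal sketch computes single-layer core numbers only; the multilayer algorithms of the paper traverse a lattice of per-layer core-index vectors, which this toy version does not attempt.

```python
from collections import defaultdict

def core_decomposition(edges):
    """Classic single-layer core decomposition by iterative peeling.

    Returns a dict node -> core number, i.e., the largest k such that the node
    belongs to a subgraph in which every node has degree >= k.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {u: len(nbrs) for u, nbrs in adj.items()}
    core = {}
    current_k = 0
    remaining = set(adj)
    while remaining:
        # Peel a node of minimum remaining degree; its core number is the
        # running maximum of the minimum degrees seen so far.
        u = min(remaining, key=lambda x: degree[x])
        current_k = max(current_k, degree[u])
        core[u] = current_k
        remaining.remove(u)
        for w in adj[u]:
            if w in remaining:
                degree[w] -= 1
    return core

# Triangle a-b-c plus a pendant node d: a, b, c get core number 2, d gets 1.
print(core_decomposition([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]))
```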
Injecting Uncertainty in Graphs for Identity Obfuscation
Data collected nowadays by social-networking applications create fascinating
opportunities for building novel services, as well as expanding our
understanding about social structures and their dynamics. Unfortunately,
publishing social-network graphs is considered an ill-advised practice due to
privacy concerns. To alleviate this problem, several anonymization methods have
been proposed, aiming at reducing the risk of a privacy breach on the published
data, while still allowing meaningful analysis and relevant conclusions to be drawn. In
this paper we introduce a new anonymization approach that is based on injecting
uncertainty in social graphs and publishing the resulting uncertain graphs.
While existing approaches obfuscate graph data by adding or removing edges
entirely, we propose using a finer-grained perturbation that adds or removes
edges partially: this way we can achieve the same desired level of obfuscation
with smaller changes in the data, thus maintaining higher utility. Our
experiments on real-world networks confirm that at the same level of identity
obfuscation our method provides higher usefulness than existing randomized
methods that publish standard graphs.
Comment: VLDB201
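The general idea of publishing an uncertain graph instead of adding or removing edges outright can be illustrated with a toy perturbation scheme. This is a simplified stand-in, not the paper's obfuscation method; the function name, the single noise parameter, and the uniform treatment of fake edges are assumptions made for the example.

```python
import random

def perturb_to_uncertain_graph(nodes, edges, noise=0.1, num_fake_edges=None):
    """Toy perturbation: turn a deterministic graph into an uncertain one.

    Existing edges keep a high existence probability (1 - noise), and a few
    randomly chosen non-edges receive a low probability (noise), so edges are
    added or removed only "partially", via probabilities.
    """
    uncertain = {e: 1.0 - noise for e in edges}
    num_fake_edges = len(edges) if num_fake_edges is None else num_fake_edges
    edge_set = set(edges)
    attempts = 0
    # Keep drawing candidate non-edges until enough fake edges are injected.
    while len(uncertain) < len(edges) + num_fake_edges and attempts < 100 * num_fake_edges:
        u, v = random.sample(nodes, 2)
        if (u, v) not in edge_set and (v, u) not in edge_set:
            uncertain[(u, v)] = noise
        attempts += 1
    return uncertain

nodes = ["a", "b", "c", "d"]
print(perturb_to_uncertain_graph(nodes, [("a", "b"), ("b", "c")]))
```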
Scalable Online Betweenness Centrality in Evolving Graphs
Betweenness centrality is a classic measure that quantifies the importance of
a graph element (vertex or edge) according to the fraction of shortest paths
passing through it. This measure is notoriously expensive to compute, and the
best known algorithm runs in O(nm) time. The problems of efficiency and
scalability are exacerbated in a dynamic setting, where the input is an
evolving graph seen edge by edge, and the goal is to keep the betweenness
centrality up to date. In this paper we propose the first truly scalable
algorithm for online computation of betweenness centrality of both vertices and
edges in an evolving graph where new edges are added and existing edges are
removed. Our algorithm is carefully engineered with out-of-core techniques and
tailored for modern parallel stream processing engines that run on clusters of
shared-nothing commodity hardware. Hence, it is amenable to real-world
deployment. We experiment on graphs that are two orders of magnitude larger
than previous studies. Our method is able to keep the betweenness centrality
measures up to date online, i.e., the time to update the measures is smaller
than the inter-arrival time between two consecutive updates.
Comment: 15 pages, 9 figures, accepted for publication in IEEE Transactions on Knowledge and Data Engineering
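For reference, the static O(nm) baseline mentioned above is Brandes' algorithm for exact betweenness; a compact sketch for unweighted graphs follows. The paper's contribution is the incremental, out-of-core machinery built on top of such a baseline, not this static computation.

```python
from collections import defaultdict, deque

def brandes_betweenness(adj):
    """Brandes' algorithm: exact vertex betweenness on an unweighted graph.

    `adj` maps every node to a list of neighbours (every node must appear as a
    key). One BFS plus one dependency-accumulation pass per source: O(nm).
    For undirected graphs each pair is counted in both directions; halve the
    result if the usual undirected convention is wanted.
    """
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        sigma = dict.fromkeys(adj, 0)   # number of shortest paths from s
        dist = dict.fromkeys(adj, -1)   # BFS distance from s
        preds = {v: [] for v in adj}    # predecessors on shortest paths
        sigma[s], dist[s] = 1, 0
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = dict.fromkeys(adj, 0.0)
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(brandes_betweenness(adj))
```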
Fast Shortest Path Distance Estimation in Large Networks
We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to the huge graphs encountered on the web, in social networks, and in other applications.
In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks.
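A minimal sketch of the landmark idea just described, assuming unweighted graphs and using the triangle-inequality upper bound d(u, v) <= d(u, l) + d(l, v); the function names and data layout are illustrative only.

```python
from collections import deque

def bfs_distances(adj, source):
    """Single-source BFS distances on an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def build_landmark_index(adj, landmarks):
    """Offline step: distances from every node to each landmark."""
    return {l: bfs_distances(adj, l) for l in landmarks}

def estimate_distance(index, u, v):
    """Online step: upper-bound estimate min over landmarks of d(u,l) + d(l,v)."""
    estimates = [d[u] + d[v] for d in index.values() if u in d and v in d]
    return min(estimates) if estimates else float("inf")

adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
index = build_landmark_index(adj, ["a"])
print(estimate_distance(index, "b", "d"))  # true distance 2, estimate 1 + 3 = 4
```

The example deliberately shows an overestimate: with a single, poorly placed landmark the bound can be loose, which is exactly why landmark selection matters.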
We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. We therefore develop and experimentally compare a number of simple methods that scale well to large graphs. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally on five different real-world graphs with millions of edges; for a given accuracy, they require up to 250 times less space than the current approach in the literature, which selects landmarks at random.
Finally, we study applications of our method to two problems arising naturally in large-scale networks, namely social search and community detection.
Pyoverdine and proteases affect the response of Pseudomonas aeruginosa to gallium in human serum
Gallium is an iron mimetic that has recently been repurposed as an antibacterial agent due to its ability to disrupt bacterial iron metabolism. In this study, the antibacterial activity of gallium nitrate [Ga(NO3)3] was investigated in complement-free human serum (HS) on 55 Pseudomonas aeruginosa clinical isolates from cystic fibrosis and non-cystic fibrosis patients. The susceptibility of P. aeruginosa to Ga(NO3)3 in HS was dependent on the bacterial ability to acquire iron from serum binding proteins (i.e., transferrin). The extent of serum protein degradation correlated well with P. aeruginosa growth in HS, while pyoverdine production did not. However, pyoverdine-deficient P. aeruginosa strains were unable to grow in HS and overcome iron restriction, albeit capable of releasing proteases. Predigestion of HS with proteinase K promoted the growth of all strains, irrespective of their ability to produce proteases and/or pyoverdine. The MICs of Ga(NO3)3 were higher in HS than in an iron-poor Casamino Acids medium, where proteolysis does not affect iron availability. Consistently, strains displaying high proteolytic activity were less susceptible to Ga(NO3)3 in HS. Our data support a model in which both pyoverdine and proteases affect the response of P. aeruginosa to Ga(NO3)3 in HS. The relatively high Ga(NO3)3 concentration required to inhibit the growth of highly proteolytic P. aeruginosa isolates in HS poses a limitation to the potential of Ga(NO3)3 in the treatment of P. aeruginosa bloodstream infections.