Average-case analysis of dynamic graph algorithms
We present a model for edge updates with restricted randomness in dynamic graph algorithms and a general technique for analyzing the expected running time of an update operation. This model is able to capture the average case in many applications, since (1) it allows restrictions on the set of edges which can be used for insertions and (2) the type (insertion or deletion) of each update operation is arbitrary, i.e., not random. We use our technique to analyze existing and new dynamic algorithms for the following problems: maximum cardinality matching, minimum spanning forest, connectivity, 2-edge connectivity, k-edge connectivity, k-vertex connectivity, and bipartiteness. Given a random graph G with m_0 edges and n vertices and a sequence of l update operations such that the graph contains m_i edges after operation i, the expected time for performing the updates for any l is O(l log n + sum_{i=1}^{l} n/sqrt(m_i)) in the case of minimum spanning forests, connectivity, 2-edge connectivity, and bipartiteness. The expected time per update operation is O(n) in the case of maximum matching. We also give improved bounds for k-edge and k-vertex connectivity. Additionally we give an insertions-only algorithm for maximum cardinality matching with worst-case O(n) amortized time per insertion.
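The insertions-only setting mentioned at the end of the abstract is generally much simpler than the fully dynamic one. As a minimal illustration of that contrast (not the paper's algorithm), incremental connectivity can be maintained with a union-find structure in near-constant amortized time per edge insertion:

```python
# Illustrative sketch only: insertions-only connectivity via union-find
# with path halving and union by rank. Each edge insertion is a union;
# two vertices are connected iff their roots coincide.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        """Insert edge (x, y); return True if it merged two components."""
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

uf = UnionFind(5)
uf.union(0, 1)
uf.union(3, 4)
print(uf.find(0) == uf.find(1))  # True
print(uf.find(0) == uf.find(3))  # False
```

Deletions are precisely what this structure cannot support, which is why the fully dynamic bounds above involve the n/sqrt(m_i) terms rather than near-constant time.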
Recurrent averaging inequalities in multi-agent control and social dynamics modeling
Many multi-agent control algorithms and dynamic agent-based models arising in
natural and social sciences are based on the principle of iterative averaging.
Each agent is associated with a value of interest, which may represent, for
instance, the opinion of an individual in a social group, the velocity vector
of a mobile robot in a flock, or the measurement of a sensor within a sensor
network. This value is updated, at each iteration, to a weighted average of
itself and of the values of the adjacent agents. It is well known that, under
natural assumptions on the network's graph connectivity, this local averaging
procedure eventually leads to global consensus, or synchronization of the
values at all nodes. Applications of iterative averaging include, but are not
limited to, algorithms for distributed optimization, for solution of linear and
nonlinear equations, for multi-robot coordination and for opinion formation in
social groups. Although these algorithms have similar structures, the
mathematical techniques used for their analysis are diverse, and conditions for
their convergence differ from case to case. In this paper, we review many
of these algorithms and we show that their properties can be analyzed in a
unified way by using a novel tool based on recurrent averaging inequalities
(RAIs). We develop a theory of RAIs and apply it to the analysis of several
important multi-agent algorithms recently proposed in the literature.
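The local averaging procedure described above is easy to simulate. The following minimal sketch (with an illustrative path graph and a self-weight of 0.5, both assumptions, not taken from the paper) shows the values contracting toward consensus:

```python
# Minimal sketch of iterative averaging: each agent replaces its value
# with a weighted average of itself and its neighbors. On a connected
# graph, the lazy update (positive self-weight) drives all values to a
# common limit.

def averaging_step(values, neighbors, self_weight=0.5):
    new = []
    for i, v in enumerate(values):
        nbrs = neighbors[i]
        avg_nbrs = sum(values[j] for j in nbrs) / len(nbrs)
        new.append(self_weight * v + (1 - self_weight) * avg_nbrs)
    return new

# Illustrative path graph 0-1-2-3 with arbitrary initial values.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
values = [0.0, 1.0, 2.0, 3.0]
for _ in range(200):
    values = averaging_step(values, neighbors)
print(max(values) - min(values) < 1e-6)  # True: near consensus
```

The consensus value here need not be the plain average of the initial values, since the update matrix is row-stochastic but not necessarily doubly stochastic; this is one of the case-by-case subtleties the RAI framework is meant to unify.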
Fully Dynamic Single-Source Reachability in Practice: An Experimental Study
Given a directed graph and a source vertex, the fully dynamic single-source
reachability problem is to maintain the set of vertices that are reachable from
the given vertex, subject to edge deletions and insertions. It is one of the
most fundamental problems on graphs and appears directly or indirectly in many
and varied applications. While there has been theoretical work on this problem,
showing both linear conditional lower bounds for the fully dynamic problem and
insertions-only and deletions-only upper bounds beating these conditional lower
bounds, there has been no experimental study that compares the performance of
fully dynamic reachability algorithms in practice. Previous experimental
studies in this area concentrated only on the more general all-pairs
reachability or transitive closure problem and did not use real-world dynamic
graphs.
In this paper, we bridge this gap by empirically studying an extensive set of
algorithms for the single-source reachability problem in the fully dynamic
setting. In particular, we design several fully dynamic variants of well-known
approaches to obtain and maintain reachability information with respect to a
distinguished source. Moreover, we extend the existing insertions-only or
deletions-only upper bounds into fully dynamic algorithms. Even though the
worst-case time per operation of all the fully dynamic algorithms we evaluate
is at least linear in the number of edges in the graph (as is to be expected
given the conditional lower bounds) we show in our extensive experimental
evaluation that their performance differs greatly, both on generated as well as
on real-world instances.
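The simplest fully dynamic baseline, consistent with the linear-per-operation behavior noted above, is to store the edge set and recompute reachability by BFS from the source after every update. A minimal sketch (class and method names are illustrative, not from the paper):

```python
# Baseline sketch of fully dynamic single-source reachability: after each
# edge insertion or deletion, rebuild the reachable set with a BFS from
# the distinguished source. Each operation costs O(n + m).

from collections import deque

class DynamicReachability:
    def __init__(self, source):
        self.source = source
        self.succ = {}              # vertex -> set of successors
        self.reachable = {source}   # source always reaches itself

    def _bfs(self):
        seen = {self.source}
        queue = deque([self.source])
        while queue:
            u = queue.popleft()
            for v in self.succ.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen

    def insert(self, u, v):
        self.succ.setdefault(u, set()).add(v)
        self.reachable = self._bfs()

    def delete(self, u, v):
        self.succ.get(u, set()).discard(v)
        self.reachable = self._bfs()

g = DynamicReachability(source=0)
g.insert(0, 1)
g.insert(1, 2)
print(2 in g.reachable)  # True
g.delete(0, 1)
print(2 in g.reachable)  # False
```

Smarter variants avoid the full recomputation when an update provably cannot change the reachable set (e.g., inserting an edge between two already-reachable vertices), which is the kind of engineering the experimental study compares.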
Experimental evaluation of preprocessing algorithms for constraint satisfaction problems
This paper presents an experimental evaluation of two orthogonal schemes for preprocessing constraint satisfaction problems (CSPs). The first of these schemes involves a class of local consistency techniques that includes directional arc consistency, directional path consistency, and adaptive consistency. The other scheme concerns the prearrangement of variables in a linear order to facilitate an efficient search. In the first series of experiments, we evaluated the effect of each of the local consistency techniques on backtracking and its common enhancement, backjumping. Surprisingly, although adaptive consistency has the best worst-case complexity bounds, we found that it exhibits the worst performance, unless the constraint graph was very sparse. Directional arc consistency (followed by either backjumping or backtracking) and backjumping (without any preprocessing) outperformed all other techniques; moreover, the former dominated the latter in computationally intensive situations. The second series of experiments suggests that maximum cardinality and minimum width are the best pre-ordering (i.e., static ordering) strategies, while dynamic search rearrangement is superior to all the preorderings studied.
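Directional arc consistency, the technique that performed best here, can be sketched compactly: given a variable ordering, each variable's domain is pruned so that every remaining value has some support in each *later* constrained variable. A hedged sketch (the representation of constraints as predicates on ordered pairs is an illustrative choice, not the paper's):

```python
# Sketch of directional arc consistency (DAC). `constraints` maps an
# ordered pair (x, y), with x before y in `order`, to a predicate on
# (value_x, value_y). Domains are pruned in a single backward sweep.

def directional_arc_consistency(order, domains, constraints):
    for y in reversed(order):
        for x in order[:order.index(y)]:
            if (x, y) not in constraints:
                continue
            pred = constraints[(x, y)]
            # Keep only values of x supported by some value of y.
            domains[x] = [a for a in domains[x]
                          if any(pred(a, b) for b in domains[y])]
            if not domains[x]:
                return False  # an empty domain means inconsistency
    return True

# Toy CSP: x < y < z over {1, 2, 3}.
order = ['x', 'y', 'z']
domains = {'x': [1, 2, 3], 'y': [1, 2, 3], 'z': [1, 2, 3]}
lt = lambda a, b: a < b
constraints = {('x', 'y'): lt, ('y', 'z'): lt}
print(directional_arc_consistency(order, domains, constraints))  # True
print(domains['x'])  # [1]
```

Because the sweep runs once from the last variable backward, DAC is cheaper than full arc consistency, which is consistent with its strong showing as a preprocessing step before backtracking or backjumping.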
Fully Dynamic Algorithm for Top-k Densest Subgraphs
Given a large graph, the densest-subgraph problem asks to find a subgraph
with maximum average degree. When considering the top-k version of this
problem, a na\"ive solution is to iteratively find the densest subgraph and
remove it in each iteration. However, such a solution is impractical due to
high processing cost. The problem is further complicated when dealing with
dynamic graphs, since adding or removing an edge requires re-running the
algorithm. In this paper, we study the top-k densest-subgraph problem in the
sliding-window model and propose an efficient fully-dynamic algorithm. The
input of our algorithm consists of an edge stream, and the goal is to find the
k node-disjoint subgraphs that maximize the sum of their densities. In contrast
to existing state-of-the-art solutions that require iterating over the entire
graph upon any update, our algorithm profits from the observation that updates
only affect a limited region of the graph. Therefore, the top-k densest
subgraphs are maintained by applying only local updates. We provide a
theoretical analysis of the proposed algorithm and show empirically that the
algorithm often generates denser subgraphs than state-of-the-art competitors.
Experiments show an improvement in efficiency of up to five orders of magnitude
compared to state-of-the-art solutions.
Comment: 10 pages, 8 figures, accepted at CIKM 201
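The static building block behind the naïve baseline mentioned above is greedy peeling for a single densest subgraph: repeatedly remove a minimum-degree vertex and keep the densest intermediate subgraph (Charikar's classic 1/2-approximation). A sketch for background, using density |E|/|V| (proportional to average degree); this is not the paper's sliding-window algorithm:

```python
# Greedy peeling sketch for the (static) densest-subgraph problem.
# Repeatedly delete a minimum-degree vertex, tracking the best density
# |E|/|V| seen over all intermediate subgraphs. Stale heap entries are
# skipped lazily instead of being decreased in place.

import heapq

def densest_subgraph_peel(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    n, m = len(adj), len(edges)
    best_density = m / n
    heap = [(len(nbrs), u) for u, nbrs in adj.items()]
    heapq.heapify(heap)
    removed = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in removed or d != len(adj[u]):
            continue  # stale entry: degree changed since it was pushed
        removed.add(u)
        m -= len(adj[u])
        n -= 1
        for v in adj[u]:
            adj[v].discard(u)
            heapq.heappush(heap, (len(adj[v]), v))
        adj[u] = set()
        if n > 0 and m / n > best_density:
            best_density = m / n
    return best_density

# K4 plus a pendant vertex: peeling the pendant exposes K4, density 6/4.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
print(densest_subgraph_peel(edges))  # 1.5
```

Iterating this peel-and-remove procedure k times yields the impractical baseline the paper contrasts with; the paper's contribution is to maintain the top-k answer under edge-stream updates with only local work.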