Near Optimal Sized Weight Tolerant Subgraph for Single Source Shortest Path
In this paper we address the problem of computing a sparse subgraph of a
weighted directed graph such that the exact distances from a designated source
vertex to all other vertices are preserved under bounded weight increment.
Finding a small sized subgraph that preserves distances between any pair of
vertices is a well studied problem. Since in the real world any network is
prone to failures, it is natural to study the fault tolerant version of the
above problem. Unfortunately, it turns out that there may not always exist such
a sparse subgraph even under a single edge failure [Demetrescu et al. '08]. However, in real applications it is not always the case that a link (edge)
in a network becomes completely faulty. Instead, it can happen that some links
become more congested which can easily be captured by increasing weight on the
corresponding edges. Thus it makes sense to try to construct a sparse distance
preserving subgraph under the above weight increment model. To the best of our
knowledge this problem has not been studied so far. In this paper we show that, given any weighted directed graph on n vertices and a source vertex, one can construct a sparse subgraph that preserves distances between the source and all other vertices as long as the total weight increment is bounded by a given parameter, under the restriction that edge weights are integer valued (possibly negative) and that the weight of an edge can only be increased by a positive integer. Next we show a nearly matching lower bound on the size of such a subgraph.
We also argue that the restrictions to integer valued weights and integer valued weight increments are in fact essential, by showing that if either restriction is removed we may need to store many more edges to preserve distances.
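For the failure-free baseline that the abstract builds on, a shortest-path tree already gives a distance-preserving subgraph with n-1 edges. A minimal sketch (illustrative function names, assuming no negative cycle is reachable from the source):

```python
def bellman_ford(n, edges, s):
    # edges: list of (u, v, w); returns distances from s,
    # or None if a negative cycle is reachable from s
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None
    return dist

def shortest_path_tree_edges(n, edges, s):
    # Baseline (non-fault-tolerant) preserver: keep one "tight"
    # incoming edge per vertex, i.e. an edge with dist[u] + w == dist[v].
    dist = bellman_ford(n, edges, s)
    tree = {}
    for u, v, w in edges:
        if v != s and dist[u] + w == dist[v]:
            tree.setdefault(v, (u, v, w))
    return list(tree.values())
```

Under weight increments this tree can stop being shortest, which is exactly why the paper needs a larger (but still sparse) subgraph.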
A Survey of Shortest-Path Algorithms
A shortest-path algorithm finds a path of minimal cost between two vertices in a graph. A plethora of shortest-path algorithms has been studied in the literature, spanning multiple disciplines. This paper presents a
survey of shortest-path algorithms based on a taxonomy that is introduced in
the paper. One dimension of this taxonomy is the various flavors of the
shortest-path problem. There is no one general algorithm that is capable of
solving all variants of the shortest-path problem due to the space and time
complexities associated with each algorithm. Other important dimensions of the
taxonomy include whether the shortest-path algorithm operates over a static or
a dynamic graph, whether the shortest-path algorithm produces exact or
approximate answers, and whether the shortest-path algorithm handles time-dependent networks or is only goal-directed. This survey studies and classifies shortest-path algorithms according to the proposed taxonomy. The survey also presents the challenges and proposed solutions associated with each category in the taxonomy.
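As one cell of such a taxonomy (static graph, exact answers, single source, non-negative weights), Dijkstra's algorithm can be sketched as follows; the adjacency-dict interface is illustrative:

```python
import heapq

def dijkstra(adj, source):
    # adj: {u: [(v, w), ...]} with non-negative edge weights w
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already settled with smaller dist
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

Other cells of the taxonomy (negative weights, dynamic graphs, approximate answers) call for different algorithms, which is the survey's point that no single algorithm covers all variants.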
Drake: An Efficient Executive for Temporal Plans with Choice
This work presents Drake, a dynamic executive for temporal plans with choice.
Dynamic plan execution strategies allow an autonomous agent to react quickly to
unfolding events, improving the robustness of the agent. Prior work developed
methods for dynamically dispatching Simple Temporal Networks, and further
research enriched the expressiveness of the plans executives could handle,
including discrete choices, which are the focus of this work. However, in some
approaches to date, these additional choices induce significant storage or
latency requirements to make flexible execution possible.
Drake is designed to leverage the low latency made possible by a
preprocessing step called compilation, while avoiding high memory costs through
a compact representation. We leverage the concepts of labels and environments,
taken from prior work in Assumption-based Truth Maintenance Systems (ATMS), to
concisely record the implications of the discrete choices, exploiting the
structure of the plan to avoid redundant reasoning or storage. Our labeling and
maintenance scheme, called the Labeled Value Set Maintenance System, is
distinguished by its focus on properties fundamental to temporal problems, and,
more generally, weighted graph algorithms. In particular, the maintenance
system focuses on maintaining a minimal representation of non-dominated
constraints. We benchmark Drake's performance on random structured problems, and
find that Drake reduces the size of the compiled representation by a factor of
over 500 for large problems, while incurring only a modest increase in run-time
latency, compared to prior work in compiled executives for temporal plans with
discrete choices.
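For background on the Simple Temporal Networks mentioned above (this is standard STN machinery, not code from Drake): each constraint t_j - t_i <= ub becomes an edge of a distance graph, and the plan is consistent iff that graph has no negative cycle, which all-pairs shortest paths detects. A minimal sketch with illustrative names:

```python
def stn_consistent(n, constraints):
    # constraints: list of (i, j, ub) meaning t_j - t_i <= ub
    # An STN is consistent iff its distance graph has no negative cycle.
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for i, j, ub in constraints:
        d[i][j] = min(d[i][j], ub)
    # Floyd-Warshall all-pairs shortest paths
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))
```

The compiled all-pairs form is what makes low-latency dispatch possible, and its size is what Drake's labeled, compact representation keeps in check when discrete choices multiply the candidate networks.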
A measure of centrality based on the network efficiency
We introduce a new measure of centrality, the information centrality C^I,
based on the concept of efficient propagation of information over the network.
C^I is defined for both valued and non-valued graphs, and applies to groups and
classes as well as individuals. The new measure is illustrated and compared to
the standard centrality measures by using a classic network data set.
Comment: 16 pages, 5 figures
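An efficiency-based centrality of this kind is commonly computed as the relative drop in network efficiency when a node's edges are removed. The sketch below follows that idea for unweighted graphs; it is an illustration consistent with the abstract, not the authors' code:

```python
from collections import deque

def _bfs_dists(adj, s):
    # hop distances from s in an unweighted graph
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def efficiency(adj):
    # E = (1 / (n(n-1))) * sum over ordered pairs i != j of 1 / d(i, j),
    # with unreachable pairs contributing 0
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for s in nodes:
        d = _bfs_dists(adj, s)
        total += sum(1.0 / d[t] for t in d if t != s)
    return total / (n * (n - 1))

def information_centrality(adj, k):
    # relative drop in efficiency when node k's edges are removed
    e = efficiency(adj)
    crippled = {u: ([v for v in nb if v != k] if u != k else [])
                for u, nb in adj.items()}
    return (e - efficiency(crippled)) / e
```

On a star graph the centre scores 1.0 (removing it destroys all communication), while leaves score much lower, matching the intuition the abstract describes.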
A measure of betweenness centrality based on random walks
Betweenness is a measure of the centrality of a node in a network, and is
normally calculated as the fraction of shortest paths between node pairs that
pass through the node of interest. Betweenness is, in some sense, a measure of
the influence a node has over the spread of information through the network. By
counting only shortest paths, however, the conventional definition implicitly
assumes that information spreads only along those shortest paths. Here we
propose a betweenness measure that relaxes this assumption, including
contributions from essentially all paths between nodes, not just the shortest,
although it still gives more weight to short paths. The measure is based on
random walks, counting how often a node is traversed by a random walk between
two other nodes. We show how our measure can be calculated using matrix
methods, and give some examples of its application to particular networks.
Comment: 15 pages, 7 figures, 2 tables
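The matrix-method computation can be sketched via the electrical-network analogy: inject a unit current at one node, extract it at another, solve for voltages with the pseudoinverse of the graph Laplacian, and measure how much current flows through each intermediate node. This is an illustrative implementation of that idea, not the paper's code:

```python
import numpy as np

def random_walk_betweenness(A):
    # A: symmetric 0/1 adjacency matrix (numpy array)
    n = len(A)
    D = np.diag(A.sum(axis=1))
    T = np.linalg.pinv(D - A)        # pseudoinverse of the graph Laplacian
    b = np.zeros(n)
    pairs = 0
    for s in range(n):
        for t in range(s + 1, n):
            V = T[:, s] - T[:, t]    # node "voltages" for a unit s -> t current
            # node throughput: half the summed absolute currents on its edges
            flow = 0.5 * np.abs(A * (V[:, None] - V[None, :])).sum(axis=1)
            flow[s] = flow[t] = 1.0  # source and sink carry the full unit
            b += flow
            pairs += 1
    return b / pairs
```

On the path 0-1-2, the middle node is traversed by every source-target pair and scores 1, while the endpoints score 2/3.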
A Method to Find Community Structures Based on Information Centrality
Community structures are an important feature of many social, biological and
technological networks. Here we study a variation on the method for detecting
such communities proposed by Girvan and Newman and based on the idea of using
centrality measures to define the community boundaries (M. Girvan and M. E. J.
Newman, Community structure in social and biological networks Proc. Natl. Acad.
Sci. USA 99, 7821-7826 (2002)). We develop a hierarchical clustering algorithm that iteratively finds and removes the edge with the highest information centrality. We test the algorithm on computer-generated and
real-world networks whose community structure is already known or has been
studied by means of other methods. We show that our algorithm, although it runs to completion in O(n^4) time, is very effective especially when the
communities are very mixed and hardly detectable by the other methods.
Comment: 13 pages, 13 figures. Final version accepted for publication in Physical Review
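One divisive step of such a scheme can be sketched as follows (unweighted graphs, illustrative names; scoring an edge by the relative efficiency drop its removal causes, as the abstract describes):

```python
from collections import deque

def _dists(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def _efficiency(nodes, edges):
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(nodes)
    tot = 0.0
    for s in nodes:
        d = _dists(adj, s)
        tot += sum(1.0 / d[t] for t in d if t != s)
    return tot / (n * (n - 1))

def _components(nodes, edges):
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for s in nodes:
        if s not in seen:
            comp = set(_dists(adj, s))
            seen |= comp
            comps.append(comp)
    return comps

def split_once(nodes, edges):
    # divisive step: delete the edge whose removal causes the largest
    # relative drop in efficiency, until the graph splits in two
    edges = set(edges)
    while len(_components(nodes, edges)) == 1:
        base = _efficiency(nodes, edges)
        e = max(edges, key=lambda x: base - _efficiency(nodes, edges - {x}))
        edges.remove(e)
    return _components(nodes, edges)
```

On two triangles joined by a bridge, the bridge has the highest information centrality and its removal splits the graph into the two intuitive communities.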
Bottleneck potentials in Markov Random Fields
We consider general discrete Markov Random Fields (MRFs) with additional bottleneck potentials which penalize the maximum (instead of the sum) of the local potential values taken by the MRF assignment. Bottleneck potentials or
analogous constructions have been considered in (i) combinatorial optimization
(e.g. bottleneck shortest path problem, the minimum bottleneck spanning tree
problem, bottleneck function minimization in greedoids), (ii) inverse problems with l_infinity-norm regularization, and (iii) valued constraint satisfaction on (min,max)-pre-semirings. Bottleneck potentials for general discrete
MRFs are a natural generalization of the above direction of modeling work to
Maximum-A-Posteriori (MAP) inference in MRFs. To this end, we propose MRFs whose objective consists of two parts: terms that factorize according to (i) summation, i.e. potentials as in plain MRFs, and (ii) maximization, i.e. bottleneck potentials. To solve the ensuing inference problem, we propose
high-quality relaxations and efficient algorithms for solving them. We
empirically show efficacy of our approach on large scale seismic horizon
tracking problems.
Comment: Published in ICCV 2019 as Oral
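The bottleneck shortest path problem cited in (i) replaces the sum of edge weights along a path with their maximum; a Dijkstra-style sketch (illustrative names, non-negative weights assumed):

```python
import heapq

def bottleneck_path_cost(adj, s, t):
    # min over s-t paths of the maximum edge weight along the path
    # adj: {u: [(v, w), ...]} with non-negative weights
    best = {s: 0}
    pq = [(0, s)]
    while pq:
        c, u = heapq.heappop(pq)
        if u == t:
            return c
        if c > best.get(u, float("inf")):
            continue  # stale entry
        for v, w in adj.get(u, []):
            nc = max(c, w)  # (min, max) in place of Dijkstra's (min, +)
            if nc < best.get(v, float("inf")):
                best[v] = nc
                heapq.heappush(pq, (nc, v))
    return float("inf")
```

Swapping the (min, +) semiring for (min, max) is exactly the algebraic move that bottleneck potentials make at the level of MRF objectives.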
Semi-dynamic shortest-path tree algorithms for directed graphs with arbitrary weights
Given a directed graph G with arbitrary real-valued weights and a source vertex s in G, the single source shortest-path problem (SSSP) asks for a shortest path from s to each vertex in G. A classical SSSP algorithm either detects a negative cycle of G or constructs a shortest-path tree (SPT) rooted at s in O(mn) time, where m and n are the numbers of edges and vertices in G respectively. In many practical applications, new constraints
come from time to time and we need to update the SPT frequently. Given an SPT of the graph, suppose the weight on a certain edge is modified. We show by rigorous proof that the well-known Ball-String algorithm for positively weighted graphs can be adapted to solve the dynamic SPT problem for directed
graphs with arbitrary weights. Call a vertex affected if it has a different distance from the source or a different parent in the input and output SPTs. The adapted algorithms terminate in time bounded in terms of the number of affected vertices and the number of edges incident to them, either detecting a negative cycle (only in the decremental case) or constructing a new SPT for the updated graph. We show by an example that
the output SPT may have more edge changes than necessary relative to the input SPT. To remedy this, we give a general method for transforming the output into an SPT with minimal edge changes, provided that the updated graph has no cycles of zero length.
Comment: 27 pages, 3 figures
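For intuition about the incremental (weight-decrease) case, a naive update that touches only affected vertices can be sketched as follows. This is a simplified stand-in for the adapted Ball-String algorithm, with illustrative names, assuming the decrease creates no negative cycle:

```python
from collections import deque

def decrease_edge(adj, dist, parent, u, v, new_w):
    # adj: {x: {y: w}}; dist/parent encode the current SPT.
    # Propagates a weight decrease on edge (u, v); only vertices whose
    # distance or parent changes (the "affected" vertices) are touched.
    adj[u][v] = new_w
    if dist[u] + new_w >= dist[v]:
        return set()                 # SPT unaffected
    dist[v] = dist[u] + new_w
    parent[v] = u
    affected = {v}
    q = deque([v])
    while q:
        x = q.popleft()
        for y, w in adj[x].items():
            if dist[x] + w < dist[y]:
                dist[y] = dist[x] + w
                parent[y] = x
                affected.add(y)
                q.append(y)
    return affected
```

The point of the output-bounded analysis is that runtime scales with the size of this affected region rather than with the whole graph.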
Learning to Prune: Speeding up Repeated Computations
It is common to encounter situations where one must solve a sequence of
similar computational problems. Running a standard algorithm with worst-case
runtime guarantees on each instance will fail to take advantage of valuable
structure shared across the problem instances. For example, when a commuter
drives from work to home, there are typically only a handful of routes that
will ever be the shortest path. A naive algorithm that does not exploit this
common structure may spend most of its time checking roads that will never be
in the shortest path. More generally, we can often ignore large swaths of the
search space that will likely never contain an optimal solution.
We present an algorithm that learns to maximally prune the search space on
repeated computations, thereby reducing runtime while provably outputting the
correct solution each period with high probability. Our algorithm employs a
simple explore-exploit technique resembling those used in online algorithms,
though our setting is quite different. We prove that, with respect to our model
of pruning search spaces, our approach is optimal up to constant factors.
Finally, we illustrate the applicability of our model and algorithm to three
classic problems: shortest-path routing, string search, and linear programming.
We present experiments confirming that our simple algorithm is effective at
significantly reducing the runtime of solving repeated computations.
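The explore-exploit idea can be sketched generically: usually search only a cached candidate set, and occasionally pay for a full search to refresh it. The solver interface below is hypothetical, not the paper's API:

```python
import random

def make_pruned_solver(full_solve, explore_prob=0.1, rng=random.random):
    # full_solve(instance, candidates=None) returns the best solution,
    # searching everything when candidates is None (hypothetical interface).
    cache = set()

    def solve(instance):
        if cache and rng() >= explore_prob:
            # exploit: search only previously-optimal candidates
            return full_solve(instance, candidates=cache)
        # explore: full search, then remember the winner
        best = full_solve(instance, candidates=None)
        cache.add(best)
        return best

    return solve
```

In the commuter example, the cache plays the role of the handful of routes that have ever been shortest; exploration at a small rate is what underwrites the high-probability correctness guarantee.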
Partial order approach to compute shortest paths in multimodal networks
Many networked systems involve multiple modes of transport. Such systems are
called multimodal, and examples include logistic networks, biomedical phenomena, manufacturing processes and telecommunication networks. Existing
techniques for determining optimal paths in multimodal networks have either
required heuristics or else application-specific constraints to obtain
tractable problems, removing the multimodal traits of the network during
analysis. In this paper weighted coloured-edge graphs are introduced to model
multimodal networks, where colours represent the modes of transportation.
Optimal paths are selected using a partial order that compares the weights in
each colour, resulting in a Pareto optimal set of shortest paths. This approach
is shown to be tractable through experimental analyses for random and real
multimodal networks without the need to apply heuristics or constraints.
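The partial order on colour-wise weight vectors comes down to Pareto dominance: one path dominates another if it is no worse in every colour (mode) and strictly better in at least one. A minimal sketch with illustrative names:

```python
def dominates(a, b):
    # a dominates b: <= in every colour, < in at least one
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(paths):
    # paths: list of (label, weight_vector); keep the non-dominated ones,
    # i.e. the Pareto optimal set of shortest paths
    return [(la, wa) for la, wa in paths
            if not any(dominates(wb, wa) for _, wb in paths)]
```

A path that is fastest by road but slow by rail and one with the opposite profile both survive, which is how the approach retains the multimodal traits that scalarized formulations discard.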