Energy-Efficient Algorithms
We initiate the systematic study of the energy complexity of algorithms (in
addition to time and space complexity) based on Landauer's Principle in
physics, which gives a lower bound on the amount of energy a system must
dissipate if it destroys information. We propose energy-aware variations of
three standard models of computation: circuit RAM, word RAM, and
transdichotomous RAM. On top of these models, we build familiar high-level
primitives such as control logic, memory allocation, and garbage collection
with zero energy complexity and only constant-factor overheads in space and
time complexity, enabling simple expression of energy-efficient algorithms. We
analyze several classic algorithms in our models and develop low-energy
variations: comparison sort, insertion sort, counting sort, breadth-first
search, Bellman-Ford, Floyd-Warshall, matrix all-pairs shortest paths, AVL
trees, binary heaps, and dynamic arrays. We explore the time/space/energy
trade-off and develop several general techniques for analyzing algorithms and
reducing their energy complexity. These results lay a theoretical foundation
for a new field of semi-reversible computing and provide a new framework for
the investigation of algorithms.
Comment: 40 pages, 8 pdf figures, full version of work published in ITCS 201
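To make the notion of energy complexity concrete, here is a toy Python model in the spirit of Landauer's Principle: an operation is charged one unit of energy per bit of information it irreversibly destroys, while reversible operations are free. The `EnergyCountingMemory` class and its per-bit cost model are illustrative assumptions for this sketch, not the paper's actual formalism.

```python
# Toy energy accounting in the spirit of Landauer's Principle: charge one
# unit per bit irreversibly destroyed; reversible operations cost nothing.
# This cost model is an assumption made for illustration only.

class EnergyCountingMemory:
    def __init__(self, num_words, word_bits=8):
        self.words = [0] * num_words
        self.word_bits = word_bits
        self.energy = 0  # total bits of information destroyed

    def write(self, addr, value):
        # Destructive overwrite: the old contents are lost, so we charge
        # one unit per bit of the word.
        self.energy += self.word_bits
        self.words[addr] = value

    def xor_into(self, dst, src):
        # Reversible update: dst ^= src can be undone by applying it again,
        # so no information is destroyed and no energy is charged.
        self.words[dst] ^= self.words[src]

    def swap(self, a, b):
        # A swap permutes memory states, hence is reversible: zero energy.
        self.words[a], self.words[b] = self.words[b], self.words[a]

mem = EnergyCountingMemory(4)
mem.write(0, 0b1010)      # costs 8
mem.write(1, 0b0110)      # costs 8
mem.swap(0, 1)            # free
mem.xor_into(0, 1)        # free
print(mem.energy)         # -> 16: only the destructive writes were charged
```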
Towards Scalable Network Delay Minimization
Reduction of end-to-end network delays is an optimization task with
applications in multiple domains. Low delays enable improved information flow
in social networks, quick spread of ideas in collaboration networks, low travel
times for vehicles on road networks, and higher packet rates in communication
networks. Delay reduction can be achieved both by improving the propagation
capabilities of individual nodes and by adding edges to the network. One of the
main challenges in such design problems is that the
effects of local changes are not independent, and as a consequence, there is a
combinatorial search-space of possible improvements. Thus, minimizing the
cumulative propagation delay requires novel scalable and data-driven
approaches.
In this paper, we consider the problem of network delay minimization via node
upgrades. Although the problem is NP-hard, we show that a probabilistic
approximation for a restricted version can be obtained. We design scalable,
high-quality techniques for the general setting, based on sampling and tailored
to different models of delay distribution. Our methods scale almost linearly
with the graph size and consistently outperform competitors in quality.
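The abstract does not spell out the algorithms, so as a rough illustration of the problem setting, here is a naive greedy baseline in Python: it upgrades a budgeted number of nodes, each time picking the node whose upgrade most reduces the cumulative pairwise delay. The node-delay model (path cost = sum of node delays, upgrade halves a node's delay) and the budget are assumptions invented for this sketch; the paper's sampling-based methods are far more scalable than this quadratic recomputation.

```python
# Naive greedy baseline for delay minimization via node upgrades.
# Illustrative sketch only, NOT the paper's method: the delay model and
# the "halve the delay" upgrade are assumptions for this example.
import heapq

def sssp(adj, delay, src):
    """Dijkstra where entering node v costs delay[v]."""
    dist = {src: delay[src]}
    pq = [(delay[src], src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v in adj[u]:
            nd = d + delay[v]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def total_delay(adj, delay):
    return sum(d for u in adj for d in sssp(adj, delay, u).values())

def greedy_upgrades(adj, delay, budget):
    """Upgrade `budget` nodes, each time choosing the single upgrade
    (halving that node's delay) that most reduces the cumulative delay."""
    upgraded = []
    for _ in range(budget):
        base = total_delay(adj, delay)
        best, best_gain = None, 0
        for v in adj:
            if v in upgraded:
                continue
            old = delay[v]
            delay[v] = old / 2          # tentatively upgrade v
            gain = base - total_delay(adj, delay)
            delay[v] = old              # undo the tentative upgrade
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:
            break
        delay[best] /= 2
        upgraded.append(best)
    return upgraded

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a small path graph
delay = {0: 4.0, 1: 1.0, 2: 8.0, 3: 2.0}
print(greedy_upgrades(adj, delay, budget=2))   # the two most helpful upgrades
```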
Speeding up shortest path algorithms
Given an arbitrary, non-negatively weighted, directed graph $G=(V,E)$ we
present an algorithm that computes all pairs shortest paths in time
$O(m^* n + m \log n + n T_\psi(m^*, n))$, where $m^*$ is the number of
different edges contained in shortest paths and $T_\psi(m^*, n)$ is the running
time of an algorithm $\psi$ solving the single-source shortest path problem
(SSSP). This is a substantial improvement over a trivial $n$-fold application
of $\psi$ that runs in $O(n T_\psi(m, n))$. In our algorithm we use $\psi$ as a
black box and hence any improvement on $\psi$ results in an improvement of our
algorithm.
Furthermore, a combination of our method, Johnson's reweighting technique and
topological sorting results in an $O(m^* n + m \log n)$ all-pairs
shortest path algorithm for arbitrarily-weighted directed acyclic graphs.
In addition, we point out a connection between the complexity of a
certain sorting problem defined on shortest paths and SSSP.
Comment: 10 pages
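The trivial baseline that the abstract improves upon is easy to make concrete: run a black-box SSSP algorithm from every vertex. A minimal Python sketch follows, with Dijkstra playing the role of $\psi$; it illustrates only this $O(n T_\psi(m, n))$ baseline, not the paper's faster algorithm.

```python
# Trivial APSP baseline: one black-box SSSP call per vertex, for
# O(n * T_psi(m, n)) total time. Dijkstra stands in for psi here.
import heapq

def dijkstra(adj, src):
    """SSSP for a non-negatively weighted digraph given as
    adj[u] = [(v, w), ...]; returns a dict of shortest distances."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def apsp_via_sssp(adj, sssp=dijkstra):
    # Any SSSP routine can be plugged in as `sssp`: it is a black box,
    # so any improvement to it improves the whole computation.
    return {u: sssp(adj, u) for u in adj}

adj = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(apsp_via_sssp(adj))
# {'a': {'a': 0, 'b': 2, 'c': 3}, 'b': {'b': 0, 'c': 1}, 'c': {'c': 0}}
```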
Shortest Distances as Enumeration Problem
We investigate the single source shortest distance (SSSD) and all pairs
shortest distance (APSD) problems as enumeration problems (on unweighted and
integer weighted graphs), meaning that the elements $(u, v, d(u, v))$ -- where
$u$ and $v$ are vertices with shortest distance $d(u, v)$ -- are produced and
listed one by one without repetition. The performance is measured in the RAM
model of computation with respect to preprocessing time and delay, i.e., the
maximum time that elapses between two consecutive outputs. This point of view
reveals that specific types of output (e.g., excluding the non-reachable pairs
$(u, v, \infty)$, or excluding the self-distances $(v, v, 0)$) and the order of
enumeration (e.g., sorted by distance, sorted row-wise with respect to the
distance matrix) have a huge impact on the complexity of APSD while they appear
to have no effect on SSSD.
In particular, we show for APSD that enumeration without output restrictions
is possible with delay in the order of the average degree. Excluding
non-reachable pairs, or requesting the output to be sorted by distance,
increases this delay to the order of the maximum degree. Further, for weighted
graphs, a delay in the order of the average degree is also not possible without
preprocessing or considering self-distances as output. In contrast, for SSSD we
find that a delay in the order of the maximum degree without preprocessing is
attainable and unavoidable for any of these requirements.
Comment: Updated version adds the study of space complexity
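The enumeration viewpoint is easy to picture with a generator: a plain BFS that emits the triples $(src, v, d(src, v))$ one at a time, with no preprocessing, does work on the order of one vertex's degree between consecutive outputs. The following Python sketch illustrates the SSSD setting only; it is not the paper's algorithm or data structures.

```python
# SSSD as an enumeration problem: a BFS generator that emits the triples
# (src, v, d(src, v)) one by one, in order of increasing distance, with no
# preprocessing. Between two consecutive outputs it scans the edges of one
# vertex, so the delay is on the order of the maximum degree.
from collections import deque

def enumerate_sssd(adj, src, include_self=True):
    dist = {src: 0}
    queue = deque([src])
    if include_self:
        yield (src, src, 0)          # the self-distance (src, src, 0)
    while queue:
        u = queue.popleft()
        for v in adj[u]:             # O(deg(u)) work between outputs
            if v not in dist:
                dist[v] = dist[u] + 1
                yield (src, v, dist[v])
                queue.append(v)
    # Non-reachable pairs (src, v, infinity) are simply never emitted here;
    # the abstract shows that whether they must be output changes the
    # attainable delay for APSD.

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
for triple in enumerate_sssd(adj, 0):
    print(triple)   # (0,0,0), (0,1,1), (0,2,1), (0,3,2)
```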
Shortest Path and Distance Queries on Road Networks: An Experimental Evaluation
Computing the shortest path between two given locations in a road network is
an important problem that finds applications in various map services and
commercial navigation products. The state-of-the-art solutions for the problem
can be divided into two categories: spatial-coherence-based methods and
vertex-importance-based approaches. The two categories of techniques, however,
have not been compared systematically under the same experimental framework, as
they were developed from two independent lines of research that do not refer to
each other. This renders it difficult for a practitioner to decide which
technique should be adopted for a specific application. Furthermore, the
experimental evaluation of the existing techniques, as presented in previous
work, falls short in several aspects. Some methods were tested only on small
road networks with up to one hundred thousand vertices; some approaches were
evaluated using distance queries (instead of shortest path queries), namely,
queries that ask only for the length of the shortest path; a state-of-the-art
technique was examined based on a faulty implementation that led to incorrect
query results. To address the above issues, this paper presents a comprehensive
comparison of the most advanced spatial-coherence-based and
vertex-importance-based approaches. Using a variety of real road networks with
up to twenty million vertices, we evaluated each technique in terms of its
preprocessing time, space consumption, and query efficiency (for both shortest
path and distance queries). Our experimental results reveal the characteristics
of different techniques, based on which we provide guidelines on selecting
appropriate methods for various scenarios.
Comment: VLDB201
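The distinction the abstract draws between distance queries and shortest path queries is worth making concrete: answering a path query additionally requires maintaining parent pointers and unwinding them, so an index benchmarked only with distance queries can look cheaper than it is. In this hedged sketch, plain Dijkstra stands in for any of the compared index structures; the function names are invented for illustration.

```python
# Distance query vs. shortest path query: the latter must also build and
# unwind parent pointers. Dijkstra here is only a stand-in for the indexes
# compared in the paper.
import heapq

def shortest_path_query(adj, s, t):
    dist, parent = {s: 0}, {s: None}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            break
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    if t not in dist:
        return float("inf"), None
    path, v = [], t                 # unwind parents: O(path length) extra
    while v is not None:
        path.append(v)
        v = parent[v]
    return dist[t], path[::-1]

def distance_query(adj, s, t):
    return shortest_path_query(adj, s, t)[0]   # length only

adj = {1: [(2, 3), (3, 1)], 2: [(4, 1)], 3: [(2, 1)], 4: []}
print(shortest_path_query(adj, 1, 4))   # (3, [1, 3, 2, 4])
print(distance_query(adj, 1, 4))        # 3
```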
Algebraic Methods in the Congested Clique
In this work, we use algebraic methods for studying distance computation and
subgraph detection tasks in the congested clique model. Specifically, we adapt
parallel matrix multiplication implementations to the congested clique,
obtaining an $O(n^{1-2/\omega})$-round matrix multiplication algorithm, where
$\omega$ is the exponent of matrix multiplication. In conjunction
with known techniques from centralised algorithmics, this gives significant
improvements over previous best upper bounds in the congested clique model. The
highlight results include:
-- triangle and 4-cycle counting in $O(n^{0.158})$ rounds, improving upon the
$O(n^{1/3})$ triangle detection algorithm of Dolev et al. [DISC 2012],
-- a $(1 + o(1))$-approximation of all-pairs shortest paths in $O(n^{0.158})$
rounds, improving upon the $\tilde{O}(n^{1/2})$-round $(2 + o(1))$-approximation
algorithm of Nanongkai [STOC 2014], and
-- computing the girth in $O(n^{0.158})$ rounds, which is the first
non-trivial solution in this model.
In addition, we present a novel constant-round combinatorial algorithm for
detecting 4-cycles.
Comment: This work is a merger of arxiv:1412.2109 and arxiv:1412.266
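The centralised algebraic idea that such results build on can be shown in a few lines: for an undirected simple graph with adjacency matrix $A$, the diagonal entry $(A^3)_{vv}$ counts closed walks of length 3 through $v$, so the number of triangles equals $\mathrm{trace}(A^3)/6$. This sequential NumPy sketch shows only the algebraic technique; the paper's contribution is distributing the matrix products across the congested clique.

```python
# Triangle counting via matrix multiplication: each triangle contributes
# two closed 3-walks at each of its three vertices, so the count is
# trace(A^3) / 6. Sequential sketch of the centralised technique only.
import numpy as np

def count_triangles(A: np.ndarray) -> int:
    A3 = A @ A @ A
    return int(np.trace(A3)) // 6

# A 4-clique contains C(4,3) = 4 triangles.
A = np.ones((4, 4), dtype=np.int64) - np.eye(4, dtype=np.int64)
print(count_triangles(A))   # -> 4
```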
Scalable Online Betweenness Centrality in Evolving Graphs
Betweenness centrality is a classic measure that quantifies the importance of
a graph element (vertex or edge) according to the fraction of shortest paths
passing through it. This measure is notoriously expensive to compute, and the
best known algorithm runs in O(nm) time. The problems of efficiency and
scalability are exacerbated in a dynamic setting, where the input is an
evolving graph seen edge by edge, and the goal is to keep the betweenness
centrality up to date. In this paper we propose the first truly scalable
algorithm for online computation of betweenness centrality of both vertices and
edges in an evolving graph where new edges are added and existing edges are
removed. Our algorithm is carefully engineered with out-of-core techniques and
tailored for modern parallel stream processing engines that run on clusters of
shared-nothing commodity hardware. Hence, it is amenable to real-world
deployment. We experiment on graphs that are two orders of magnitude larger
than previous studies. Our method is able to keep the betweenness centrality
measures up to date online, i.e., the time to update the measures is smaller
than the inter-arrival time between two consecutive updates.
Comment: 15 pages, 9 figures, accepted for publication in IEEE Transactions on
Knowledge and Data Engineering
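The O(nm)-time algorithm the abstract refers to is Brandes' algorithm (Brandes, 2001). A compact Python version for unweighted undirected graphs follows as the static baseline, to show what the scores are that a streaming variant must keep up to date; it does not implement the paper's online, out-of-core algorithm.

```python
# Brandes' O(nm) betweenness centrality for unweighted undirected graphs:
# one BFS per source plus a backwards accumulation of pair dependencies.
# Static baseline only; the paper keeps these scores current as edges
# arrive and depart, which this sketch does not do.
from collections import deque

def brandes_betweenness(adj):
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, recording shortest-path counts and predecessors.
        dist = {s: 0}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
                if dist[v] == dist[u] + 1:
                    sigma[v] += sigma[u]
                    preds[v].append(u)
        # Backwards accumulation of dependencies (farthest vertices first).
        delta = {v: 0.0 for v in adj}
        for v in reversed(order):
            for u in preds[v]:
                delta[u] += sigma[u] / sigma[v] * (1 + delta[v])
            if v != s:
                bc[v] += delta[v]
    # Each undirected pair was counted in both directions.
    return {v: c / 2 for v, c in bc.items()}

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path graph
print(brandes_betweenness(adj))   # {0: 0.0, 1: 2.0, 2: 2.0, 3: 0.0}
```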