Local Approximation Schemes for Ad Hoc and Sensor Networks
We present two local approaches that yield polynomial-time approximation schemes (PTAS) for the Maximum Independent Set and Minimum Dominating Set problems in unit disk graphs. The algorithms run locally in each node and compute a (1+ε)-approximation to the problems at hand for any given ε > 0. The time complexity of both algorithms is O(T_MIS + log* n / ε^O(1)), where T_MIS is the time required to compute a maximal independent set in the graph, and n denotes the number of nodes. We then extend these results to a more general class of graphs in which the maximum number of pairwise independent nodes in every r-neighborhood is at most polynomial in r. Such graphs of polynomially bounded growth are introduced as a more realistic model for wireless networks; they generalize existing models such as unit disk graphs and coverage area graphs.
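Both algorithms build on a maximal independent set subroutine (the T_MIS term in the bound). As a point of reference only, here is a minimal sequential sketch of a unit disk graph and a greedy maximal independent set on it; the random point layout and node ordering are illustrative assumptions, not the paper's distributed algorithm.

```python
import math
import random

def unit_disk_graph(points, radius=1.0):
    """Adjacency lists: two nodes are neighbors iff their distance <= radius."""
    adj = {i: set() for i in range(len(points))}
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= radius:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def maximal_independent_set(adj):
    """Greedy MIS: pick each unblocked node, then block all its neighbors.
    (A sequential stand-in for the distributed MIS subroutine.)"""
    mis, blocked = set(), set()
    for v in sorted(adj):
        if v not in blocked:
            mis.add(v)
            blocked |= adj[v]
    return mis

random.seed(0)
pts = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(30)]
adj = unit_disk_graph(pts)
mis = maximal_independent_set(adj)
# Independence: no two MIS nodes are adjacent; maximality: every node is
# in the MIS or has a neighbor in it.
assert all(adj[u].isdisjoint(mis) for u in mis)
assert all(v in mis or adj[v] & mis for v in adj)
```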
Local Multicoloring Algorithms: Computing a Nearly-Optimal TDMA Schedule in Constant Time
The described multicoloring problem has direct applications in the context of
wireless ad hoc and sensor networks. In order to coordinate the access to the
shared wireless medium, the nodes of such a network need to employ some medium
access control (MAC) protocol. Typical MAC protocols control the access to the
shared channel by time (TDMA), frequency (FDMA), or code division multiple
access (CDMA) schemes. Many channel access schemes assign a fixed set of time
slots, frequencies, or (orthogonal) codes to the nodes of a network such that
nodes that interfere with each other receive disjoint sets of time slots,
frequencies, or code sets. Finding a valid assignment of time slots,
frequencies, or codes hence directly corresponds to computing a multicoloring
of a graph. The scarcity of bandwidth, energy, and computing resources in
ad hoc and sensor networks, as well as the often highly dynamic nature of these
networks, requires that the multicoloring be computed from as little, and as
local, information as possible.
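To make the multicoloring/TDMA connection concrete, here is a hedged sequential sketch: each node demands some number of time slots, and interfering (adjacent) nodes must receive disjoint slot sets. The greedy highest-demand-first order is an illustrative choice, not the paper's local algorithm.

```python
def tdma_schedule(adj, demand):
    """Greedy multicoloring: assign each node `demand[v]` time slots such
    that neighbors (interfering nodes) receive disjoint slot sets."""
    slots = {}
    for v in sorted(adj, key=lambda u: -demand[u]):  # heavier demands first
        forbidden = set()
        for u in adj[v]:
            forbidden |= slots.get(u, set())
        assigned, s = set(), 0
        while len(assigned) < demand[v]:  # take the lowest free slot numbers
            if s not in forbidden:
                assigned.add(s)
            s += 1
        slots[v] = assigned
    return slots

# Tiny interference graph: a path 0-1-2, so 0 and 2 may reuse slots.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
demand = {0: 2, 1: 1, 2: 2}
slots = tdma_schedule(adj, demand)
assert all(slots[v].isdisjoint(slots[u]) for v in adj for u in adj[v])
assert all(len(slots[v]) == demand[v] for v in adj)
```

Note that the non-adjacent nodes 0 and 2 end up sharing slots, which is exactly the spatial reuse a TDMA schedule is after.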
Self-stabilizing algorithms for Connected Vertex Cover and Clique decomposition problems
In many wireless networks, there is no fixed physical backbone nor
centralized network management. The nodes of such a network have to
self-organize in order to maintain a virtual backbone used to route messages.
Moreover, any node of the network can a priori be the origin of a malicious
attack. Thus, on the one hand the backbone must be fault-tolerant, and on the
other hand it can be useful to monitor all network communications to identify
an attack as soon as possible. We are interested in the minimum \emph{Connected
Vertex Cover} problem, a generalization of the classical minimum Vertex Cover
problem, which allows one to obtain a connected backbone. Recently, Delbot et
al.~\cite{DelbotLP13} proposed a new centralized algorithm with a constant
approximation ratio for this problem. In this paper, we propose a distributed
and self-stabilizing version of their algorithm with the same approximation
guarantee. To the best of the authors' knowledge, it is the first distributed
and fault-tolerant algorithm for this problem. The approach followed to solve
the considered problem is based on the construction of a connected minimal
clique partition. Therefore, we also design the first distributed
self-stabilizing algorithm for this problem, which is of independent interest.
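The clique-partition building block mentioned above can be illustrated with a simple sequential sketch: grow a clique from an arbitrary remaining node, peel it off, and repeat. This greedy version is an assumption for illustration; it carries neither the minimality nor the connectivity guarantees of the paper's self-stabilizing construction.

```python
def clique_partition(adj):
    """Greedy clique partition: grow a clique from the smallest remaining
    node, remove its members, and repeat until every node is covered."""
    remaining = set(adj)
    cliques = []
    while remaining:
        v = min(remaining)
        clique = {v}
        for u in sorted(remaining - {v}):
            if all(u in adj[w] for w in clique):  # u adjacent to whole clique
                clique.add(u)
        cliques.append(clique)
        remaining -= clique
    return cliques

# Triangle 0-1-2 with a pendant node 3 attached to node 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = clique_partition(adj)
# The triangle is peeled off first, leaving the pendant node on its own.
assert cliques == [{0, 1, 2}, {3}]
assert set().union(*cliques) == set(adj)
```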
Distributed Symmetry Breaking in Hypergraphs
Fundamental local symmetry breaking problems such as Maximal Independent Set
(MIS) and coloring have been recognized as important by the community, and
studied extensively in (standard) graphs. In particular, fast (i.e.,
logarithmic run time) randomized algorithms are well-established for MIS and
$(\Delta+1)$-coloring in both the LOCAL and CONGEST distributed computing
models. On the other hand, comparatively little is known about the complexity
of distributed symmetry breaking in {\em hypergraphs}. In particular, a key
question is whether a fast (randomized) algorithm for MIS exists for
hypergraphs.
In this paper, we study the distributed complexity of symmetry breaking in
hypergraphs by presenting distributed randomized algorithms for a variety of
fundamental problems under a natural distributed computing model for
hypergraphs. We first show that MIS in hypergraphs (of arbitrary dimension) can
be solved in $O(\log^2 n)$ rounds ($n$ is the number of nodes of the
hypergraph) in the LOCAL model. We then present a key result of this paper ---
an $O(\Delta^{\epsilon}\operatorname{polylog}(n))$-round hypergraph MIS algorithm in
the CONGEST model, where $\Delta$ is the maximum node degree of the hypergraph
and $\epsilon > 0$ is any arbitrarily small constant.
To demonstrate the usefulness of hypergraph MIS, we present applications of
our hypergraph algorithm to solving problems in (standard) graphs. In
particular, the hypergraph MIS yields fast distributed algorithms for the {\em
balanced minimal dominating set} problem (left open in Harris et al. [ICALP
2013]) and the {\em minimal connected dominating set problem}. We also present
distributed algorithms for coloring, maximal matching, and maximal clique in
hypergraphs.
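In a hypergraph, independence means no hyperedge is *entirely* contained in the chosen set, which is weaker than the graph notion. A sequential greedy sketch makes this definition concrete; it is only a reference implementation, not the distributed LOCAL/CONGEST algorithms of the paper.

```python
def hypergraph_mis(nodes, edges):
    """Greedy hypergraph MIS: a set S is independent when no hyperedge lies
    entirely inside S; a node joins unless it would complete some hyperedge."""
    s = set()
    for v in sorted(nodes):
        if all(not e <= (s | {v}) for e in edges if v in e):
            s.add(v)
    return s

nodes = {0, 1, 2, 3, 4}
edges = [{0, 1, 2}, {1, 3}, {2, 4}]
s = hypergraph_mis(nodes, edges)
# 0 and 1 join freely; 2 would complete {0,1,2}; 3 would complete {1,3};
# 4 joins since {2,4} stays incomplete.
assert s == {0, 1, 4}
assert all(not e <= s for e in edges)                       # independence
assert all(v in s or any(v in e and e <= s | {v} for e in edges)
           for v in nodes)                                  # maximality
```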
Networked Computing in Wireless Sensor Networks for Structural Health Monitoring
This paper studies the problem of distributed computation over a network of
wireless sensors. While this problem applies to many emerging applications, to
keep our discussion concrete we will focus on sensor networks used for
structural health monitoring. Within this context, the heaviest computation is
to determine the singular value decomposition (SVD) to extract mode shapes
(eigenvectors) of a structure. Compared to collecting raw vibration data and
performing SVD at a central location, computing SVD within the network can
result in significantly lower energy consumption and delay. Using recent
results on decomposing SVD, a well-known centralized operation, into
components, we seek to determine a near-optimal communication structure that
enables the distribution of this computation and the reassembly of the final
results, with the objective of minimizing energy consumption subject to a
computational delay constraint. We show that this reduces to a generalized
clustering problem; a cluster forms a unit on which a component of the overall
computation is performed. We establish that this problem is NP-hard. By
relaxing the delay constraint, we derive a lower bound to this problem. We then
propose an integer linear program (ILP) to solve the constrained problem
exactly as well as an approximate algorithm with a proven approximation ratio.
We further present a distributed version of the approximate algorithm. We
present both simulation and experimentation results to demonstrate the
effectiveness of these algorithms.
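The mode-shape extraction that the paper distributes reduces to computing dominant singular vectors or eigenvectors. As an illustrative stand-in (not the paper's in-network decomposition), power iteration recovers the dominant eigenvector of a small symmetric matrix in pure Python:

```python
def power_iteration(A, iters=200):
    """Dominant eigenvector of a symmetric matrix A (list of rows) via
    power iteration; a stand-in for the SVD step that extracts mode shapes."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        # Multiply A by v, then renormalize to unit length.
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Symmetric 2x2 with eigenvalues 3 and 1; the dominant eigenvector is
# (1, 1) / sqrt(2) ~= (0.7071, 0.7071).
A = [[2.0, 1.0], [1.0, 2.0]]
v = power_iteration(A)
assert abs(v[0] - v[1]) < 1e-9
assert abs(v[0] - 0.5 ** 0.5) < 1e-6
```

In the structural-monitoring setting, A would play the role of a covariance matrix assembled from vibration measurements; here it is a toy matrix chosen so the answer is known in closed form.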