High-speed all-optical networks
An inherent problem of conventional point-to-point wide area network (WAN) architectures is that they cannot translate optical transmission bandwidth into comparable user-available throughput, due to the limiting electronic processing speed of the switching nodes. This paper presents the first solution for wavelength division multiplexing (WDM) based WAN networks that overcomes this limitation. The proposed Lightnet architecture takes into account the idiosyncrasies of WDM switching/transmission, leading to an efficient and pragmatic solution. The Lightnet architecture trades the ample WDM bandwidth for a reduction in the number of processing stages and a simplification of each switching stage, leading to drastically increased effective network throughput. The principle of the Lightnet architecture is the construction and use of virtual topology networks, embedded in the original network in the wavelength domain. For this construction, Lightnets utilize the new concept of lightpaths, which constitute the links of the virtual topology. Lightpaths are all-optical multihop paths in the network that allow data to be switched through intermediate nodes using high-throughput passive optical switches. The use of virtual topologies and the associated switching design introduces a number of new ideas, which are discussed in detail.
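The lightpath-to-virtual-topology idea above can be sketched in a few lines. This is an illustrative toy, not the paper's design: the node names, routes, and wavelengths are invented, and the point is only that a lightpath spanning several physical hops becomes a single logical edge, so traffic crosses its intermediate nodes without an electronic processing stage.

```python
from collections import deque

# Physical WAN topology (a chain A-B-C-D): node -> set of neighbors.
physical = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}

# A lightpath is a wavelength plus the physical route it occupies.
# These example lightpaths are hypothetical.
lightpaths = [
    ("w1", ["A", "B", "C"]),  # A reaches C optically in one virtual hop
    ("w2", ["C", "D"]),
]

def virtual_topology(lightpaths):
    """Each lightpath contributes one directed edge of the virtual topology."""
    return {(route[0], route[-1]) for _, route in lightpaths}

def electronic_hops(src, dst, edges):
    """BFS over the virtual topology: each virtual hop costs one electronic
    processing stage; optical hops inside a lightpath are free."""
    frontier, seen = deque([(src, 0)]), {src}
    while frontier:
        node, hops = frontier.popleft()
        if node == dst:
            return hops
        for u, v in edges:
            if u == node and v not in seen:
                seen.add(v)
                frontier.append((v, hops + 1))
    return None

edges = virtual_topology(lightpaths)
print(sorted(edges))                      # [('A', 'C'), ('C', 'D')]
print(electronic_hops("A", "D", edges))   # 2 stages instead of 3 physical hops
```

Here A-to-D traffic needs only two electronic stages (at A and C) rather than one per physical hop, which is the throughput gain the abstract describes.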
Detecting High Log-Densities -- an O(n^1/4) Approximation for Densest k-Subgraph
In the Densest k-Subgraph problem, given a graph G and a parameter k, one
needs to find a subgraph of G induced on k vertices that contains the largest
number of edges. There is a significant gap between the best known upper and
lower bounds for this problem. It is NP-hard, and does not have a PTAS unless
NP has subexponential time algorithms. On the other hand, the current best
known algorithm of Feige, Kortsarz and Peleg gives an approximation ratio of
n^(1/3-epsilon) for some specific epsilon > 0 (estimated at around 1/60).
We present an algorithm that for every epsilon > 0 approximates the Densest
k-Subgraph problem within a ratio of n^(1/4+epsilon) in time n^O(1/epsilon). In
particular, our algorithm achieves an approximation ratio of O(n^1/4) in time
n^O(log n). Our algorithm is inspired by studying an average-case version of
the problem where the goal is to distinguish random graphs from graphs with
planted dense subgraphs. The approximation ratio we achieve for the general
case matches the distinguishing ratio we obtain for this planted problem.
At a high level, our algorithms involve cleverly counting appropriately
defined trees of constant size in G, and using these counts to identify the
vertices of the dense subgraph. Our algorithm is based on the following
principle. We say that a graph G(V,E) has log-density alpha if its average
degree is Theta(|V|^alpha). The algorithmic core of our result is a family of
algorithms that output k-subgraphs of nontrivial density whenever the
log-density of the densest k-subgraph is larger than the log-density of the
host graph.
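The log-density notion in the abstract is concrete enough to compute. The sketch below, which is illustrative and not the paper's tree-counting algorithm, evaluates the log-density alpha (average degree = Theta(|V|^alpha)) of a small graph with a planted dense subgraph, and runs a naive minimum-degree peeling heuristic as a k-subgraph baseline.

```python
import math

def log_density(n, num_edges):
    """alpha such that the average degree equals n^alpha."""
    avg_degree = 2 * num_edges / n
    return math.log(avg_degree, n)

def greedy_k_subgraph(adj, k):
    """Repeatedly peel the minimum-degree vertex until k vertices remain."""
    alive = set(adj)
    while len(alive) > k:
        v = min(alive, key=lambda u: sum(1 for w in adj[u] if w in alive))
        alive.remove(v)
    edges = sum(1 for u in alive for w in adj[u] if w in alive and u < w)
    return alive, edges

# n = 16 vertices: a 16-cycle with a clique planted on vertices {0..4}.
adj = {i: set() for i in range(16)}
for i in range(16):
    adj[i].add((i + 1) % 16)
    adj[(i + 1) % 16].add(i)
for i in range(5):
    for j in range(i + 1, 5):
        adj[i].add(j)
        adj[j].add(i)

m = sum(len(s) for s in adj.values()) // 2
print(m)                                 # 22 edges in total
print(round(log_density(16, m), 3))      # 0.365
sub, e = greedy_k_subgraph(adj, 5)
print(e)                                 # 10: peeling recovers the clique
```

On easy planted instances like this one, peeling succeeds; the abstract's point is that distinguishing planted dense subgraphs in general is much harder, with the achievable ratio governed by the log-densities involved.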
Linear Index Coding via Semidefinite Programming
In the index coding problem, introduced by Birk and Kol (INFOCOM, 1998), the
goal is to broadcast an n bit word to n receivers (one bit per receiver), where
the receivers have side information represented by a graph G. The objective is
to minimize the length of a codeword sent to all receivers which allows each
receiver to learn its bit. For linear index coding, the minimum possible length
is known to be equal to a graph parameter called minrank (Bar-Yossef et al.,
FOCS, 2006).
We show a polynomial time algorithm that, given an n vertex graph G with
minrank k, finds a linear index code for G of length O~(n^f(k)),
where f(k) depends only on k. For example, for k=3 we obtain f(3) ~ 0.2574. Our
algorithm employs a semidefinite program (SDP) introduced by Karger, Motwani
and Sudan (J. ACM, 1998) for graph coloring and its refined analysis due to
Arora, Chlamtac and Charikar (STOC, 2006). Since the SDP we use is not a
relaxation of the minimization problem we consider, a crucial component of our
analysis is an upper bound on the objective value of the SDP in terms of the
minrank.
At the heart of our analysis lies a combinatorial result which may be of
independent interest. Namely, we show an exact expression for the maximum
possible value of the Lovasz theta-function of a graph with minrank k. This
yields a tight gap between two classical upper bounds on the Shannon capacity
of a graph.
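A minimal worked example of linear index coding over GF(2), unrelated to the paper's SDP machinery: when the side-information graph is a clique, minrank is 1, so broadcasting the single parity bit x_1 xor ... xor x_n suffices, and each receiver recovers its own bit from the bits it already knows.

```python
from functools import reduce
from operator import xor

def encode_clique(bits):
    """Length-1 linear index code for a clique: one parity bit over GF(2)."""
    return reduce(xor, bits)

def decode(receiver, codeword, bits):
    """Receiver i knows every bit except its own (clique side information),
    so it cancels the known bits out of the parity."""
    known = [b for j, b in enumerate(bits) if j != receiver]
    return codeword ^ reduce(xor, known)

bits = [1, 0, 1, 1, 0]
c = encode_clique(bits)
print(all(decode(i, c, bits) == bits[i] for i in range(5)))  # True
```

One broadcast bit instead of n illustrates why richer side-information graphs (lower minrank) permit shorter codewords, which is the quantity the SDP-based algorithm above bounds.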
Call Admission Control: Solution of a General Decision Model with State-Related Hand-off Rate
This paper studies call admission policies for access control in cellular networks by means of a Markov Decision
Process (MDP). This approach allows us to study a wide class of policies, including well known pure stationary as well
as randomized policies, in a way that explicitly incorporates the dependency between the hand-off rate and the system
state, assuming that the hand-off rate arriving at a cell is proportional to the occupancy level of the adjacent cells.
In particular, we propose and analyze a nonpreemptive prioritization scheme, which we term the cutoff priority policy. This
policy consists of reserving a number of channels for the high-priority request stream. Using our analytical approach,
we prove the proposed scheme to be optimal within the analyzed class.
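The cutoff priority rule itself is simple to state in code. This is a minimal sketch with assumed parameters (C total channels, g of them reserved for the high-priority hand-off stream), not the paper's MDP analysis:

```python
def admit(occupancy, is_handoff, C=10, g=2):
    """Cutoff priority policy: hand-offs may use any free channel,
    while new calls are blocked once occupancy reaches the cutoff C - g."""
    if is_handoff:
        return occupancy < C       # hand-off blocked only when the cell is full
    return occupancy < C - g       # new call blocked at the cutoff

print(admit(7, is_handoff=False))  # True: 7 < 8, new call admitted
print(admit(8, is_handoff=False))  # False: new call hits the cutoff
print(admit(9, is_handoff=True))   # True: hand-off can still use a reserved channel
```

The MDP framework in the paper treats decisions like these as actions in each state and shows this threshold structure is optimal within the analyzed class.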
Lower Bounds for Structuring Unreliable Radio Networks
In this paper, we study lower bounds for randomized solutions to the maximal
independent set (MIS) and connected dominating set (CDS) problems in the dual
graph model of radio networks---a generalization of the standard graph-based
model that now includes unreliable links controlled by an adversary. We begin
by proving that a natural geographic constraint on the network topology is
required to solve these problems efficiently (i.e., in time polylogarithmic in
the network size). We then prove the importance of the assumption that nodes
are provided advance knowledge of their reliable neighbors (i.e., neighbors
connected by reliable links). Combined, these results answer an open question
by proving that the efficient MIS and CDS algorithms from [Censor-Hillel, PODC
2011] are optimal with respect to their dual graph model assumptions. They also
provide insight into what properties of an unreliable network enable efficient
local computation. An extended abstract of this work appears in the proceedings of the
2014 International Symposium on Distributed Computing (DISC).
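For background on the MIS object these lower bounds concern, here is a basic randomized greedy construction on a static graph, with a checker for the two defining properties. This is only an illustration of the problem itself; it is not the dual-graph-model algorithm of Censor-Hillel et al., and it ignores unreliable links entirely.

```python
import random

def random_greedy_mis(adj, seed=0):
    """Scan vertices in random order, adding each one whose
    neighbors are all still outside the set."""
    rng = random.Random(seed)
    order = sorted(adj)
    rng.shuffle(order)
    mis = set()
    for v in order:
        if all(u not in mis for u in adj[v]):
            mis.add(v)
    return mis

def is_mis(adj, mis):
    """Independent: no edge inside the set. Maximal: every vertex
    outside the set has a neighbor inside it."""
    independent = all(u not in mis or v not in mis
                      for v in adj for u in adj[v])
    maximal = all(v in mis or any(u in mis for u in adj[v]) for v in adj)
    return independent and maximal

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
mis = random_greedy_mis(adj)
print(is_mis(adj, mis))  # True
```

In the dual graph model, an adversary controlling which unreliable links appear in each round is what makes building such a set efficiently so much harder, as the lower bounds above show.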
Hardness of Graph Pricing through Generalized Max-Dicut
The Graph Pricing problem is among the fundamental problems whose
approximability is not well-understood. While there is a simple combinatorial
1/4-approximation algorithm, the best hardness result remains at 1/2 assuming
the Unique Games Conjecture (UGC). We show that it is NP-hard to approximate
within a factor better than 1/4 under the UGC, so that the simple combinatorial
algorithm might be the best possible. We also prove that for any , there exists such that the integrality gap of
-rounds of the Sherali-Adams hierarchy of linear programming for
Graph Pricing is at most 1/2 + .
This work is based on the effort to view the Graph Pricing problem as a
Constraint Satisfaction Problem (CSP) simpler than the standard and complicated
formulation. We propose the problem called Generalized Max-Dicut(T), which
has a domain size T + 1 for every T >= 1. Generalized Max-Dicut(1) is the
well-known Max-Dicut. There is an approximation-preserving reduction from
Generalized Max-Dicut on directed acyclic graphs (DAGs) to Graph Pricing, and
both our results are achieved through this reduction. Besides its connection to
Graph Pricing, the hardness of Generalized Max-Dicut is interesting in its own
right since in most arity two CSPs studied in the literature, SDP-based
algorithms perform better than LP-based or combinatorial algorithms --- for
this arity two CSP, a simple combinatorial algorithm does the best.
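As a concrete anchor for the Max-Dicut case (Generalized Max-Dicut(1)), the simplest combinatorial algorithm puts each vertex into S independently with probability 1/2; a directed arc (u, v) is cut (u in S, v outside S) with probability 1/4, so in expectation a quarter of the arcs are satisfied. A toy sketch, with an invented example instance:

```python
import random

def random_dicut(vertices, arcs, seed=0):
    """One trial of the random-assignment 1/4-approximation for Max-Dicut."""
    rng = random.Random(seed)
    S = {v for v in vertices if rng.random() < 0.5}
    cut = sum(1 for u, v in arcs if u in S and v not in S)
    return S, cut

# A directed 6-cycle with one chord: 7 arcs in total.
vertices = list(range(6))
arcs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]

# Repeating the trial and keeping the best cut easily meets the 1/4 bound.
best = max(random_dicut(vertices, arcs, seed=s)[1] for s in range(50))
print(best >= len(arcs) // 4)  # True
```

The abstract's point is that for the generalized problem on DAGs this kind of simple combinatorial approach remains the best known, in contrast to most arity two CSPs where SDP-based algorithms win.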