Shortest-weight paths in random regular graphs
Consider a random regular graph of fixed degree, and assign to each edge an i.i.d. exponential random variable with mean one. In this paper we establish a precise asymptotic expression for the maximum number of edges on the shortest-weight paths between a fixed vertex and all the other vertices, as well as between any pair of vertices. Namely, for any fixed degree, we show that the longest of these shortest-weight paths has a number of edges that grows proportionally to the logarithm of the number of vertices, with the proportionality constant given as the unique solution of an explicit equation.
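As a hedged illustration (not the paper's proof technique), the quantity studied above, the maximum hop count over all shortest-weight paths from a fixed vertex, can be estimated by simulation. The sketch below builds a random 3-regular graph with a configuration-model rejection loop, assigns Exp(1) weights, and runs Dijkstra while tracking hop counts; the parameter choices (n = 200, d = 3, the seed) are illustrative assumptions, not values from the paper.

```python
import heapq, math, random

def random_regular_graph(n, d, rng):
    # configuration model with rejection: pair up vertex stubs at random and
    # retry until the resulting graph is simple (no loops or multi-edges)
    while True:
        stubs = [v for v in range(n) for _ in range(d)]
        rng.shuffle(stubs)
        edges, ok = set(), True
        for i in range(0, len(stubs), 2):
            u, v = stubs[i], stubs[i + 1]
            e = (min(u, v), max(u, v))
            if u == v or e in edges:
                ok = False
                break
            edges.add(e)
        if ok:
            return edges

def weighted_hopcounts(n, edges, src, rng):
    # Dijkstra under i.i.d. Exp(1) edge weights, recording the number of
    # edges (hops) on each shortest-weight path from src
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        w = rng.expovariate(1.0)
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist, hops = {src: 0.0}, {src: 0}
    pq = [(0.0, 0, src)]
    while pq:
        dw, h, u = heapq.heappop(pq)
        if dw > dist.get(u, math.inf):   # stale heap entry
            continue
        for v, w in adj[u]:
            if dw + w < dist.get(v, math.inf):
                dist[v] = dw + w
                hops[v] = h + 1
                heapq.heappush(pq, (dw + w, h + 1, v))
    return hops

rng = random.Random(1)
n, d = 200, 3                            # n * d must be even
edges = random_regular_graph(n, d, rng)
hops = weighted_hopcounts(n, edges, 0, rng)
print(max(hops.values()))
```

The maximum printed hop count is the random quantity whose asymptotics the paper pins down; note it typically exceeds the unweighted diameter, since weight-shortest paths may take extra hops.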
A forward-backward single-source shortest paths algorithm
We describe a new forward-backward variant of Dijkstra's and Spira's
Single-Source Shortest Paths (SSSP) algorithms. While essentially all SSSP
algorithms scan edges only forward, the new algorithm scans some edges backward.
The new algorithm assumes that edges in the outgoing and incoming adjacency
lists of the vertices appear in non-decreasing order of weight. (Spira's
algorithm makes the same assumption about the outgoing adjacency lists, but
does not use incoming adjacency lists.) On a complete directed graph with
independent exponential edge weights, the running time of the algorithm
improves, with very high probability, on the previously best bound, which is
best possible if only forward scans are allowed, exhibiting an interesting
separation between forward-only and forward-backward SSSP algorithms. As a
consequence, we also get a new all-pairs shortest paths algorithm. Its expected
running time on complete graphs with independent exponential edge weights
matches that of a recent algorithm of Demetrescu and Italiano as analyzed by
Peres et al. Furthermore, the probability that the new algorithm exceeds this
running time is exponentially small, improving on the probability bound
obtained by Peres et al.
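For context, here is a sketch of the model the algorithm operates in: a complete directed graph with independent Exp(1) edge weights, where outgoing and incoming adjacency lists are pre-sorted by weight. The code runs only a plain forward-only Dijkstra as the baseline; the forward-backward algorithm itself is more involved and is not reproduced here. Graph size and seed are illustrative assumptions.

```python
import heapq, random

def complete_exp_digraph(n, rng):
    # independent Exp(1) weights on a complete directed graph
    w = [[rng.expovariate(1.0) if u != v else 0.0 for v in range(n)]
         for u in range(n)]
    # adjacency lists sorted by non-decreasing weight, as the model assumes;
    # the baseline below uses only the outgoing lists, but the
    # forward-backward algorithm would also consult the incoming lists
    out_adj = [sorted((w[u][v], v) for v in range(n) if v != u)
               for u in range(n)]
    in_adj = [sorted((w[u][v], u) for u in range(n) if u != v)
              for v in range(n)]
    return out_adj, in_adj

def dijkstra(n, out_adj, s):
    # forward-only baseline: scan outgoing edges from settled vertices
    dist = [float("inf")] * n
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                  # stale heap entry
            continue
        for wt, v in out_adj[u]:
            if d + wt < dist[v]:
                dist[v] = d + wt
                heapq.heappush(pq, (d + wt, v))
    return dist

rng = random.Random(0)
n = 50
out_adj, in_adj = complete_exp_digraph(n, rng)
dist = dijkstra(n, out_adj, 0)
print(max(dist))
```

Because the lists are weight-sorted, a scan of a vertex's list can stop early once the next edge cannot improve any tentative distance; exploiting this (plus backward scans) is the source of the paper's speedup.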
Random Shortest Paths: Non-Euclidean Instances for Metric Optimization Problems
Probabilistic analysis for metric optimization problems has mostly been conducted on random Euclidean instances, but little is known about metric instances drawn from distributions other than the Euclidean one. This motivates our study of random metric instances for optimization problems obtained as follows: every edge of a complete graph gets a weight drawn independently at random, and the distance between two nodes is then the length of a shortest path (with respect to the weights drawn) that connects these nodes. We prove structural properties of the random shortest path metrics generated in this way. Our main structural contribution is the construction of a good clustering. We then apply these findings to analyze the approximation ratios of heuristics for matching, the traveling salesman problem (TSP), and the k-median problem, as well as the running time of the 2-opt heuristic for the TSP. The bounds that we obtain are considerably better than the respective worst-case bounds. This suggests that random shortest path metrics are easy instances, similar to random Euclidean instances, albeit for completely different structural reasons.
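The random shortest path metric described above is easy to instantiate. The sketch below draws edge weights on a complete graph (exponential weights are assumed here for concreteness; the construction works for other distributions) and takes the shortest-path closure with Floyd-Warshall, then checks that the result is indeed a metric.

```python
import random

def random_shortest_path_metric(n, rng):
    # complete graph with i.i.d. Exp(1) edge weights
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d[i][j] = d[j][i] = rng.expovariate(1.0)
    # Floyd-Warshall: the metric is the shortest-path closure of the weights
    for k in range(n):
        for i in range(n):
            dik = d[i][k]
            for j in range(n):
                if dik + d[k][j] < d[i][j]:
                    d[i][j] = dik + d[k][j]
    return d

rng = random.Random(42)
n = 30
d = random_shortest_path_metric(n, rng)
# sanity check: the closure satisfies the triangle inequality
viol = any(d[i][j] > d[i][k] + d[k][j] + 1e-12
           for i in range(n) for j in range(n) for k in range(n))
print(viol)
```

Distances in this metric are typically much shorter than the raw edge weights, which is one structural reason these instances behave differently from Euclidean ones.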
Public-Key Cryptography in the Fine-Grained Setting
Cryptography is largely based on unproven assumptions, which, while believable, might fail. Notably, if P = NP, or if we live in Pessiland, then all current cryptographic assumptions will be broken. A compelling question is whether any interesting cryptography might exist in Pessiland.
A natural approach to tackle this question is to base cryptography on an assumption from fine-grained complexity.
Ball, Rosen, Sabin, and Vasudevan [BRSV'17] attempted this, starting from popular hardness assumptions, such as the Orthogonal Vectors (OV) Conjecture. They obtained problems that are hard on average, assuming that OV and other problems are hard in the worst case. They obtained proofs of work, and hoped to use their average-case hard problems to build a fine-grained one-way function.
Unfortunately, they proved that constructing one using their approach would violate a popular hardness hypothesis. This motivates the search for other fine-grained average-case hard problems.
The main goal of this paper is to identify sufficient properties for a fine-grained average-case assumption that imply cryptographic primitives such as fine-grained public key cryptography (PKC).
Our main contribution is a novel construction of a cryptographic key exchange,
together with the definition of a small number of relatively weak structural properties, such that if a computational problem satisfies them, our key exchange has provable fine-grained security guarantees, based on the hardness of this problem. We then show that a natural and plausible average-case assumption for the key problem Zero-k-Clique from fine-grained complexity satisfies our properties. We also develop fine-grained one-way functions and hardcore bits even under these weaker assumptions.
Where previous works had to assume random oracles or the existence of strong one-way functions to get a key exchange computable in O(n) time secure against O(n^2)-time adversaries (see [Merkle'78] and [BGI'08]), our assumptions seem much weaker. Our key exchange has a similar gap between the computation of the honest parties and the adversary as prior work, while being non-interactive, implying fine-grained PKC.
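The Merkle-style quadratic-gap key exchange cited as prior work can be sketched in toy form: Alice publishes n puzzles, each solvable by O(n) brute force; Bob solves one at random and announces its id; an eavesdropper expects to solve about n/2 puzzles, for about n^2/2 work in total. This is a hypothetical illustration of that baseline only, not the paper's Zero-k-Clique-based construction, and the hash-based "puzzles" are purely illustrative.

```python
import hashlib, random

def H(*parts):
    # toy hash: a stand-in for the random oracle Merkle puzzles assume
    h = hashlib.sha256()
    for p in parts:
        h.update(repr(p).encode())
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def alice_publish(n, rng):
    # n puzzles; puzzle j hides (id_j, key_j) behind a weak secret in [0, n),
    # so solving ONE puzzle costs O(n) work
    ids = list(range(n))
    rng.shuffle(ids)                     # decouple ids from publication order
    puzzles, keytable = [], {}
    for j in range(n):
        s = rng.randrange(n)             # weak puzzle secret
        pid, key = ids[j], H("key", j)[:8]
        pad = H("pad", s)[:16]
        puzzles.append(xor(pad, pid.to_bytes(8, "big") + key))
        keytable[pid] = key
    return puzzles, keytable

def solve_puzzle(ct, n):
    # brute-force the weak secret: O(n) work for one puzzle
    for s in range(n):
        pt = xor(H("pad", s)[:16], ct)
        pid = int.from_bytes(pt[:8], "big")
        if pid < n:                      # toy validity check
            return pid, pt[8:]

rng = random.Random(0)
n = 64
puzzles, keytable = alice_publish(n, rng)
j = rng.randrange(n)                     # Bob picks one puzzle at random
pid, bob_key = solve_puzzle(puzzles[j], n)
alice_key = keytable[pid]                # Bob announces pid in the clear
print(bob_key == alice_key)
```

The honest parties each do O(n) work, while an eavesdropper who only sees the puzzles and the announced id must brute-force puzzles until hitting that id; the paper's contribution is achieving a comparable gap without random oracles or strong one-way functions, and non-interactively.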
Reducing the Cost of Operating a Datacenter Network
Datacenters are a significant capital expense for many enterprises, yet they are hard to design, manage, and maintain. The initial design of a datacenter network tends to follow vendor guidelines, but subsequent upgrades and expansions are mostly ad hoc: equipment is upgraded piecemeal after its amortization period runs out, and acquisitions are tied to budget cycles rather than changes in workload.
These networks are also brittle and inflexible. They tend to be manually managed, and cannot perform dynamic traffic engineering.
The high-level goal of this dissertation is to reduce the total cost of owning a datacenter by improving its network. To achieve this, we make the following contributions. First, we develop an automated, theoretically well-founded approach to planning cost-effective datacenter upgrades and expansions. Second, we propose a scalable traffic management framework for datacenter networks. Together, we show that these contributions can significantly reduce the cost of operating a datacenter network.
To design cost-effective network topologies, especially as the network expands over time, updated equipment must coexist with legacy equipment, which makes the network heterogeneous. However, heterogeneous high-performance network designs are not well understood. Our first step, therefore, is to develop the theory of heterogeneous Clos topologies. Using our theory, we propose an optimization framework, called LEGUP, which designs a heterogeneous Clos network to implement in a new or legacy datacenter. Although effective, LEGUP imposes a certain amount of structure on the network. For situations where this is infeasible, our second contribution is a framework, called REWIRE, which uses optimization to design unstructured DCN topologies. Our results indicate that these unstructured topologies have up to 100-500% more bisection bandwidth than a fat-tree of the same dollar cost.
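As background arithmetic for the homogeneous structured designs that these heterogeneous Clos topologies generalize, a classic 3-tier k-ary fat-tree built from identical k-port switches supports k^3/4 hosts using 5k^2/4 switches. This is standard fat-tree accounting, not a result of the dissertation:

```python
def fat_tree_stats(k):
    # classic 3-tier fat-tree built from identical k-port switches (k even):
    # k pods, each with k/2 edge and k/2 aggregation switches,
    # plus (k/2)^2 core switches; each edge switch serves k/2 hosts
    assert k % 2 == 0
    hosts = k ** 3 // 4
    edge = agg = k * (k // 2)
    core = (k // 2) ** 2
    return {"hosts": hosts, "edge": edge, "agg": agg, "core": core,
            "switches": edge + agg + core}

print(fat_tree_stats(4))
```

Because every tier must use the same switch model, an expansion forces either a wholesale upgrade or a heterogeneous network, which is the regime LEGUP and REWIRE target.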
Our third contribution is two frameworks for datacenter network traffic engineering. Because of the multiplicity of end-to-end paths in DCN fabrics, such as Clos networks and the topologies designed by REWIRE, careful traffic engineering is needed to maximize throughput. This requires timely detection of elephant flows---flows that carry large amounts of data---and management of those flows. Previously proposed approaches incur high monitoring overheads, consume significant switch resources, or have long detection times.
We make two proposals for elephant flow detection. First, in the Mahout framework, we suggest that such flows be detected by observing the end hosts' socket buffers, which provide efficient visibility of flow behavior. Second, in the DevoFlow framework, we add efficient stats-collection mechanisms to network switches. Using simulations and experiments, we show that these frameworks reduce traffic engineering overheads by at least an order of magnitude while still providing near-optimal performance.
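A minimal sketch of threshold-based elephant detection, assuming only that per-flow byte counts are observable (as end-host socket-buffer monitoring makes them); the threshold value, flow names, and traffic mix below are simplified assumptions, not the actual Mahout or DevoFlow mechanisms.

```python
import random

def detect_elephants(flow_bytes, threshold):
    # flag flows whose cumulative bytes exceed a threshold, in the spirit of
    # end-host detection (the real systems also bound detection latency
    # and switch-side state)
    return {fid for fid, b in flow_bytes.items() if b >= threshold}

rng = random.Random(7)
# toy traffic mix: many mice, a few elephants, mimicking the heavy-tailed
# flow-size distributions typical of datacenter workloads
flows = {f"mouse{i}": rng.randrange(1, 10_000) for i in range(100)}
flows.update({f"elephant{i}": rng.randrange(10_000_000, 100_000_000)
              for i in range(3)})
big = detect_elephants(flows, threshold=1_000_000)
print(sorted(big))
```

Once flagged, only the handful of elephant flows need individualized routing decisions, which is why detecting them cheaply cuts traffic engineering overhead by orders of magnitude.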