A bandwidth theorem for approximate decompositions
We provide a degree condition on a regular n-vertex graph G which ensures
the existence of a near-optimal packing of any family of bounded
degree n-vertex k-chromatic separable graphs into G. In general, this
degree condition is best possible.
Here a graph is separable if it has a sublinear separator whose removal
results in a set of components of sublinear size. Equivalently, the
separability condition can be replaced by that of having small bandwidth. Thus
our result can be viewed as a version of the bandwidth theorem of Böttcher,
Schacht and Taraz in the setting of approximate decompositions.
More precisely, let δ_k be the infimum over all δ ≥ 1/2
ensuring an approximate K_k-decomposition of any sufficiently large regular
n-vertex graph of degree at least δn. Now suppose that G is an
n-vertex graph which is close to r-regular for some r ≥ (δ_k + o(1))n, and suppose that H_1, …, H_t is a sequence of bounded
degree n-vertex k-chromatic separable graphs with Σ_i e(H_i) ≤ (1 − o(1)) e(G). We show that there is an edge-disjoint packing of
H_1, …, H_t into G.
If the H_i are bipartite, then r ≥ (1/2 + o(1))n is sufficient. In
particular, this yields an approximate version of the tree packing conjecture
in the setting of regular host graphs of high degree. Similarly, our result
implies approximate versions of the Oberwolfach problem, the Alspach problem
and the existence of resolvable designs in the setting of regular host graphs
of high degree.
Comment: Final version, to appear in the Proceedings of the London
Mathematical Society.
A combinatorial approach to optimal designs.
PhD thesis. A typical problem in experimental design theory is to find a block
design in a class that is optimal with respect to some criteria, which
are usually convex functions of the Laplacian eigenvalues. Although this
question has a statistical background, there are overlaps with graph and
design theory: some of the optimality criteria correspond to graph properties
and designs considered ‘nice’ by combinatorialists are often optimal.
In this thesis we investigate this connection from a combinatorial point
of view.
We extend a result on optimality of some generalized polygons, in
particular the generalized hexagon and octagon, to a third optimality criterion.
The E-criterion is equivalent with the graph theoretical problem
of maximizing the algebraic connectivity. We give a new upper bound for
regular graphs and characterize a class of E-optimal regular graph designs
(RGDs). We then study generalized hexagons as block designs and
prove some properties of the eigenvalues of the designs in that class. Proceeding
to higher-dimensional geometries, we look at projective spaces
and find optimal designs among two-dimensional substructures. Some
new properties of Grassmann graphs are proved. Stepping away from
the background of geometries, we study graphs obtained from optimal
graphs by deleting one or several edges. This chapter highlights the currently
available methods to compare graphs on the A- and D-criteria.
The last chapter is devoted to designs to which a number of blocks are
added. Cheng showed that RGDs are A- and D-optimal if the number of
blocks is large enough; we give a bound on this number and characterize the best
RGDs in terms of their underlying graphs. We then present the results
of an exhaustive computer search for optimal RGDs for up to 18 points.
The search produced examples supporting several open conjectures.
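The link between the E-criterion and algebraic connectivity mentioned above can be illustrated numerically; below is a minimal sketch (not taken from the thesis) that computes the algebraic connectivity, i.e. the second-smallest eigenvalue of the Laplacian L = D − A, for a small regular graph:

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    eig = np.sort(np.linalg.eigvalsh(lap))  # eigvalsh: symmetric matrices
    return eig[1]

# Cycle C_5 (2-regular); Laplacian eigenvalues are 2 - 2*cos(2*pi*k/5),
# so the algebraic connectivity is 2 - 2*cos(2*pi/5) ~ 1.382.
n = 5
adj = np.zeros((n, n))
for i in range(n):
    adj[i][(i + 1) % n] = adj[(i + 1) % n][i] = 1

mu = algebraic_connectivity(adj)
print(round(mu, 6))
```

Maximizing this quantity over a class of regular graphs is the graph-theoretic face of E-optimality discussed in the thesis.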
Mars: Near-Optimal Throughput with Shallow Buffers in Reconfigurable Datacenter Networks
The performance of large-scale computing systems often critically depends on
high-performance communication networks. Dynamically reconfigurable topologies,
e.g., based on optical circuit switches, are emerging as an innovative new
technology to deal with the explosive growth of datacenter traffic.
Specifically, periodic reconfigurable datacenter networks (RDCNs) such as
RotorNet (SIGCOMM 2017), Opera (NSDI 2020) and Sirius (SIGCOMM 2020) have been
shown to provide high throughput, by emulating a complete graph through fast
periodic circuit switch scheduling.
However, to achieve such a high throughput, existing reconfigurable network
designs pay a high price: in terms of potentially high delays, but also, as we
show as a first contribution in this paper, in terms of the high buffer
requirements. In particular, we show that under buffer constraints, emulating
the high-throughput complete graph is infeasible at scale, and we uncover a
spectrum of unexplored and attractive alternative RDCN designs, which emulate regular
graphs of lower node degree.
We present Mars, a periodic reconfigurable topology which emulates a
d-regular graph with near-optimal throughput. In particular, we
systematically analyze how the degree d can be optimized for throughput given
the available buffer and delay tolerance of the datacenter.
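The round-robin emulation idea behind such periodic RDCNs can be sketched in a few lines; this toy model (the function names and parameters are ours, not the paper's) cycles n nodes through edge-disjoint perfect matchings, so that the union of the first d matchings seen over one period forms a d-regular graph:

```python
def round_robin_matchings(n):
    """Perfect matchings M_1..M_{n-1} decomposing K_n (n even), via the
    classic circle method: fix node 0, rotate the others each round."""
    assert n % 2 == 0
    others = list(range(1, n))
    rounds = []
    for _ in range(n - 1):
        ring = [0] + others
        rounds.append([(ring[i], ring[n - 1 - i]) for i in range(n // 2)])
        others = others[-1:] + others[:-1]  # rotate by one position
    return rounds

def emulated_degree(n, d):
    """Union of the first d matchings: each node accumulates d distinct
    neighbours, i.e. one period emulates a d-regular graph."""
    neighbours = {v: set() for v in range(n)}
    for matching in round_robin_matchings(n)[:d]:
        for u, v in matching:
            neighbours[u].add(v)
            neighbours[v].add(u)
    return {v: len(s) for v, s in neighbours.items()}

print(emulated_degree(8, 3))  # every node ends up with degree 3
```

Taking d = n − 1 recovers the complete-graph emulation of RotorNet-style designs; the paper's point is that smaller d trades path length for buffer and delay headroom.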
A Comprehensive Methodology for Algorithm Characterization, Regularization and Mapping Into Optimal VLSI Arrays.
This dissertation provides a fairly comprehensive treatment of a broad class of algorithms as it pertains to systolic implementation. We describe formal algorithmic transformations that can be used to map regular, and some irregular, compute-bound algorithms into best-fit time-optimal systolic architectures. The resulting architectures can be one-dimensional, two-dimensional, three-dimensional or nonplanar.

The methodology detailed in the dissertation employs, like other methods, the concept of the dependence vector to order, in space and time, the index points representing the algorithm. However, by differentiating between two types of dependence vectors, the ordering procedure is allowed to be both flexible and time-optimal. Furthermore, unlike other methodologies, the approach reported here places no constraints on the topology or dimensionality of the target architecture.

The ordered index points are represented by nodes in a diagram called the Systolic Precedence Diagram (SPD). The SPD is a form of precedence graph that takes into account the systolic operation requirements of strictly local communications and regular data flow. Therefore, any algorithm with variable dependence vectors has to be transformed into a regularly indexed set of computations with local dependencies; this can be done by replacing variable dependence vectors with sets of fixed dependence vectors. The SPD is then transformed into an acyclic, labeled, directed graph called the Systolic Directed Graph (SDG). The SDG models the data flow as well as the timing for the execution of the given algorithm on a time-optimal array. The target architectures are obtained by projecting the SDG along defined directions; if more than one valid projection direction exists, different designs are obtained. The resulting architectures are then evaluated to determine whether an improvement in performance can be achieved by increasing PE fan-out.
If so, the methodology provides the corresponding systolic implementation. By employing a new graph transformation, the SDG is manipulated so that it can be mapped into fixed-size and fixed-depth multi-linear arrays. The latter is a new concept of systolic arrays that is adaptable to changes in the state of technology. It promises a bounded clock skew, higher throughput and better performance than the linear implementation.
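The projection step described above — ordering index points with a schedule and collapsing the dependence graph along a direction — can be illustrated on the standard matrix-multiply index space; the schedule and projection vectors below are textbook choices for linear space-time mapping, not taken from the dissertation:

```python
import numpy as np

def space_time_map(index_points, schedule, projection_basis):
    """Linear space-time mapping: node p executes at time t = schedule . p
    and on the PE obtained by projecting p onto the rows of projection_basis."""
    mapping = {}
    for p in index_points:
        t = int(np.dot(schedule, p))
        pe = tuple(int(np.dot(row, p)) for row in projection_basis)
        mapping[tuple(p)] = (t, pe)
    return mapping

# 3-D index space of C[i][j] += A[i][k] * B[k][j], N = 3
N = 3
points = [(i, j, k) for i in range(N) for j in range(N) for k in range(N)]

schedule = np.array([1, 1, 1])        # t = i + j + k: every dependence vector
                                      # (1,0,0), (0,1,0), (0,0,1) advances time
projection = [np.array([1, 0, 0]),    # project along k: PE = (i, j),
              np.array([0, 1, 0])]    # giving a 2-D N x N systolic array

m = space_time_map(points, schedule, projection)
# Validity check: no two nodes may share both a PE and a time step.
slots = list(m.values())
assert len(slots) == len(set(slots))
```

Choosing a different projection direction (say, along i instead of k) yields a different valid array, mirroring the abstract's remark that each valid projection direction produces a distinct design.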
Group Testing with Random Pools: Phase Transitions and Optimal Strategy
The problem of Group Testing is to identify defective items out of a set of
objects by means of pool queries of the form "Does the pool contain at least
one defective?". The aim is of course to perform detection with the fewest possible
queries, a problem which has relevant practical applications in different
fields including molecular biology and computer science. Here we study GT in
the probabilistic setting, focusing on the regime of small defective probability
and a large number of objects. We construct and
analyze one-stage algorithms for which we establish the occurrence of a
non-detection/detection phase transition resulting in a sharp threshold for the number of tests. By optimizing the pool design we construct
algorithms whose detection threshold follows the optimal scaling. Then we consider two-stage algorithms and analyze their
performance for different choices of the first stage pools. In particular, via
a proper random choice of the pools, we construct algorithms which attain the
optimal value (previously determined in Ref. [16]) for the mean number of tests
required for complete detection. We finally discuss the optimal pool design in
the case of finite p.
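The one-stage random-pool scheme analysed above can be simulated directly; the sketch below is a toy model (the pool sizes and counts are illustrative, not the paper's optimal design) in which each pool draws items at random and any item appearing in a pool that tests negative is cleared:

```python
import random

def run_group_testing(n, p, num_pools, pool_size, rng):
    """One-stage GT with random pools: returns (true defectives,
    items that no negative pool managed to rule out)."""
    defective = {i for i in range(n) if rng.random() < p}
    candidates = set(range(n))
    for _ in range(num_pools):
        pool = set(rng.sample(range(n), pool_size))
        if not (pool & defective):   # pool tests negative...
            candidates -= pool       # ...so everyone in it is cleared
    return defective, candidates

rng = random.Random(0)
defective, candidates = run_group_testing(
    n=1000, p=0.01, num_pools=200, pool_size=50, rng=rng)

# Negative pools never clear a defective, so detection is one-sided:
assert defective <= candidates
```

Sweeping `num_pools` in such a simulation is one way to observe empirically the sharp non-detection/detection threshold that the paper establishes analytically.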