
    On Tree-Partition-Width

    A \emph{tree-partition} of a graph $G$ is a proper partition of its vertex set into `bags', such that identifying the vertices in each bag produces a forest. The \emph{tree-partition-width} of $G$ is the minimum, taken over all tree-partitions of $G$, of the maximum number of vertices in a bag. An anonymous referee of the paper by Ding and Oporowski [\emph{J. Graph Theory}, 1995] proved that every graph with tree-width $k\geq 3$ and maximum degree $\Delta\geq 1$ has tree-partition-width at most $24k\Delta$. We prove that this bound is within a constant factor of optimal. In particular, for all $k\geq 3$ and for all sufficiently large $\Delta$, we construct a graph with tree-width $k$, maximum degree $\Delta$, and tree-partition-width at least $(\tfrac{1}{8}-\epsilon)k\Delta$. Moreover, we slightly improve the upper bound to $\tfrac{5}{2}(k+1)(\tfrac{7}{2}\Delta-1)$ without the restriction that $k\geq 3$.
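
    As a concrete illustration of these definitions (not taken from the paper), the following sketch uses networkx and a toy 6-cycle: it checks that a given partition of the vertex set is a tree-partition by contracting each bag and testing that the quotient graph is a forest, and reports the width of that particular partition.

```python
# Hypothetical illustration of the tree-partition definition; the graph and
# bags below are toy choices, not a construction from the paper.
import networkx as nx

def is_tree_partition(G, bags):
    """bags: list of disjoint vertex sets covering V(G)."""
    bag_of = {v: i for i, B in enumerate(bags) for v in B}
    assert len(bag_of) == sum(map(len, bags)) == G.number_of_nodes()
    # Quotient graph: one node per bag, an edge whenever some edge of G
    # joins two different bags.
    Q = nx.Graph()
    Q.add_nodes_from(range(len(bags)))
    for u, v in G.edges():
        if bag_of[u] != bag_of[v]:
            Q.add_edge(bag_of[u], bag_of[v])
    return nx.is_forest(Q)

def width(bags):
    return max(len(B) for B in bags)

G = nx.cycle_graph(6)                  # C_6: tree-width 2, maximum degree 2
bags = [{0, 5}, {1, 4}, {2, 3}]        # contracting each bag gives a path
print(is_tree_partition(G, bags), width(bags))   # True 2
```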

    Tree-Partitions with Small Bounded Degree Trees

    A "tree-partition" of a graph GG is a partition of V(G)V(G) such that identifying the vertices in each part gives a tree. It is known that every graph with treewidth kk and maximum degree Δ\Delta has a tree-partition with parts of size O(kΔ)O(k\Delta). We prove the same result with the extra property that the underlying tree has maximum degree O(Δ)O(\Delta) and O(V(G)/k)O(|V(G)|/k) vertices

    Speeding up neighborhood search in local Gaussian process prediction

    Recent implementations of local approximate Gaussian process models have pushed computational boundaries for non-linear, non-parametric prediction problems, particularly when deployed as emulators for computer experiments. Their flavor of spatially independent computation accommodates massive parallelization, meaning that they can handle designs two or more orders of magnitude larger than previously. However, accomplishing that feat can still require massive supercomputing resources. Here we aim to ease that burden. We study how predictive variance is reduced as local designs are built up for prediction. We then observe how the exhaustive and discrete nature of an important search subroutine involved in building such local designs may be overly conservative. Rather, we suggest that searching the space radially, i.e., continuously along rays emanating from the predictive location of interest, is a far thriftier alternative. Our empirical work demonstrates that ray-based search yields predictors with accuracy comparable to exhaustive search, but in a fraction of the time, bringing a supercomputer implementation back onto the desktop.
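
    The gap between exhaustive and ray-based candidate search can be sketched in a few lines. The following numpy illustration makes several simplifying assumptions (a squared-exponential kernel, a noiseless GP, a single greedy step scored by reduction in predictive variance, and randomly generated design points); it conveys the idea only and is not the paper's laGP implementation.

```python
# Toy comparison of exhaustive vs. ray-based candidate search for one greedy
# design-augmentation step of a local Gaussian process (illustrative only).
import numpy as np

def kern(A, B, ls=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def var_reduction(x, X, Kinv, cand):
    """Drop in GP predictive variance at x when candidate cand joins the design."""
    kxX, kcX = kern(x[None], X)[0], kern(cand[None], X)[0]
    kxc = kern(x[None], cand[None])[0, 0]
    num = kxc - kxX @ Kinv @ kcX
    den = 1.0 - kcX @ Kinv @ kcX          # k(c, c) = 1 for this kernel
    return num ** 2 / max(den, 1e-12)

rng = np.random.default_rng(0)
X = rng.uniform(size=(30, 2))              # current local design
x = np.array([0.5, 0.5])                   # predictive location of interest
Kinv = np.linalg.inv(kern(X, X) + 1e-8 * np.eye(len(X)))

# Exhaustive search over a dense discrete candidate set.
cands = rng.uniform(size=(2000, 2))
best_exh = cands[np.argmax([var_reduction(x, X, Kinv, c) for c in cands])]

# Ray search: score far fewer candidates, along rays emanating from x.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
radii = np.linspace(0.05, 0.7, 25)
rays = np.array([x + r * np.array([np.cos(a), np.sin(a)])
                 for a in angles for r in radii])
best_ray = rays[np.argmax([var_reduction(x, X, Kinv, c) for c in rays])]
print(best_exh, best_ray)
```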

    Product structure of graph classes with strongly sublinear separators

    We investigate the product structure of hereditary graph classes admitting strongly sublinear separators. We characterise such classes as subgraphs of the strong product of a star and a complete graph of strongly sublinear size. In a more precise result, we show that if any hereditary graph class $\mathcal{G}$ admits $O(n^{1-\epsilon})$ separators, then for any fixed $\delta\in(0,\epsilon)$ every $n$-vertex graph in $\mathcal{G}$ is a subgraph of the strong product of a graph $H$ with bounded tree-depth and a complete graph of size $O(n^{1-\epsilon+\delta})$. This result holds with $\delta=0$ if we allow $H$ to have tree-depth $O(\log\log n)$. Moreover, using extensions of classical isoperimetric inequalities for grid graphs, we show that the dependence on $\delta$ in our results and the above $\text{td}(H)\in O(\log\log n)$ bound are both best possible. We prove that $n$-vertex graphs of bounded treewidth are subgraphs of the product of a graph with tree-depth $t$ and a complete graph of size $O(n^{1/t})$, which is best possible. Finally, we investigate the conjecture that for any hereditary graph class $\mathcal{G}$ that admits $O(n^{1-\epsilon})$ separators, every $n$-vertex graph in $\mathcal{G}$ is a subgraph of the strong product of a graph $H$ with bounded tree-width and a complete graph of size $O(n^{1-\epsilon})$. We prove this for various classes $\mathcal{G}$ of interest.
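
    The strong products in these statements are easy to experiment with in networkx. The toy sketch below (graphs and sizes chosen only for illustration, not the paper's construction) builds the strong product of a star, which has tree-depth 2, with a small complete graph, and checks that a 6-cycle embeds in it as a subgraph.

```python
# Toy illustration of a strong product host graph; not the paper's construction.
import networkx as nx

H = nx.star_graph(3)              # centre 0 with three leaves; tree-depth 2
K = nx.complete_graph(3)          # stands in for the "complete graph" factor
host = nx.strong_product(H, K)    # vertices are pairs (h, k)

pattern = nx.cycle_graph(6)       # a small graph to embed in the product
gm = nx.isomorphism.GraphMatcher(host, pattern)
print(host.number_of_nodes(), gm.subgraph_is_monomorphic())   # 12 True
```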

    Volumetric Benchmarking of Error Mitigation with Qermit

    The detrimental effect of noise accumulates as quantum computers grow in size. In the case where devices are too small or noisy to perform error correction, error mitigation may be used. Error mitigation does not increase the fidelity of quantum states, but instead aims to reduce the approximation error in quantities of concern, such as expectation values of observables. However, it is as yet unclear which circuit types, and devices of which characteristics, benefit most from the use of error mitigation. Here we develop a methodology to assess the performance of quantum error mitigation techniques. Our benchmarks are volumetric in design, and are performed on different superconducting hardware devices. Extensive classical simulations are also used for comparison. We use these benchmarks to identify disconnects between the predicted and practical performance of error mitigation protocols, and to identify the situations in which their use is beneficial. To perform these experiments, and for the benefit of the wider community, we introduce Qermit, an open-source Python package for quantum error mitigation. Qermit supports a wide range of error mitigation methods, is easily extensible, and has a modular graph-based software design that facilitates composition of error mitigation protocols and subroutines.
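
    As a rough illustration of what mitigating an expectation value looks like, the sketch below performs zero-noise extrapolation, one widely used technique of the kind such packages support. The "measured" values are made up for illustration, the code is plain numpy, and nothing here is Qermit's API.

```python
# Library-agnostic sketch of zero-noise extrapolation (illustrative data only).
import numpy as np

# Noise-scale factors (e.g. obtained by folding gates) and the expectation
# values of some observable measured at each scale; assume the ideal value is 1.0.
scales = np.array([1.0, 2.0, 3.0])
measured = np.array([0.82, 0.68, 0.57])   # hypothetical noisy measurements

# Richardson-style extrapolation: fit a low-degree polynomial in the noise
# scale and evaluate it at zero noise.
coeffs = np.polyfit(scales, measured, deg=2)
mitigated = np.polyval(coeffs, 0.0)
print(f"mitigated estimate at zero noise: {mitigated:.3f}")
```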

    Efficient Out-of-Core Algorithms for Linear Relaxation Using Blocking Covers

    When a numerical computation fails to fit in the primary memory of a serial or parallel computer, a so-called “out-of-core” algorithm, which moves data between primary and secondary memories, must be used. In this paper, we study out-of-core algorithms for sparse linear relaxation problems in which each iteration of the algorithm updates the state of every vertex in a graph with a linear combination of the states of its neighbors. We give a general method that can save substantially on the I/O traffic for many problems. For example, our technique allows a computer with $M$ words of primary memory to perform $T=\Omega(M^{1/5})$ cycles of a multigrid algorithm for a two-dimensional elliptic solver over an $n$-point domain using only $\Theta(nT/M^{1/5})$ I/O transfers, as compared with the naive algorithm, which requires $\Omega(nT)$ I/Os. Our method depends on the existence of a “blocking” cover of the graph that underlies the linear relaxation. A blocking cover has the property that the subgraphs forming the cover have large diameters once a small number of vertices have been removed. The key idea in our method is to introduce a variable for each removed vertex for each time step of the algorithm. We maintain linear dependences among the removed vertices, thereby allowing each subgraph to be iteratively relaxed without external communication. We give a general theorem relating blocking covers to I/O-efficient relaxation schemes. We also give an automatic method for finding blocking covers for certain classes of graphs, including planar graphs and $d$-dimensional simplicial graphs with constant aspect ratio (i.e., graphs that arise from dividing $d$-space into “well-shaped” polyhedra). As a result, we can perform $T$ iterations of linear relaxation on any $n$-vertex planar graph using only $\Theta(n+nT\lg n/M^{1/4})$ I/Os or on any $n$-node $d$-dimensional simplicial graph with constant aspect ratio using only $\Theta(n+nT\lg n/M^{\Omega(1/d)})$ I/Os.
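
    The relaxation step being studied is simple to state in code. The sketch below is a purely in-memory toy version of such a linear relaxation (a Jacobi-style averaging sweep on a small path graph); it shows only the per-iteration update in which every vertex state becomes a linear combination of its neighbours' states, and none of the blocking-cover machinery that makes the computation I/O-efficient out of core.

```python
# In-memory toy of linear relaxation on a graph; the out-of-core blocking-cover
# technique from the paper is not implemented here.
import numpy as np

def relax(adj, state, T, alpha=0.5):
    """T sweeps of x <- (1 - alpha) * x + alpha * (average of neighbour states)."""
    deg = adj.sum(axis=1, keepdims=True)
    for _ in range(T):                           # naive schedule: every vertex
        avg = adj @ state / np.maximum(deg, 1)   # is touched once per iteration
        state = (1 - alpha) * state + alpha * avg
    return state

# Toy example: an 8-vertex path graph with a random initial state.
n = 8
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
state = np.random.default_rng(1).normal(size=(n, 1))
print(relax(adj, state, T=10).ravel())
```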

    Foundational principles for large scale inference: Illustrations through correlation mining

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number $n$ of acquired samples (statistical replicates) is far smaller than the number $p$ of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data." Sample complexity, however, has received relatively less attention, especially in the setting when the sample size $n$ is fixed and the dimension $p$ grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
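
    The sample-starved regime described here can be illustrated in a few lines of numpy: with the sample size $n$ held fixed and the dimension $p$ large, thresholding the $p \times p$ matrix of pairwise sample correlations yields many spurious discoveries even when every variable is independent. The data below are simulated purely for illustration.

```python
# Simulated illustration of correlation mining when n is small and p is large.
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 2000                          # n << p: purely high dimensional regime
X = rng.standard_normal((n, p))          # null model: all variables independent

R = np.corrcoef(X, rowvar=False)         # p x p matrix of pairwise correlations
np.fill_diagonal(R, 0.0)

# "Discoveries" at a fixed correlation threshold: under the null these are all
# false positives, and their number grows with p while n stays fixed.
rho = 0.6
i, j = np.where(np.triu(np.abs(R) > rho, k=1))
print(f"{len(i)} variable pairs exceed |correlation| = {rho} with n = {n}, p = {p}")
```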