
    Optimal covers with Hamilton cycles in random graphs

    A packing of a graph G with Hamilton cycles is a set of edge-disjoint Hamilton cycles in G. Such packings have been studied intensively, and recent results imply that a largest packing of Hamilton cycles in G_{n,p} a.a.s. has size \lfloor \delta(G_{n,p})/2 \rfloor. Glebov, Krivelevich and Szabó recently initiated research on the `dual' problem, where one asks for a set of Hamilton cycles covering all edges of G. Our main result states that for \log^{117}n/n < p < 1-n^{-1/8}, a.a.s. the edges of G_{n,p} can be covered by \lceil \Delta(G_{n,p})/2 \rceil Hamilton cycles. This is clearly optimal and improves an approximate result of Glebov, Krivelevich and Szabó, which holds for p > n^{-1+\varepsilon}. Our proof is based on a result of Knox, Kühn and Osthus on packing Hamilton cycles in pseudorandom graphs. Comment: final version of paper (to appear in Combinatorica).
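
    To see why this is optimal: a Hamilton cycle contains exactly two edges at every vertex, so any cover must use at least \lceil \Delta/2 \rceil cycles just to cover the edges at a vertex of maximum degree. A one-line version of this counting argument (the derivation is mine; only the theorem above is from the paper):

```latex
% t Hamilton cycles H_1,\dots,H_t covering G_{n,p} must cover all
% \Delta(G_{n,p}) edges at a maximum-degree vertex, two per cycle:
\Delta(G_{n,p}) \le 2t
\quad\Longrightarrow\quad
t \ge \left\lceil \tfrac{\Delta(G_{n,p})}{2} \right\rceil .
```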

    On-line Ramsey numbers of paths and cycles

    Consider a game played on the edge set of the infinite clique by two players, Builder and Painter. In each round, Builder chooses an edge and Painter colours it red or blue. Builder wins by creating either a red copy of G or a blue copy of H for some fixed graphs G and H. The minimum number of rounds within which Builder can win, assuming both players play perfectly, is the on-line Ramsey number \tilde{r}(G,H). In this paper, we consider the case where G is a path P_k. We prove that \tilde{r}(P_3, P_{\ell+1}) = \lceil 5\ell/4 \rceil = \tilde{r}(P_3, C_\ell) for all \ell \ge 5, and determine \tilde{r}(P_4, P_{\ell+1}) up to an additive constant for all \ell \ge 3. We also prove some general lower bounds for on-line Ramsey numbers of the form \tilde{r}(P_{k+1}, H). Comment: Preprint.
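
    For readers who want to experiment with the definition, here is a brute-force minimax sketch of the Builder/Painter game. It is not the authors' method: the finite vertex pool POOL, the round budget, and the path-detection routine are simplifications of mine, usable only for tiny cases.

```python
# Brute-force sketch of the Builder/Painter game for tiny cases, restricted
# to a finite vertex pool and a bounded number of rounds.
from functools import lru_cache
from itertools import combinations, permutations

POOL = 5  # hypothetical small vertex pool; the real game has infinitely many vertices

def has_path(edges, k):
    """Brute-force check for a path on k vertices inside the given edge set."""
    verts = set(v for e in edges for v in e)
    for seq in permutations(verts, k):
        if all(frozenset((seq[i], seq[i + 1])) in edges for i in range(k - 1)):
            return True
    return False

@lru_cache(maxsize=None)
def value(red, blue, k, l, budget):
    """Fewest further rounds in which Builder can force a red P_k or a blue
    P_l against optimal Painter play, or None if impossible within budget."""
    if has_path(red, k) or has_path(blue, l):
        return 0
    if budget == 0:
        return None
    best = None
    for e in map(frozenset, combinations(range(POOL), 2)):
        if e in red or e in blue:
            continue
        # Painter answers with whichever colour is worse for Builder.
        r = value(red | {e}, blue, k, l, budget - 1)
        b = value(red, blue | {e}, k, l, budget - 1)
        if r is None or b is None:
            continue  # Painter escapes the budget on this edge
        worst = 1 + max(r, b)
        best = worst if best is None else min(best, worst)
    return best

# e.g. value(frozenset(), frozenset(), 3, 3, 6) searches for the on-line
# Ramsey number of two paths on 3 vertices, restricted to a 5-vertex pool
# and at most 6 rounds.
```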

    Approximately counting and sampling small witnesses using a colourful decision oracle

    In this paper, we prove "black box" results for turning algorithms that decide whether or not a witness exists into algorithms to approximately count the number of witnesses, or to sample from the set of witnesses approximately uniformly, with essentially the same running time. We do so by extending the framework of Dell and Lapinskas (STOC 2018), which covers decision problems that can be expressed as edge detection in bipartite graphs given limited oracle access; our framework covers problems that can be expressed as edge detection in arbitrary k-hypergraphs given limited oracle access. (Simulating this oracle generally corresponds to invoking a decision algorithm.) This includes many key problems in both the fine-grained setting (such as k-SUM, k-OV and weighted k-Clique) and the parameterised setting (such as induced subgraphs of size k or weight-k solutions to CSPs). From an algorithmic standpoint, our results will make the development of new approximate counting algorithms substantially easier; indeed, the framework already yields a new state-of-the-art algorithm for approximately counting graph motifs, improving on Jerrum and Meeks (JCSS 2015) unless the input graph is very dense and the desired motif very small. Our k-hypergraph reduction framework generalises and strengthens results in the graph oracle literature due to Beame et al. (ITCS 2018) and Bhattacharya et al. (CoRR abs/1808.00691).
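
    To make the oracle model concrete, here is a classical search-to-decision sketch in Python: it extracts a single witness from a hidden k-uniform hypergraph using only a "does this vertex set contain an edge?" oracle. This greedy trick is far weaker than the paper's framework, which achieves approximate counting and near-uniform sampling with comparable running time; the hidden hypergraph and helper names here are hypothetical.

```python
# Extract one witness (one edge of a hidden k-uniform hypergraph) using only
# a decision oracle, by greedily deleting vertices while the oracle stays true.
# The surviving minimal vertex set must be exactly one edge.

def find_one_witness(vertices, oracle):
    """Return one edge, assuming oracle(S) reports whether the induced
    sub-hypergraph on S contains at least one edge."""
    if not oracle(frozenset(vertices)):
        return None  # no witnesses at all
    s = set(vertices)
    for v in list(s):
        if oracle(frozenset(s - {v})):
            s.remove(v)  # v is not needed to keep a witness inside s
    return frozenset(s)  # a minimal witness-containing set is a single edge

# Hypothetical hidden 3-uniform hypergraph, used only for the demo:
hidden_edges = {frozenset({0, 1, 2}), frozenset({2, 3, 4})}
oracle = lambda s: any(e <= s for e in hidden_edges)
print(find_one_witness(range(5), oracle))  # prints one of the hidden edges
```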

    Nearly optimal independence oracle algorithms for edge estimation in hypergraphs

    We study a query model of computation in which an n-vertex k-hypergraph can be accessed only via its independence oracle or via its colourful independence oracle, and each oracle query may incur a cost depending on the size of the query. In each of these models, we obtain oracle algorithms to approximately count the hypergraph's edges, and we unconditionally prove that no oracle algorithm for this problem can have significantly smaller worst-case oracle cost than our algorithms.
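
    As a toy illustration of the independence-oracle model (not the paper's algorithm, which is far more query-efficient): for a k-uniform hypergraph, a k-element vertex set is an edge exactly when it is not independent, so uniformly sampling k-sets and querying the oracle gives an unbiased edge-count estimator.

```python
# Naive edge-count estimation with an independence oracle. This sketch
# ignores query cost and variance, which is precisely what the paper's
# near-optimal algorithms manage carefully.
import random
from math import comb

def estimate_edges(n, k, is_independent, samples=100_000):
    """Estimate the edge count of a hidden k-uniform hypergraph on
    {0,...,n-1}, given only its independence oracle."""
    hits = sum(
        1
        for _ in range(samples)
        if not is_independent(frozenset(random.sample(range(n), k)))
    )
    # E[hits/samples] = m / C(n,k), so rescale to get an unbiased estimate.
    return comb(n, k) * hits / samples

# Hypothetical hidden hypergraph for the demo:
edges = {frozenset({0, 1, 2}), frozenset({1, 2, 3}), frozenset({4, 5, 6})}
is_independent = lambda s: not any(e <= s for e in edges)
print(estimate_edges(7, 3, is_independent))  # should be close to 3
```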

    Packings and coverings with Hamilton cycles and on-line Ramsey theory

    A major theme in modern graph theory is the exploration of maximal packings and minimal covers of graphs with subgraphs in some given family. We focus on packings and coverings with Hamilton cycles, and prove the following results in the area.
    • Let ε > 0, and let G be a large graph on n vertices with minimum degree at least (1/2 + ε)n. We give a tight lower bound on the size of a maximal packing of G with edge-disjoint Hamilton cycles.
    • Let T be a strongly k-connected tournament. We give an almost tight lower bound on the size of a maximal packing of T with edge-disjoint Hamilton cycles.
    • Let log^{117} n/n ≤ p ≤ 1-n^{-1/8}. We prove that G_{n,p} may a.a.s. be covered by a set of ⌈Δ(G_{n,p})/2⌉ Hamilton cycles, which is clearly best possible.
    In addition, we consider some problems in on-line Ramsey theory. Let r(G,H) denote the on-line Ramsey number of G and H. We conjecture the exact values of r(P_k, P_ℓ) for all k ≤ ℓ. We prove this conjecture for k = 2, prove it to within an additive error of 10 for k = 3, and prove an asymptotically tight lower bound for k = 4. We also determine r(P_3, C_ℓ) exactly for all ℓ.

    Approximately counting locally-optimal structures

    A locally-optimal structure is a combinatorial structure such as a maximal independent set that cannot be improved by certain (greedy) local moves, even though it may not be globally optimal. It is trivial to construct an independent set in a graph. It is easy to (greedily) construct a maximal independent set. However, it is NP-hard to construct a globally-optimal (maximum) independent set. In general, constructing a locally-optimal structure is somewhat more difficult than constructing an arbitrary structure, and constructing a globally-optimal structure is more difficult than constructing a locally-optimal structure. The same situation arises with listing. The differences between the problems become obscured when we move from listing to counting because nearly everything is #P-complete. However, we highlight an interesting phenomenon that arises in approximate counting, where the situation is apparently reversed. Specifically, we show that counting maximal independent sets is complete for #P with respect to approximation-preserving reductions, whereas counting all independent sets, or counting maximum independent sets, is complete for an apparently smaller class, \mathrm{\#RH}\Pi_1, which has a prominent role in the complexity of approximate counting. Motivated by the difficulty of approximately counting maximal independent sets in bipartite graphs, we also study the problem of approximately counting other locally-optimal structures that arise in algorithmic applications, particularly problems involving minimal separators and minimal edge separators. Minimal separators have applications via fixed-parameter-tractable algorithms for constructing triangulations and phylogenetic trees. Although exact (exponential-time) algorithms exist for listing these structures, we show that the counting problems are #P-complete with respect to both exact and approximation-preserving reductions. Comment: Accepted to JCSS, preliminary version accepted to ICALP 2015 (Track A).
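
    The distinction between the three counting problems is easy to see on small examples. A brute-force sketch (the example star graph is my own choice): a star with centre 0 has maximal independent sets {0} and {1,2,3}, but only the latter is maximum.

```python
# Brute-force comparison of the three counting problems discussed above:
# all independent sets, maximum independent sets, maximal independent sets.
from itertools import combinations

def independent_sets(n, edges):
    """Yield every independent set of the graph ({0,...,n-1}, edges)."""
    for r in range(n + 1):
        for s in combinations(range(n), r):
            ss = set(s)
            if not any(u in ss and v in ss for u, v in edges):
                yield ss

def classify(n, edges):
    all_is = list(independent_sets(n, edges))
    max_size = max(len(s) for s in all_is)
    maximum = [s for s in all_is if len(s) == max_size]
    # Maximal = no vertex can be added while staying independent.
    maximal = [
        s for s in all_is
        if all(any((u in s and v == w) or (v in s and u == w) for u, v in edges)
               for w in range(n) if w not in s)
    ]
    return len(all_is), len(maximum), len(maximal)

# Star K_{1,3} with centre 0: counts are (9, 1, 2).
print(classify(4, [(0, 1), (0, 2), (0, 3)]))
```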

    Four universal growth regimes in degree-dependent first passage percolation on spatial random graphs I

    One-dependent first passage percolation is a spreading process on a graph where the transmission time through each edge depends on the direct surroundings of the edge. In particular, the classical iid transmission time L_{xy} is multiplied by (W_xW_y)^\mu, a polynomial of the expected degrees W_x, W_y of the endpoints of the edge xy, which we call the penalty function. Beyond the Markov case, we also allow any distribution for L_{xy} with regularly varying distribution near 0. We then run this process on three spatial scale-free random graph models: finite and infinite Geometric Inhomogeneous Random Graphs, and Scale-Free Percolation. In these spatial models, the connection probability between two vertices depends on their spatial distance and on their expected degrees. We show that as the penalty exponent \mu increases, the transmission time between two far away vertices sweeps through four universal phases: explosive (with tight transmission times), polylogarithmic, polynomial but strictly sublinear, and linear in the Euclidean distance. The strictly polynomial growth phase here is a new phenomenon that so far was extremely rare in spatial graph models. The four growth phases are highly robust in the model parameters and are not restricted to phase boundaries. Further, the transition points between the phases depend non-trivially on the main model parameters: the tail of the degree distribution, a long-range parameter governing the presence of long edges, and the behaviour of the distribution L near 0. In this paper we develop new methods to prove the upper bounds in all sub-explosive phases. Our companion paper complements these results by providing matching lower bounds in the polynomial and linear regimes. Comment: 78 pages.
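
    A minimal sketch of the process, assuming a toy random geometric graph in place of the GIRG and scale-free percolation models actually studied: each edge xy receives cost L_{xy}(W_xW_y)^\mu, and transmission times are shortest-path distances, computed here with Dijkstra's algorithm. Parameter names (mu, tau) follow the abstract; the graph construction itself is a stand-in.

```python
# Degree-penalised first passage percolation on a toy random geometric graph.
import heapq, math, random

random.seed(0)
n, radius, mu, tau = 400, 0.12, 0.5, 2.5
pos = [(random.random(), random.random()) for _ in range(n)]
# Pareto weights mimicking a power-law degree distribution with exponent tau.
W = [random.paretovariate(tau - 1) for _ in range(n)]

adj = {v: [] for v in range(n)}
for x in range(n):
    for y in range(x + 1, n):
        if math.dist(pos[x], pos[y]) < radius:  # toy connection rule
            L = random.expovariate(1.0)         # iid transmission time
            cost = L * (W[x] * W[y]) ** mu      # degree penalty
            adj[x].append((y, cost))
            adj[y].append((x, cost))

def transmission_time(src, dst):
    """Dijkstra over the penalised edge costs."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if v == dst:
            return d
        if d > dist.get(v, math.inf):
            continue
        for u, c in adj[v]:
            if d + c < dist.get(u, math.inf):
                dist[u] = d + c
                heapq.heappush(pq, (d + c, u))
    return math.inf  # dst not reachable

print(transmission_time(0, n - 1))
```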

    Amplifiers for the Moran Process

    The Moran process, as studied by Lieberman, Hauert, and Nowak, is a randomised algorithm modelling the spread of genetic mutations in populations. The algorithm runs on an underlying graph where individuals correspond to vertices. Initially, one vertex (chosen uniformly at random) possesses a mutation, with fitness r > 1. All other individuals have fitness 1. During each step of the algorithm, an individual is chosen with probability proportional to its fitness, and its state (mutant or nonmutant) is passed on to an out-neighbour which is chosen uniformly at random. If the underlying graph is strongly connected, then the algorithm will eventually reach fixation, in which all individuals are mutants, or extinction, in which no individuals are mutants. An infinite family of directed graphs is said to be strongly amplifying if, for every r > 1, the extinction probability tends to 0 as the number of vertices increases. A formal definition is provided in the article. Strong amplification is a rather surprising property—it means that in such graphs, the fixation probability of a uniformly placed initial mutant tends to 1 even though the initial mutant only has a fixed selective advantage of r > 1 (independently of n). The name “strongly amplifying” comes from the fact that this selective advantage is “amplified.” Strong amplifiers have received quite a bit of attention, and Lieberman et al. proposed two potentially strongly amplifying families—superstars and metafunnels. Heuristic arguments have been published, arguing that there are infinite families of superstars that are strongly amplifying. The same has been claimed for metafunnels. In this article, we give the first rigorous proof that there is an infinite family of directed graphs that is strongly amplifying. We call the graphs in the family “megastars.” When the algorithm is run on an n-vertex graph in this family, starting with a uniformly chosen mutant, the extinction probability is roughly n^(−1/2) (up to logarithmic factors). We prove that all infinite families of superstars and metafunnels have larger extinction probabilities (as a function of n). Finally, we prove that our analysis of megastars is fairly tight—there is no infinite family of megastars such that the Moran algorithm gives a smaller extinction probability (up to logarithmic factors). Also, we provide a counterexample which clarifies the literature concerning the isothermal theorem of Lieberman et al.
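
    The dynamics described above translate directly into a simulation. A sketch (the directed-cycle example at the end is my own choice; this naive simulation is far too slow to test megastars at scale):

```python
# Direct simulation of the Moran process: one uniformly random initial mutant
# with fitness r > 1; each step, an individual is chosen with probability
# proportional to fitness and overwrites a uniformly random out-neighbour.
import random

def moran(out_neighbours, r, rng=random):
    """Run to fixation (True) or extinction (False) on a strongly
    connected digraph given as a list of out-neighbour lists."""
    n = len(out_neighbours)
    mutant = [False] * n
    mutant[rng.randrange(n)] = True  # uniformly random initial mutant
    k = 1                            # current number of mutants
    while 0 < k < n:
        # Reproducing individual, chosen proportionally to fitness.
        if rng.random() * (k * r + (n - k)) < k * r:
            v = rng.choice([i for i in range(n) if mutant[i]])      # a mutant
        else:
            v = rng.choice([i for i in range(n) if not mutant[i]])  # a non-mutant
        u = rng.choice(out_neighbours[v])  # uniform out-neighbour is overwritten
        k += mutant[v] - mutant[u]
        mutant[u] = mutant[v]
    return k == n

# Estimate the fixation probability on a directed 6-cycle with r = 2.
cycle = [[(v + 1) % 6] for v in range(6)]
runs = 2000
print(sum(moran(cycle, 2.0) for _ in range(runs)) / runs)
```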

    Instability of backoff protocols with arbitrary arrival rates

    In contention resolution, multiple processors are trying to coordinate to send discrete messages through a shared channel with sharply limited communication. If two processors inadvertently send at the same time, the messages collide and are not transmitted successfully. An important case is acknowledgement-based contention resolution, in which processors cannot listen to the channel at all; all they know is whether or not their own messages have got through. This situation arises frequently in both networking and cloud computing. One particularly important example of an acknowledgement-based contention resolution protocol is binary exponential backoff. Variants of binary exponential backoff are used in both Ethernet and TCP/IP, and both Google Drive and AWS instruct their users to implement it to handle busy periods. In queueing models, where each processor has a queue of messages, stable acknowledgement-based protocols are already known (Håstad et al., SICOMP 1996). In queue-free models, where each processor has a single message but processors arrive randomly, it is widely conjectured that no stable acknowledgement-based protocols exist for any positive arrival rate of processors. Despite exciting recent results for full-sensing protocols, which assume greater listening capabilities of the processors (see e.g. Bender et al., STOC 2020, or Chen et al., PODC 2021), this foundational question remains open even for backoff protocols unless the arrival rate of processors is at least 0.42 (Goldberg et al., SICOMP 2004). We prove the conjecture for all backoff protocols outside of a tightly-constrained special case, and set out the remaining technical obstacles to a full proof.
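
    A slotted toy simulation of binary exponential backoff in the queue-free model described above (the arrival process, window rule, and slot structure are my modelling choices; the simulation demonstrates nothing about stability, it only lets one watch the backlog):

```python
# Queue-free slotted simulation of binary exponential backoff: each processor
# carries one message, new processors arrive at rate lam per slot, and after
# its i-th collision a processor waits uniformly in [0, 2^i) slots.
import math
import random

def rng_poisson(lam, rng):
    """Poisson sample via inversion (fine for small lam)."""
    u, k, p = rng.random(), 0, math.exp(-lam)
    cum = p
    while u > cum:
        k += 1
        p *= lam / k
        cum += p
    return k

def simulate(lam, slots, rng=random):
    backlog = []   # list of (next_attempt_slot, collisions_so_far)
    sizes = []
    for t in range(slots):
        for _ in range(rng_poisson(lam, rng)):   # new arrivals send immediately
            backlog.append((t, 0))
        senders = [i for i, (s, _) in enumerate(backlog) if s == t]
        if len(senders) == 1:
            backlog.pop(senders[0])              # success: message leaves
        elif len(senders) > 1:                   # collision: everyone backs off
            for i in senders:
                _, c = backlog[i]
                backlog[i] = (t + 1 + rng.randrange(2 ** (c + 1)), c + 1)
        sizes.append(len(backlog))
    return sizes

print(simulate(0.5, 20000)[-1])  # backlog after 20000 slots at arrival rate 0.5
```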

    Stopping explosion by penalising transmission to hubs in scale-free spatial random graphs

    We study the spread of information in finite and infinite inhomogeneous spatial random graphs. We assume that each edge has a transmission cost that is a product of an i.i.d. random variable L and a penalty factor: edges between vertices of expected degrees w_1 and w_2 are penalised by a factor of (w_1w_2)^\mu for all \mu > 0. We study this process for scale-free percolation, for (finite and infinite) Geometric Inhomogeneous Random Graphs, and for Hyperbolic Random Graphs, all with power-law degree distributions with exponent \tau > 1. For \tau < 3, we find a threshold behaviour, depending on how fast the cumulative distribution function of L decays at zero. If it decays at most polynomially with exponent smaller than (3-\tau)/(2\mu), then explosion happens, i.e., with positive probability we can reach infinitely many vertices with finite cost (for the infinite models), or reach a linear fraction of all vertices with bounded costs (for the finite models). On the other hand, if the cdf of L decays at zero at least polynomially with exponent larger than (3-\tau)/(2\mu), then no explosion happens. This behaviour is arguably a better representation of information spreading processes in social networks than the case without the penalising factor, in which explosion always happens unless the cdf of L is doubly exponentially flat around zero. Finally, we extend the results to other penalty functions, including arbitrary polynomials in w_1 and w_2. In some cases, the interesting phenomenon occurs that the model changes behaviour (from explosive to conservative and vice versa) when we reverse the roles of w_1 and w_2. Intuitively, this could correspond to reversing the flow of information: gathering information might take much longer than sending it out.
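
    A compact restatement of the dichotomy, paraphrasing the abstract (F_L denotes the cumulative distribution function of L, and c, C are constants I introduce for the statement):

```latex
% Explosion threshold for 1 < \tau < 3 and penalty (w_1 w_2)^\mu:
F_L(t) \ge c\,t^{\beta} \text{ near } 0 \text{ for some } \beta < \tfrac{3-\tau}{2\mu}
  \;\Longrightarrow\; \text{explosion};
\qquad
F_L(t) \le C\,t^{\beta} \text{ near } 0 \text{ for some } \beta > \tfrac{3-\tau}{2\mu}
  \;\Longrightarrow\; \text{no explosion}.
```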