
    Fast construction on a restricted budget

    We introduce a model of a controlled random graph process. In this model, the edges of the complete graph $K_n$ are ordered randomly and then revealed, one by one, to a player called Builder. He must decide, immediately and irrevocably, whether to purchase each observed edge. The observation time is bounded by a parameter $t$, and the total budget of purchased edges is bounded by a parameter $b$. Builder's goal is to devise a strategy that, with high probability, allows him to construct a graph of purchased edges possessing a target graph property $\mathcal{P}$, all within the limitations of observation time and total budget. We show the following: (a) Builder has a strategy to achieve minimum degree $k$ at the hitting time for this property by purchasing at most $c_k n$ edges for an explicit $c_k < k$; and a strategy to achieve it (slightly) after the threshold for minimum degree $k$ by purchasing at most $(1+\varepsilon)kn/2$ edges (which is optimal); (b) Builder has a strategy to create a Hamilton cycle if either $t \ge (1+\varepsilon)n\log n/2$ and $b \ge Cn$, or $t \ge Cn\log n$ and $b \ge (1+\varepsilon)n$, for some $C = C(\varepsilon)$; similar results hold for perfect matchings; (c) Builder has a strategy to create a copy of a given $k$-vertex tree if $t \ge b \gg \max\{(n/t)^{k-2}, 1\}$, and this is optimal; and (d) for $\ell = 2k+1$ or $\ell = 2k+2$, Builder has a strategy to create a copy of a cycle of length $\ell$ if $b \gg \max\{n^{k+2}/t^{k+1}, n/\sqrt{t}\}$, and this is optimal.
    Comment: 20 pages, 2 figures
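
    The minimum-degree result in (a) is easy to experiment with. Below is a minimal Python sketch of the Builder process under a naive greedy rule: buy a revealed edge whenever at least one endpoint still has purchased degree below $k$. Under this rule a vertex's purchased degree matches its revealed degree until it reaches $k$, so the greedy succeeds exactly at the hitting time, but it may spend up to $kn$ edges; it is a baseline for intuition only, not the paper's strategy, which brings the budget down to $c_k n$ with $c_k < k$.

    ```python
    import random
    from itertools import combinations

    def greedy_min_degree(n, k):
        """Builder simulation with a naive greedy rule: buy a revealed edge
        iff at least one endpoint still has purchased degree < k. Returns
        (purchased edges, observation time at which min degree k is reached).
        Illustrative baseline only, not the strategy from the paper."""
        order = list(combinations(range(n), 2))
        random.shuffle(order)                 # uniformly random edge order
        deg = [0] * n
        bought = []
        deficient = n                         # vertices with degree < k
        for time, (u, v) in enumerate(order, start=1):
            if deg[u] < k or deg[v] < k:      # immediate, irrevocable purchase
                for x in (u, v):
                    deg[x] += 1
                    if deg[x] == k:
                        deficient -= 1
                bought.append((u, v))
                if deficient == 0:
                    return bought, time       # min degree k reached
        return bought, len(order)

    edges, t = greedy_min_degree(n=1000, k=3)
    print(len(edges), t)                      # about c*n purchases with c <= k
    ```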

    Faster Random Walks By Rewiring Online Social Networks On-The-Fly

    Many online social networks feature restrictive web interfaces which only allow querying a user's local neighborhood. To enable analytics over such an online social network through its restrictive web interface, many recent efforts reuse existing Markov Chain Monte Carlo methods, such as random walks, to sample the social network and support analytics based on the samples. The problem with such an approach, however, is the large number of queries often required (i.e., a long "mixing time") for a random walk to reach the desired (stationary) sampling distribution. In this paper, we consider the novel problem of enabling a faster random walk over online social networks by "rewiring" the social network on the fly. Specifically, we develop the Modified TOpology (MTO)-Sampler which, using only information exposed by the restrictive web interface, constructs a "virtual" overlay topology of the social network while performing a random walk, and ensures that the random walk follows the modified overlay topology rather than the original one. We show that MTO-Sampler not only provably enhances the efficiency of sampling, but also achieves significant savings in query cost on real-world online social networks such as Google Plus and Epinions.
    Comment: 15 pages, 14 figures; technical report for ICDE 2013 paper; appendix has all the theorems' proofs
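
    For intuition, here is a minimal sketch of the kind of MCMC baseline the paper improves on: a Metropolis-Hastings random walk that samples nodes uniformly while issuing only local neighbor queries. The `neighbors` callback is a hypothetical stand-in for the restrictive web interface; the sketch does not implement MTO-Sampler's overlay rewiring.

    ```python
    import random

    def mh_uniform_node_sample(neighbors, start, steps):
        """Metropolis-Hastings random walk whose stationary distribution is
        uniform over nodes, using only local neighbor queries. neighbors(u)
        models the restrictive web interface: it returns the list of u's
        friends and nothing else. Standard baseline sampler, not the
        rewired walk of MTO-Sampler."""
        u = start
        nbrs_u = neighbors(u)
        for _ in range(steps):
            v = random.choice(nbrs_u)
            nbrs_v = neighbors(v)
            # accept the move with prob min(1, deg(u)/deg(v)); this corrects
            # the simple random walk's bias toward high-degree nodes
            if random.random() < min(1.0, len(nbrs_u) / len(nbrs_v)):
                u, nbrs_u = v, nbrs_v
        return u
    ```

    Each call to `neighbors` costs one interface query, so the walk's query cost grows with the number of steps taken; shortening the mixing time, as MTO-Sampler does by rewiring the topology, directly cuts that cost.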

    Recovering Sparse Signals Using Sparse Measurement Matrices in Compressed DNA Microarrays

    Microarrays (DNA, protein, etc.) are massively parallel affinity-based biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently being employed to test a multitude of targets in a single experiment. In conventional microarrays, each spot contains a large number of copies of a single probe designed to capture a single target and, hence, collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Typically, only a fraction of the total number of genes represented by the two samples is differentially expressed, and thus a vast number of probe spots may not provide any useful information. To this end, we propose an alternative design, the so-called compressed microarrays, wherein each spot contains copies of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots directly translate to significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and the smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. For sparse measurement matrices, we propose an algorithm that has significantly lower computational complexity than the widely used linear-programming-based methods, and that can also recover signals with less sparsity.
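
    As a toy illustration of the recovery step, the sketch below pools a few probes per spot into a sparse 0/1 measurement matrix and recovers a sparse differential-expression vector with standard orthogonal matching pursuit; this is a generic greedy decoder, not the paper's own lower-complexity algorithm, and all sizes and the pooling density are made-up illustrative values.

    ```python
    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily recover a k-sparse x with
        y ~= A @ x. Generic decoder used purely for illustration."""
        support = []
        residual = y.astype(float).copy()
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))  # best-correlated column
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    # toy compressed microarray: 25 spots test 60 candidate targets,
    # each spot pooling copies of a few different probes (sparse 0/1 rows)
    rng = np.random.default_rng(0)
    A = (rng.random((25, 60)) < 0.15).astype(float)
    x_true = np.zeros(60)
    x_true[rng.choice(60, size=3, replace=False)] = 1.0 + rng.random(3)
    x_hat = omp(A, A @ x_true, k=3)
    print(np.flatnonzero(x_true), np.flatnonzero(x_hat > 1e-6))
    ```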

    Deterministic Decremental Reachability, SCC, and Shortest Paths via Directed Expanders and Congestion Balancing

    Let $G = (V,E,w)$ be a weighted digraph subject to a sequence of adversarial edge deletions. In the decremental single-source reachability problem (SSR), we are given a fixed source $s$ and the goal is to maintain a data structure that can answer path queries $s \rightarrowtail v$ for any $v \in V$. In the more general single-source shortest paths (SSSP) problem the goal is to return an approximate shortest path to $v$, and in the SCC problem the goal is to maintain the strongly connected components of $G$ and to answer path queries within each component. All of these problems have been very actively studied over the past two decades, but all the fast algorithms are randomized and, more significantly, they can only answer path queries if they assume a weaker model: an oblivious adversary that is not adaptive and must fix the update sequence in advance. This assumption significantly limits the use of these data structures, most notably preventing them from being used as subroutines in static algorithms. All the above problems are notoriously difficult in the adaptive setting. In fact, the state of the art is still the Even-Shiloach tree, which dates back all the way to 1981 and achieves total update time $O(mn)$. We present the first algorithms to break through this barrier: 1) deterministic decremental SSR/SCC with total update time $mn^{2/3+o(1)}$; 2) deterministic decremental SSSP with total update time $n^{2+2/3+o(1)}$. To achieve these results, we develop two general techniques of broader interest for working with dynamic graphs: 1) a generalization of expander-based tools to dynamic directed graphs, and 2) a technique that we call congestion balancing, which provides a new method for maintaining flow under adversarial deletions. Using the second technique, we provide the first near-optimal algorithm for decremental bipartite matching.
    Comment: reuploaded with some generalizations of previous theorems
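
    For context, the Even-Shiloach baseline mentioned above can be sketched compactly. The following class maintains BFS levels from $s$ in a directed graph under edge deletions; levels only ever increase, which is what yields the classical $O(mn)$ total update time. This simplified sketch recomputes a vertex's best in-neighbor on demand and omits the bookkeeping needed for the amortized bound.

    ```python
    import heapq
    import math
    from collections import defaultdict

    class ESTree:
        """Even-Shiloach-style structure for decremental single-source
        reachability in a directed graph with unit lengths: dist[v] is the
        BFS level of v, and v is reachable from s iff dist[v] is finite."""

        def __init__(self, n, edges, s):
            self.n, self.s = n, s
            self.out, self.inn = defaultdict(set), defaultdict(set)
            for u, v in edges:
                self.out[u].add(v)
                self.inn[v].add(u)
            self.dist = [math.inf] * n
            self.dist[s] = 0
            frontier = [s]                       # plain BFS for initial levels
            while frontier:
                nxt = []
                for u in frontier:
                    for v in self.out[u]:
                        if self.dist[v] == math.inf:
                            self.dist[v] = self.dist[u] + 1
                            nxt.append(v)
                frontier = nxt

        def _best(self, v):
            """Smallest level v can currently have, given its in-neighbors."""
            if v == self.s:
                return 0
            if not self.inn[v]:
                return math.inf
            return 1 + min(self.dist[u] for u in self.inn[v])

        def delete_edge(self, u, v):
            self.out[u].discard(v)
            self.inn[v].discard(u)
            heap = [(self._best(v), v)]          # vertices whose level may rise
            while heap:
                _, w = heapq.heappop(heap)
                d = self._best(w)                # heap entries may be stale
                if d == self.dist[w]:
                    continue                     # level unchanged: settled
                self.dist[w] = d if d < self.n else math.inf
                for x in self.out[w]:            # successors may rise in turn
                    heapq.heappush(heap, (self._best(x), x))

        def reachable(self, v):
            return self.dist[v] != math.inf

    T = ESTree(4, [(0, 1), (1, 2), (2, 3), (0, 3)], s=0)
    T.delete_edge(0, 3)
    print(T.reachable(3))                        # True, via 0 -> 1 -> 2 -> 3
    ```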

    Factor-of-iid balanced orientation of non-amenable graphs

    We show that if a non-amenable, quasi-transitive, unimodular graph $G$ has all degrees even then it has a factor-of-iid balanced orientation, meaning each vertex has equal in- and outdegree. This result involves extending earlier spectral-theoretic results on Bernoulli shifts to the Bernoulli graphings of quasi-transitive, unimodular graphs. As a consequence, we also obtain that when $G$ is regular (of either odd or even degree) and bipartite, it has a factor-of-iid perfect matching. This generalizes a result of Lyons and Nazarov beyond transitive graphs.
    Comment: 24 pages, 1 figure. This is one of two papers replacing the shorter arXiv submission arXiv:2101.12577v1, "Factor of iid Schreier decoration of transitive graphs"
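
    A finite analogue may help make "balanced orientation" concrete: in a finite graph in which every vertex has even degree, orienting each edge along closed trails (Hierholzer's algorithm for Eulerian circuits) gives every vertex equal in- and out-degree. The sketch below is only this finite warm-up, not the factor-of-iid construction of the paper.

    ```python
    from collections import defaultdict

    def balanced_orientation(n, edges):
        """Orient every edge of a finite graph whose vertices all have even
        degree so that each vertex gets equal in- and out-degree, by walking
        edges along closed trails (Hierholzer's algorithm)."""
        adj = defaultdict(list)                  # vertex -> incident edge ids
        for i, (u, v) in enumerate(edges):
            adj[u].append(i)
            adj[v].append(i)
        used = [False] * len(edges)
        ptr = defaultdict(int)                   # next unexamined slot in adj[u]
        oriented = [None] * len(edges)

        for start in range(n):                   # handle every component
            stack = [start]
            while stack:
                u = stack[-1]
                while ptr[u] < len(adj[u]) and used[adj[u][ptr[u]]]:
                    ptr[u] += 1
                if ptr[u] == len(adj[u]):
                    stack.pop()                  # all edges at u consumed
                    continue
                eid = adj[u][ptr[u]]
                used[eid] = True
                a, b = edges[eid]
                v = b if a == u else a
                oriented[eid] = (u, v)           # orient along the trail: u -> v
                stack.append(v)
        return oriented

    # two triangles sharing vertex 0: degrees (4, 2, 2, 2, 2), all even
    print(balanced_orientation(5, [(0,1), (1,2), (2,0), (0,3), (3,4), (4,0)]))
    ```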