
    Small Space Stream Summary for Matroid Center

    In the matroid center problem, which generalizes the k-center problem, we need to pick a set of centers that is an independent set of a matroid with rank r. We study this problem in the streaming setting, where elements of the ground set arrive in the stream. We first show that any randomized one-pass streaming algorithm that computes a better-than-Delta approximation for partition-matroid center must use Omega(r^2) bits of space, where Delta is the aspect ratio of the metric and can be arbitrarily large. This shows a quadratic separation between matroid center and k-center, for which the Doubling algorithm [Charikar et al., 1997] gives an 8-approximation using O(k) space and one pass. To complement this, we give a one-pass algorithm for matroid center that stores at most O(r^2 log(1/epsilon)/epsilon) points (viz., a stream summary) among which a (7+epsilon)-approximate solution exists; it can be found by brute force, or a (17+epsilon)-approximation can be found with an efficient algorithm. If we are allowed a second pass, we can compute a (3+epsilon)-approximation efficiently. We also consider the problem of matroid center with z outliers and give a one-pass algorithm that outputs a set of O((r^2+rz)log(1/epsilon)/epsilon) points that contains a (15+epsilon)-approximate solution. Our techniques extend to knapsack center and knapsack center with z outliers in a straightforward way, and we get algorithms that use space linear in the size of a largest feasible set (as opposed to quadratic space for matroid center).
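The Doubling algorithm for streaming k-center that the abstract contrasts with can be sketched in a few lines. The sketch below is an illustrative simplification: the radius-doubling and merging rules follow Charikar et al., but the exact constants and tie-breaking here are ours, and it assumes distinct points.

```python
def doubling_k_center(stream, k, dist):
    """One-pass k-center summary in O(k) space (simplified doubling sketch)."""
    it = iter(stream)
    centers = []
    for p in it:                   # bootstrap with the first k+1 points
        centers.append(p)
        if len(centers) == k + 1:
            break
    if len(centers) <= k:          # tiny stream: every point is a center
        return centers, 0.0
    # initial radius lower bound: half the smallest pairwise distance
    R = min(dist(a, b) for i, a in enumerate(centers)
            for b in centers[i + 1:]) / 2

    def merge():
        nonlocal centers, R
        while len(centers) > k:    # too many centers: double R and thin out
            R *= 2
            kept = []
            for c in centers:
                if all(dist(c, q) > R for q in kept):
                    kept.append(c)
            centers = kept

    merge()
    for p in it:                   # continue the single pass over the stream
        if all(dist(p, c) > 2 * R for c in centers):
            centers.append(p)      # p is far from every center: new center
            merge()
    return centers, R
```

On overflow beyond k centers, the radius bound doubles and centers within the new radius of an already-kept center are merged away; this is what keeps the memory at O(k) points.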

    Maximum Matching in Two, Three, and a Few More Passes Over Graph Streams

    We consider the maximum matching problem in the semi-streaming model formalized by Feigenbaum, Kannan, McGregor, Suri, and Zhang, which is inspired by today's giant graphs. As our main result, we give a two-pass (1/2 + 1/16)-approximation algorithm for triangle-free graphs and a two-pass (1/2 + 1/32)-approximation algorithm for general graphs; these improve on the approximation ratios of 1/2 + 1/52 for bipartite graphs and 1/2 + 1/140 for general graphs by Konrad, Magniez, and Mathieu. In three passes, we achieve approximation ratios of 1/2 + 1/10 for triangle-free graphs and 1/2 + 1/19.753 for general graphs. We also give a multi-pass algorithm where we bound the number of passes precisely: a (2/3 - epsilon)-approximation algorithm that uses 2/(3 epsilon) passes for triangle-free graphs and 4/(3 epsilon) passes for general graphs. Our algorithms are simple and combinatorial, use O(n log(n)) space, and have O(1) update time per edge. For general graphs, our multi-pass algorithm improves on the best known deterministic algorithms in terms of the number of passes: * Ahn and Guha give a (2/3 - epsilon)-approximation algorithm that uses O(log(1/epsilon)/epsilon^2) passes, whereas our (2/3 - epsilon)-approximation algorithm uses 4/(3 epsilon) passes; * they also give a (1 - epsilon)-approximation algorithm that uses O(log(n) poly(1/epsilon)) passes, where n is the number of vertices of the input graph; although our algorithm is a (2/3 - epsilon)-approximation, our number of passes does not depend on n. Earlier multi-pass algorithms either have a large constant inside the big-O notation for the number of passes, or the constant cannot be determined due to the involved analysis; hence our multi-pass algorithm should use far fewer passes for approximation ratios slightly below 2/3.
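The baseline all of these pass-bounded algorithms improve on is the classic one-pass greedy maximal matching, a 1/2-approximation in the semi-streaming model. A minimal sketch (not the paper's algorithm):

```python
def greedy_matching(edges):
    """One-pass greedy maximal matching over an edge stream:
    take an edge iff both endpoints are still unmatched.
    Any maximal matching is a 1/2-approximation of maximum matching."""
    matched = set()
    M = []
    for u, v in edges:
        if u not in matched and v not in matched:
            M.append((u, v))
            matched.add(u)
            matched.add(v)
    return M
```

It stores only the matching itself, hence O(n log n) bits, with O(1) work per streamed edge.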

    Fully-Dynamic Coresets

    With input sizes becoming massive, coresets -- small yet representative summaries of the input -- are more relevant than ever. A weighted set $C_w$ that is a subset of the input is an $\varepsilon$-coreset if the cost of any feasible solution $S$ with respect to $C_w$ is within $[1 \pm \varepsilon]$ of the cost of $S$ with respect to the original input. We give a very general technique to compute coresets in the fully-dynamic setting, where input points can be added or deleted. Given a static $\varepsilon$-coreset algorithm that runs in time $t(n, \varepsilon, \lambda)$ and computes a coreset of size $s(n, \varepsilon, \lambda)$, where $n$ is the number of input points and $1-\lambda$ is the success probability, we give a fully-dynamic algorithm that computes an $\varepsilon$-coreset with worst-case update time $O((\log n) \cdot t(s(n, \varepsilon/\log n, \lambda/n), \varepsilon/\log n, \lambda/n))$ (this bound is stated informally), where the success probability is $1-\lambda$. Our technique is a fully-dynamic analog of the merge-and-reduce technique, which applies to the insertion-only setting. Although our space usage is $O(n)$, we work in the presence of an adaptive adversary, and we show that $\Omega(n)$ space is required when the adversary is adaptive. As a consequence, we get fully-dynamic $\varepsilon$-coreset algorithms for $k$-median and $k$-means with worst-case update time $O(\varepsilon^{-2} k^2 \log^5 n \log^3 k)$ and coreset size $O(\varepsilon^{-2} k \log n \log^2 k)$, ignoring $\log\log n$ and $\log(1/\varepsilon)$ factors and assuming that $\varepsilon, \lambda = \Omega(1/\mathrm{poly}(n))$. These are the first fully-dynamic algorithms for $k$-median and $k$-means with worst-case update time $O(\mathrm{poly}(k, \log n, \varepsilon^{-1}))$.
    We also give a conditional lower bound on the update/query time of any fully-dynamic $(4-\delta)$-approximation algorithm for $k$-means.
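The insertion-only merge-and-reduce technique that the paper makes fully dynamic can be sketched as a binary counter of buckets, where bucket i holds one coreset standing in for 2^i blocks of points. The static routine below is a uniform-sampling stand-in for illustration only; a real construction would be, e.g., a sensitivity-sampling coreset with the error and failure parameters rescaled per level as in the abstract.

```python
import random

def static_coreset(points, size):
    # Stand-in for a real static coreset construction: uniform sample.
    if len(points) <= size:
        return list(points)
    return random.sample(list(points), size)

class MergeReduce:
    """Insertion-only merge-and-reduce: buckets[i] is one coreset
    representing 2^i blocks of `block` points; inserting a full block
    triggers carries exactly like incrementing a binary counter."""
    def __init__(self, block=4, size=4):
        self.block, self.size = block, size
        self.buffer = []
        self.buckets = []          # buckets[i] is a coreset or None

    def insert(self, p):
        self.buffer.append(p)
        if len(self.buffer) == self.block:
            carry = static_coreset(self.buffer, self.size)
            self.buffer = []
            i = 0
            while i < len(self.buckets) and self.buckets[i] is not None:
                # merge two level-i coresets, reduce, carry to level i+1
                carry = static_coreset(self.buckets[i] + carry, self.size)
                self.buckets[i] = None
                i += 1
            if i == len(self.buckets):
                self.buckets.append(carry)
            else:
                self.buckets[i] = carry

    def coreset(self):
        # union of the buffer and all occupied buckets
        out = list(self.buffer)
        for b in self.buckets:
            if b:
                out.extend(b)
        return out
```

Since only O(log n) buckets are ever occupied, the summary stays polylogarithmic in size for insertion-only streams; supporting deletions with worst-case update time is exactly what the paper's fully-dynamic analog adds.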

    Robust Algorithms Under Adversarial Injections

    In this paper, we study streaming and online algorithms in the context of randomness in the input. For several problems, a random order of the input sequence - as opposed to the worst-case order - appears to be a necessary evil in order to prove satisfying guarantees. However, algorithmic techniques that work under this assumption tend to be vulnerable to even small changes in the distribution. For this reason, we propose a new adversarial-injections model, in which the input is ordered randomly, but an adversary may inject misleading elements at arbitrary positions. We believe that studying algorithms under this much weaker assumption can lead to new insights and, in particular, to more robust algorithms. We investigate two classical combinatorial-optimization problems in this model: maximum matching and cardinality-constrained monotone submodular function maximization. Our main technical contribution is a novel streaming algorithm for the latter that computes a 0.55-approximation. While the algorithm itself is clean and simple, an involved analysis shows that it emulates a subdivision of the input stream, which can be used to greatly limit the power of the adversary.
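For cardinality-constrained monotone submodular maximization, the standard single-threshold streaming baseline (not the paper's 0.55-approximation, which exploits the random-order structure) accepts an element whenever its marginal gain clears a threshold tau derived from a guess of the optimum:

```python
def threshold_streaming(stream, k, f, tau):
    """One-pass thresholding for monotone submodular f under |S| <= k:
    take element e iff its marginal gain f(S + e) - f(S) is at least tau.
    tau is assumed to come from an (externally maintained) guess of OPT."""
    S = []
    for e in stream:
        if len(S) < k and f(S + [e]) - f(S) >= tau:
            S.append(e)
    return S
```

A toy use with a coverage function, where each element is a set and f is the size of the union:

```python
cover = lambda S: len(set().union(*S)) if S else 0
picked = threshold_streaming([{1, 2}, {2, 3}, {4}, {1}], k=2, f=cover, tau=1)
```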

    Faster Algorithms for Bounded Liveness in Graphs and Game Graphs

    Graphs and games on graphs are fundamental models for the analysis of reactive systems, in particular for model checking and the synthesis of reactive systems. The class of ω-regular languages provides a robust specification formalism for the desired properties of reactive systems. In the classical infinitary formulation of the liveness part of an ω-regular specification, a "good" event must happen eventually, without any bound between the good events. A stronger notion of liveness is bounded liveness, which requires that good events happen within d transitions. Given a graph or a game graph with n vertices, m edges, and a bounded liveness objective, the previous best-known algorithmic bounds are as follows: (i) O(dm) for graphs, which in the worst case is O(n³); and (ii) O(n² d²) for games on graphs. Our main contributions improve these long-standing algorithmic bounds. For graphs we present: (i) a randomized algorithm with one-sided error with running time O(n^{2.5} log n) for bounded liveness objectives; and (ii) a deterministic linear-time algorithm for the complement of bounded liveness objectives. For games on graphs, we present an O(n² d) time algorithm for bounded liveness objectives.
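The O(dm) baseline for graphs that the paper improves can be sketched as a product construction: pair each vertex with a counter of transitions since the last good event, and look for any cycle in the product graph (a cycle is exactly an infinite path). The start-counter convention below is a modeling choice of this sketch, not taken from the paper.

```python
from collections import defaultdict

def has_bounded_live_path(n, edges, good, d):
    """Does the graph have an infinite path in which every window of d
    transitions contains a vertex from `good`? Naive product-graph check."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    def succs(state):
        v, c = state               # c = transitions since the last good vertex
        out = []
        for w in adj[v]:
            c2 = 0 if w in good else c + 1
            if c2 <= d:            # window of d transitions not yet violated
                out.append((w, c2))
        return out

    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)       # defaults to WHITE

    def has_cycle_from(start):     # iterative DFS, 3-color cycle detection
        color[start] = GRAY
        stack = [(start, iter(succs(start)))]
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                color[node] = BLACK
                stack.pop()
            elif color[nxt] == GRAY:
                return True        # back edge: cycle in the product graph
            elif color[nxt] == WHITE:
                color[nxt] = GRAY
                stack.append((nxt, iter(succs(nxt))))
        return False

    # modeling choice: start every vertex with counter 0
    return any(color[(v, 0)] == WHITE and has_cycle_from((v, 0))
               for v in range(n))
```

The product has O(nd) states and O(md) edges, which is where the O(dm)-style running time comes from; the paper's randomized O(n^{2.5} log n) algorithm removes the dependence on d.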

    Beating Greedy for Stochastic Bipartite Matching

    We consider the maximum bipartite matching problem in stochastic settings, namely the query-commit and price-of-information models. In the query-commit model, an edge e independently exists with probability p_e. We can query whether an edge exists or not, but if it does exist, then we have to take it into our solution. In the unweighted case, one can query edges in the order given by the classical online algorithm of Karp, Vazirani, and Vazirani to get a (1-1/e)-approximation. In contrast, the previously best known algorithm in the weighted case is the (1/2)-approximation achieved by the greedy algorithm that sorts the edges according to their weights and queries in that order. Improving upon the basic greedy, we give a (1-1/e)-approximation algorithm in the weighted query-commit model. We use a linear program (LP) to upper bound the optimum achieved by any strategy. The proposed LP admits several structural properties that play a crucial role in the design and analysis of our algorithm. We also extend these techniques to get a (1-1/e)-approximation algorithm for maximum bipartite matching in the price-of-information model introduced by Singla, who also used the basic greedy algorithm to give a (1/2)-approximation.

    Comment: Published in the ACM-SIAM Symposium on Discrete Algorithms (SODA 2019).
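The (1/2)-approximation greedy baseline from the abstract (not the paper's LP-based algorithm) is easy to state: sort edges by weight, query each edge whose endpoints are still free, and commit to it if it exists. A minimal simulation, with rng standing in for nature's coin flips:

```python
import random

def greedy_query_commit(edges, probs, weights, rng=None):
    """Greedy strategy in the weighted query-commit model: query edges
    in decreasing weight; a queried edge that exists must be taken."""
    rng = rng or random.Random(0)
    matched = set()
    total = 0.0
    for e in sorted(edges, key=lambda e: -weights[e]):
        u, v = e
        if u in matched or v in matched:
            continue                  # querying here could only waste the edge
        if rng.random() < probs[e]:   # query: the edge turns out to exist
            matched |= {u, v}         # commit: the edge must enter the matching
            total += weights[e]
    return total
```

With all probabilities equal to 1 this degenerates to ordinary greedy matching, which already shows the factor-1/2 loss: a heavy edge can block two medium edges of nearly the same total weight.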

    Constructive plaquette compilation for the parity architecture

    Parity compilation is the challenge of laying out the required constraints for the parity mapping in a local way. We present the first constructive compilation algorithm for the parity architecture using plaquettes for arbitrary higher-order optimization problems. This enables adiabatic protocols, where the plaquette layout can natively be implemented, as well as fully parallelized digital circuits. The algorithm builds a rectangular layout of plaquettes, where in each layer of the rectangle at least one constraint is added. The core idea is that each constraint, consisting of any qubits on the boundary of the rectangle and some new qubits, can be decomposed into plaquettes with a deterministic procedure using ancillas. We show how to pick a valid set of constraints and how this decomposition works. We further give ways to optimize the ancilla count and show how to implement optimization problems with additional constraints.

    Comment: 8 pages, 5 figures.
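The decomposition idea - breaking a k-body parity constraint into few-body plaquettes by chaining ancilla qubits that record prefix parities - can be illustrated with a toy helper. This is hypothetical code, not the paper's layout algorithm: it assumes at least three qubits, uses only 3-body plaquettes plus a closing 2-body check, and ignores geometric placement entirely.

```python
def decompose_constraint(qubits):
    """Break an even-parity constraint on `qubits` into 3-body plaquettes
    (a, b, anc) enforcing anc = a XOR b, chained so each ancilla holds the
    parity of a prefix, plus one final 2-body equality closing the parity.
    Toy sketch; assumes len(qubits) >= 3."""
    plaquettes = []
    ancillas = []
    prev = qubits[0]
    for i, q in enumerate(qubits[1:-1]):
        anc = f"a{i}"                  # hypothetical ancilla label
        ancillas.append(anc)
        plaquettes.append((prev, q, anc))   # enforce anc = prev XOR q
        prev = anc
    plaquettes.append((prev, qubits[-1]))   # prev == last  <=>  even parity
    return plaquettes, ancillas
```

A k-body constraint thus costs k-2 ancillas and k-2 few-body plaquettes in this naive chaining; reducing the ancilla count is one of the optimizations the abstract mentions.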

    Blimp-1 is essential for allergen-induced asthma and Th2 cell development in the lung

    A Th2 immune response is central to allergic airway inflammation, which afflicts millions worldwide. However, the mechanisms that augment GATA3 expression in an antigen-primed, developing Th2 cell are not well understood. Here, we describe an unexpected role for Blimp-1, a transcriptional repressor that constrains autoimmunity, as an upstream promoter of GATA3 expression that is critical for Th2 cell development in the lung in response to inhaled, but not systemically delivered, allergens, yet is dispensable for TFH function and IgE production. Mechanistically, Blimp-1 acts through Bcl6, leading to increased GATA3 expression in lung Th2 cells. Surprisingly, the anti-inflammatory cytokine IL-10, but not the pro-inflammatory cytokines IL-6 or IL-21, is required via STAT3 activation to up-regulate Blimp-1 and promote Th2 cell development. These data reveal a hitherto unappreciated role for an IL-10-STAT3-Blimp-1 circuit as an initiator of an inflammatory Th2 response in the lung to allergens. Thus, Blimp-1, in a context-dependent fashion, can drive inflammation by promoting rather than terminating effector T cell responses.