5,625 research outputs found

    Algorithms and Lower Bounds for Cycles and Walks: Small Space and Sparse Graphs


    Deterministic Time-Space Tradeoffs for k-SUM

    Given a set of numbers, the $k$-SUM problem asks for a subset of $k$ numbers that sums to zero. When the numbers are integers, the time and space complexity of $k$-SUM is generally studied in the word-RAM model; when the numbers are reals, the complexity is studied in the real-RAM model, and space is measured by the number of reals held in memory at any point. We present a time and space efficient deterministic self-reduction for the $k$-SUM problem which holds for both models, and has many interesting consequences. To illustrate:
    * $3$-SUM is in deterministic time $O(n^2 \lg\lg(n)/\lg(n))$ and space $O\left(\sqrt{\frac{n \lg(n)}{\lg\lg(n)}}\right)$. In general, any polylogarithmic-time improvement over quadratic time for $3$-SUM can be converted into an algorithm with an identical time improvement but low space complexity as well.
    * $3$-SUM is in deterministic time $O(n^2)$ and space $O(\sqrt{n})$, derandomizing an algorithm of Wang.
    * A popular conjecture states that 3-SUM requires $n^{2-o(1)}$ time on the word-RAM. We show that the 3-SUM Conjecture is in fact equivalent to the (seemingly weaker) conjecture that every $O(n^{.51})$-space algorithm for $3$-SUM requires at least $n^{2-o(1)}$ time on the word-RAM.
    * For $k \ge 4$, $k$-SUM is in deterministic $O(n^{k-2+2/k})$ time and $O(\sqrt{n})$ space.
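    For context on the bounds above, here is a minimal sketch of the classic two-pointer algorithm for 3-SUM, which sorts the input and then runs in O(n^2) time and O(n) space. This is only the standard quadratic baseline that the stated results improve on (in time by polylogarithmic factors, or in space down to roughly O(sqrt(n))); it is not the paper's self-reduction, and the function name is purely illustrative.

# Classic O(n^2)-time, O(n)-space two-pointer baseline for 3-SUM.
# Illustrative only; this is not the space-efficient self-reduction from the paper.
def three_sum(nums):
    """Return a triple of entries of nums summing to zero, or None."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        target = -a[i]
        while lo < hi:
            s = a[lo] + a[hi]
            if s == target:
                return (a[i], a[lo], a[hi])
            if s < target:
                lo += 1
            else:
                hi -= 1
    return None

# Example: three_sum([7, -3, 1, 2, -5, 4]) returns (-5, 1, 4).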

    Faster Random k-CNF Satisfiability

    We describe an algorithm to solve the problem of Boolean CNF-Satisfiability when the input formula is chosen randomly. We build upon the algorithms of Schöning (1999) and Dantsin et al. (2002). The Schöning algorithm works by trying many possible random assignments, and for each one searching systematically in the neighborhood of that assignment for a satisfying solution. Previous algorithms for this problem run in time $O(2^{n(1-\Omega(1)/k)})$. Our improvement is simple: we count how many clauses are satisfied by each randomly sampled assignment, and only search in the neighborhoods of assignments with abnormally many satisfied clauses. We show that assignments like these are significantly more likely to be near a satisfying assignment. This improvement saves a factor of $2^{n\Omega(\lg^2 k)/k}$, resulting in an overall runtime of $O(2^{n(1-\Omega(\lg^2 k)/k)})$ for random $k$-SAT.
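    The sketch below illustrates the stated idea in simplified form: sample random assignments, but run a Schöning-style random walk only from assignments that already satisfy unusually many clauses. The clause representation, threshold, number of tries, and walk length are illustrative placeholders, not the parameters analyzed in the paper.

import random

# Simplified sketch of "filtered" Schoening-style search for random k-SAT.
# A clause is a list of signed ints: +v means variable v, -v means its negation.
# All numeric parameters below (threshold, tries, walk length) are placeholders.

def num_satisfied(formula, assignment):
    """Count clauses satisfied by `assignment` (dict: variable -> bool)."""
    return sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def schoening_walk(formula, assignment, steps):
    """Random walk: repeatedly pick an unsatisfied clause and flip one of its variables."""
    a = dict(assignment)
    for _ in range(steps):
        unsat = [c for c in formula
                 if not any(a[abs(lit)] == (lit > 0) for lit in c)]
        if not unsat:
            return a                      # satisfying assignment found
        lit = random.choice(random.choice(unsat))
        a[abs(lit)] = not a[abs(lit)]
    return None

def filtered_schoening(formula, n_vars, tries, threshold):
    """Search only near assignments with at least `threshold` satisfied clauses."""
    for _ in range(tries):
        a = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        if num_satisfied(formula, a) >= threshold:
            result = schoening_walk(formula, a, steps=3 * n_vars)
            if result is not None:
                return result
    return None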

    Quasipolynomiality of the Smallest Missing Induced Subgraph

    We study the problem of finding the smallest graph that does not occur as an induced subgraph of a given graph. This missing induced subgraph has at most logarithmic size and can be found by a brute-force search, in an $n$-vertex graph, in time $n^{O(\log n)}$. We show that under the Exponential Time Hypothesis this quasipolynomial time bound is optimal. We also consider variations of the problem in which either the missing subgraph or the given graph comes from a restricted graph family; for instance, we prove that the smallest missing planar induced subgraph of a given planar graph can be found in polynomial time.
    Comment: 10 pages, 1 figure. To appear in J. Graph Algorithms Appl. This version updates an author affiliation.
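    A hedged sketch of the brute-force search described above: for k = 1, 2, ..., collect canonical forms of all induced k-vertex subgraphs of G, then look for a k-vertex graph whose canonical form is missing. Since the answer has at most logarithmic size, only roughly n^{O(log n)} candidates are examined. The graph representation and the permutation-based canonicalization are illustrative choices, not taken from the paper.

from itertools import combinations, permutations

# Brute-force sketch: find a smallest graph that is not an induced subgraph of G.
# G has vertices 0..n-1; `edges` is a set of frozensets {u, v}.

def canonical(vertices, edge_set):
    """Canonical form of a graph, computed by trying all vertex relabelings."""
    verts = list(vertices)
    best = None
    for perm in permutations(range(len(verts))):
        relabel = {v: perm[i] for i, v in enumerate(verts)}
        key = tuple(sorted(tuple(sorted((relabel[u], relabel[v]))) for u, v in edge_set))
        if best is None or key < best:
            best = key
    return best

def smallest_missing_induced_subgraph(n, edges):
    """Return (k, edge_set) describing a smallest graph not induced in G."""
    k = 1
    while True:
        # Canonical forms of all induced k-vertex subgraphs of G.
        present = set()
        for sub in combinations(range(n), k):
            sub_set = set(sub)
            present.add(canonical(sub, {e for e in edges if e <= sub_set}))
        # Enumerate every graph on k labeled vertices by choosing an edge subset.
        pairs = list(combinations(range(k), 2))
        for r in range(len(pairs) + 1):
            for chosen in combinations(pairs, r):
                if canonical(range(k), chosen) not in present:
                    return k, set(chosen)
        k += 1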

    Peptide redesign for inhibition of the complement system: Targeting age-related macular degeneration.

    Purpose: To redesign a complement-inhibiting peptide with the potential to become a therapeutic for dry and wet age-related macular degeneration (AMD).
    Methods: We present a new potent peptide (Peptide 2) of the compstatin family. The peptide is developed by rational design, based on a mechanistic binding hypothesis, and structural and physicochemical properties derived from molecular dynamics (MD) simulation. The inhibitory activity, efficacy, and solubility of Peptide 2 are evaluated using a hemolytic assay, a human RPE cell-based assay, and ultraviolet (UV) absorption properties, respectively, and compared to the respective properties of its parent peptide (Peptide 1).
    Results: The sequence of Peptide 2 contains an arginine-serine N-terminal extension (a characteristic of parent Peptide 1) and a novel 8-polyethylene glycol (PEG) block C-terminal extension. Peptide 2 has significantly improved aqueous solubility compared to Peptide 1 and comparable complement inhibitory activity. In addition, Peptide 2 is more efficacious in inhibiting complement activation in a cell-based model that mimics the pathobiology of dry AMD.
    Conclusions: We have designed a new peptide analog of compstatin that combines N-terminal polar amino acid extensions and C-terminal PEGylation extensions. This peptide demonstrates significantly improved aqueous solubility and complement inhibitory efficacy, compared to the parent peptide. The new peptide overcomes the aggregation limitation for clinical translation of previous compstatin analogs and is a candidate to become a therapeutic for the treatment of AMD.

    Dynamic Boolean Formula Evaluation

    We present a linear space data structure for Dynamic Evaluation of k-CNF Boolean Formulas which achieves O(m^{1-1/k}) query and variable update time, where m is the number of clauses in the formula and clauses are of size at most a constant k. Our algorithm is additionally able to count the total number of satisfied clauses. We then show how this data structure can be parallelized in the PRAM model to achieve O(log m) span (i.e., parallel time) and still O(m^{1-1/k}) work. This parallel algorithm works in the stronger Binary Fork model. We then give a series of lower bounds on the problem, including an average-case result showing the lower bounds hold even when the updates to the variables are chosen at random. Specifically, a reduction from k-Clique shows that dynamically counting the number of satisfied clauses takes time at least n^{(2ω-3)/6 · √(2k) - 1 - o(√k)}, where 2 ≤ ω < 2.38 is the matrix multiplication constant. We show the Combinatorial k-Clique Hypothesis implies a lower bound of m^{(1-k^{-1/2})(1-o(1))}, which suggests our algorithm is close to optimal without involving matrix multiplication or new techniques. We next give an average-case reduction to k-Clique showing the prior lower bounds hold even when the updates are chosen at random. We use our conditional lower bound to show any Binary Fork algorithm solving these problems requires at least Ω(log m) span, which is tight against our algorithm in this model. Finally, we give an unconditional linear space lower bound for Dynamic k-CNF Boolean Formula Evaluation.
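    For illustration, the sketch below shows the straightforward baseline for dynamic k-CNF evaluation, not the O(m^{1-1/k}) data structure from the paper: each clause keeps a count of its currently-true literals and a running total tracks how many clauses are satisfied, so flipping a variable costs time proportional to its number of clause occurrences. The class and method names are assumptions made for this example.

# Baseline sketch only: per-clause true-literal counters plus a running total of
# satisfied clauses. Flipping a variable touches every clause containing it, so
# this does NOT achieve the O(m^{1-1/k}) update time of the paper's structure.
# Assumes each variable appears at most once per clause.

class DynamicCNF:
    def __init__(self, clauses, assignment):
        """clauses: list of lists of signed ints (+v / -v); assignment: dict var -> bool."""
        self.clauses = clauses
        self.assign = dict(assignment)
        self.occurs = {}                        # variable -> indices of clauses containing it
        self.true_lits = [0] * len(clauses)     # per-clause count of true literals
        self.satisfied = 0                      # number of currently satisfied clauses
        for i, clause in enumerate(clauses):
            for lit in clause:
                self.occurs.setdefault(abs(lit), []).append(i)
                if self.assign[abs(lit)] == (lit > 0):
                    self.true_lits[i] += 1
            if self.true_lits[i] > 0:
                self.satisfied += 1

    def flip(self, var):
        """Flip one variable's value and maintain the counters."""
        self.assign[var] = not self.assign[var]
        for i in self.occurs.get(var, []):
            lit = next(l for l in self.clauses[i] if abs(l) == var)
            delta = 1 if self.assign[var] == (lit > 0) else -1
            if self.true_lits[i] == 0 and delta == 1:
                self.satisfied += 1             # clause becomes satisfied
            self.true_lits[i] += delta
            if self.true_lits[i] == 0:
                self.satisfied -= 1             # clause becomes unsatisfied

    def num_satisfied(self):
        return self.satisfied

    def is_satisfied(self):
        return self.satisfied == len(self.clauses)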

    Conditional Hardness for Sensitivity Problems

    In recent years it has become popular to study dynamic problems in a sensitivity setting: instead of allowing for an arbitrary sequence of updates, the sensitivity model only allows batch updates of small size to be applied to the original input data. The sensitivity model is particularly appealing since recent strong conditional lower bounds ruled out fast algorithms for many dynamic problems, such as shortest paths, reachability, or subgraph connectivity. In this paper we prove conditional lower bounds for these and additional problems in a sensitivity setting. For example, we show that under the Boolean Matrix Multiplication (BMM) conjecture, combinatorial algorithms cannot compute the (4/3 − ε)-approximate diameter of an undirected unweighted dense graph with truly subcubic preprocessing time and truly subquadratic update/query time. This result is surprising since in the static setting it is not clear whether a reduction from BMM to diameter is possible. We further show under the BMM conjecture that many problems, such as reachability or approximate shortest paths, cannot be solved faster than by recomputation from scratch, even after only one or two edge insertions. We extend our reduction from BMM to Diameter to give a reduction from All Pairs Shortest Paths to Diameter under one deletion in weighted graphs. This is intriguing, as in the static setting it is a big open problem whether Diameter is as hard as APSP. We further get a nearly tight lower bound for shortest paths after two edge deletions based on the APSP conjecture. We give more lower bounds under the Strong Exponential Time Hypothesis. Many of our lower bounds also hold for static oracle data structures where no sensitivity is required. Finally, we give the first algorithm for the (1 + ε)-approximate radius, diameter, and eccentricity problems in directed or undirected unweighted graphs in case of single edge failures. The algorithm has a truly subcubic running time for graphs with a truly subquadratic number of edges; it is tight w.r.t. the conditional lower bounds we obtain.