12 research outputs found

    Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution

    Get PDF
    We revisit the classic 0-1-Knapsack problem, in which we are given $n$ items with their weights and profits as well as a weight budget $W$, and the goal is to find a subset of items of total weight at most $W$ that maximizes the total profit. We study pseudopolynomial-time algorithms parameterized by the largest profit of any item, $p_{\max}$, and the largest weight of any item, $w_{\max}$. Our main results are algorithms for 0-1-Knapsack running in time $\tilde{O}(n\,w_{\max}\,p_{\max}^{2/3})$ and $\tilde{O}(n\,p_{\max}\,w_{\max}^{2/3})$, improving upon an algorithm running in time $O(n\,p_{\max}\,w_{\max})$ by Pisinger [J. Algorithms '99]. In the regime $p_{\max} \approx w_{\max} \approx n$ (and $W \approx \mathrm{OPT} \approx n^2$), our algorithms are the first to break the cubic barrier $n^3$. To obtain our result, we give an efficient algorithm to compute the min-plus convolution of near-convex functions. More precisely, we say that a function $f \colon [n] \mapsto \mathbf{Z}$ is $\Delta$-near convex with $\Delta \geq 1$ if there is a convex function $\breve{f}$ such that $\breve{f}(i) \leq f(i) \leq \breve{f}(i) + \Delta$ for every $i$. We design an algorithm computing the min-plus convolution of two $\Delta$-near convex functions in time $\tilde{O}(n\Delta)$. This tool can replace the usage of the prediction technique of Bateni, Hajiaghayi, Seddighin and Stein [STOC '18] in all applications we are aware of, and we believe it has wider applicability.
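
    For reference, the min-plus convolution of two sequences f and g is the sequence h with h[k] = min_{i+j=k} (f[i] + g[j]). The Python sketch below (with our own function names) is only the naive O(n^2) baseline to fix the definition; it is not the paper's $\tilde{O}(n\Delta)$ algorithm for $\Delta$-near convex inputs.

    # Naive O(n^2) baseline for min-plus convolution; for intuition only.
    # The paper computes the same output in ~O(n * Delta) time when both
    # inputs are Delta-near convex.
    def min_plus_convolution(f, g):
        """Return h with h[k] = min over i + j = k of f[i] + g[j]."""
        h = [float("inf")] * (len(f) + len(g) - 1)
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                if fi + gj < h[i + j]:
                    h[i + j] = fi + gj
        return h

    if __name__ == "__main__":
        # Two convex (hence 1-near convex) sequences.
        f = [0, 1, 3, 6]
        g = [0, 2, 5]
        print(min_plus_convolution(f, g))  # [0, 1, 3, 5, 8, 11]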

    Negative-Weight Single-Source Shortest Paths in Near-Linear Time: Now Faster!

    Full text link
    In this work we revisit the fundamental Single-Source Shortest Paths (SSSP) problem with possibly negative edge weights. A recent breakthrough result by Bernstein, Nanongkai and Wulff-Nilsen established a near-linear $O(m \log^8(n) \log(W))$-time algorithm for negative-weight SSSP, where $W$ is an upper bound on the magnitude of the smallest negative-weight edge. In this work we improve the running time to $O(m \log^2(n) \log(nW) \log\log n)$, which is an improvement by nearly six log-factors. Some of these log-factors are easy to shave (e.g. replacing the priority queue used in Dijkstra's algorithm), while others are significantly more involved (e.g. to find negative cycles we design an algorithm reminiscent of noisy binary search and analyze it with drift analysis). As side results, we obtain an algorithm to compute the minimum cycle mean in the same running time, as well as a new construction for computing Low-Diameter Decompositions in directed graphs.
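
    For context, the classical baseline for SSSP with negative edge weights is the Bellman-Ford algorithm, which runs in O(mn) time and also detects negative cycles; the algorithms discussed above improve on this to near-linear time. The Python sketch below is this textbook baseline only (with our own naming), not the paper's algorithm.

    # Bellman-Ford: the classical O(m * n) baseline for negative-weight SSSP.
    # Included only to fix notation (n vertices, an edge list with weights,
    # source s); it is NOT the near-linear algorithm from the paper.
    def bellman_ford(n, edges, s):
        """edges: list of (u, v, w). Returns (dist, has_negative_cycle)."""
        INF = float("inf")
        dist = [INF] * n
        dist[s] = 0
        for _ in range(n - 1):
            changed = False
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    changed = True
            if not changed:
                break
        # One more relaxation pass: any further improvement certifies a
        # negative cycle reachable from the source.
        has_negative_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
        return dist, has_negative_cycle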

    Fine-Grained Completeness for Optimization in P

    Get PDF
    We initiate the study of fine-grained completeness theorems for exact and approximate optimization in the polynomial-time regime. Inspired by the first completeness results for decision problems in P (Gao, Impagliazzo, Kolokolova, Williams, TALG 2019) as well as the classic class MaxSNP and MaxSNP-completeness for NP optimization problems (Papadimitriou, Yannakakis, JCSS 1991), we define polynomial-time analogues MaxSP and MinSP, which contain a number of natural optimization problems in P, including Maximum Inner Product, general forms of nearest neighbor search, and optimization variants of the $k$-XOR problem. Specifically, we define MaxSP as the class of problems definable as $\max_{x_1,\dots,x_k} \#\{ (y_1,\dots,y_\ell) : \phi(x_1,\dots,x_k, y_1,\dots,y_\ell) \}$, where $\phi$ is a quantifier-free first-order property over a given relational structure (with MinSP defined analogously). On $m$-sized structures, we can solve each such problem in time $O(m^{k+\ell-1})$. Our results are:
    - We determine (a sparse variant of) the Maximum/Minimum Inner Product problem as complete under *deterministic* fine-grained reductions: a strongly subquadratic algorithm for Maximum/Minimum Inner Product would beat the baseline running time of $O(m^{k+\ell-1})$ for *all* problems in MaxSP/MinSP by a polynomial factor.
    - This completeness transfers to approximation: Maximum/Minimum Inner Product is also complete in the sense that a strongly subquadratic $c$-approximation would give a $(c+\varepsilon)$-approximation for all MaxSP/MinSP problems in time $O(m^{k+\ell-1-\delta})$, where $\varepsilon > 0$ can be chosen arbitrarily small.
    Combining our completeness with (Chen, Williams, SODA 2019), we obtain the perhaps surprising consequence that refuting the OV Hypothesis is *equivalent* to giving an $O(1)$-approximation for all MinSP problems in faster-than-$O(m^{k+\ell-1})$ time. (Comment: Full version of APPROX'21 paper; abstract shortened to fit arXiv requirements.)
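
    To make the MaxSP template concrete: Maximum Inner Product over Boolean vectors fits the definition with k = 2 and ℓ = 1, namely max_{x_1,x_2} #{ y : x_1[y] = 1 and x_2[y] = 1 }. The Python sketch below is only the trivial brute-force baseline under this phrasing (with our own naming), not one of the paper's reductions.

    # Brute-force baseline illustrating the MaxSP template
    #   max_{x1, x2} #{ y : phi(x1, x2, y) }
    # instantiated with Maximum Inner Product on Boolean vectors, where
    # phi(x1, x2, y) = "coordinate y is 1 in both x1 and x2".
    from itertools import combinations

    def max_inner_product(vectors):
        """vectors: list of 0/1 tuples of equal dimension."""
        best = 0
        for a, b in combinations(vectors, 2):  # choose the pair (x1, x2)
            witnesses = sum(1 for ai, bi in zip(a, b) if ai == 1 and bi == 1)
            best = max(best, witnesses)        # count the witnesses y
        return best

    if __name__ == "__main__":
        vecs = [(1, 0, 1, 1), (0, 1, 1, 1), (1, 1, 0, 1)]
        print(max_inner_product(vecs))  # 2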

    Albiglutide and cardiovascular outcomes in patients with type 2 diabetes and cardiovascular disease (Harmony Outcomes): a double-blind, randomised placebo-controlled trial

    Get PDF
    Background: Glucagon-like peptide 1 receptor agonists differ in chemical structure, duration of action, and in their effects on clinical outcomes. The cardiovascular effects of once-weekly albiglutide in type 2 diabetes are unknown. We aimed to determine the safety and efficacy of albiglutide in preventing cardiovascular death, myocardial infarction, or stroke.
    Methods: We did a double-blind, randomised, placebo-controlled trial in 610 sites across 28 countries. We randomly assigned patients aged 40 years and older with type 2 diabetes and cardiovascular disease (at a 1:1 ratio) to groups that either received a subcutaneous injection of albiglutide (30–50 mg, based on glycaemic response and tolerability) or of a matched volume of placebo once a week, in addition to their standard care. Investigators used an interactive voice or web response system to obtain treatment assignment, and patients and all study investigators were masked to their treatment allocation. We hypothesised that albiglutide would be non-inferior to placebo for the primary outcome of the first occurrence of cardiovascular death, myocardial infarction, or stroke, which was assessed in the intention-to-treat population. If non-inferiority was confirmed by an upper limit of the 95% CI for a hazard ratio of less than 1·30, closed testing for superiority was prespecified. This study is registered with ClinicalTrials.gov, number NCT02465515.
    Findings: Patients were screened between July 1, 2015, and Nov 24, 2016. 10 793 patients were screened and 9463 participants were enrolled and randomly assigned to groups: 4731 patients were assigned to receive albiglutide and 4732 patients to receive placebo. On Nov 8, 2017, it was determined that 611 primary endpoints and a median follow-up of at least 1·5 years had accrued, and participants returned for a final visit and discontinuation from study treatment; the last patient visit was on March 12, 2018. These 9463 patients, the intention-to-treat population, were evaluated for a median duration of 1·6 years and were assessed for the primary outcome. The primary composite outcome occurred in 338 (7%) of 4731 patients at an incidence rate of 4·6 events per 100 person-years in the albiglutide group and in 428 (9%) of 4732 patients at an incidence rate of 5·9 events per 100 person-years in the placebo group (hazard ratio 0·78, 95% CI 0·68–0·90), which indicated that albiglutide was superior to placebo (p<0·0001 for non-inferiority; p=0·0006 for superiority). The incidence of acute pancreatitis (ten patients in the albiglutide group and seven patients in the placebo group), pancreatic cancer (six patients in the albiglutide group and five patients in the placebo group), medullary thyroid carcinoma (zero patients in both groups), and other serious adverse events did not differ between the two groups. There were three (<1%) deaths in the placebo group and two (<1%) deaths in the albiglutide group that were assessed by investigators, who were masked to study drug assignment, to be treatment-related.
    Interpretation: In patients with type 2 diabetes and cardiovascular disease, albiglutide was superior to placebo with respect to major adverse cardiovascular events. Evidence-based glucagon-like peptide 1 receptor agonists should therefore be considered as part of a comprehensive strategy to reduce the risk of cardiovascular events in patients with type 2 diabetes.
    Funding: GlaxoSmithKline.

    Faster Knapsack Algorithms via Bounded Monotone Min-Plus-Convolution

    Get PDF

    A Structural Investigation of the Approximability of Polynomial-Time Problems

    Get PDF

    Improved Sublinear-Time Edit Distance for Preprocessed Strings

    Get PDF
    We study the problem of approximating the edit distance of two strings in sublinear time, in a setting where one or both string(s) are preprocessed, as initiated by Goldenberg, Rubinstein, Saha (STOC '20). Specifically, in the (k, K)-gap edit distance problem, the goal is to distinguish whether the edit distance of two strings is at most k or at least K. We obtain the following results:
    - After preprocessing one string in time n^{1+o(1)}, we can solve (k, k·n^{o(1)})-gap edit distance in time (n/k + k)·n^{o(1)}.
    - After preprocessing both strings separately in time n^{1+o(1)}, we can solve (k, k·n^{o(1)})-gap edit distance in time k·n^{o(1)}.
    Both results improve upon some previously best known result, with respect to either the gap, the query time, or the preprocessing time. Our algorithms build on the framework by Andoni, Krauthgamer and Onak (FOCS '10) and the recent sublinear-time algorithm by Bringmann, Cassis, Fischer and Nakos (STOC '22). We replace many complicated parts in their algorithm by faster and simpler solutions which exploit the preprocessing.
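
    For orientation, the simplest way to decide whether the edit distance is at most k, without any preprocessing or sublinear-time techniques, is the classical O(nk)-time banded dynamic program; in particular it solves any (k, K)-gap instance. The Python sketch below is this baseline only (with our own naming), not the algorithms from the paper.

    # Classical O(n * k) banded DP deciding "edit distance <= k".
    # Reads both strings in full; no preprocessing, no sublinear tricks.
    def edit_distance_at_most(x, y, k):
        """Return True iff the unit-cost edit distance of x and y is <= k."""
        n, m = len(x), len(y)
        if abs(n - m) > k:
            return False
        INF = k + 1  # any value > k acts as "too far"
        prev = list(range(m + 1))  # DP row for the empty prefix of x
        for i in range(1, n + 1):
            cur = [INF] * (m + 1)
            cur[0] = i
            for j in range(max(1, i - k), min(m, i + k) + 1):
                cur[j] = min(
                    prev[j - 1] + (x[i - 1] != y[j - 1]),  # substitute/match
                    cur[j - 1] + 1,                        # insert y[j-1]
                    prev[j] + 1,                           # delete x[i-1]
                )
            prev = cur
        return prev[m] <= k

    if __name__ == "__main__":
        print(edit_distance_at_most("banana", "bananas", 1))  # True
        print(edit_distance_at_most("banana", "ananas", 0))   # False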

    Optimal Algorithms for Bounded Weighted Edit Distance

    Full text link
    The edit distance of two strings is the minimum number of insertions, deletions, and substitutions of characters needed to transform one string into the other. The textbook dynamic-programming algorithm computes the edit distance of two length-$n$ strings in $O(n^2)$ time, which is optimal up to subpolynomial factors under SETH. An established way of circumventing this hardness is to consider the bounded setting, where the running time is parameterized by the edit distance $k$. A celebrated algorithm by Landau and Vishkin (JCSS '88) achieves time $O(n + k^2)$, which is optimal as a function of $n$ and $k$. Most practical applications rely on a more general weighted edit distance, where each edit has a weight depending on its type and the involved characters from the alphabet $\Sigma$. This is formalized through a weight function $w : \Sigma\cup\{\varepsilon\}\times\Sigma\cup\{\varepsilon\}\to\mathbb{R}$ normalized so that $w(a,a)=0$ and $w(a,b)\geq 1$ for all $a,b \in \Sigma\cup\{\varepsilon\}$ with $a \neq b$; the goal is to find an alignment of the two strings minimizing the total weight of edits. The $O(n^2)$-time algorithm supports this setting seamlessly, but only very recently, Das, Gilbert, Hajiaghayi, Kociumaka, and Saha (STOC '23) gave the first non-trivial algorithm for the bounded version, achieving time $O(n + k^5)$. While this running time is linear for $k\le n^{1/5}$, it is still very far from the bound $O(n+k^2)$ achievable in the unweighted setting. In this paper, we essentially close this gap by showing both an improved $\tilde O(n+\sqrt{nk^3})$-time algorithm and, more surprisingly, a matching lower bound: conditioned on the All-Pairs Shortest Paths (APSP) hypothesis, our running time is optimal for $\sqrt{n}\le k\le n$ (up to subpolynomial factors). This is the first separation between the complexity of the weighted and unweighted edit distance problems. (Comment: Shortened abstract for arXiv.)
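
    The textbook O(n^2) dynamic program referenced above handles weighted edits directly. The Python sketch below shows that DP with an illustrative weight function; the normalization w(a,a) = 0 and w(a,b) >= 1 for a != b is taken from the abstract, while the concrete costs (and all names) are our own choices for illustration.

    # Textbook O(n^2) DP for weighted edit distance. EPS stands for the
    # empty symbol epsilon, so w(a, EPS) is the cost of deleting a and
    # w(EPS, b) is the cost of inserting b.
    EPS = ""

    def example_weight(a, b):
        """Illustrative weights: w(a, a) = 0 and w(a, b) >= 1 for a != b."""
        if a == b:
            return 0.0
        if a == EPS or b == EPS:
            return 1.0   # insertion or deletion
        return 1.5       # substitution

    def weighted_edit_distance(x, y, w=example_weight):
        n, m = len(x), len(y)
        # dp[i][j] = minimum total weight to transform x[:i] into y[:j]
        dp = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = dp[i - 1][0] + w(x[i - 1], EPS)
        for j in range(1, m + 1):
            dp[0][j] = dp[0][j - 1] + w(EPS, y[j - 1])
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                dp[i][j] = min(
                    dp[i - 1][j - 1] + w(x[i - 1], y[j - 1]),  # substitute/match
                    dp[i - 1][j] + w(x[i - 1], EPS),           # delete x[i-1]
                    dp[i][j - 1] + w(EPS, y[j - 1]),           # insert y[j-1]
                )
        return dp[n][m]

    if __name__ == "__main__":
        # Two substitutions (1.5 each) plus one insertion (1.0) -> 4.0
        print(weighted_edit_distance("kitten", "sitting"))  # 4.0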