
    Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution

    We revisit the classic 0-1-Knapsack problem, in which we are given n items with their weights and profits as well as a weight budget W, and the goal is to find a subset of items of total weight at most W that maximizes the total profit. We study pseudopolynomial-time algorithms parameterized by the largest profit of any item, p_\max, and the largest weight of any item, w_\max. Our main results are algorithms for 0-1-Knapsack running in time \tilde{O}(n\,w_\max\,p_\max^{2/3}) and \tilde{O}(n\,p_\max\,w_\max^{2/3}), improving upon an algorithm in time O(n\,p_\max\,w_\max) by Pisinger [J. Algorithms '99]. In the regime p_\max \approx w_\max \approx n (and W \approx \mathrm{OPT} \approx n^2) our algorithms are the first to break the cubic barrier n^3. To obtain our result, we give an efficient algorithm to compute the min-plus convolution of near-convex functions. More precisely, we say that a function f \colon [n] \to \mathbf{Z} is \Delta-near convex with \Delta \geq 1 if there is a convex function \breve{f} such that \breve{f}(i) \leq f(i) \leq \breve{f}(i) + \Delta for every i. We design an algorithm computing the min-plus convolution of two \Delta-near convex functions in time \tilde{O}(n\Delta). This tool can replace the usage of the prediction technique of Bateni, Hajiaghayi, Seddighin and Stein [STOC '18] in all applications we are aware of, and we believe it has wider applicability.
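    For concreteness, the min-plus convolution of two sequences f and g is h[k] = min over all i + j = k of f[i] + g[j]. The direct quadratic-time implementation below (function name is our own) is the general-case baseline; the abstract's point is that for \Delta-near convex inputs this can be done in \tilde{O}(n\Delta) time instead, via a far more involved algorithm not sketched here.

```python
def min_plus_convolution(f, g):
    """Naive O(len(f) * len(g)) min-plus convolution:
    h[k] = min over i + j == k of f[i] + g[j]."""
    n, m = len(f), len(g)
    h = [float("inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            h[i + j] = min(h[i + j], f[i] + g[j])
    return h
```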

    Capacitated Dynamic Programming: Faster Knapsack and Graph Algorithms

    One of the most fundamental problems in Computer Science is the Knapsack problem. Given a set of n items with different weights and values, it asks to pick the most valuable subset whose total weight is below a capacity threshold T. Despite its wide applicability in various areas in Computer Science, Operations Research, and Finance, the best known running time for the problem is O(Tn). The main result of our work is an improved algorithm running in time O(TD), where D is the number of distinct weights. Previously, faster runtimes for Knapsack were only possible when both weights and values are bounded by M and V respectively, running in time O(nMV) [Pisinger'99]. In comparison, our algorithm implies a bound of O(nM^2) without any dependence on V, or O(nV^2) without any dependence on M. Additionally, for the unbounded Knapsack problem, we provide an algorithm running in time O(M^2) or O(V^2). Both our algorithms match recent conditional lower bounds shown for the Knapsack problem [Cygan et al'17, K\"unnemann et al'17]. We also initiate a systematic study of general capacitated dynamic programming, of which Knapsack is a core problem. This problem asks to compute the maximum weight path of length k in an edge- or node-weighted directed acyclic graph. In a graph with m edges, these problems are solvable by dynamic programming in time O(km), and we explore under which conditions the dependence on k can be eliminated. We identify large classes of graphs where this is possible and apply our results to obtain linear time algorithms for the problem of k-sparse Delta-separated sequences. The main technical innovation behind our results is identifying and exploiting concavity that appears in relaxations and subproblems of the tasks we consider.
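    The O(Tn) baseline mentioned above is the textbook knapsack dynamic program over capacities; a minimal sketch (function name is our own, not from the paper):

```python
def knapsack_max_value(items, T):
    """Classic O(T * n) 0-1 knapsack DP, the baseline the paper improves.
    items: list of (weight, value) pairs; T: capacity threshold."""
    NEG = float("-inf")
    dp = [NEG] * (T + 1)  # dp[c] = best value using total weight exactly c
    dp[0] = 0
    for w, v in items:
        # iterate capacities downward so each item is used at most once
        for c in range(T, w - 1, -1):
            if dp[c - w] != NEG:
                dp[c] = max(dp[c], dp[c - w] + v)
    return max(dp)  # best value over all weights <= T
```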

    Knapsack and Subset Sum with Small Items

    Knapsack and Subset Sum are fundamental NP-hard problems in combinatorial optimization. Recently there has been a growing interest in understanding the best possible pseudopolynomial running times for these problems with respect to various parameters. In this paper we focus on the maximum item size s and the maximum item value v. We give algorithms that run in time O(n + s³) and O(n + v³) for the Knapsack problem, and in time Õ(n + s^{5/3}) for the Subset Sum problem. Our algorithms work for the more general problem variants with multiplicities, where each input item comes with a (binary encoded) multiplicity, which succinctly describes how many times the item appears in the instance. In these variants n denotes the (possibly much smaller) number of distinct items. Our results follow from combining and optimizing several diverse lines of research, notably proximity arguments for integer programming due to Eisenbrand and Weismantel (TALG 2019), fast structured (min,+)-convolution by Kellerer and Pferschy (J. Comb. Optim. 2004), and additive combinatorics methods originating from Galil and Margalit (SICOMP 1991).
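    The multiplicity variant can always be reduced to the 0-1 case by binary splitting of each multiplicity. The bitset DP below is a textbook baseline illustrating that reduction (our own sketch, not the paper's Õ(n + s^{5/3}) algorithm):

```python
def subset_sum_reachable(items, t):
    """Bitset subset-sum DP for items with multiplicities.
    items: list of (size, multiplicity); returns True iff some choice
    using each item at most its multiplicity sums exactly to t."""
    reach = 1  # bit i set <=> sum i is attainable
    for s, mult in items:
        # binary splitting: mult copies become O(log mult) 0-1 "chunks"
        k = 1
        while mult > 0:
            take = min(k, mult)
            reach |= reach << (s * take)
            reach &= (1 << (t + 1)) - 1  # discard sums above t
            mult -= take
            k <<= 1
    return bool((reach >> t) & 1)
```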

    Whom to Ask? Jury Selection for Decision Making Tasks on Micro-blog Services

    It is universal to see people obtain knowledge on micro-blog services by asking others decision-making questions. In this paper, we study the Jury Selection Problem (JSP) by utilizing crowdsourcing for decision-making tasks on micro-blog services. Specifically, the problem is to enroll a subset of the crowd under a limited budget, whose aggregated wisdom via the Majority Voting scheme has the lowest probability of drawing a wrong answer (the Jury Error Rate, JER). Due to the various individual error rates of the crowd, the calculation of JER is non-trivial. Firstly, we explicitly state that JER is the probability that the number of wrong jurors is larger than half of the size of the jury. To avoid the exponentially increasing calculation of JER, we propose two efficient algorithms and an effective bounding technique. Furthermore, we study the Jury Selection Problem on two crowdsourcing models, one for altruistic users (AltrM) and the other for incentive-requiring users (PayM) who require extra payment when enrolled into a task. For the AltrM model, we prove the monotonicity of JER in the individual error rate and propose an efficient exact algorithm for JSP. For the PayM model, we prove the NP-hardness of JSP on PayM and propose an efficient greedy-based heuristic algorithm. Finally, we conduct a series of experiments to investigate the traits of JSP, and validate the efficiency and effectiveness of our proposed algorithms on both synthetic and real micro-blog data. Comment: VLDB201
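    The Jury Error Rate as defined here, the probability that strictly more than half of the jurors err given each juror's individual error rate, can be computed exactly with a Poisson-binomial dynamic program in quadratic time. A small illustrative sketch (our own code, not the paper's algorithms or bounding technique):

```python
def jury_error_rate(error_rates):
    """JER under majority voting: probability that strictly more than
    half of the jurors answer wrongly (odd jury sizes avoid ties).
    DP over the Poisson-binomial distribution of the #wrong votes."""
    dist = [1.0]  # dist[k] = P(exactly k wrong votes so far)
    for p in error_rates:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1 - p)  # this juror answers correctly
            new[k + 1] += q * p    # this juror answers wrongly
        dist = new
    half = len(error_rates) / 2
    return sum(q for k, q in enumerate(dist) if k > half)
```

Naive enumeration of all 2^n wrong/right outcomes is the "exponentially increasing calculation" the abstract refers to; the DP reduces it to O(n^2) additions.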

    Faster Algorithms for Bounded Knapsack and Bounded Subset Sum Via Fine-Grained Proximity Results

    We investigate pseudopolynomial-time algorithms for Bounded Knapsack and Bounded Subset Sum. Recent years have seen a growing interest in settling their fine-grained complexity with respect to various parameters. For Bounded Knapsack, the number of items n and the maximum item weight w_{\max} are two of the most natural parameters that have been studied extensively in the literature. The previous best running time in terms of n and w_{\max} is O(n + w_{\max}^3) [Polak, Rohwedder, Wegrzycki '21]. There is a conditional lower bound of O((n + w_{\max})^{2-o(1)}) based on the (\min,+)-convolution hypothesis [Cygan, Mucha, Wegrzycki, Wlodarczyk '17]. We narrow the gap significantly by proposing an \tilde{O}(n + w_{\max}^{12/5})-time algorithm. Note that in the regime where w_{\max} \approx n, our algorithm runs in \tilde{O}(n^{12/5}) time, while all the previous algorithms require \Omega(n^3) time in the worst case. For Bounded Subset Sum, we give two algorithms running in \tilde{O}(n w_{\max}) and \tilde{O}(n + w_{\max}^{3/2}) time, respectively. These results match the currently best running times for 0-1 Subset Sum. Prior to our work, the best running times (in terms of n and w_{\max}) for Bounded Subset Sum were \tilde{O}(n + w_{\max}^{5/3}) [Polak, Rohwedder, Wegrzycki '21] and \tilde{O}(n + \mu_{\max}^{1/2} w_{\max}^{3/2}) [implied by Bringmann '19 and Bringmann, Wellnitz '21], where \mu_{\max} refers to the maximum multiplicity of item weights.
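    In Bounded Knapsack each item additionally carries a multiplicity bounding how often it may be taken. The simplest correct approach, against which the fine-grained algorithms above compete, splits multiplicities binarily into 0-1 items and runs the standard DP; a sketch with our own function name (not the paper's algorithm):

```python
def bounded_knapsack(items, W):
    """Bounded knapsack baseline. items: (weight, profit, multiplicity)
    triples; W: weight budget. Binary splitting turns each item into
    O(log multiplicity) 0-1 items, then a standard DP solves the rest."""
    zero_one = []
    for w, p, m in items:
        k = 1
        while m > 0:
            take = min(k, m)          # chunks 1, 2, 4, ... cover 0..m copies
            zero_one.append((w * take, p * take))
            m -= take
            k <<= 1
    dp = [0] * (W + 1)                # dp[c] = best profit with weight <= c
    for w, p in zero_one:
        for c in range(W, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + p)
    return dp[W]
```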

    More on Change-Making and Related Problems


    0-1 Knapsack in Nearly Quadratic Time

    We study pseudo-polynomial time algorithms for the fundamental \emph{0-1 Knapsack} problem. Recent research interest has focused on its fine-grained complexity with respect to the number of items n and the \emph{maximum item weight} w_{\max}. Under the (\min,+)-convolution hypothesis, 0-1 Knapsack does not have O((n + w_{\max})^{2-\delta})-time algorithms (Cygan-Mucha-W\k{e}grzycki-W\l{}odarczyk 2017 and K\"{u}nnemann-Paturi-Schneider 2017). On the upper bound side, currently the fastest algorithm runs in \tilde O(n + w_{\max}^{12/5}) time (Chen, Lian, Mao, and Zhang 2023), improving the earlier O(n + w_{\max}^3)-time algorithm by Polak, Rohwedder, and W\k{e}grzycki (2021). In this paper, we close this gap between the upper bound and the conditional lower bound (up to subpolynomial factors): the 0-1 Knapsack problem has a deterministic algorithm in O(n + w_{\max}^{2} \log^4 w_{\max}) time. Our algorithm combines and extends several recent structural results and algorithmic techniques from the literature on knapsack-type problems: (1) We generalize the "fine-grained proximity" technique of Chen, Lian, Mao, and Zhang (2023), derived from the additive-combinatorial results of Bringmann and Wellnitz (2021) on dense subset sums. This allows us to bound the support size of the useful partial solutions in the dynamic program. (2) To exploit the small support size, our main technical component is a vast extension of the "witness propagation" method, originally designed by Deng, Mao, and Zhong (2023) for speeding up dynamic programming in the easier unbounded knapsack setting. To extend this approach to our 0-1 setting, we use a novel pruning method, as well as the two-level color-coding of Bringmann (2017) and the SMAWK algorithm on tall matrices. Comment: This paper supersedes an earlier manuscript arXiv:2307.09454 that contained weaker results. Content from the earlier manuscript is partly incorporated into this paper. The earlier manuscript is now obsolete.
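    The SMAWK-style structure exploited above has a simple special case worth making concrete: when the first argument f of a min-plus convolution is convex, the implicit matrix M[k][j] = f[k-j] + g[j] satisfies a Monge condition, so the leftmost row-minimum position is nondecreasing in k and a divide-and-conquer over rows finds all minima in near-linear time. The sketch below is our own illustrative code for this classical special case, not the paper's algorithm:

```python
def convex_min_plus(f, g):
    """Min-plus convolution h[k] = min_j f[k-j] + g[j], assuming f is
    convex (2*f[i] <= f[i-1] + f[i+1]). Then leftmost argmins are
    monotone in k, so divide-and-conquer over rows needs only
    O((n+m) log(n+m)) matrix-entry evaluations."""
    n, m = len(f), len(g)
    INF = float("inf")
    size = n + m - 1
    h = [INF] * size

    def M(k, j):  # entry of the implicit matrix, INF out of bounds
        i = k - j
        return f[i] + g[j] if 0 <= i < n else INF

    def solve(klo, khi, jlo, jhi):
        if klo > khi:
            return
        k = (klo + khi) // 2
        # leftmost minimum of row k within the allowed column window
        best_j = min(range(jlo, min(jhi, k) + 1), key=lambda j: M(k, j))
        h[k] = M(k, best_j)
        # monotonicity: rows above use columns <= best_j, rows below >=
        solve(klo, k - 1, jlo, best_j)
        solve(k + 1, khi, best_j, jhi)

    solve(0, size - 1, 0, m - 1)
    return h
```

Note that g need not be convex here; convexity of f alone yields the Monge condition on M.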