
    Provably Good Solutions to the Knapsack Problem via Neural Networks of Bounded Size

    The development of a satisfying and rigorous mathematical understanding of the performance of neural networks is a major challenge in artificial intelligence. Against this background, we study the expressive power of neural networks through the example of the classical NP-hard Knapsack Problem. Our main contribution is a class of recurrent neural networks (RNNs) with rectified linear units that are iteratively applied to each item of a Knapsack instance and thereby compute optimal or provably good solution values. We show that an RNN of depth four and width depending quadratically on the profit of an optimum Knapsack solution is sufficient to find optimum Knapsack solutions. We also prove the following tradeoff between the size of an RNN and the quality of the computed Knapsack solution: for Knapsack instances consisting of $n$ items, an RNN of depth five and width $w$ computes a solution of value at least $1-\mathcal{O}(n^2/\sqrt{w})$ times the optimum solution value. Our results build upon a classical dynamic programming formulation of the Knapsack Problem as well as a careful rounding of profit values that is also at the core of the well-known fully polynomial-time approximation scheme for the Knapsack Problem. A carefully conducted computational study qualitatively supports our theoretical size bounds. Finally, we point out that our results can be generalized to many other combinatorial optimization problems that admit dynamic programming solution methods, such as various Shortest Path Problems, the Longest Common Subsequence Problem, and the Traveling Salesperson Problem.
    Comment: A short version of this paper appears in the proceedings of AAAI 202
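
    The abstract builds on the profit-indexed dynamic program for Knapsack and the profit rounding used by the classical FPTAS. The following is a minimal sketch of those two ingredients only, not of the RNN construction itself; the function names and the eps parameter are illustrative.

```python
def knapsack_by_profit(profits, weights, capacity):
    """Profit-indexed 0/1 Knapsack DP: dp[p] = minimum weight of a subset with profit exactly p."""
    total_profit = sum(profits)
    INF = float("inf")
    dp = [0] + [INF] * total_profit
    for profit, weight in zip(profits, weights):
        for p in range(total_profit, profit - 1, -1):
            if dp[p - profit] + weight < dp[p]:
                dp[p] = dp[p - profit] + weight
    # Largest profit whose minimum achieving weight still fits into the knapsack.
    return max(p for p in range(total_profit + 1) if dp[p] <= capacity)


def knapsack_fptas(profits, weights, capacity, eps=0.1):
    """FPTAS-style profit rounding: scale profits so the DP table has polynomial size.
    A complete FPTAS would also recover the chosen item set and report its true profit."""
    n = len(profits)
    scale = eps * max(profits) / n                   # rounding granularity
    scaled = [max(1, int(p / scale)) for p in profits]
    return knapsack_by_profit(scaled, weights, capacity) * scale


if __name__ == "__main__":
    profits, weights = [60, 100, 120], [10, 20, 30]
    print(knapsack_by_profit(profits, weights, 50))          # 220
    print(round(knapsack_fptas(profits, weights, 50), 1))    # approximately 220
```

    The rounding step is what keeps the profit range, and hence the DP table, polynomially bounded, mirroring the width/quality tradeoff described in the abstract.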

    Packing groups of items into multiple knapsacks

    We consider a natural generalization of the classical multiple knapsack problem in which, instead of packing single items, we pack groups of items. In this problem, we have multiple knapsacks and a set of items partitioned into groups. Each item has an individual weight, while the profit is associated with groups rather than items. The profit of a group can be attained if and only if every item of this group is packed. Such a general model finds applications in various practical problems, e.g., delivering bundles of goods. The tractability of this problem relies heavily on how large a group can be. Deciding whether a group of items of total weight 2 can be packed into two knapsacks of unit capacity is already NP-hard, which rules out a constant-factor approximation algorithm for this problem in general. We therefore focus on the parameterized version in which the total weight of the items in each group is bounded by a factor $\delta$ of the total capacity of all knapsacks. Both approximation and inapproximability results with respect to $\delta$ are derived. We also show that, depending on whether the number of knapsacks is a constant or part of the input, the approximation ratio for the problem, as a function of $\delta$, changes substantially, marking a clear difference from the classical multiple knapsack problem.
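
    To make the single-group feasibility question concrete, here is a brute-force check of whether one group's items can be split across the knapsacks; it is exponential in the group size, consistent with the NP-hardness statement above. The weights and capacities in the example are illustrative.

```python
from itertools import product


def group_fits(item_weights, capacities):
    """Brute-force feasibility check: try every assignment of the group's items
    to knapsacks and accept if some assignment respects all capacities."""
    m = len(capacities)
    for assignment in product(range(m), repeat=len(item_weights)):
        loads = [0.0] * m
        for item, knap in zip(item_weights, assignment):
            loads[knap] += item
        if all(load <= cap for load, cap in zip(loads, capacities)):
            return True
    return False


if __name__ == "__main__":
    # Two unit-capacity knapsacks and one group of total weight 2 (the hard case above).
    print(group_fits([0.7, 0.7, 0.6], [1.0, 1.0]))        # False: no split fits
    print(group_fits([0.5, 0.5, 0.5, 0.5], [1.0, 1.0]))   # True
```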

    A parameterized approximation scheme for the 2D-Knapsack problem with wide items

    We study a natural geometric variant of the classic Knapsack problem called 2D-Knapsack: we are given a set of axis-parallel rectangles and a rectangular bounding box, and the goal is to pack as many of these rectangles inside the box as possible without overlap. Naturally, this problem is NP-complete. Recently, Grandoni et al. [ESA'19] showed that it is also W[1]-hard when parameterized by the size $k$ of the sought packing, and they presented a parameterized approximation scheme (PAS) for the variant where we are allowed to rotate the rectangles by $90^\circ$ before packing them into the box. Obtaining a PAS for the original 2D-Knapsack problem, without rotation, appears to be a challenging open question. In this work, we make progress towards this goal by showing a PAS under the following assumptions: (i) both the box and all input rectangles have integral, polynomially bounded side lengths; (ii) every input rectangle is wide, i.e., its width is greater than its height; and (iii) the aspect ratio of the box is bounded by a constant. Our approximation scheme relies on a mix of parameterized and approximation techniques, including color coding, rounding, and searching for a structured near-optimum packing using dynamic programming.
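
    Of the techniques listed, color coding is the easiest to isolate: each rectangle receives one of k random colors, and a fixed optimum packing of k rectangles becomes rainbow-colored with probability at least k!/k^k >= e^-k, so on the order of e^k * ln(1/delta) independent colorings suffice. The sketch below shows only this ingredient, not the rounding or the dynamic program; the helper names and the failure_prob parameter are illustrative.

```python
import math
import random


def colorful_trials(k, failure_prob=0.01):
    """Number of independent random k-colorings needed before a fixed set of k
    target rectangles is rainbow-colored in at least one trial, except with
    probability at most failure_prob."""
    single_success = math.factorial(k) / k ** k      # >= e^-k
    return math.ceil(math.log(failure_prob) / math.log(1.0 - single_success))


def random_coloring(num_rectangles, k, rng=random):
    """Assign each rectangle one of k colors uniformly at random."""
    return [rng.randrange(k) for _ in range(num_rectangles)]


if __name__ == "__main__":
    k = 6
    print(colorful_trials(k))           # on the order of e^k * ln(1/0.01)
    print(random_coloring(20, k))
```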

    Budgeted Matroid Maximization: a Parameterized Viewpoint

    We study budgeted variants of well-known maximization problems with multiple matroid constraints. Given an $\ell$-matchoid $\mathcal{M}$ on a ground set $E$, a profit function $p: E \rightarrow \mathbb{R}_{\geq 0}$, a cost function $c: E \rightarrow \mathbb{R}_{\geq 0}$, and a budget $B \in \mathbb{R}_{\geq 0}$, the goal is to find in the $\ell$-matchoid a feasible set $S$ of maximum profit $p(S)$ subject to the budget constraint, i.e., $c(S) \leq B$. The budgeted $\ell$-matchoid (BM) problem includes as special cases budgeted $\ell$-dimensional matching and budgeted $\ell$-matroid intersection. A strong motivation for studying BM from a parameterized viewpoint comes from the APX-hardness of unbudgeted $\ell$-dimensional matching (i.e., $B = \infty$) already for $\ell = 3$. Nevertheless, while there are known FPT algorithms for the unbudgeted variants of the above problems, the budgeted variants are studied here for the first time through the lens of parameterized complexity. We show that BM parameterized by solution size is $W[1]$-hard, already with a degenerate single matroid constraint. Thus, an exact parameterized algorithm is unlikely to exist, motivating the study of FPT-approximation schemes (FPAS). Our main result is an FPAS for BM (implying an FPAS for budgeted $\ell$-dimensional matching and budgeted $\ell$-matroid intersection), relying on the notion of a representative set -- a small-cardinality subset of elements which preserves the optimum up to a small factor. We also give a lower bound on the minimum possible size of a representative set that can be computed in polynomial time.
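
    The representative-set idea can be illustrated in the simplest special case of a single cardinality constraint: bucket elements by (1+eps)-rounded profit and keep only the cheapest few per bucket, so any small feasible solution can be rebuilt from the kept elements at no extra cost and with nearly the same profit. This is an illustrative sketch under that simplification, not the construction used in the paper.

```python
import math
from collections import defaultdict


def representative_set(elements, eps, solution_size):
    """Keep, for each (1+eps)-scaled profit bucket, the solution_size cheapest
    elements (elements are (profit, cost) pairs).  In the cardinality-constraint
    case, any solution with at most solution_size elements can be rebuilt from the
    kept elements with no larger cost and profit reduced by at most a (1+eps) factor."""
    buckets = defaultdict(list)
    for profit, cost in elements:
        key = 0 if profit <= 1 else int(math.log(profit, 1 + eps))
        buckets[key].append((profit, cost))
    kept = []
    for bucket in buckets.values():
        bucket.sort(key=lambda e: e[1])          # cheapest elements first
        kept.extend(bucket[:solution_size])
    return kept


if __name__ == "__main__":
    import random
    random.seed(0)
    elements = [(random.randint(1, 1000), random.uniform(0.0, 1.0)) for _ in range(5000)]
    kept = representative_set(elements, eps=0.1, solution_size=10)
    print(len(elements), "->", len(kept))        # far fewer elements remain
```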
    • …