Improvement of the branch and bound algorithm for solving the knapsack linear integer problem
The paper presents a new reformulation approach to reduce the complexity of a branch and bound algorithm for solving the knapsack linear integer problem. The branch and bound algorithm in general relies on the usual strategy of first relaxing the integer problem into a linear programming (LP) model. If the LP optimal solution is integer, then the optimal solution to the integer problem is available. If the LP optimal solution is not integer, then a variable with a fractional value is selected to create two sub-problems, such that part of the feasible region is discarded without eliminating any of the feasible integer solutions. The process is repeated on variables with fractional values until an integer solution is found. In the proposed approach, variable-sum limits and additional constraints are generated and added to the original problem before solving. To do this, the objective bound of the knapsack problem is quickly determined; the bound is then used to generate a set of variable-sum limits and four additional constraints. From the variable-sum limits, initial sub-problems are constructed and solved. The optimal solution is then obtained as the best solution among all the sub-problems in terms of objective value. The proposed procedure results in sub-problems that have reduced complexity and are easier to solve than the original problem in terms of the number of branch and bound iterations or sub-problems. The knapsack problem is a special form of the general linear integer problem. There are many types of knapsack problems, including the zero-one, multiple, multiple-choice, bounded, unbounded, quadratic, multi-objective, multi-dimensional, collapsing zero-one and set union knapsack problems. The zero-one knapsack problem is one in which the variables take only the values 0 and 1, because an item is either chosen or not chosen: fractional amounts of an item are not possible.
This is the easiest class of knapsack problems; its LP relaxation can be solved in polynomial time by interior point algorithms, and the problem itself in pseudo-polynomial time by dynamic programming approaches. The multiple-choice knapsack problem is a generalization of the ordinary knapsack problem, where the set of items is partitioned into classes. The zero-one choice of taking an item is replaced by the selection of exactly one item out of each class.
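The LP-relaxation branching strategy described above can be made concrete for the 0/1 case. The following is a minimal illustrative sketch (function names are ours, and this is plain branch and bound, not the paper's reformulation procedure): the LP optimum of a knapsack relaxation is the greedy fractional solution obtained by sorting items by value/weight ratio, and that bound is used to prune branches.

```python
def lp_bound(items, capacity):
    """LP-relaxation value of a knapsack: greedily fill by value/weight
    ratio, taking a fractional amount of the first item that does not fit.
    Assumes `items` is already sorted by decreasing value/weight ratio."""
    value, room = 0.0, capacity
    for v, w in items:
        if w <= room:
            value, room = value + v, room - w
        else:
            value += v * room / w  # fractional variable of the LP optimum
            break
    return value

def branch_and_bound(values, weights, capacity):
    """0/1 knapsack by depth-first branch and bound with the LP bound."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    items = [(values[i], weights[i]) for i in order]
    best = 0

    def rec(i, value, room):
        nonlocal best
        if value > best:
            best = value
        if i == len(items):
            return
        if value + lp_bound(items[i:], room) <= best:
            return                            # prune: bound cannot beat incumbent
        v, w = items[i]
        if w <= room:
            rec(i + 1, value + v, room - w)   # branch: take item i
        rec(i + 1, value, room)               # branch: skip item i

    rec(0, 0, capacity)
    return best

print(branch_and_bound([60, 100, 120], [10, 20, 30], 50))  # 220
```

On this small instance the LP bound prunes most of the 2^3 subsets; the reformulation in the paper aims to shrink the number of such branch and bound iterations further.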
Sparse grid quadrature on products of spheres
We examine sparse grid quadrature on weighted tensor products (WTP) of
reproducing kernel Hilbert spaces on products of the unit sphere, in the case
of worst case quadrature error for rules with arbitrary quadrature weights. We
describe a dimension adaptive quadrature algorithm based on an algorithm of
Hegland (2003), and also formulate a version of Wasilkowski and Wozniakowski's
WTP algorithm (1999), here called the WW algorithm. We prove that the dimension
adaptive algorithm is optimal in the sense of Dantzig (1957) and therefore no
greater in cost than the WW algorithm. Both algorithms therefore have the
optimal asymptotic rate of convergence given by Theorem 3 of Wasilkowski and
Wozniakowski (1999). A numerical example shows that, even though the asymptotic
convergence rate is optimal, if the dimension weights decay slowly enough, and
the dimensionality of the problem is large enough, the initial convergence of
the dimension adaptive algorithm can be slow.
Comment: 34 pages, 6 figures. Accepted 7 January 2015 for publication in Numerical Algorithms. Revised at page proof stage to (1) update email address; (2) correct the accent on "Wozniakowski" on p. 7; (3) update reference 2; (4) correct references 3, 18 and 2
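The surpluses summed by such sparse-grid constructions can be illustrated in a much simpler setting than the paper's weighted tensor products of spheres. The sketch below (our own, for a separable integrand on the unit cube with nested trapezoid rules, not the WTP or WW algorithm) sums tensor products of one-dimensional hierarchical surpluses over the simplex of multi-indices — the classical Smolyak construction that dimension-adaptive methods refine greedily.

```python
from itertools import product as iproduct
from math import prod

def trap(g, level):
    """Composite trapezoid rule with 2**level + 1 points on [0, 1]."""
    n = 2 ** level
    h = 1.0 / n
    return h * (0.5 * (g(0.0) + g(1.0)) + sum(g(i * h) for i in range(1, n)))

def surplus(g, level):
    """Hierarchical surplus Delta_l = Q_l - Q_{l-1}, with Q_{-1} = 0."""
    return trap(g, 0) if level == 0 else trap(g, level) - trap(g, level - 1)

def smolyak_separable(gs, max_level):
    """Sparse-grid quadrature of a separable f(x) = prod_j g_j(x_j) on
    [0,1]^d: sum tensor products of 1D surpluses over the simplex
    |k|_1 <= max_level.  For separable f each d-dimensional surplus
    factorises into a product of 1D surpluses, so this stays cheap."""
    d = len(gs)
    deltas = [[surplus(g, l) for l in range(max_level + 1)] for g in gs]
    total = 0.0
    for k in iproduct(range(max_level + 1), repeat=d):
        if sum(k) <= max_level:
            total += prod(deltas[j][k[j]] for j in range(d))
    return total

# f(x) = prod_j (1 + x_j**2) on [0,1]^3; exact value is (4/3)**3.
approx = smolyak_separable([lambda x: 1.0 + x * x] * 3, 6)
print(approx)
```

A dimension-adaptive variant, in the spirit of the Hegland (2003) algorithm the paper builds on, would not fix the simplex in advance but grow the index set greedily, preferring dimensions with large weights.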
MaxHedge: Maximising a Maximum Online
We introduce a new online learning framework where, at each trial, the
learner is required to select a subset of actions from a given known action
set. Each action is associated with an energy value, a reward and a cost. The
sum of the energies of the actions selected cannot exceed a given energy
budget. The goal is to maximise the cumulative profit, where the profit
obtained on a single trial is defined as the difference between the maximum
reward among the selected actions and the sum of their costs. Action energy
values and the budget are known and fixed. All rewards and costs associated
with each action change over time and are revealed at each trial only after the
learner's selection of actions. Our framework encompasses several online
learning problems where the environment changes over time; and the solution
trades-off between minimising the costs and maximising the maximum reward of
the selected subset of actions, while being constrained to an action energy
budget. The algorithm that we propose is efficient and general in that it may
be specialised to multiple natural online combinatorial problems.
Comment: Published in AISTATS 201
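The two quantities that define a trial in this framework are easy to state in code. The sketch below (names are our own, not from the paper) checks feasibility of a selected action subset under the energy budget and computes the single-trial profit: the maximum reward among selected actions minus the sum of their costs.

```python
def feasible(selected, energy, budget):
    """The summed energies of the selected actions may not exceed the budget."""
    return sum(energy[a] for a in selected) <= budget

def trial_profit(selected, reward, cost):
    """Profit of one trial: (max reward among selected) - (sum of their costs).
    An empty selection earns zero profit."""
    if not selected:
        return 0.0
    return max(reward[a] for a in selected) - sum(cost[a] for a in selected)

# Three actions; energies and the budget are fixed, rewards/costs vary per trial.
energy, budget = [2, 1, 3], 4
reward, cost = [5.0, 3.0, 6.0], [1.0, 0.5, 2.0]
S = [0, 1]
print(feasible(S, energy, budget), trial_profit(S, reward, cost))  # True 3.5
```

Note the non-additive objective: adding action 1 to {0} raises cost by 0.5 without raising the maximum reward, which is exactly the trade-off the algorithm must learn online.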
Proximity results and faster algorithms for Integer Programming using the Steinitz Lemma
We consider integer programming problems in standard form $\max\{c^Tx : Ax = b,\ x \geq 0,\ x \in \mathbb{Z}^n\}$ where $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^m$ and $c \in \mathbb{Z}^n$. We show that such an integer program can be solved in time $(m \cdot \Delta)^{O(m)} \cdot \|b\|_\infty^2$, where $\Delta$ is an upper bound on each absolute value of an entry in $A$. This improves upon the longstanding best bound of Papadimitriou (1981) of $(m \cdot \Delta)^{O(m^2)}$, where in addition, the absolute values of the entries of $b$ also need to be bounded by $\Delta$. Our result relies on a lemma of Steinitz that states that a set of vectors in $\mathbb{R}^m$ that is contained in the unit ball of a norm and that sum up to zero can be ordered such that all partial sums are of norm bounded by $m$. We also use the Steinitz lemma to show that the $\ell_1$-distance of an optimal integer and fractional solution, also under the presence of upper bounds on the variables, is bounded by $m \cdot (2m\Delta + 1)^m$. Here $\Delta$ is again an upper bound on the absolute values of the entries of $A$. The novel strength of our bound is that it is independent of $n$. We provide evidence for the significance of our bound by applying it to general knapsack problems where we obtain structural and algorithmic results that improve upon the recent literature.
Comment: We achieve much milder dependence of the running time on the largest entry in $b$
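The reordering statement of the Steinitz lemma can be illustrated with a simple greedy heuristic (our own sketch, not the lemma's constructive proof, which guarantees partial sums of norm at most $m$): repeatedly append the remaining vector that keeps the infinity-norm of the running partial sum smallest.

```python
def greedy_steinitz_order(vectors):
    """Greedily order zero-sum vectors so running partial sums stay small
    in the infinity-norm.  Illustrative heuristic only: it carries no
    worst-case guarantee, unlike the Steinitz lemma's bound of m."""
    remaining = list(vectors)
    order, s = [], [0.0] * len(vectors[0])
    while remaining:
        nxt = min(remaining,
                  key=lambda v: max(abs(si + vi) for si, vi in zip(s, v)))
        remaining.remove(nxt)
        s = [si + vi for si, vi in zip(s, nxt)]
        order.append(nxt)
    return order

# Six vectors in the unit ball of the infinity-norm that sum to zero.
vs = [(1, 0), (-1, 0), (0, 1), (0, -1), (0.5, 0.5), (-0.5, -0.5)]
ordered = greedy_steinitz_order(vs)

# Track the infinity-norm of every partial sum along the ordering.
norms, s = [], [0.0, 0.0]
for v in ordered:
    s = [si + vi for si, vi in zip(s, v)]
    norms.append(max(abs(x) for x in s))
print(max(norms))  # 1.0 on this instance, well within the lemma's bound m = 2
```

In the paper this reordering is what controls the intermediate right-hand sides arising in the dynamic program over $b$.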
Dependent randomized rounding for clustering and partition systems with knapsack constraints
Clustering problems are fundamental to unsupervised learning. There is an
increased emphasis on fairness in machine learning and AI; one representative
notion of fairness is that no single demographic group should be
over-represented among the cluster-centers. This, and much more general
clustering problems, can be formulated with "knapsack" and "partition"
constraints. We develop new randomized algorithms targeting such problems, and
study two in particular: multi-knapsack median and multi-knapsack center. Our
rounding algorithms give new approximation and pseudo-approximation algorithms
for these problems. One key technical tool, which may be of independent
interest, is a new tail bound analogous to Feige (2006) for sums of random
variables with unbounded variances. Such bounds are very useful in inferring
properties of large networks using few samples.
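A generic building block behind such rounding schemes is pair-wise dependent rounding in the spirit of Srinivasan-style methods (the sketch below is a textbook illustration, not the paper's algorithm): two fractional coordinates are moved in opposite directions, with probabilities chosen so each coordinate's expectation is unchanged, until at most one fractional coordinate remains. Unlike independent rounding, the coordinate sum is preserved exactly, which is what makes hard "partition"-type constraints tractable.

```python
import random

def dependent_round(x, rng=None, eps=1e-12):
    """Round a fractional vector x in [0,1]^n to {0,1}^n so that
    (a) sum(x) is preserved and (b) E[x_i] equals the input x_i.
    Repeatedly pick two fractional coordinates and shift mass between
    them until one of the two becomes integral."""
    rng = rng or random.Random(0)
    x = list(x)
    frac = lambda: [i for i, v in enumerate(x) if eps < v < 1 - eps]
    f = frac()
    while len(f) >= 2:
        i, j = f[0], f[1]
        a = min(1 - x[i], x[j])          # feasible move: +a to x[i], -a from x[j]
        b = min(x[i], 1 - x[j])          # feasible move: -b from x[i], +b to x[j]
        if rng.random() < b / (a + b):   # probabilities keep E[x_i], E[x_j] fixed
            x[i] += a; x[j] -= a
        else:
            x[i] -= b; x[j] += b
        f = frac()
    return [round(v) if min(v, 1 - v) < eps else v for v in x]

x = [0.2, 0.5, 0.3, 0.7, 0.3]            # fractional LP solution, sum = 2.0
y = dependent_round(x)
print(y, sum(y))                          # an integral 0/1 vector, sum still 2
```

The expectation claim follows from $a \cdot \frac{b}{a+b} - b \cdot \frac{a}{a+b} = 0$; the paper's contribution lies in carrying such guarantees through knapsack and partition constraints simultaneously.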
Hybrid Rounding Techniques for Knapsack Problems
We address the classical knapsack problem and a variant in which an upper
bound is imposed on the number of items that can be selected. We show that
appropriate combinations of rounding techniques yield novel and powerful ways
of rounding. As an application of these techniques, we present a linear-storage
Polynomial Time Approximation Scheme (PTAS) and a Fully Polynomial Time
Approximation Scheme (FPTAS) that compute an approximate solution, of any fixed
accuracy, in linear time. This linear complexity bound gives a substantial
improvement of the best previously known polynomial bounds.
Comment: 19 LaTeX pages
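For contrast with the hybrid techniques above, the classical profit-scaling route to a knapsack FPTAS is short enough to sketch (a textbook construction, not the paper's rounding scheme; names are ours): profits are rounded down to multiples of $K = \varepsilon p_{\max} / n$, and a dynamic program over total scaled profit then runs in time polynomial in $n$ and $1/\varepsilon$, returning a solution of profit at least $(1 - \varepsilon)$ times the optimum.

```python
def knapsack_fptas(profits, weights, capacity, eps):
    """Profit-scaling FPTAS for 0/1 knapsack.  Rounding each profit down
    to a multiple of K = eps * max(profits) / n loses at most eps * OPT
    in total, and shrinks the DP state space to O(n^2 / eps) profits."""
    n = len(profits)
    K = eps * max(profits) / n
    scaled = [int(p // K) for p in profits]
    # dp maps an achievable scaled profit -> (min weight, chosen item indices)
    dp = {0: (0, ())}
    for i, (sp, w) in enumerate(zip(scaled, weights)):
        for q, (wt, items) in list(dp.items()):   # snapshot: each item used once
            nq, nw = q + sp, wt + w
            if nw <= capacity and (nq not in dp or nw < dp[nq][0]):
                dp[nq] = (nw, items + (i,))
    chosen = dp[max(dp)][1]                        # largest scaled profit reached
    return sum(profits[i] for i in chosen), list(chosen)

print(knapsack_fptas([60, 100, 120], [10, 20, 30], 50, 0.1))  # (220, [1, 2])
```

The dictionary-based DP trades the linear-storage guarantee of the paper's schemes for brevity; the point here is only the rounding step that both the PTAS and FPTAS build on.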