Dependent randomized rounding for clustering and partition systems with knapsack constraints
Clustering problems are fundamental to unsupervised learning. There is an
increased emphasis on fairness in machine learning and AI; one representative
notion of fairness is that no single demographic group should be
over-represented among the cluster-centers. This, and much more general
clustering problems, can be formulated with "knapsack" and "partition"
constraints. We develop new randomized algorithms targeting such problems, and
study two in particular: multi-knapsack median and multi-knapsack center. Our
rounding algorithms give new approximation and pseudo-approximation algorithms
for these problems. One key technical tool, which may be of independent
interest, is a new tail bound analogous to Feige (2006) for sums of random
variables with unbounded variances. Such bounds are very useful in inferring
properties of large networks using few samples.
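The dependent-rounding idea behind such algorithms can be illustrated in miniature. The sketch below is not the paper's multi-knapsack algorithm; it is a generic pairwise dependent-rounding routine (assumed names `dependent_round`, `rng`) that rounds fractional values in [0,1] to {0,1} while preserving each marginal in expectation and keeping the total sum fixed, which is the kind of guarantee these rounding schemes build on.

```python
import random

def dependent_round(x, rng=None):
    """Pairwise dependent rounding sketch: round fractional values in [0,1]
    to {0,1} so that each E[x[i]] is unchanged and sum(x) is preserved
    exactly at every step (a generic illustration, not the paper's method)."""
    rng = rng or random.Random(0)
    x = list(x)
    frac = [i for i, v in enumerate(x) if 0 < v < 1]
    while len(frac) >= 2:
        i, j = frac[0], frac[1]
        # Shift mass between x[i] and x[j]; in either branch one of the two
        # becomes integral, x[i] + x[j] is unchanged, and the branch
        # probabilities make the expected change to each coordinate zero.
        a = min(1 - x[i], x[j])   # raise x[i] by a, lower x[j] by a
        b = min(x[i], 1 - x[j])   # lower x[i] by b, raise x[j] by b
        if rng.random() < b / (a + b):
            x[i] += a; x[j] -= a
        else:
            x[i] -= b; x[j] += b
        frac = [k for k in frac if 0 < x[k] < 1]
    return x
```

With an integral total (e.g. `[0.5, 0.25, 0.75, 0.5]`, summing to 2), every coordinate ends at 0 or 1 and the sum is preserved.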
Zero-one IP problems: Polyhedral descriptions & cutting plane procedures
A systematic way to tighten an IP formulation is to employ classes of linear inequalities that define facets of the convex hull of the feasible integer points of the respective problem. Describing and identifying these inequalities improves the efficiency of LP-based cutting plane methods. In this report, we review classes of inequalities that partially describe zero-one polytopes such as the 0-1 knapsack polytope, the set packing polytope and the travelling salesman polytope. Facets or valid inequalities derived from the 0-1 knapsack and the set packing polytopes are algorithmically identified.
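The best-known inequalities of this kind for the 0-1 knapsack polytope are cover inequalities: if a set C of items has total weight exceeding the capacity, not all of C can be chosen, so sum of x_i over C is at most |C| - 1. As an illustrative sketch (the function name `violated_cover` and the brute-force search are my own, not from the report), separation over a small instance can be done by enumeration:

```python
from itertools import combinations

def violated_cover(a, b, xstar):
    """Brute-force separation of cover inequalities for the 0-1 knapsack
    polytope {x binary : sum a[i]*x[i] <= b}. A cover C (total weight > b)
    yields the valid inequality sum_{i in C} x[i] <= |C| - 1; it cuts off
    xstar when sum_{i in C} xstar[i] > |C| - 1. Returns the most violated
    cover and its violation, or None."""
    n = len(a)
    best = None
    for r in range(1, n + 1):
        for C in combinations(range(n), r):
            if sum(a[i] for i in C) > b:  # C is a cover
                viol = sum(xstar[i] for i in C) - (len(C) - 1)
                if viol > 1e-9 and (best is None or viol > best[1]):
                    best = (C, viol)
    return best
```

For weights `a = [5, 4, 3]`, capacity `b = 8`, and fractional point `xstar = [1.0, 0.9, 0.9]`, the cover {0, 1} gives the violated cut x_0 + x_1 <= 1. Real cutting-plane codes solve this separation problem as a knapsack subproblem rather than by enumeration.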
A Lagrangian relaxation approach to the edge-weighted clique problem
The clique polytope considered here is the convex hull of the node and edge incidence vectors of all subcliques of bounded size of a complete graph. Including the Boolean quadric polytope as a special case and being closely related to the quadratic knapsack polytope, it has received considerable attention in the literature. In particular, the max-cut problem is equivalent to optimizing a linear function over this polytope. The problem of optimizing linear functions over it has so far been approached via heuristic combinatorial algorithms and cutting-plane methods. We study its structure in further detail and present a new computational approach to the linear optimization problem based on Lucena's suggestion of integrating cutting planes into a Lagrangian relaxation of an integer programming problem. In particular, we show that the separation problem for tree inequalities becomes polynomial in our Lagrangian framework. Finally, computational results are presented.
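The Lagrangian-relaxation mechanism underlying such approaches can be shown on a toy problem. The sketch below (my own illustration on a 0-1 knapsack, not the clique formulation of the paper) relaxes the complicating constraint a.x <= b into the objective with a multiplier lam >= 0; each evaluation of the dual function then decomposes into independent per-variable decisions, and subgradient steps tighten the resulting upper bound.

```python
def lagrangian_bound(c, a, b, iters=100, step0=1.0):
    """Subgradient sketch of Lagrangian relaxation for
    max c.x subject to a.x <= b, x binary.
    Relaxing the constraint gives L(lam) = sum_i max(0, c[i] - lam*a[i])
    + lam*b, an upper bound on the optimum for every lam >= 0; we
    minimize L over lam with diminishing-step subgradient moves."""
    lam, best = 0.0, float("inf")
    for t in range(1, iters + 1):
        # Inner maximization decomposes: take item i iff its reduced
        # profit c[i] - lam*a[i] is positive.
        x = [1 if ci - lam * ai > 0 else 0 for ci, ai in zip(c, a)]
        L = sum((ci - lam * ai) * xi for ci, ai, xi in zip(c, a, x)) + lam * b
        best = min(best, L)
        g = sum(ai * xi for ai, xi in zip(a, x)) - b  # dL/dlam = b - a.x
        lam = max(0.0, lam + (step0 / t) * g)         # descend on L
    return best
```

For `c = [6, 5, 4]`, `a = [3, 2, 3]`, `b = 4`, the integer optimum is 6, and the dual bound converges to 9 (the LP bound, since the relaxed subproblem has the integrality property). The paper's point is that cutting planes, such as the tree inequalities, can be folded into this relaxation to strengthen the bound.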
Sparse grid quadrature on products of spheres
We examine sparse grid quadrature on weighted tensor products (WTP) of
reproducing kernel Hilbert spaces on products of the unit sphere, in the case
of worst case quadrature error for rules with arbitrary quadrature weights. We
describe a dimension adaptive quadrature algorithm based on an algorithm of
Hegland (2003), and also formulate a version of Wasilkowski and Wozniakowski's
WTP algorithm (1999), here called the WW algorithm. We prove that the dimension
adaptive algorithm is optimal in the sense of Dantzig (1957) and therefore no
greater in cost than the WW algorithm. Both algorithms therefore have the
optimal asymptotic rate of convergence given by Theorem 3 of Wasilkowski and
Wozniakowski (1999). A numerical example shows that, even though the asymptotic
convergence rate is optimal, if the dimension weights decay slowly enough, and
the dimensionality of the problem is large enough, the initial convergence of
the dimension adaptive algorithm can be slow.
Comment: 34 pages, 6 figures. Accepted 7 January 2015 for publication in Numerical Algorithms. Revised at page proof stage to (1) update email address; (2) correct the accent on "Wozniakowski" on p. 7; (3) update reference 2; (4) correct references 3, 18 and 2.
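Optimality "in the sense of Dantzig (1957)" refers to the greedy knapsack principle: spend a cost budget on the candidates with the best benefit-to-cost ratio first. The sketch below is a generic illustration of that principle (the names `greedy_knapsack_order`, `gain`, and `cost` are mine); a dimension-adaptive quadrature rule applies the same idea when deciding which tensor-product level to refine next, ranking candidate increments by error reduction per function evaluation.

```python
import heapq

def greedy_knapsack_order(items, budget):
    """Dantzig-style greedy selection: given (name, gain, cost) triples,
    pick items in decreasing gain/cost order, skipping any that would
    exceed the cost budget. Returns the chosen names and total cost."""
    heap = [(-gain / cost, cost, name) for name, gain, cost in items]
    heapq.heapify(heap)  # min-heap on negated ratio = max-ratio first
    chosen, spent = [], 0
    while heap:
        _, cost, name = heapq.heappop(heap)
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent
```

For example, with candidates `[("a", 10, 2), ("b", 9, 3), ("c", 4, 4)]` and budget 5, the greedy order takes "a" (ratio 5) and "b" (ratio 3) and skips "c".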