
    Approximating Bin Packing within O(log OPT * log log OPT) bins

    For bin packing, the input consists of n items with sizes s_1, ..., s_n in [0,1] which have to be assigned to a minimum number of bins of size 1. The seminal Karmarkar-Karp algorithm from 1982 produces a solution with at most OPT + O(log^2 OPT) bins. We provide the first improvement in three decades and show that one can find a solution of cost OPT + O(log OPT * log log OPT) in polynomial time. This is achieved by rounding a fractional solution to the Gilmore-Gomory LP relaxation using the Entropy Method from discrepancy theory. The result is constructive via algorithms of Bansal and Lovett-Meka.
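    To make the object being rounded concrete: the Gilmore-Gomory (configuration) LP has one variable per feasible packing pattern and asks to fractionally cover every item with as few patterns as possible. The sketch below is ours, not the paper's; it assumes scipy is available, and the brute-force pattern enumeration is for illustration only (practical solvers generate patterns by column generation).

    ```python
    # Minimal sketch of the Gilmore-Gomory configuration LP for bin packing.
    # Enumerating all patterns is exponential; illustration only.
    from itertools import combinations
    from scipy.optimize import linprog

    def gilmore_gomory_lp(sizes):
        n = len(sizes)
        # a pattern is any subset of items fitting into one unit-capacity bin
        patterns = [set(p) for r in range(1, n + 1)
                    for p in combinations(range(n), r)
                    if sum(sizes[i] for i in p) <= 1.0]
        c = [1.0] * len(patterns)            # minimise the number of bins used
        # cover each item at least once: sum over patterns containing i >= 1
        A_ub = [[-1.0 if i in p else 0.0 for p in patterns] for i in range(n)]
        b_ub = [-1.0] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
        return res.fun

    print(gilmore_gomory_lp([0.6, 0.6, 0.4, 0.4]))   # -> 2.0
    ```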

    Adjustable Robust Reinforcement Learning for Online 3D Bin Packing

    Designing effective policies for the online 3D bin packing problem (3D-BPP) has been a long-standing challenge, primarily due to the unpredictable nature of incoming box sequences and stringent physical constraints. While current deep reinforcement learning (DRL) methods for online 3D-BPP have shown promising results in optimizing average performance over an underlying box sequence distribution, they often fail in real-world settings where some worst-case scenarios can materialize. Standard robust DRL algorithms tend to overly prioritize optimizing the worst-case performance at the expense of performance under the normal problem instance distribution. To address these issues, we first introduce a permutation-based attacker to investigate the practical robustness of both DRL-based and heuristic methods proposed for solving online 3D-BPP. Then, we propose an adjustable robust reinforcement learning (AR2L) framework that allows efficient adjustment of robustness weights to achieve the desired balance of the policy's performance in average and worst-case environments. Specifically, we formulate the objective function as a weighted sum of expected and worst-case returns, and derive a lower performance bound by relating it to the return under a mixture dynamics. To realize this lower bound, we adopt an iterative procedure that searches for the associated mixture dynamics and improves the corresponding policy. We integrate this procedure into two popular robust adversarial algorithms to develop the exact and approximate AR2L algorithms. Experiments demonstrate that AR2L is versatile in the sense that it improves policy robustness while maintaining an acceptable level of performance for the nominal case. Comment: Accepted to NeurIPS202
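    The trade-off at the heart of AR2L can be stated in a few lines. The sketch below is ours, not the authors' code; the environments are reduced to hypothetical callables that return one episode's return (e.g. space utilisation of the packed bin), and alpha stands in for the paper's robustness weight.

    ```python
    import random

    # Hypothetical stand-ins: an "environment" is a callable mapping a policy
    # to the return of one episode (e.g. space utilisation of the packed bin).
    def ar2l_objective(policy, nominal_env, adversarial_env, alpha, n_rollouts=64):
        avg_return = sum(nominal_env(policy) for _ in range(n_rollouts)) / n_rollouts
        worst_return = min(adversarial_env(policy) for _ in range(n_rollouts))
        # alpha = 1 recovers pure average-case training, alpha = 0 pure robustness
        return alpha * avg_return + (1 - alpha) * worst_return

    # Toy usage: a well-behaved nominal return vs. a pessimistic adversarial one.
    nominal = lambda pi: random.gauss(0.80, 0.05)
    adversarial = lambda pi: random.gauss(0.50, 0.10)
    print(ar2l_objective(None, nominal, adversarial, alpha=0.7))
    ```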

    Beck's Three Permutations Conjecture: A Counterexample and Some Consequences


    Minimal proper non-IRUP instances of the one-dimensional Cutting Stock Problem

    We consider the well-known one-dimensional cutting stock problem (1CSP). Based on the pattern structure of the classical ILP formulation of Gilmore and Gomory, we can decompose the infinite set of 1CSP instances, with a fixed demand n, into a finite number of equivalence classes. We show a strong relation to weighted simple games. Studying the integer round-up property, we computationally show that all 1CSP instances with n ≀ 9 are proper IRUP, while we give examples of proper non-IRUP instances with n = 10. A gap larger than 1 occurs for n = 11. The worst known gap is raised from 1.003 to 1.0625. The algorithmic approaches used are based on exhaustive enumeration and integer linear programming. Additionally, we give some theoretical bounds showing that all 1CSP instances with certain specific parameters are proper IRUP. Comment: 14 pages, 2 figures, 2 tables
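    For orientation, the standard definitions behind these statements, writing z_IP(E) and z_LP(E) for the optimal values of the Gilmore-Gomory ILP and its LP relaxation on an instance E (the notation is the conventional one, not copied from the paper):

    ```latex
    % Gap of an instance E and the integer round-up property (IRUP).
    \[
      \Delta(E) \;=\; z_{\mathrm{IP}}(E) - z_{\mathrm{LP}}(E),
      \qquad
      E \text{ has the IRUP} \;\Longleftrightarrow\;
      z_{\mathrm{IP}}(E) = \bigl\lceil z_{\mathrm{LP}}(E) \bigr\rceil .
    \]
    ```

    Equivalently, E has the IRUP exactly when its gap is smaller than 1, which is why instances with gap larger than 1, as found here for n = 11, are of particular interest.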

    High-throughput machine learning algorithms

    The field of machine learning has become strongly compute driven, such that emerging research and applications require larger amounts of specialised hardware or smarter algorithms to advance beyond the state-of-the-art. This thesis develops specialised techniques and algorithms for a subset of computationally difficult machine learning problems. The applications under investigation are quantile approximation in the limited-memory data streaming setting, interpretability of decision tree ensembles, efficient sampling methods in the space of permutations, and the generation of large numbers of pseudorandom permutations. These specific applications are investigated as they represent significant bottlenecks in real-world machine learning pipelines, where improvements to throughput have significant impact on the outcomes of machine learning projects in both industry and research. To address these bottlenecks, we discuss both theoretical improvements, such as improved convergence rates, and hardware/software related improvements, such as optimised algorithm design for high-throughput hardware accelerators. Some contributions include: the evaluation of bin-packing methods for efficiently scheduling small batches of dependent computations to GPU hardware execution units, numerically stable reduction operators for higher-order statistical moments, and memory bandwidth optimisation for GPU shuffling. Additionally, we apply the theory of the symmetric group of permutations in reproducing kernel Hilbert spaces, resulting in improved analysis of Monte Carlo methods for Shapley value estimation and new, computationally more efficient algorithms based on kernel herding and Bayesian quadrature. We also utilise reproducing kernels over permutations to develop a novel statistical test for the hypothesis that a sample of permutations is drawn from a uniform distribution. The techniques discussed lie at the intersection of machine learning, high-performance computing, and applied mathematics. Much of the above work resulted in open source software used in real applications, including the GPUTreeShap library [38], shuffling primitives for the Thrust parallel computing library [2], extensions to the Shap package [31], and extensions to the XGBoost library [6].
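    As one concrete instance of the numerical-stability theme above, a Welford-style streaming update avoids the catastrophic cancellation of the naive E[x^2] - E[x]^2 variance formula. The single-threaded scalar sketch below is ours and only hints at the higher-order, GPU-parallel reduction trees developed in the thesis.

    ```python
    # Numerically stable streaming mean/variance (Welford's algorithm).
    def welford(stream):
        count, mean, m2 = 0, 0.0, 0.0
        for x in stream:
            count += 1
            delta = x - mean
            mean += delta / count
            m2 += delta * (x - mean)    # note: uses the already-updated mean
        return (mean, m2 / count) if count else (float("nan"), float("nan"))

    # A large common offset would wreck the naive formula in low precision;
    # Welford recovers the small variance exactly here.
    print(welford(1e9 + i for i in range(4)))   # -> (1000000001.5, 1.25)
    ```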

    Semidefinite optimization in discrepancy theory

    Recently, there have been several new developments in discrepancy theory based on connections to semidefinite programming. This connection has been useful in several ways. It gives efficient polynomial-time algorithms for several problems for which only non-constructive results were previously known. It also leads to several new structural results in discrepancy itself, such as the tightness of the so-called determinant lower bound, improved bounds on the discrepancy of the union of set systems, and so on. We give a brief survey of these results, focusing on the main ideas and the techniques involved.
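    For readers new to the area, the central quantity in this survey is the combinatorial discrepancy of a set system; the standard definition, in our notation, is:

    ```latex
    % Discrepancy of a set system S_1, ..., S_m over ground set {1, ..., n}:
    % the best achievable worst-set imbalance of a +/-1 colouring.
    \[
      \operatorname{disc}(S_1,\dots,S_m)
      \;=\;
      \min_{x \in \{-1,+1\}^{n}} \; \max_{1 \le j \le m}
      \Bigl|\sum_{i \in S_j} x_i\Bigr| .
    \]
    ```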

    Dagstuhl Reports : Volume 1, Issue 2, February 2011

    Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-HĂŒbner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
    Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro PezzĂš, Martin C. Rinard, Westley Weimer and Andreas Zeller
    Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
    Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
    Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Young
    • 
