
    On Colorful Bin Packing Games

    We consider colorful bin packing games in which selfish players control a set of items which are to be packed into a minimum number of unit-capacity bins. Each item has one of m ≥ 2 colors and cannot be packed next to an item of the same color. All bins have the same unitary cost, which is shared among the items they contain, so that players are interested in selecting a bin of minimum shared cost. We adopt two standard cost sharing functions: the egalitarian cost function, which equally shares the cost of a bin among the items it contains, and the proportional cost function, which shares the cost of a bin among the items it contains proportionally to their sizes. Although, under both cost functions, colorful bin packing games do not in general converge to a (pure) Nash equilibrium, we show that Nash equilibria are guaranteed to exist, and we design an algorithm for computing a Nash equilibrium whose running time is polynomial under the egalitarian cost function and pseudo-polynomial for a constant number of colors under the proportional one. We also provide a complete characterization of the efficiency of Nash equilibria under both cost functions for general games, by showing that the prices of anarchy and stability are unbounded when m ≥ 3, while they are equal to 3 for black and white games, where m = 2. Finally, we focus on games with uniform sizes (i.e., all items have the same size), for which the two cost functions coincide. We again show a tight characterization of the efficiency of Nash equilibria and design an algorithm which returns Nash equilibria with the best achievable performance.
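    To make the two cost-sharing rules concrete, here is a minimal Python sketch (illustrative only; the Item class and function names are ours, not the paper's) that computes each item's share of a bin's unit cost and checks whether a bin is a feasible colorful packing:

        # Hedged sketch of the cost-sharing rules described above; names are illustrative.
        from dataclasses import dataclass

        @dataclass
        class Item:
            size: float   # item size in (0, 1]
            color: int    # one of m >= 2 colors

        def egalitarian_shares(bin_items):
            # Each of the k items in the bin pays 1/k of the unit bin cost.
            k = len(bin_items)
            return [1.0 / k for _ in bin_items]

        def proportional_shares(bin_items):
            # Each item pays a fraction of the unit cost proportional to its size.
            total = sum(it.size for it in bin_items)
            return [it.size / total for it in bin_items]

        def is_feasible_bin(bin_items):
            # Total size may not exceed the unit capacity, and consecutively
            # packed items may not share a color.
            fits = sum(it.size for it in bin_items) <= 1.0
            no_clash = all(a.color != b.color for a, b in zip(bin_items, bin_items[1:]))
            return fits and no_clash

    Note that when all items have the same size, the two share functions return identical values, which is why the two cost functions coincide in the uniform-size case discussed above.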

    Learning with Symmetric Label Noise: The Importance of Being Unhinged

    Convex potential minimisation is the de facto approach to binary classification. However, Long and Servedio [2010] proved that under symmetric label noise (SLN), minimisation of any convex potential over a linear function class can result in classification performance equivalent to random guessing. This ostensibly shows that convex losses are not SLN-robust. In this paper, we propose a convex, classification-calibrated loss and prove that it is SLN-robust. The loss avoids the Long and Servedio [2010] result by virtue of being negatively unbounded. The loss is a modification of the hinge loss, where one does not clamp at zero; hence, we call it the unhinged loss. We show that the optimal unhinged solution is equivalent to that of a strongly regularised SVM, and is the limiting solution for any convex potential; this implies that strong l2 regularisation makes most standard learners SLN-robust. Experiments confirm the SLN-robustness of the unhinged loss.
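    As a rough illustration (our own code, not the authors'), the unhinged loss is the hinge loss with the clamp at zero removed, which is why it is unbounded below:

        # Illustrative comparison of hinge vs. unhinged loss for a label
        # y in {-1, +1} and a real-valued score s = f(x).
        def hinge_loss(y, s):
            return max(0.0, 1.0 - y * s)   # clamped at zero

        def unhinged_loss(y, s):
            return 1.0 - y * s             # no clamp, so it can go negative

    Dropping the clamp keeps the loss convex and linear in the score, which is what allows the closed-form, strongly regularised SVM-like solution mentioned above.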

    Fast rates in statistical and online learning

    The speed with which a learning algorithm converges as it is presented with more data is a central problem in machine learning: a fast rate of convergence means less data is needed for the same level of performance. The pursuit of fast rates in online and statistical learning has led to the discovery of many conditions in learning theory under which fast learning is possible. We show that most of these conditions are special cases of a single, unifying condition that comes in two forms: the central condition for 'proper' learning algorithms that always output a hypothesis in the given model, and stochastic mixability for online algorithms that may make predictions outside of the model. We show that under surprisingly weak assumptions both conditions are, in a certain sense, equivalent. The central condition has a re-interpretation in terms of convexity of a set of pseudoprobabilities, linking it to density estimation under misspecification. For bounded losses, we show how the central condition enables a direct proof of fast rates and we prove its equivalence to the Bernstein condition, itself a generalization of the Tsybakov margin condition, both of which have played a central role in obtaining fast rates in statistical learning. Yet, while the Bernstein condition is two-sided, the central condition is one-sided, making it more suitable for dealing with unbounded losses. In its stochastic mixability form, our condition generalizes both a stochastic exp-concavity condition identified by Juditsky, Rigollet and Tsybakov and Vovk's notion of mixability. Our unifying conditions thus provide a substantial step towards a characterization of fast rates in statistical learning, similar to how classical mixability characterizes constant regret in the sequential prediction with expert advice setting.
    Comment: 69 pages, 3 figures
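    For orientation, and in our own notation rather than the paper's exact definitions, the two conditions mentioned above can be stated roughly as follows, where ℓ_f(Z) is the loss of predictor f on outcome Z drawn from P and f* is the best predictor in the model:

        % Rough statements in our notation (a sketch, not a quotation from the paper).
        % Central condition: for some \eta > 0 and all f in the model,
        \mathbb{E}_{Z \sim P}\!\left[ e^{-\eta \left( \ell_f(Z) - \ell_{f^*}(Z) \right)} \right] \le 1 .
        % Bernstein condition: for some B > 0, \beta \in (0, 1], and all f in the model,
        \mathbb{E}\!\left[ \left( \ell_f(Z) - \ell_{f^*}(Z) \right)^2 \right] \le B \left( \mathbb{E}\!\left[ \ell_f(Z) - \ell_{f^*}(Z) \right] \right)^{\beta} .

    The Bernstein condition bounds the second moment of the excess loss (hence "two-sided"), whereas the central condition only constrains the lower tail through the exponential moment (hence "one-sided"), which is what makes it better suited to unbounded losses.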