21 research outputs found

    Calibrated Surrogate Losses for Classification with Label-Dependent Costs

    We present surrogate regret bounds for arbitrary surrogate losses in the context of binary classification with label-dependent costs. Such bounds relate a classifier's risk, assessed with respect to a surrogate loss, to its cost-sensitive classification risk. Two approaches to surrogate regret bounds are developed. The first is a direct generalization of Bartlett et al. [2006], who focus on margin-based losses and cost-insensitive classification, while the second adopts the framework of Steinwart [2007] based on calibration functions. Nontrivial surrogate regret bounds are shown to exist precisely when the surrogate loss satisfies a "calibration" condition that is easily verified for many common losses. We apply this theory to the class of uneven margin losses, and characterize when these losses are properly calibrated. The uneven hinge, squared error, exponential, and sigmoid losses are then treated in detail. Comment: 33 pages, 7 figures.
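    A minimal numerical sketch of the cost-sensitive setting the abstract describes (this is an illustration of the standard Bayes-rule fact, not the paper's construction): with false-positive cost c_fp and false-negative cost c_fn, the cost-sensitive Bayes classifier thresholds the posterior eta(x) = P(y = 1 | x) at c_fp / (c_fp + c_fn), which a grid search over threshold classifiers recovers empirically. All names below (`cost_sensitive_risk`, the synthetic `eta` grid) are hypothetical.

    ```python
    import numpy as np

    def cost_sensitive_risk(threshold, eta, c_fp, c_fn):
        """Expected label-dependent cost of predicting +1 whenever eta(x) > threshold,
        averaged over uniformly weighted points with posteriors `eta`."""
        predict_pos = eta > threshold
        # predicting +1 incurs a false positive with probability 1 - eta;
        # predicting -1 incurs a false negative with probability eta.
        cost = np.where(predict_pos, c_fp * (1.0 - eta), c_fn * eta)
        return cost.mean()

    eta = np.linspace(0.01, 0.99, 99)          # synthetic posterior values
    c_fp, c_fn = 1.0, 3.0                      # label-dependent costs
    thresholds = np.linspace(0.01, 0.99, 99)
    risks = [cost_sensitive_risk(t, eta, c_fp, c_fn) for t in thresholds]
    best = thresholds[int(np.argmin(risks))]
    print(best)  # minimizer sits near c_fp / (c_fp + c_fn) = 0.25
    ```

    The surrogate-regret question in the abstract is then whether driving down a surrogate risk (e.g. an uneven hinge loss) forces this cost-sensitive 0-1 risk down as well; the "calibration" condition is what guarantees it.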

    No-Regret Online Prediction with Strategic Experts

    We study a generalization of the online binary prediction with expert advice framework where, at each round, the learner is allowed to pick m ≥ 1 experts from a pool of K experts and the overall utility is a modular or submodular function of the chosen experts. We focus on the setting in which experts act strategically and aim to maximize their influence on the algorithm's predictions by potentially misreporting their beliefs about the events. Among others, this setting finds applications in forecasting competitions where the learner seeks not only to make predictions by aggregating different forecasters but also to rank them according to their relative performance. Our goal is to design algorithms that satisfy the following two requirements: 1) Incentive-compatible: incentivize the experts to report their beliefs truthfully, and 2) No-regret: achieve sublinear regret with respect to the true beliefs of the best fixed set of m experts in hindsight. Prior works have studied this framework when m = 1 and provided incentive-compatible no-regret algorithms for the problem. We first show that a simple reduction of our problem to the m = 1 setting is neither efficient nor effective. Then, we provide algorithms that utilize the specific structure of the utility functions to achieve the two desired goals.
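    For background on the no-regret half of the problem, here is the classic Hedge (multiplicative-weights) update, the standard sublinear-regret baseline for prediction with (non-strategic) expert advice. This is not the paper's algorithm; incentive compatibility under strategic misreporting is precisely the extra constraint the paper adds on top of such updates. The function name and toy loss matrix are hypothetical.

    ```python
    import math
    import random

    def hedge(expert_losses, eta):
        """Run Hedge on a T x K matrix of per-round expert losses in [0, 1].
        Returns the algorithm's total expected loss."""
        K = len(expert_losses[0])
        w = [1.0] * K
        total = 0.0
        for losses in expert_losses:
            s = sum(w)
            p = [wi / s for wi in w]                  # distribution over experts
            total += sum(pi * li for pi, li in zip(p, losses))
            # exponentially downweight experts in proportion to their loss
            w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
        return total

    # Toy run: expert 0 is consistently best; regret to it stays sublinear in T.
    random.seed(0)
    T, K = 500, 5
    L = [[0.1 if k == 0 else random.random() for k in range(K)] for _ in range(T)]
    alg_loss = hedge(L, eta=math.sqrt(math.log(K) / T))
    best_loss = min(sum(L[t][k] for t in range(T)) for k in range(K))
    print(alg_loss - best_loss)
    ```

    With truthful reports, Hedge guarantees regret of order sqrt(T log K); the difficulty the abstract addresses is that a strategic expert may misreport to inflate its weight, so the update rule itself must be made incentive-compatible, and the m > 1 selection must further respect the modular or submodular utility structure.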