Calibrated Fairness in Bandits
We study fairness within the stochastic, \emph{multi-armed bandit} (MAB)
decision making framework. We adapt the fairness framework of "treating similar
individuals similarly" to this setting. Here, an `individual' corresponds to an
arm and two arms are `similar' if they have a similar quality distribution.
First, we adopt a {\em smoothness constraint} that if two arms have a similar
quality distribution then the probability of selecting each arm should be
similar. In addition, we define the {\em fairness regret}, which corresponds to
the degree to which an algorithm is not calibrated, where perfect calibration
requires that the probability of selecting an arm is equal to the probability
with which the arm has the best quality realization. We show that a variation
on Thompson sampling satisfies smooth fairness for total variation distance,
and give a bound on the fairness regret. This complements
prior work, which protects an on-average better arm from being less favored. We
also explain how to extend our algorithm to the dueling bandit setting. Comment: To be presented at the FAT-ML'17 workshop.
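The calibration notion above — selecting each arm with the probability that it has the best quality realization — is exactly the property that standard Thompson sampling has with respect to its posterior. A minimal Bernoulli-bandit sketch (plain Thompson sampling, not the paper's specific variant; arm means and step counts are illustrative assumptions):

```python
import random

def thompson_step(successes, failures):
    """Sample one mean per arm from its Beta posterior and pick the argmax.

    Under the posterior, each arm is chosen with exactly the probability
    that its quality sample is the highest -- the calibration property
    discussed in the abstract above.
    """
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

def run(true_means, steps=1000, seed=0):
    """Simulate Bernoulli arms with the given (hypothetical) true means."""
    random.seed(seed)
    n = len(true_means)
    succ, fail = [0] * n, [0] * n
    for _ in range(steps):
        arm = thompson_step(succ, fail)
        if random.random() < true_means[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
    return succ, fail
```

With two arms of quality 0.2 and 0.8, the better arm accumulates the vast majority of pulls while the worse arm is still sampled occasionally, in proportion to its posterior chance of being best.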
Efficient methods for near-optimal sequential decision making under uncertainty
This chapter discusses decision making under uncertainty. More specifically, it offers an overview of efficient Bayesian and distribution-free algorithms for making near-optimal sequential decisions under uncertainty about the environment. Due to the uncertainty, such algorithms must not only learn from their interaction with the environment but also perform as well as possible while learning is taking place. © 2010 Springer-Verlag Berlin Heidelberg
DUCT: An upper confidence bound approach to distributed constraint optimization problems
The Upper Confidence Bounds (UCB) algorithm is a well-known near-optimal strategy for the stochastic multi-armed bandit problem. Its extensions to trees, such as the Upper Confidence Tree (UCT) algorithm, have led to strong computer players for the game of Go. This paper introduces DUCT, a distributed algorithm inspired by UCT, for solving Distributed Constraint Optimization Problems (DCOP). Bounds on the solution quality are provided, and experiments show that, compared to existing DCOP approaches, DUCT is able to solve very large problems much more efficiently, or to find significantly higher quality solutions. Copyright © 2012, Association for the Advancement of Artificial Intelligence. All rights reserved.
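The UCB strategy the paper builds on selects the arm maximizing its empirical mean plus a confidence bonus. A minimal sketch of the standard UCB1 selection rule (the sqrt(2 ln t / n) bonus is the classical UCB1 term, not DUCT's distributed variant):

```python
import math

def ucb1_select(counts, sums, t):
    """Pick the arm maximizing mean + sqrt(2 ln t / n).

    counts[a] is the number of pulls of arm a, sums[a] the total
    reward it has returned, t the current time step. Unplayed arms
    are given priority so every arm is tried at least once.
    """
    for arm, n in enumerate(counts):
        if n == 0:
            return arm
    return max(range(len(counts)),
               key=lambda a: sums[a] / counts[a]
                             + math.sqrt(2.0 * math.log(t) / counts[a]))
```

The bonus shrinks as an arm is pulled more often, so the rule naturally balances exploiting high-mean arms with exploring under-sampled ones; tree-search extensions such as UCT apply this same rule at every node.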
The Reinforcement Learning Competition 2014
Reinforcement learning is one of the most general problems in artificial intelligence. It has been used to model problems in automated experiment design, control, economics, game playing, scheduling and telecommunications. The aim of the reinforcement learning competition is to encourage the development of very general learning agents for arbitrary reinforcement learning problems and to provide a test-bed for the unbiased evaluation of algorithms.
The Contribution of Three-Dimensional Power Doppler Imaging in the Preoperative Assessment of Breast Tumors: A Preliminary Report
Purpose. The aim of this study was to determine the value of 3D and 3D Power Doppler sonography in the detection of tumor malignancy in breast lesions and to find new diagnostic criteria for differential diagnosis.
Methods. One hundred and twenty five women with clinically or mammographically suspicious findings were referred for 3D Power Doppler ultrasound prior to surgery. Histological diagnosis was conducted after surgery and compared with ultrasound findings. Sonographic criteria used for breast cancer diagnosis were based on a system that included morphological characteristics and criteria of the vascular pattern of a breast mass by Power Doppler imaging.
Results. Seventy-two lesions were histopathologically diagnosed as benign and 53 tumors as malignant. Three-dimensional ultrasound identified 49 out of 53 histologically confirmed breast cancers, resulting in a sensitivity of 92.4% and a specificity of 86.1% in diagnosing breast malignancy (PPV: 0.83, NPV: 0.94).
Conclusions. 3D ultrasonography is a valuable tool for the preoperative assessment of whether a breast tumor is likely to be malignant.
Fast Reinforcement Learning with Large Action Sets Using Error-Correcting Output Codes for MDP Factorization
The use of Reinforcement Learning in real-world scenarios is strongly limited by issues of scale. Most RL algorithms are unable to deal with problems composed of hundreds or sometimes even dozens of possible actions, and therefore cannot be applied to many real-world problems. We consider the RL problem in the supervised classification framework, where the optimal policy is obtained through a multiclass classifier, the set of classes being the set of actions of the problem. We introduce error-correcting output codes (ECOCs) in this setting and propose two new methods for reducing complexity when using rollout-based approaches. The first method uses an ECOC-based classifier as the multiclass classifier, reducing the learning complexity from O(A²) to O(A log A). We then propose a novel method that exploits the ECOC's coding dictionary to split the initial MDP into O(log A) separate two-action MDPs. This second method reduces learning complexity even further, from O(A²) to O(log A), thus rendering problems with large action sets tractable. We finish by experimentally demonstrating the advantages of our approach on a set of benchmark problems, in both speed and performance.
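The splitting idea can be illustrated with the simplest possible coding dictionary — the binary expansion of the action index — where each of the ⌈log₂ A⌉ bit positions defines one two-action subproblem. A minimal sketch (the paper's actual dictionaries are error-correcting rather than plain binary, which is what makes Hamming decoding robust to a few wrong bits):

```python
import math

def binary_codes(num_actions):
    """Toy coding dictionary: row a is the binary expansion of action a."""
    bits = max(1, math.ceil(math.log2(num_actions)))
    return [[(a >> b) & 1 for b in range(bits)] for a in range(num_actions)]

def decode(bit_policies, state, codes):
    """Combine log(A) binary decisions into one action.

    bit_policies[i] is a (hypothetical) policy for the i-th two-action
    subproblem, mapping a state to a bit. The chosen action is the one
    whose codeword agrees with the predicted bits in the most positions
    (Hamming decoding).
    """
    predicted = [policy(state) for policy in bit_policies]
    def agreement(code):
        return sum(c == b for c, b in zip(code, predicted))
    return max(range(len(codes)), key=lambda a: agreement(codes[a]))
```

Each bit policy only ever discriminates between two "meta-actions", which is where the reduction from O(A²) to O(log A) training cost comes from.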