11 research outputs found
Towards Optimal Algorithms For Online Decision Making Under Practical Constraints
Artificial Intelligence is increasingly used in real-life applications such as driving with autonomous cars, deliveries with autonomous drones, customer support with chatbots, and personal assistance with smart speakers. An artificial intelligent agent (AI) can be trained to become expert at a task through a system of rewards and punishments, an approach known as Reinforcement Learning (RL). However, since the AI will interact with human beings, it also has to follow certain moral rules to accomplish any task. For example, the AI should be fair to other agents and should not destroy the environment. Moreover, the AI should not leak the private data of the users it serves. These rules pose significant challenges in designing AI, which we tackle in this thesis through mathematically rigorous solutions.

More precisely, we start by considering the basic RL problem modeled as a discrete Markov Decision Process. We propose three simple algorithms (UCRL-V, BUCRL and TSUCRL) using two different paradigms: frequentist (UCRL-V) and Bayesian (BUCRL and TSUCRL). Through a unified theoretical analysis, we show that all three algorithms are near-optimal. Experiments confirm the superiority of our methods over existing techniques. Afterwards, we address the issue of fairness in the stateless version of reinforcement learning, also known as the multi-armed bandit. To concentrate our effort on the key challenges, we focus on the two-agent multi-armed bandit. We propose a novel objective that has been shown to be connected to fairness and justice, derive an algorithm, UCRG, to solve it, and show theoretically that it is near-optimal. Next, we tackle the issue of privacy using the recently introduced notion of Differential Privacy, designing multi-armed bandit algorithms that preserve differential privacy. Theoretical analyses show that, for the same level of privacy, our newly developed algorithms achieve better performance than existing techniques.
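The frequentist paradigm mentioned above rests on the optimism-in-the-face-of-uncertainty principle: play the action whose plausible value, given confidence intervals around the empirical estimates, is highest. The sketch below illustrates that principle with the classic UCB1 rule on Bernoulli arms; it is a generic textbook illustration, not the UCRL-V, BUCRL, or TSUCRL algorithms from the thesis, and the arm means are made-up example values.

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Generic UCB1 on simulated Bernoulli arms: pull the arm maximizing
    empirical mean + sqrt(2 ln t / n), i.e., an optimistic upper
    confidence bound on its true mean."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k    # pulls per arm
    sums = [0.0] * k    # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # pull each arm once to initialize estimates
        else:
            arm = max(range(k),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Example run: the best arm (index 2) should attract most pulls.
counts = ucb1([0.2, 0.5, 0.8], horizon=5000)
```

As the confidence radius sqrt(2 ln t / n) shrinks for frequently pulled arms, exploration concentrates on the arm with the highest true mean, which is what drives the near-optimal regret guarantees of this family of methods.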
First-Order Regret Analysis of Thompson Sampling
We address online combinatorial optimization when the player has a prior over
the adversary's sequence of losses. In this framework, Russo and Van Roy
proposed an information-theoretic analysis of Thompson Sampling based on the
{\em information ratio}, resulting in optimal worst-case regret bounds. In this
paper we introduce three novel ideas to this line of work. First we propose a
new quantity, the scale-sensitive information ratio, which allows us to obtain
more refined first-order regret bounds (i.e., bounds of the form
$\widetilde{O}(\sqrt{L^\star})$, where $L^\star$ is the loss of the best combinatorial action). Second we replace
the entropy over combinatorial actions by a coordinate entropy, which allows us
to obtain the first optimal worst-case bound for Thompson Sampling in the
combinatorial setting. Finally, we introduce a novel link between Bayesian
agents and frequentist confidence intervals. Combining these ideas we show that
the classical multi-armed bandit first-order regret bound still holds true in the more challenging and more general semi-bandit
scenario. This latter result improves the previous state-of-the-art bound
by Lykouris, Sridharan and Tardos.
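For readers unfamiliar with Thompson Sampling itself, the following is a minimal Beta-Bernoulli sketch of the base algorithm that the paper analyzes: sample a mean from each arm's posterior, play the argmax, and update the posterior with the observed reward. This is the standard textbook version for the plain multi-armed bandit, not the combinatorial semi-bandit setting studied in the paper, and the arm means are illustrative values.

```python
import random

def thompson_bernoulli(means, horizon, seed=0):
    """Beta-Bernoulli Thompson Sampling on simulated arms.
    Each arm's posterior is Beta(successes + 1, failures + 1);
    at every round we sample one draw per posterior and play the argmax."""
    rng = random.Random(seed)
    k = len(means)
    alpha = [1] * k  # prior successes + 1
    beta = [1] * k   # prior failures + 1
    counts = [0] * k
    for _ in range(horizon):
        samples = [rng.betavariate(alpha[a], beta[a]) for a in range(k)]
        arm = max(range(k), key=lambda a: samples[a])
        reward = rng.random() < means[arm]  # Bernoulli draw
        alpha[arm] += int(reward)
        beta[arm] += int(not reward)
        counts[arm] += 1
    return counts

# Example run: posterior sampling concentrates play on the better arm.
counts = thompson_bernoulli([0.3, 0.7], horizon=3000)
```

Because the posterior of a suboptimal arm rarely produces the largest sample once enough data has accumulated, play concentrates on the best arm; the paper's contribution is a sharper, first-order analysis of exactly this kind of posterior-sampling behavior in the combinatorial setting.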