Decentralized Exploration in Multi-Armed Bandits
We consider the decentralized exploration problem: a set of players
collaborate to identify the best arm by asynchronously interacting with the
same stochastic environment. The objective is to ensure privacy in the best arm identification problem among asynchronous, collaborative, and thrifty players. In the context of a digital service, we advocate that this decentralized approach strikes a good balance between the interests of users and those of service providers: the providers optimize their services, while
protecting the privacy of the users and saving resources. We define the privacy
level as the amount of information an adversary could infer by intercepting the
messages concerning a single user. We provide a generic algorithm Decentralized
Elimination, which uses any best arm identification algorithm as a subroutine.
We prove that this algorithm ensures privacy, with a low communication cost,
and that in comparison to the lower bound of the best arm identification
problem, its sample complexity suffers from a penalty depending on the inverse
of the probability of the most frequent players. Then, thanks to the genericity of the approach, we extend the proposed algorithm to non-stationary bandits. Finally, experiments illustrate and complete the analysis.
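A minimal sketch of the elimination scheme described in this abstract, assuming a generic best-arm-identification subroutine; the function names and the toy subroutine below are illustrative stand-ins, not the authors' implementation:

    import random

    def decentralized_elimination(players, arms, bai_step, rounds):
        # Each player keeps its own set of candidate arms, runs a generic
        # best-arm-identification step on its private samples, and only
        # broadcasts the indices of the arms it eliminates, never raw
        # observations (the privacy mechanism sketched in the abstract).
        active = {p: set(arms) for p in players}
        for _ in range(rounds):
            p = random.choice(players)           # asynchronous arrival
            if len(active[p]) <= 1:
                continue
            eliminated = bai_step(p, active[p])  # local sampling + elimination test
            for q in players:                    # low-cost message: arm indices only
                active[q] -= eliminated
        return active

    # Hypothetical subroutine: drop arms whose estimated mean is clearly dominated.
    def toy_bai_step(player, candidates):
        means = {a: random.random() for a in candidates}  # stand-in for real estimates
        best = max(means.values())
        return {a for a, m in means.items() if best - m > 0.5}

    # Example: decentralized_elimination(["u1", "u2"], range(10), toy_bai_step, 100)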
Budgeted Reinforcement Learning in Continuous State Space
A Budgeted Markov Decision Process (BMDP) is an extension of a Markov
Decision Process to critical applications requiring safety constraints. It relies on a notion of risk implemented as a cost signal constrained to lie below an adjustable threshold. So far, BMDPs could only be solved in the case of finite state spaces with known dynamics. This work extends the state of the art to continuous-state environments and unknown
dynamics. We show that the solution to a BMDP is a fixed point of a novel
Budgeted Bellman Optimality operator. This observation allows us to introduce
natural extensions of Deep Reinforcement Learning algorithms to address
large-scale BMDPs. We validate our approach on two simulated applications:
spoken dialogue and autonomous driving.
Comment: N. Carrara and E. Leurent contributed equally.
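A minimal sketch of value iteration on a budget-augmented MDP, in the spirit of the Budgeted Bellman Optimality operator mentioned above; the hard per-step budget check and the discretized budget grid are simplifying assumptions, not the paper's exact operator:

    import numpy as np

    def budgeted_value_iteration(P, R, C, budgets, gamma=0.9, iters=200):
        # P[s, a, s2]: transition probabilities; R[s, a]: rewards; C[s, a]: costs;
        # budgets: numpy array of discretized budget levels. The state is augmented
        # with a budget level; here an action is only admissible when its immediate
        # cost fits the remaining budget, and the leftover budget is propagated to
        # the next state (a simplification of the constrained greedy step).
        S, A, _ = P.shape
        V = np.zeros((S, len(budgets)))
        for _ in range(iters):
            V_new = np.zeros_like(V)
            for s in range(S):
                for b, beta in enumerate(budgets):
                    best = 0.0
                    for a in range(A):
                        if C[s, a] > beta:
                            continue                     # would violate the budget
                        b2 = int(np.argmin(np.abs(budgets - (beta - C[s, a]))))
                        best = max(best, R[s, a] + gamma * P[s, a] @ V[:, b2])
                    V_new[s, b] = best
            V = V_new
        return V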
Reinforcement Learning Framework for Deep Brain Stimulation Study
Malfunctioning neurons in the brain sometimes operate synchronously,
reportedly causing many neurological diseases, e.g. Parkinson's. Suppression
and control of this collective synchronous activity are therefore of great
importance for neuroscience, and can only rely on limited engineering trials
due to the need to experiment with live human brains. We present the first
Reinforcement Learning gym framework that emulates this collective behavior of neurons and allows us to find suppression parameters in an environment of synthetic degenerate neuron models. We successfully suppress synchrony via
RL for three pathological signaling regimes, characterize the framework's
stability to noise, and further remove the unwanted oscillations by engaging
multiple PPO agents.
Comment: 7 pages plus references, 7 figures. arXiv admin note: text overlap with arXiv:1909.1215
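A minimal sketch of how such a suppression task could be wired up as a Gym-style environment; the KuramotoEnv class, its dynamics, and its reward are illustrative stand-ins for the framework released with the paper, and the resulting environment could then be handed to an off-the-shelf PPO implementation:

    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces

    class KuramotoEnv(gym.Env):
        # Illustrative stand-in: n coupled phase oscillators whose synchrony
        # (the Kuramoto order parameter) the agent must suppress by injecting
        # a common stimulation signal at every step.
        def __init__(self, n=100, coupling=1.5, dt=0.01):
            self.n, self.k, self.dt = n, coupling, dt
            self.omega = np.random.normal(1.0, 0.1, n)   # natural frequencies
            self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
            self.observation_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.theta = np.random.uniform(0.0, 2 * np.pi, self.n)
            return self._obs(), {}

        def _obs(self):
            z = np.exp(1j * self.theta).mean()           # mean field as observable proxy
            return np.array([z.real, z.imag], dtype=np.float32)

        def step(self, action):
            z = np.exp(1j * self.theta).mean()
            dtheta = (self.omega
                      + self.k * np.abs(z) * np.sin(np.angle(z) - self.theta)
                      + float(action[0]))                # stimulation input
            self.theta = (self.theta + self.dt * dtheta) % (2 * np.pi)
            sync = np.abs(np.exp(1j * self.theta).mean())  # order parameter in [0, 1]
            return self._obs(), -float(sync), False, False, {}

    # Hypothetical usage: stable_baselines3.PPO("MlpPolicy", KuramotoEnv()).learn(50_000)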