Shrewd Selection Speeds Surfing: Use Smart EXP3!
In this paper, we explore the use of multi-armed bandit online learning
techniques to solve distributed resource selection problems. As an example, we
focus on the problem of network selection. Mobile devices often have several
wireless networks at their disposal. While choosing the right network is vital
for good performance, a decentralized solution remains a challenge. The
impressive theoretical properties of multi-armed bandit algorithms, such as EXP3,
suggest that they should work well for this type of problem. Yet their real-world
performance lags far behind. The main reasons are the hidden cost of switching
networks and their slow rate of convergence. We propose Smart EXP3, a novel
bandit-style algorithm that (a) retains the good theoretical properties of
EXP3, (b) bounds the number of switches, and (c) yields significantly better
performance in practice. We evaluate Smart EXP3 using simulations, controlled
experiments, and real-world experiments. Results show that it stabilizes at the
optimal state, achieves fairness among devices and gracefully deals with
transient behaviors. In real-world experiments, it achieves 18% faster
downloads than alternative strategies. We conclude that multi-armed bandit
algorithms can play an important role in distributed resource selection
problems, when practical concerns, such as switching costs and convergence
time, are addressed.
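The EXP3 algorithm discussed above can be sketched in a few lines: exponential weights over arms plus uniform exploration, with an importance-weighted reward estimate for the pulled arm. The sketch below is a minimal illustration, not the paper's Smart EXP3; the exploration rate `gamma` and the simulated reward table are assumptions made for the demo (a real bandit observes only the chosen arm's reward).

```python
import math
import random

def exp3(num_arms, rewards, gamma=0.1):
    """Minimal EXP3 sketch: exponential weights with uniform exploration.

    rewards[t][i] is the reward in [0, 1] that arm i would yield at step t
    (known here only to drive the simulation; the learner sees just the
    reward of the arm it pulls). Returns the sequence of arms played.
    """
    weights = [1.0] * num_arms
    plays = []
    for t in range(len(rewards)):
        total = sum(weights)
        # Mix the weight-proportional distribution with uniform exploration.
        probs = [(1 - gamma) * w / total + gamma / num_arms for w in weights]
        arm = random.choices(range(num_arms), weights=probs)[0]
        # Importance-weighted estimate: only the pulled arm's reward is seen,
        # so divide by the probability of having pulled it.
        x_hat = rewards[t][arm] / probs[arm]
        weights[arm] *= math.exp(gamma * x_hat / num_arms)
        plays.append(arm)
    return plays
```

On a toy instance where one arm is always better, the play distribution drifts toward that arm; the switching cost that motivates Smart EXP3 shows up here as the randomness of consecutive picks before the weights separate.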
Periodic Bandits and Wireless Network Selection
Bandit-style algorithms have been studied extensively in stochastic and adversarial settings. Such algorithms have been shown to be useful in multiplayer settings, e.g., to solve the wireless network selection problem, which can be formulated as an adversarial bandit problem. A leading bandit algorithm for the adversarial setting is EXP3. However, network conditions are often repetitive: user density and network behavior follow regular patterns. Bandit algorithms like EXP3 fail to provide good guarantees for such periodic behaviors. A major reason is that these algorithms compete against fixed-action policies, which is ineffective in a periodic setting.
In this paper, we define a periodic bandit setting, and periodic regret as a better performance measure for this type of setting. Instead of comparing an algorithm's performance to fixed-action policies, we aim to be competitive with policies that play arms under some set of possible periodic patterns F (for example, all possible periodic functions with periods 1, 2, ..., P). We propose Periodic EXP4, a computationally efficient variant of the EXP4 algorithm for periodic settings. With K arms, T time steps, and where each periodic pattern in F is of length at most P, we show that the periodic regret obtained by Periodic EXP4 is at most O(sqrt{PKT log K + KT log |F|}). We also prove a lower bound of Omega(sqrt{PKT + KT log |F| / log K}) for the periodic setting, showing that this is optimal within log-factors. As an example, we focus on the wireless network selection problem. Through simulation, we show that Periodic EXP4 learns the periodic pattern over time, adapts to changes in a dynamic environment, and far outperforms EXP3.
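The EXP4-style idea of being competitive with periodic patterns can be sketched by treating each candidate pattern in F as an expert and running exponential weights over experts. The sketch below is a naive O(|F|)-per-step illustration, not the paper's computationally efficient Periodic EXP4; the learning rates `eta` and `gamma` and the toy reward model are assumptions made for the demo.

```python
import math
import random

def periodic_exp4(patterns, rewards, eta=0.1, gamma=0.05):
    """EXP4-style sketch with periodic-pattern experts.

    patterns is a list of arm sequences; pattern f recommends
    patterns[f][t % len(patterns[f])] at step t.
    rewards[t][i] in [0, 1] is arm i's reward at step t (simulation only).
    Returns the sequence of arms played.
    """
    num_arms = len(rewards[0])
    weights = [1.0] * len(patterns)
    plays = []
    for t in range(len(rewards)):
        advice = [p[t % len(p)] for p in patterns]  # each expert's arm
        total = sum(weights)
        # Arm distribution: mixture of expert advice plus uniform exploration.
        probs = [gamma / num_arms] * num_arms
        for w, a in zip(weights, advice):
            probs[a] += (1 - gamma) * w / total
        arm = random.choices(range(num_arms), weights=probs)[0]
        x_hat = rewards[t][arm] / probs[arm]  # importance-weighted estimate
        # Credit every expert whose advice matched the pulled arm.
        weights = [w * math.exp(eta * x_hat) if a == arm else w
                   for w, a in zip(weights, advice)]
        m = max(weights)
        weights = [w / m for w in weights]  # renormalize for stability
        plays.append(arm)
    return plays
```

On a toy period-2 instance where the best arm alternates every step, the expert holding the matching pattern accumulates weight on every rewarding pull, while fixed-arm experts gain on only half of them, so the learner tracks the periodic optimum that fixed-action regret cannot capture.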