Decentralized Restless Bandit with Multiple Players and Unknown Dynamics
We consider decentralized restless multi-armed bandit problems with unknown
dynamics and multiple players. The reward state of each arm transits according
to an unknown Markovian rule when it is played and evolves according to an
arbitrary unknown random process when it is passive. Players activating the
same arm at the same time collide and suffer from reward loss. The objective is
to maximize the long-term reward by designing a decentralized arm selection
policy to address unknown reward models and collisions among players. A
decentralized policy is constructed that achieves a regret with logarithmic
order when an arbitrary nontrivial bound on certain system parameters is known.
When no knowledge about the system is available, we extend the policy to
achieve a regret arbitrarily close to the logarithmic order. The result finds
applications in communication networks, financial investment, and industrial
engineering.
Comment: 7 pages, 2 figures, in Proc. of Information Theory and Applications Workshop (ITA), January, 201
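As a rough illustration of the collision model and ranking-based selection described above, here is a hypothetical Python sketch; it is not the paper's actual policy, and all names are illustrative. Each player ranks arms by its own sample means and, in exploitation slots, rotates over the top M arms with a random offset, so that players holding distinct offsets avoid collisions.

```python
import random

# Hypothetical sketch (not the paper's policy). Assumes the number of
# players does not exceed the number of arms. Players that agree on the
# arm ranking and hold distinct offsets never collide in exploitation.

class Player:
    def __init__(self, num_arms, num_players):
        self.n, self.m = num_arms, num_players
        self.counts = [0] * num_arms
        self.means = [0.0] * num_arms
        self.offset = random.randrange(num_players)  # randomized slot offset

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

    def select(self, t, exploring):
        if exploring:                      # sweep all arms to refine estimates
            return t % self.n
        best = sorted(range(self.n), key=lambda a: -self.means[a])[:self.m]
        return best[(t + self.offset) % self.m]  # rotate over the M best arms
```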
Learning in A Changing World: Restless Multi-Armed Bandit with Unknown Dynamics
We consider the restless multi-armed bandit (RMAB) problem with unknown
dynamics in which a player chooses M out of N arms to play at each time. The
reward state of each arm transits according to an unknown Markovian rule when
it is played and evolves according to an arbitrary unknown random process when
it is passive. The performance of an arm selection policy is measured by
regret, defined as the reward loss with respect to the case where the player
knows which M arms are the most rewarding and always plays the M best arms. We
construct a policy with an interleaving exploration and exploitation epoch
structure that achieves a regret with logarithmic order when arbitrary (but
nontrivial) bounds on certain system parameters are known. When no knowledge
about the system is available, we show that the proposed policy achieves a
regret arbitrarily close to the logarithmic order. We further extend the
problem to a decentralized setting where multiple distributed players share the
arms without information exchange. Under both an exogenous restless model and
an endogenous restless model, we show that a decentralized extension of the
proposed policy preserves the logarithmic regret order as in the centralized
setting. The results apply to adaptive learning in various dynamic systems and
communication networks, as well as financial investment.
Comment: 33 pages, 5 figures, submitted to IEEE Transactions on Information Theory, 201
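The interleaved epoch structure can be pictured with a short sketch. The geometric epoch lengths below are illustrative assumptions, not the paper's exact construction; the property that matters is that exploration epochs occupy only O(log T) of the first T slots, which is what keeps the regret at logarithmic order.

```python
# Rough sketch of an interleaved exploration/exploitation epoch schedule.
# The lengths 4**k and 2 * 4**k are assumed for illustration; exploration
# time up to horizon T then grows only logarithmically in T.

def epoch_schedule(horizon):
    """Yield ('explore' or 'exploit', epoch_length) pairs up to the horizon."""
    t, k = 0, 0
    while t < horizon:
        for phase, length in (("explore", 4 ** k), ("exploit", 2 * 4 ** k)):
            if t >= horizon:
                break
            yield phase, min(length, horizon - t)
            t += length
        k += 1
```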
Deterministic Sequencing of Exploration and Exploitation for Multi-Armed Bandit Problems
In the Multi-Armed Bandit (MAB) problem, there is a given set of arms with
unknown reward models. At each time, a player selects one arm to play, aiming
to maximize the total expected reward over a horizon of length T. An approach
based on a Deterministic Sequencing of Exploration and Exploitation (DSEE) is
developed for constructing sequential arm selection policies. It is shown that
for all light-tailed reward distributions, DSEE achieves the optimal
logarithmic order of the regret, where regret is defined as the total expected
reward loss against the ideal case with known reward models. For heavy-tailed
reward distributions, DSEE achieves O(T^1/p) regret when the moments of the
reward distributions exist up to the pth order for 1<p<=2 and O(T^1/(1+p/2))
for p>2. With knowledge of an upper bound on a finite moment of the
heavy-tailed reward distributions, DSEE offers the optimal logarithmic regret
order. The proposed DSEE approach complements existing work on MAB by providing
corresponding results for general reward distributions. Furthermore, with a
clearly defined tunable parameter (the cardinality of the exploration
sequence), the DSEE approach is easily extendable to variations of MAB, including MAB with
various objectives, decentralized MAB with multiple players and incomplete
reward observations under collisions, MAB with unknown Markov dynamics, and
combinatorial MAB with dependent arms that often arise in network optimization
problems such as the shortest path, minimum spanning tree, and dominating
set problems under unknown random weights.
Comment: 22 pages, 2 figures
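A minimal sketch of the DSEE skeleton follows, under the assumption that the exploration sequence contains roughly w*log(t) slots by time t; the constant w, the round-robin order, and the `pull` callback are illustrative choices, not the paper's exact specification.

```python
import math

def dsee(pull, num_arms, horizon, w=10.0):
    """Sketch of DSEE: deterministic exploration slots, exploitation otherwise.

    pull(arm) is an assumed environment callback returning a reward.
    The constant w stands in for the tunable exploration cardinality.
    """
    counts = [0] * num_arms
    means = [0.0] * num_arms
    explored = 0                              # exploration slots used so far
    for t in range(1, horizon + 1):
        if explored < w * math.log(t + 1):    # exploration sequence: O(log t)
            arm = explored % num_arms         # round-robin over all arms
            explored += 1
        else:                                 # exploit the best sample mean
            arm = max(range(num_arms), key=lambda a: means[a])
        reward = pull(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return means
```

Enlarging the exploration cardinality from logarithmic to polynomial order is the lever behind the O(T^1/p) and O(T^1/(1+p/2)) guarantees quoted above for heavy-tailed rewards.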
Decentralized Learning for Multi-player Multi-armed Bandits
We consider the problem of distributed online learning with multiple players
in multi-armed bandits (MAB) models. Each player can pick among multiple arms.
When a player picks an arm, it gets a reward. We consider both an i.i.d.
reward model and a Markovian reward model. In the i.i.d. model, each arm is
modelled as an i.i.d. process with an unknown distribution and unknown mean. In the
Markovian model, each arm is modelled as a finite, irreducible, aperiodic and
reversible Markov chain with an unknown probability transition matrix and
stationary distribution. The arms give different rewards to different players.
If two players pick the same arm, there is a "collision", and neither of them
gets any reward. There is no dedicated control channel for coordination or
communication among the players. Any other communication between the users is
costly and will add to the regret. We propose an online index-based distributed
learning policy, called the dUCB4 algorithm, that trades off exploration
versus exploitation in the right way and achieves an expected regret that
grows at most as near-O(log^2 T). The motivation comes from
opportunistic spectrum access by multiple secondary users in cognitive radio
networks wherein they must pick among various wireless channels that look
different to different users. To the best of our knowledge, this is the first
distributed learning algorithm for multi-player MABs.
Comment: 33 pages, 3 figures. Submitted to IEEE Transactions on Information Theory
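The index computation can be pictured with a simplified stand-in; this is not the dUCB4 algorithm itself, whose collision resolution is more involved. Each player maintains UCB-style indices from its own observations and, after a collision, randomizes which of its top-ranked arms it targets.

```python
import math
import random

# Simplified stand-in for an index-based distributed policy; NOT dUCB4.
# Assumes the number of players does not exceed the number of arms.

class IndexPlayer:
    def __init__(self, num_arms, num_players):
        self.n, self.m = num_arms, num_players
        self.counts = [0] * num_arms
        self.means = [0.0] * num_arms
        self.rank = 0                          # which top-ranked arm to target

    def select(self, t):
        for a in range(self.n):                # play each arm once first
            if self.counts[a] == 0:
                return a
        index = [self.means[a] + math.sqrt(2 * math.log(t) / self.counts[a])
                 for a in range(self.n)]
        order = sorted(range(self.n), key=lambda a: -index[a])
        return order[self.rank]

    def update(self, arm, reward, collided):
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]
        if collided:                           # back off to another top arm
            self.rank = random.randrange(self.m)
```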
Joint Channel Selection and Power Control in Infrastructureless Wireless Networks: A Multi-Player Multi-Armed Bandit Framework
This paper deals with the problem of efficient resource allocation in dynamic
infrastructureless wireless networks. Assuming a reactive interference-limited
scenario, each transmitter is allowed to select one frequency channel (from a
common pool) together with a power level at each transmission trial; hence, for
all transmitters, not only the fading gain, but also the number of interfering
transmissions and their transmit powers are varying over time. Due to the
absence of a central controller and time-varying network characteristics, it is
highly inefficient for transmitters to acquire global channel and network
knowledge. Therefore, a reasonable assumption is that transmitters have no
knowledge of fading gains, interference, and network topology. Each
transmitting node selfishly aims at maximizing its average reward (or
minimizing its average cost), which is a function of the action of that
specific transmitter as well as those of all other transmitters. This scenario
is modeled as a multi-player multi-armed adversarial bandit game, in which
multiple players receive an a priori unknown reward with an arbitrarily
time-varying distribution by sequentially pulling an arm, selected from a known
and finite set of arms. Since players do not know the arm with the highest
average reward in advance, they attempt to minimize their so-called regret,
determined by the set of players' actions, while attempting to achieve
equilibrium in some sense. To this end, in this paper we design two joint
power-level and channel selection strategies. We prove that the gap between the
average reward achieved by our approaches and that based on the best fixed
strategy converges to zero asymptotically. Moreover, the empirical joint
frequencies of the game converge to the set of correlated equilibria. We
further characterize this set for two special cases of our designed game.
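The abstract does not spell out the two strategies; as background, the canonical no-regret rule for adversarial bandits of this kind is EXP3, sketched below with each arm standing for one (channel, power level) pair. The mixing parameter gamma and the `pull` callback are illustrative assumptions.

```python
import math
import random

# Background sketch of EXP3 for adversarial bandits (not the paper's exact
# strategies). Rewards are assumed normalized to [0, 1].

def exp3(pull, num_arms, horizon, gamma=0.07):
    weights = [1.0] * num_arms
    for _ in range(horizon):
        total = sum(weights)
        probs = [(1 - gamma) * w / total + gamma / num_arms for w in weights]
        arm = random.choices(range(num_arms), weights=probs)[0]
        reward = pull(arm)                     # assumed reward in [0, 1]
        estimate = reward / probs[arm]         # importance-weighted estimate
        weights[arm] *= math.exp(gamma * estimate / num_arms)
    return weights
```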
Channel Selection for Network-assisted D2D Communication via No-Regret Bandit Learning with Calibrated Forecasting
We consider the distributed channel selection problem in the context of
device-to-device (D2D) communication as an underlay to a cellular network.
Underlaid D2D users communicate directly by utilizing the cellular spectrum but
their decisions are not governed by any centralized controller. Selfish D2D
users that compete for access to the resources form a distributed system,
where the transmission performance depends on channel availability and quality.
This information, however, is difficult to acquire. Moreover, the adverse
effects of D2D users on cellular transmissions should be minimized. In order to
overcome these limitations, we propose a network-assisted distributed channel
selection approach in which D2D users are only allowed to use vacant cellular
channels. This scenario is modeled as a multi-player multi-armed bandit game
with side information, for which a distributed algorithmic solution is
proposed. The solution is a combination of no-regret learning and calibrated
forecasting, and can be applied to a broad class of multi-player stochastic
learning problems, in addition to the formulated channel selection problem.
Analytically, it is established that this approach not only yields vanishing
regret (in comparison to the globally optimal solution), but also guarantees that
the empirical joint frequencies of the game converge to the set of correlated
equilibria.
Comment: 31 pages (one column), 9 figures
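Convergence of empirical joint frequencies to correlated equilibria is characteristic of regret-based learning rules such as Hart and Mas-Colell's regret matching, sketched below as background; the paper's actual algorithm additionally couples such learning with calibrated forecasts, which this sketch omits.

```python
import random

# Background sketch of regret matching (Hart & Mas-Colell): play each action
# with probability proportional to its positive cumulative regret. Empirical
# joint play under this rule converges to the set of correlated equilibria.
# utility(a) is an assumed callback giving the counterfactual payoff of
# action a in the current round.

def regret_matching_step(regrets):
    pos = [max(r, 0.0) for r in regrets]
    if sum(pos) == 0.0:                        # no regret yet: play uniformly
        return random.randrange(len(regrets))
    return random.choices(range(len(regrets)), weights=pos)[0]

def update_regrets(regrets, utility, chosen):
    payoff = utility(chosen)
    for a in range(len(regrets)):
        regrets[a] += utility(a) - payoff      # regret for not playing a
```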
Extended UCB Policy for Multi-Armed Bandit with Light-Tailed Reward Distributions
We consider the multi-armed bandit problems in which a player aims to accrue
reward by sequentially playing a given set of arms with unknown reward
statistics. In the classic work, policies were proposed to achieve the optimal
logarithmic regret order for some special classes of light-tailed reward
distributions, e.g., Auer et al.'s UCB1 index policy for reward distributions
with finite support. In this paper, we extend Auer et al.'s UCB1 index policy
to achieve the optimal logarithmic regret order for all light-tailed (or
equivalently, locally sub-Gaussian) reward distributions defined by the (local)
existence of the moment-generating function.
Comment: 9 pages, 1 figure
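A sketch of what such an extended index looks like follows; the width constant zeta below is an assumed stand-in for the quantity the paper ties to the (local) moment-generating-function bound, not the paper's exact choice.

```python
import math

# Sketch of a UCB1-style index for light-tailed rewards; zeta is an
# illustrative confidence-width constant, not the paper's derived value.

def ucb_index(mean, pulls, t, zeta=2.0):
    return mean + math.sqrt(zeta * math.log(t) / pulls)

def select_arm(means, counts, t):
    for a, c in enumerate(counts):             # play every arm once first
        if c == 0:
            return a
    return max(range(len(means)),
               key=lambda a: ucb_index(means[a], counts[a], t))
```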