We analyze the following group learning problem in the context of opinion
diffusion: Consider a network with M users, each facing N options. In a
discrete-time setting, at each time step each user chooses K out of the N
options and receives randomly generated rewards, whose statistics depend on
the options chosen as well as on the user itself, and are unknown to the
users. Each user aims to maximize its expected total reward over a certain
time horizon through an online learning process, i.e., a sequence of
exploration (sampling the return of each option) and exploitation (selecting
empirically good options) steps.
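As a concrete, purely illustrative sketch of such an explore/exploit process, the following simulates a single user running a UCB-style index policy over K-subsets of the N options. The i.i.d. Bernoulli reward model, the option means, and the UCB index are illustrative assumptions, not the specific algorithms studied in this work.

```python
import math
import random

def simulate_ucb_topk(means, K, T, seed=0):
    """Single-user sketch: at each of T steps, pick the K options with the
    highest UCB index. Rewards are Bernoulli(means[j]) -- an assumption."""
    rng = random.Random(seed)
    N = len(means)
    counts = [0] * N    # number of times each option has been sampled
    est = [0.0] * N     # empirical mean reward of each option
    total = 0.0         # realized cumulative reward
    for t in range(1, T + 1):
        # UCB index: empirical mean plus exploration bonus;
        # unsampled options are tried first (index = +inf).
        def index(j):
            if counts[j] == 0:
                return float('inf')
            return est[j] + math.sqrt(2 * math.log(t) / counts[j])
        chosen = sorted(range(N), key=index, reverse=True)[:K]
        for j in chosen:
            r = 1.0 if rng.random() < means[j] else 0.0
            counts[j] += 1
            est[j] += (r - est[j]) / counts[j]   # incremental mean update
            total += r
    # Benchmark: always playing the K options with the highest means.
    best_per_step = sum(sorted(means, reverse=True)[:K])
    weak_regret = T * best_per_step - total
    return total, weak_regret
```

For example, with means `[0.9, 0.8, 0.2, 0.1]` and K = 2, the policy concentrates its choices on the two best options after an initial exploration phase, so the gap to the best fixed pair grows only slowly with T.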
Within this context we consider two group learning scenarios: (1) users with
uniform preferences and (2) users with diverse preferences, and examine how a
user should construct its learning process to best extract information from
others' decisions and experiences so as to maximize its own reward. Performance
is measured in {\em weak regret}, the difference between the user's total
reward and the reward from a user-specific best single-action policy (i.e.,
always selecting the set of options generating the highest mean rewards for
this user). Within each scenario we also consider two cases: (i) when users
exchange full information, meaning they share the actual rewards they obtained
from their choices, and (ii) when users exchange limited information, e.g.,
only their choices but not the rewards obtained from these choices.
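Under one standard formalization (the notation here is assumed, not taken from the text above: user $i$, horizon $T$, chosen set $a_i(t)$ of size $K$ at time $t$, and mean reward $\mu_{i,j}$ of option $j$ for user $i$), the weak regret described above can be written as

```latex
\[
  R_i(T) \;=\; T \sum_{j \in S_i^*} \mu_{i,j}
  \;-\; \mathbb{E}\!\left[\, \sum_{t=1}^{T} \sum_{j \in a_i(t)} \mu_{i,j} \right],
  \qquad
  S_i^* \in \argmax_{S : |S| = K} \; \sum_{j \in S} \mu_{i,j},
\]
```

where $S_i^*$ is the user-specific best set of $K$ options, i.e., the single-action benchmark against which the learning policy is compared.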