
    Robust approachability and regret minimization in games with partial monitoring

    Approachability has become a standard tool in analyzing learning algorithms in the adversarial online learning setup. We develop a variant of approachability for games where there is ambiguity in the obtained reward: it belongs to a set rather than being a single vector. Using this variant, we tackle the problem of approachability in games with partial monitoring and develop simple and efficient algorithms (i.e., with constant per-step complexity) for this setup. Finally, we consider external regret and internal regret in repeated games with partial monitoring and derive regret-minimizing strategies based on approachability theory.
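
    The approachability–regret connection the abstract builds on can be made concrete in the simpler full-monitoring case: regret matching, a classical consequence of Blackwell's approachability theorem, plays each action with probability proportional to its positive cumulative regret. The sketch below is illustrative only; the game matrix, horizon, and uniform opponent are arbitrary choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Row player's payoffs in a matching-pennies-style game (illustrative choice).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
n_actions = A.shape[0]

regret_sum = np.zeros(n_actions)   # cumulative regret of each action
T = 5000

for _ in range(T):
    # Regret matching: play proportionally to positive cumulative regrets;
    # with no positive regret, play uniformly.
    pos = np.maximum(regret_sum, 0.0)
    p = pos / pos.sum() if pos.sum() > 0 else np.full(n_actions, 1.0 / n_actions)
    i = rng.choice(n_actions, p=p)
    j = rng.integers(A.shape[1])     # opponent plays uniformly at random here
    regret_sum += A[:, j] - A[i, j]  # how much better each action would have done

avg_regret = regret_sum.max() / T
print(avg_regret)   # decays toward 0 at rate O(1/sqrt(T))
```

    Blackwell's theorem guarantees the average regret vector approaches the negative orthant against any opponent sequence, which is exactly the no-regret property.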

    An Approximate Dynamic Programming Approach to Repeated Games with Vector Losses

    We describe an approximate dynamic programming (ADP) approach to compute approximations of the optimal strategies and of the minimal losses that can be guaranteed in discounted repeated games with vector-valued losses. Among other applications, such vector-valued games prominently arise in the analysis of worst-case regret in repeated decision making in unknown environments, also known as the adversarial online learning framework. At the core of our approach is a characterization of the lower Pareto frontier of the set of expected losses that a player can guarantee in these games as the unique fixed point of a set-valued dynamic programming operator. When applied to the problem of worst-case regret minimization with discounted losses, our approach yields algorithms that achieve markedly improved performance bounds compared with off-the-shelf online learning algorithms like Hedge. These results thus suggest the significant potential of ADP-based approaches in adversarial online learning.
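
    For reference, the off-the-shelf baseline the abstract compares against, Hedge (exponential weights), can be sketched as follows. The horizon, number of actions, and random loss sequence are arbitrary illustrative choices; the step size uses the standard horizon-tuned value sqrt(2 ln(n) / T).

```python
import numpy as np

rng = np.random.default_rng(1)

n_actions, T = 4, 2000
eta = np.sqrt(2.0 * np.log(n_actions) / T)   # standard horizon-tuned step size

cum_losses = np.zeros(n_actions)   # cumulative loss of each fixed action
learner_loss = 0.0                 # cumulative expected loss of Hedge

for _ in range(T):
    # Exponential weights on (shifted) cumulative losses; the shift avoids underflow.
    w = np.exp(-eta * (cum_losses - cum_losses.min()))
    p = w / w.sum()
    losses = rng.random(n_actions)   # adversarial in general; i.i.d. uniform here
    learner_loss += p @ losses
    cum_losses += losses

regret = learner_loss - cum_losses.min()
print(regret / T)   # average regret, on the order of sqrt(log(n)/T)
```

    Hedge guarantees regret at most sqrt(T ln(n) / 2) against any loss sequence; the abstract's claim is that the ADP approach improves on such worst-case bounds in the discounted setting.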

    Approachability in Population Games

    This paper reframes approachability theory within the context of population games. Thus, whilst one player aims at driving her average payoff to a predefined set, her opponent is not malevolent but rather drawn at random from a population of individuals with a given distribution over actions. First, convergence conditions are revisited based on the common prior on the population distribution, and we define the notion of \emph{1st-moment approachability}. Second, we develop a model of two coupled partial differential equations (PDEs) in the spirit of mean-field game theory: one describing the best response of every player given the population distribution (a \emph{Hamilton-Jacobi-Bellman equation}), the other capturing the macroscopic evolution of average payoffs when every player plays her best response (an \emph{advection equation}). Third, we provide a detailed analysis of existence, nonuniqueness, and stability of equilibria (fixed points of the two PDEs). Fourth, we apply the model to regret-based dynamics and use it to establish convergence to Bayesian equilibrium under incomplete information.
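
    A toy Monte Carlo sketch of the population setting (my construction, not the paper's PDE model): the opponent is sampled i.i.d. from a fixed common prior, and the player's average payoff against the sampled opponents converges to its first-moment (expected) value under that prior. The payoff matrix and prior are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Row player's payoffs; columns are the opponent's actions (illustrative numbers).
A = np.array([[2.0, 0.0],
              [3.0, 1.0]])
prior = np.array([0.7, 0.3])   # common prior over the population's actions

# Best response to the prior (here a pure action).
br = int(np.argmax(A @ prior))

# Each round, the opponent is an individual sampled from the population.
T = 20000
draws = rng.choice(len(prior), size=T, p=prior)
avg_payoff = A[br, draws].mean()

target = A[br] @ prior   # the first-moment (expected) payoff under the prior
print(avg_payoff, target)   # agree up to O(1/sqrt(T)) sampling noise
```

    This is the law-of-large-numbers intuition behind targeting the 1st moment of the payoff rather than a worst-case guarantee: against a population rather than an adversary, the relevant benchmark is the expected payoff under the prior.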