Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation
A variety of machine learning models have been proposed to assess the
performance of players in professional sports. However, they have only a
limited ability to model how player performance depends on the game context.
This paper proposes a new approach to capturing game context: we apply Deep
Reinforcement Learning (DRL) to learn an action-value Q function from 3M
play-by-play events in the National Hockey League (NHL). The neural network
representation integrates both continuous context signals and game history,
using a possession-based LSTM. The learned Q-function is used to value players'
actions under different game contexts. To assess a player's overall
performance, we introduce a novel Game Impact Metric (GIM) that aggregates the
values of the player's actions. Empirical evaluation shows that GIM is
consistent throughout a season and correlates highly with standard success
measures and future salary.
Comment: This paper has been accepted by IJCAI 201
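The aggregation step described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the names (`game_impact_metric`, `q_value`, `events`) and the scalar stand-in for the learned Q-function are hypothetical; the paper learns Q with a possession-based LSTM over NHL play-by-play data.

```python
# Sketch: aggregate per-action value changes into a Game Impact Metric (GIM).
# Assumption: each event records (state before, state after, acting player),
# and q_value maps a state to its learned action value.
from collections import defaultdict

def game_impact_metric(events, q_value):
    """Sum each player's action impacts, where an action's impact is the
    change in the learned Q-value it caused."""
    gim = defaultdict(float)
    for prev_state, state, player in events:
        gim[player] += q_value(state) - q_value(prev_state)
    return dict(gim)

# Toy usage with a stand-in Q-function (states are already scalar values here).
q = lambda s: s
events = [(0.2, 0.5, "P1"), (0.5, 0.4, "P2"), (0.4, 0.7, "P1")]
gim = game_impact_metric(events, q)
print(gim)
```

Because impacts are summed per action, the metric naturally credits players for many small context-dependent contributions, not only goals.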
Actions Speak Louder Than Goals: Valuing Player Actions in Soccer
Assessing the impact of the individual actions performed by soccer players
during games is a crucial aspect of the player recruitment process.
Unfortunately, most traditional metrics fall short in addressing this task as
they either focus on rare actions like shots and goals alone or fail to account
for the context in which the actions occurred. This paper introduces (1) a new
language for describing individual player actions on the pitch and (2) a
framework for valuing any type of player action based on its impact on the game
outcome while accounting for the context in which the action happened. By
aggregating soccer players' action values, their total offensive and defensive
contributions to their team can be quantified. We show how our approach
considers relevant contextual information that traditional player evaluation
metrics ignore and present a number of use cases related to scouting and
playing style characterization in the 2016/2017 and 2017/2018 seasons in
Europe's top competitions.
Comment: Significant update of the paper. The same core idea, but with a
clearer methodology, applied to a different data set, and more extensive
experiments. 9 pages + 2 pages appendix. To be published at SIGKDD 201
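The valuation idea in this abstract can be sketched as follows. This is a minimal, assumption-laden illustration: the function name and the probability inputs are hypothetical stand-ins; in the paper these probabilities come from machine-learned models over game-state features.

```python
# Sketch: value an action by how it changes the team's probability of
# scoring minus how it changes the probability of conceding.
def action_value(p_score_before, p_score_after,
                 p_concede_before, p_concede_after):
    offensive = p_score_after - p_score_before      # gain in scoring chance
    defensive = p_concede_after - p_concede_before  # gain in conceding risk
    return offensive - defensive

# Illustrative pass into the box: scoring chance rises, conceding risk falls,
# so both terms contribute positively to the action's value.
v = action_value(0.02, 0.08, 0.01, 0.005)
print(round(v, 3))
```

Summing such values over all of a player's actions yields the separate offensive and defensive contribution totals the abstract mentions.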
Temporal consistency in learning action values for volleyball
Learning action values is a key idea in sports analytics, with applications such as player ranking, tactical insight, and outcome prediction. We compare two fundamentally different approaches for learning action values on a novel play-by-play volleyball dataset. In the first approach, we employ regression models that implicitly assume statistical independence of data samples. In the second, we use a deep reinforcement learning model that explicitly enforces the sequential nature of the data during learning. We find that temporally independent regression can, in certain settings, outperform the reinforcement learning approach in predictive accuracy, but the latter performs much better when temporal consistency is required. We also consider a mimic regression tree as a way to add interpretability to the deep reinforcement learning approach. Finally, we examine the computed action values and perform a number of example analyses to verify their validity.
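The contrast between the two approaches can be sketched on toy data. This is a hedged illustration with synthetic features and made-up parameters, not the paper's models: the i.i.d. approach is a plain least-squares fit, and the sequential approach is a TD(0)-style update that bootstraps each value estimate from the next event's estimate.

```python
# Sketch: two ways to learn linear action values from a sequence of events.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                          # toy per-event features
y = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.1, size=100)

# Approach 1: regression treating events as i.i.d. samples.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Approach 2: TD(0)-style updates that respect event order, bootstrapping
# each value estimate from the estimate at the next event.
theta = np.zeros(3)
alpha, gamma = 0.05, 0.9                               # illustrative settings
for t in range(len(X) - 1):
    v_t, v_next = X[t] @ theta, X[t + 1] @ theta
    td_error = y[t] + gamma * v_next - v_t
    theta += alpha * td_error * X[t]
```

The regression weights `w` ignore event order entirely, while `theta` is shaped by the sequence structure, which is the trade-off the abstract's accuracy-versus-consistency finding turns on.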