
    No More Mind Games: Content Analysis of In-Game Commentary of the National Football League’s Concussion Problem

    American (gridiron) football, played at the professional level in the National Football League (NFL), is an inherently physical spectator sport in which players frequently engage in significant contact to the head and upper body. Until recently, the long-term health consequences associated with on-field head trauma were not fully disclosed to players or the public, potentially misrepresenting the dangers involved in gameplay. Crucial to the dissemination of this information to the public are the in-game televised commentators of NFL games, regarded as the primary conduits for mediating in-game narratives to the viewing audience. Using a social constructionist theoretical lens, this study aimed to identify how game commentators represented in-game head trauma and concussions during NFL games for viewer consumption, through a content analysis of 102 randomly sampled regular-season games over the course of six seasons (2009-2014). Specifically, this research examined the frequency and prevalence of significant contact, commentator representations of significant player contact, commentator representations of the players involved in significant contact, and commentator communication of the severity of the health hazards and consequences associated with significant contact. The content analysis identified 226 individual incidents of significant contact. Findings indicate that commentator representations of significant contact did not appropriately convey the potential health consequences associated with head trauma and concussions to the viewing audience. Instead, incidents of significant contact were constructed by commentators as glorified instances of violence, physicality and masculinity, largely devoid of, and deflecting attention from, the severity of the health consequences associated with head injuries and concussions.

    Physiological and Movement Demands of Rugby League Referees: Influence on Penalty Accuracy.

    Research into the physiological and movement demands of Rugby League (RL) referees is limited, with only one study in the European Super League (SL). To date, no studies have considered decision-making in RL referees. The purpose of this study was to quantify the penalty accuracy scores of RL referees and determine the relationship between penalty accuracy and total distance covered (TD), high-intensity running (HIR) and heart rate per 10-minute period of match-play. Time-motion analysis was undertaken on 8 referees over 148 European SL games during the 2012 season using 10 Hz GPS analysis and heart rate monitors. The number and timing of penalties awarded were quantified using Opta Stats. Referees awarded the correct decision on 74 ± 5% of occasions. The lowest accuracy was observed in the last 10-minute period of the game (67 ± 13%), with a moderate drop (ES = 0.86) in accuracy between the 60-70-minute and 70-80-minute periods. Despite this, only small correlations were observed between mean heart rate, total distance, HIR efforts and penalty accuracy, although a moderate correlation was observed between maximum velocity and accuracy. Despite these small correlations, it would be rash to assume that the physiological and movement demands of refereeing have no influence on decision-making; more likely, other confounding variables influence referee decision-making accuracy, requiring further investigation. The findings can be used by referees and coaches to inform training protocols, ensuring training is specific to both the cognitive and physical demands of match-play.
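    The analysis described above pairs a movement variable with penalty accuracy across the 10-minute periods of a match and reports the strength of the correlation. A minimal sketch of that computation, using hypothetical per-period numbers (not the study's data) and a hand-rolled Pearson coefficient:

```python
# Sketch of the per-period correlation analysis described in the abstract.
# The numbers below are hypothetical, for illustration only.
import statistics


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical values for the eight 10-minute periods of a match:
hir_m = [310, 295, 280, 270, 300, 285, 260, 240]             # high-intensity running (m)
accuracy = [0.78, 0.76, 0.75, 0.74, 0.77, 0.73, 0.70, 0.67]  # penalty accuracy

r = pearson_r(hir_m, accuracy)
```

    With both series declining late in the match, r comes out positive; in the study itself, such correlations were mostly small, which is why the authors point to confounding variables.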

    The Comparative Psychology of Artificial Intelligences

    The last five years have seen a series of remarkable achievements in Artificial Intelligence (AI) research. For example, systems based on Deep Neural Networks (DNNs) can now classify natural images as well as or better than humans, defeat human grandmasters in strategy games as complex as chess, Go, or StarCraft II, and navigate autonomous vehicles across thousands of miles of mixed terrain. I here examine three ways in which DNNs are alleged to fall short of human intelligence: that their training is too data-hungry, that they are vulnerable to adversarial examples, and that their processing is not interpretable. I argue that these criticisms are subject to comparative bias, which must be overcome for comparisons of DNNs and humans to be meaningful. I suggest that AI would benefit here from learning from more mature methodological debates in comparative psychology concerning how to conduct fair comparisons between different kinds of intelligences.

    Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution

    Full text link
    Reinforcement Learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards. Complex tasks can often be hierarchically decomposed into sub-tasks. A step in the Q-function can be associated with solving a sub-task, where the expectation of the return increases. RUDDER has been introduced to identify these steps and then redistribute reward to them, thus giving reward immediately when sub-tasks are solved. Since the problem of delayed rewards is mitigated, learning is considerably sped up. However, for complex tasks, the exploration strategies currently deployed in RUDDER struggle to discover episodes with high rewards. Therefore, we assume that episodes with high rewards are given as demonstrations and do not have to be discovered by exploration. Typically, the number of demonstrations is small, and RUDDER's LSTM model, as a deep learning method, does not learn well from so few examples. Hence, we introduce Align-RUDDER, which is RUDDER with two major modifications. First, Align-RUDDER assumes that episodes with high rewards are given as demonstrations, replacing RUDDER's safe exploration and lessons replay buffer. Second, we replace RUDDER's LSTM model with a profile model obtained from a multiple sequence alignment of the demonstrations. As is known from bioinformatics, profile models can be constructed from as few as two demonstrations. Align-RUDDER inherits the concept of reward redistribution, which considerably reduces the delay of rewards and thus speeds up learning. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the Minecraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently. Github: https://github.com/ml-jku/align-rudder, YouTube: https://youtu.be/HO-_8ZUl-U
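    The core idea of reward redistribution, as described above, is that the delayed episodic return is moved to the steps where a return predictor (in Align-RUDDER, a profile model scoring alignment to demonstrations) jumps, i.e. where a sub-task is completed. A minimal sketch under that reading, with an invented scorer output rather than an actual profile model:

```python
# Sketch of reward redistribution (not the authors' implementation): given
# per-step predicted returns g(s_0)..g(s_T) from some return predictor, the
# redistributed reward for step t is the difference g(s_t) - g(s_{t-1}), so
# reward arrives as soon as a sub-task is solved instead of at episode end.

def redistribute_reward(step_scores, episode_return):
    """Turn a single delayed return into per-step rewards.

    step_scores: predicted return after each step, g(s_0)..g(s_T).
    episode_return: the true return; any prediction error (residual) is
    spread uniformly so the redistributed rewards sum to the original return.
    """
    diffs = [step_scores[i + 1] - step_scores[i]
             for i in range(len(step_scores) - 1)]
    residual = episode_return - sum(diffs)
    correction = residual / len(diffs)
    return [d + correction for d in diffs]


# Hypothetical episode: the score jumps when two sub-tasks are solved
# (e.g. "collect wood", then "craft tool"), and the final return is 1.0.
scores = [0.0, 0.0, 0.4, 0.4, 1.0]
rewards = redistribute_reward(scores, episode_return=1.0)
```

    Here the redistributed rewards concentrate at the two score jumps and still sum to the original return, which is the property that mitigates the delayed-reward problem.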