Deep learning investigation for chess player attention prediction using eye-tracking and game data
This article reports on an investigation of the use of convolutional neural networks to predict the visual attention of chess players. The visual attention model described in this article generates saliency maps that capture hierarchical and spatial features of the chessboard, in order to predict the fixation probability for individual pixels. Using a skip-layer autoencoder architecture with a unified decoder, we are able to use multiscale features to predict the saliency of parts of the board at different scales, capturing multiple relations between pieces. We used scan-path and fixation data from players engaged in solving chess problems to compute 6600 saliency maps associated with the corresponding chess-piece configurations. This corpus is complemented with synthetically generated data from actual games gathered from an online chess platform. Experiments conducted using both scan-paths from chess players and the CAT2000 saliency dataset of natural images highlight several results. Deep features pretrained on natural images were found to be helpful in training visual attention prediction for chess. The proposed neural network architecture is able to generate meaningful saliency maps on unseen chess configurations, with good scores on standard metrics. This work provides a baseline for future work on visual attention prediction in similar contexts.
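The multiscale idea described above can be illustrated with a minimal sketch: fuse a full-resolution feature map (the skip branch) with an upsampled coarse map from a pooled "bottleneck", then squash the result to per-pixel fixation probabilities. This is an illustrative toy forward pass, not the paper's actual network; all function names, the 8x8 board encoding, and the fusion weights are assumptions.

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling (downsample by a factor of 2)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour upsampling by a factor of 2."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def saliency_forward(board, w_fine=0.5, w_coarse=0.5):
    """Toy skip-layer forward pass: combine the fine (skip) branch with the
    upsampled coarse branch, then map to per-pixel fixation probabilities."""
    fine = board                  # full-resolution features (skip connection)
    coarse = avg_pool2(board)     # half-resolution "bottleneck" features
    fused = w_fine * fine + w_coarse * upsample2(coarse)
    return sigmoid(fused)

# Hypothetical 8x8 "board" feature map standing in for piece-configuration features.
rng = np.random.default_rng(0)
board = rng.standard_normal((8, 8))
saliency = saliency_forward(board)
```

The coarse branch lets distant piece relations influence the map, while the skip branch preserves per-square detail; a real model would learn the pooling, upsampling, and fusion weights end to end.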
Looking Beyond a Clever Narrative: Visual Context and Attention are Primary Drivers of Affect in Video Advertisements
Emotion evoked by an advertisement plays a key role in influencing brand
recall and eventual consumer choices. Automatic ad affect recognition has
several useful applications. However, the use of content-based feature
representations does not give insights into how affect is modulated by aspects
such as the ad scene setting, salient object attributes and their interactions.
Nor do such approaches tell us how humans prioritize visual
information for ad understanding. Our work addresses these lacunae by
decomposing video content into detected objects, coarse scene structure, object
statistics and actively attended objects identified via eye-gaze. We measure
the importance of each of these information channels by systematically
incorporating related information into ad affect prediction models. Contrary to
the popular notion that ad affect hinges on the narrative and the clever use of
linguistic and social cues, we find that actively attended objects and the
coarse scene structure better encode affective information as compared to
individual scene objects or conspicuous background elements.
Comment: Accepted for publication in the Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, US
CENP-F stabilizes kinetochore-microtubule attachments and limits dynein stripping of corona cargoes
Accurate chromosome segregation demands efficient capture of microtubules by kinetochores and their conversion to stable bioriented attachments that can congress and then segregate chromosomes. An early event is the shedding of the outermost fibrous corona layer of the kinetochore following microtubule attachment. Centromere protein F (CENP-F) is part of the corona, contains two microtubule-binding domains, and physically associates with dynein motor regulators. Here, we have combined CRISPR gene editing and engineered separation-of-function mutants to define how CENP-F contributes to kinetochore function. We show that the two microtubule-binding domains make distinct contributions to attachment stability and force transduction but are dispensable for chromosome congression. We further identify a specialized domain that functions to limit the dynein-mediated stripping of corona cargoes through a direct interaction with Nde1. This antagonistic activity is crucial for maintaining the required corona composition and ensuring efficient kinetochore biorientation.
Individual differences in infant fixation duration relate to attention and behavioral control in childhood
Individual differences in fixation duration are considered a reliable measure of attentional control in adults. However, the degree to which individual differences in fixation duration in infancy (0–12 months) relate to temperament and behavior in childhood is largely unknown. In the present study, data were examined from 120 infants (mean age = 7.69 months, SD = 1.90) who previously participated in an eye-tracking study. At follow-up, parents completed age-appropriate questionnaires about their child’s temperament and behavior (mean age of children = 41.59 months, SD = 9.83). Mean fixation duration in infancy was positively associated with effortful control (β = 0.20, R2 = .02, p = .04) and negatively with surgency (β = −0.37, R2 = .07, p = .003) and hyperactivity-inattention (β = −0.35, R2 = .06, p = .005) in childhood. These findings suggest that individual differences in mean fixation duration in infancy are linked to attentional and behavioral control in childhood.
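The standardized coefficients reported above have a simple interpretation in a one-predictor model: the slope on z-scored variables equals Pearson's r, and R² = β². A minimal sketch on hypothetical data (the variable names and simulated values are illustrative, not the study's data):

```python
import numpy as np

def standardized_beta_r2(x, y):
    """Standardized slope (beta) and R^2 for a one-predictor linear model.
    On z-scored variables the slope equals Pearson's r, so R^2 = beta**2."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    beta = float((zx * zy).mean())   # Pearson correlation coefficient
    return beta, beta ** 2

# Hypothetical data: mean fixation duration (s) vs. an effortful-control score.
rng = np.random.default_rng(1)
fixation = rng.normal(0.4, 0.1, 120)                      # 120 infants, as in the study
control = 0.2 * (fixation - 0.4) / 0.1 + rng.normal(0, 1, 120)
beta, r2 = standardized_beta_r2(fixation, control)
```

This also explains why a β of 0.20 corresponds to an R² of only .02: the predictor accounts for β² of the outcome's variance.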
Affective games: a multimodal classification system
Affective gaming is a relatively new field of research that exploits human emotions to influence gameplay for an enhanced player experience. Changes in players’ psychology are reflected in their behaviour and physiology, so recognition of such variation is a core element of affective games. Complementary sources of affect offer more reliable recognition, especially in contexts where one modality is partial or unavailable. As multimodal recognition systems, affect-aware games are subject to the practical difficulties faced by traditional trained classifiers. In addition, game-specific challenges in data collection and performance arise while attempting to sustain an acceptable level of immersion. Most existing scenarios employ sensors that offer limited freedom of movement, resulting in less realistic experiences. Recent advances now offer technology that allows players to communicate more freely and naturally with the game and, furthermore, to control it without the use of input devices. However, the affective game industry is still in its infancy and needs to catch up with the life-like level of adaptation already provided by graphics and animation.
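One common way to exploit complementary modalities while tolerating dropouts is weighted late fusion: each modality produces its own class-probability vector, and unavailable modalities are simply skipped with the remaining weights renormalized. A minimal sketch (the modality names, weights, and scores below are hypothetical, not from any particular system):

```python
import numpy as np

def late_fuse(modality_probs, weights=None):
    """Weighted late fusion of per-modality class-probability vectors.
    Entries that are None (modality unavailable) are skipped, and the
    weights of the remaining modalities are renormalized."""
    n = len(modality_probs)
    weights = weights if weights is not None else [1.0] * n
    avail = [(np.asarray(p, dtype=float), w)
             for p, w in zip(modality_probs, weights) if p is not None]
    if not avail:
        raise ValueError("no modality available")
    total = sum(w for _, w in avail)
    fused = sum((w / total) * p for p, w in avail)
    return fused / fused.sum()   # renormalize to a probability vector

# Hypothetical 3-class affect scores from face, physiology, and game-event channels.
face = [0.6, 0.3, 0.1]
physio = [0.5, 0.4, 0.1]
events = None                    # e.g. a sensor dropped out mid-session
fused = late_fuse([face, physio, events], weights=[0.5, 0.3, 0.2])
```

Because the missing channel is dropped rather than zero-filled, the fused output degrades gracefully instead of being biased toward an absent modality.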