Sleep-like slow oscillations improve visual classification through synaptic homeostasis and memory association in a thalamo-cortical model
Sleep has passed through the evolutionary sieve and is widespread across
animal species. Sleep is known to benefit cognitive and mnemonic tasks, while
chronic sleep deprivation is detrimental. Despite the importance of the
phenomenon, a complete understanding of its functions and underlying
mechanisms is still lacking. In this paper, we show beneficial effects of
deep-sleep-like slow oscillation activity on a simplified thalamo-cortical
model that is trained to encode, retrieve and classify images of handwritten
digits. During slow oscillations, spike-timing-dependent plasticity (STDP)
produces a differential homeostatic process: a specific unsupervised
enhancement of connections among groups of neurons associated with instances
of the same class (digit), together with a simultaneous down-regulation of the
stronger synapses created during training. This hierarchical organization of
post-sleep internal representations favours higher performance in retrieval
and classification tasks. The mechanism is based on the interaction between
top-down cortico-thalamic predictions and bottom-up thalamo-cortical
projections during deep-sleep-like slow oscillations. Indeed, when learned
patterns are replayed during sleep, cortico-thalamo-cortical connections
favour the activation of other neurons coding for similar thalamic inputs,
promoting their association. This mechanism hints at possible applications to
artificial learning systems.
Comment: 11 pages, 5 figures; v5 is the final version published in Scientific
Reports.
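
To make the STDP mechanism in the abstract concrete, here is a minimal
pair-based STDP sketch in Python. The parameter values, the soft-bound form,
and the function names are illustrative assumptions for demonstration, not the
paper's actual thalamo-cortical model or its parameters.

    import numpy as np

    # Illustrative pair-based STDP rule (hypothetical parameters, not the
    # paper's model): a synapse is potentiated when the presynaptic spike
    # precedes the postsynaptic one, and depressed when the order reverses,
    # with exponentially decaying time windows.
    A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
    TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

    def stdp_delta_w(t_pre, t_post):
        """Weight change for one pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt > 0:  # pre fires before post: potentiation
            return A_PLUS * np.exp(-dt / TAU_PLUS)
        return -A_MINUS * np.exp(dt / TAU_MINUS)  # reversed order: depression

    def soft_bounded_update(w, t_pre, t_post, w_max=1.0):
        """Apply STDP with a multiplicative soft bound, so that already-strong
        synapses are potentiated less and depressed more -- a simple way to
        mimic the down-regulation of strong synapses described above."""
        dw = stdp_delta_w(t_pre, t_post)
        scale = (w_max - w) if dw > 0 else w
        return min(max(w + dw * scale, 0.0), w_max)

Neurons coding for instances of the same digit tend to fire within the same
slow-oscillation up-state, so repeated application of a rule like this
strengthens their mutual connections, which is the association effect the
abstract describes.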
AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions
This paper introduces a video dataset of spatio-temporally localized Atomic
Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual
actions in 430 15-minute video clips, where actions are localized in space and
time, resulting in 1.58M action labels, with multiple labels per person being
common. The key characteristics of our dataset are: (1) the
definition of atomic visual actions, rather than composite actions; (2) precise
spatio-temporal annotations with possibly multiple annotations for each person;
(3) exhaustive annotation of these atomic actions over 15-minute video clips;
(4) people temporally linked across consecutive segments; and (5) using movies
to gather a varied set of action representations. This departs from existing
datasets for spatio-temporal action recognition, which typically provide sparse
annotations for composite actions in short video clips. We will release the
dataset publicly.
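
As a concrete illustration of characteristics (2)-(4), a single AVA-style
annotation can be pictured as the following record. This is a hypothetical
schema sketched for illustration only; the actual released file format may
differ.

    from dataclasses import dataclass
    from typing import List, Tuple

    # Hypothetical record for one AVA-style annotation (illustrative only;
    # the released format may differ): a person box at a keyframe, carrying
    # possibly several atomic-action labels, plus a track id linking the
    # same person across consecutive segments.
    @dataclass
    class PersonAnnotation:
        video_id: str
        timestamp_s: float                       # keyframe time within the clip
        box: Tuple[float, float, float, float]   # (x1, y1, x2, y2), normalized
        action_labels: List[int]                 # one or more of the 80 actions
        person_track_id: int                     # temporal link across segments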
AVA, with its realistic scene and action complexity, exposes the intrinsic
difficulty of action recognition. To benchmark this, we present a novel
approach for action localization that builds upon current state-of-the-art
methods and demonstrates better performance on the JHMDB and UCF101-24
categories. While this approach sets a new state of the art on existing
datasets, its overall performance on AVA remains low at 15.6% mAP,
underscoring the need for new approaches to video understanding.
Comment: To appear in CVPR 2018. Check the dataset page
https://research.google.com/ava/ for details.
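
For reference, mean average precision (mAP), the metric behind the 15.6%
figure, can be sketched as follows. This is a generic ranked-retrieval AP
computation, not AVA's official frame-mAP evaluation protocol, which
additionally matches detections to ground-truth boxes by IoU.

    import numpy as np

    def average_precision(scores, hits, num_positives):
        """AP for one class: `scores` are detection confidences, `hits`
        marks (1/0) whether each detection is correct, `num_positives`
        is the number of ground-truth instances."""
        order = np.argsort(-np.asarray(scores))       # rank by confidence
        hits = np.asarray(hits, dtype=float)[order]
        tp = np.cumsum(hits)                          # correct detections so far
        precision = tp / np.arange(1, len(hits) + 1)  # precision at each rank
        # AP = mean precision taken at the rank of each correct detection
        return float(np.sum(precision * hits) / num_positives)

    def mean_average_precision(per_class):
        """mAP: mean of per-class APs; `per_class` is a list of
        (scores, hits, num_positives) tuples, one per action class."""
        return float(np.mean([average_precision(*args) for args in per_class]))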