Optical Flow Guided Feature: A Fast and Robust Motion Representation for Video Action Recognition
Motion representation plays a vital role in human action recognition in
videos. In this study, we introduce a novel compact motion representation for
video action recognition, named Optical Flow guided Feature (OFF), which
enables the network to distill temporal information through a fast and robust
approach. The OFF is derived from the definition of optical flow and is
orthogonal to the optical flow. The derivation also provides theoretical
support for using the difference between two frames. By directly calculating
pixel-wise spatiotemporal gradients of the deep feature maps, the OFF could be
embedded in any existing CNN based video action recognition framework with only
a slight additional cost. It enables the CNN to extract spatiotemporal
information, in particular the temporal information between frames, simultaneously.
This simple but powerful idea is validated by experimental results. The network
with OFF fed only by RGB inputs achieves a competitive accuracy of 93.3% on
UCF-101, which is comparable with the result obtained by two streams (RGB and
optical flow), but is 15 times faster in speed. Experimental results also show
that OFF is complementary to other motion modalities such as optical flow. When
the proposed method is plugged into the state-of-the-art video action
recognition framework, it achieves 96.0% and 74.2% accuracy on UCF-101 and HMDB-51
respectively. The code for this project is available at
https://github.com/kevin-ssy/Optical-Flow-Guided-Feature.
Comment: CVPR 2018.
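To make the idea in the abstract concrete: OFF pairs the spatial gradients of a CNN feature map with the temporal difference between the feature maps of consecutive frames. The following is a minimal NumPy sketch of that construction; the function name, shapes, and use of `np.gradient` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def optical_flow_guided_feature(f_t, f_t1):
    """Sketch of OFF for a single-channel feature map.

    f_t, f_t1 : (H, W) feature maps of two consecutive frames,
                assumed to come from the same CNN layer.
    Returns a (3, H, W) array: [dF/dx, dF/dy, dF/dt].
    """
    gy, gx = np.gradient(f_t)   # spatial gradients along y and x
    dt = f_t1 - f_t             # temporal difference between frames
    return np.stack([gx, gy, dt], axis=0)
```

In the paper's framing, the temporal-difference term is exactly why frame differencing has theoretical support: it is one component of a feature that is orthogonal to the optical flow by construction.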
Learning and innovative elements of strategy adoption rules expand cooperative network topologies
Cooperation plays a key role in the evolution of complex systems. However,
the level of cooperation extensively varies with the topology of agent networks
in the widely used models of repeated games. Here we show that cooperation
remains rather stable by applying the reinforcement learning strategy adoption
rule Q-learning on a variety of random, regular, small-world, scale-free and
modular network models in repeated, multi-agent Prisoner's Dilemma and Hawk-Dove
games. Furthermore, we found that in the above model systems other long-term
learning strategy adoption rules also promote cooperation, while introducing a
low level of noise (as a model of innovation) to the strategy adoption rules
makes the level of cooperation less dependent on the actual network topology.
Our results demonstrate that long-term learning and random elements in the
strategy adoption rules, when acting together, extend the range of network
topologies enabling the development of cooperation at a wider range of costs
and temptations. These results suggest that a balanced duo of learning and
innovation may help to preserve cooperation during the re-organization of
real-world networks, and may play a prominent role in the evolution of
self-organizing, complex systems.
Comment: 14 pages, 3 Figures, plus a Supplementary Material with 25 pages, 3
Tables, 12 Figures and 116 references.
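As a hedged illustration of the mechanism the abstract describes: each agent keeps Q-values for its strategies and updates them from the payoffs of repeated Prisoner's Dilemma rounds, with an epsilon-greedy choice playing the role of low-level "innovation" noise. The sketch below uses a simple ring network, standard PD payoffs, and assumed parameter values; it is not the paper's model, only a minimal Q-learning strategy-adoption loop.

```python
import random

# Standard Prisoner's Dilemma payoffs (T > R > P > S) -- assumed values.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def payoff(me, other):
    """Row player's payoff for one PD interaction."""
    if me == 'C':
        return R if other == 'C' else S
    return T if other == 'C' else P

def simulate(n=20, rounds=500, alpha=0.1, eps=0.05, seed=0):
    """Q-learning strategy adoption on a ring of n agents (illustrative)."""
    rng = random.Random(seed)
    Q = [{'C': 0.0, 'D': 0.0} for _ in range(n)]
    for _ in range(rounds):
        # epsilon-greedy: eps models the "innovation" noise of the paper
        acts = [rng.choice('CD') if rng.random() < eps
                else max(Q[i], key=Q[i].get) for i in range(n)]
        for i in range(n):
            j = (i + 1) % n                      # interact with ring neighbour
            r = payoff(acts[i], acts[j])
            Q[i][acts[i]] += alpha * (r - Q[i][acts[i]])  # Q-value update
    return sum(a == 'C' for a in acts) / n       # final cooperation level
```

With pure payoff-driven Q-learning on a plain PD, defection tends to dominate; the abstract's point is that long-term learning combined with innovation noise widens the range of topologies and payoff parameters where cooperation survives.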
Internal representations, external representations and ergonomics: towards a theoretical integration
Solving the binding problem: cellular adhesive molecules and their control of the cortical quantum entangled network
Quantum entanglement is shown to be the only acceptable physical solution to the binding problem. The biological basis of interneuronal entanglement is described within the framework of the beta-neurexin-neuroligin model developed by Georgiev (2002), and a novel mechanism is proposed for controlling the neurons that are temporarily entangled to produce every single conscious moment experienced as the present. The model provides psychiatrists with a 'deeper' understanding of the functioning of the psyche in normal and pathological conditions.