Role of initial magnetic disorder: A time-dependent ab initio study of ultrafast demagnetization mechanisms.
Despite more than 20 years of development, the underlying physics of the laser-induced demagnetization process is still debated. We present a fast, real-time time-dependent density functional theory (rt-TDDFT) algorithm, combined with the phenomenological atomic Landau-Lifshitz-Gilbert model, to investigate this problem. Our Hamiltonian includes noncollinear magnetic moments, spin-orbit coupling (SOC), electron-electron, electron-phonon, and electron-light interactions. The time-evolution algorithm achieves a speedup of several hundred times, enabling the calculation of large systems. Our simulations yield a demagnetization rate similar to experiments. We find that (i) the angular momentum flow from light to the system is not essential, and the spin Zeeman effect is negligible; (ii) phonons can play a role but are not essential; (iii) the initial spin disorder and the self-consistent update of the electron-electron interaction play the dominant roles and enhance the demagnetization to the experimentally observed rate. The spin disorder connects the electronic structure theory with the phenomenological three-temperature model.
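The atomic Landau-Lifshitz-Gilbert (LLG) model mentioned in the abstract describes the damped precession of a magnetic moment around an effective field. As a minimal sketch (in dimensionless units, with illustrative parameters not taken from the paper), a single normalized spin can be evolved with an explicit Euler step of the Landau-Lifshitz form of the equation:

```python
import numpy as np

def llg_step(m, h, gamma=1.0, alpha=0.1, dt=1e-3):
    """One explicit-Euler step of the LLG equation in its
    Landau-Lifshitz form:
        dm/dt = -gamma * m x h - gamma * alpha * m x (m x h)
    m: unit magnetization vector; h: effective field.
    gamma (gyromagnetic ratio) and alpha (damping) are
    dimensionless, illustrative values."""
    precession = -gamma * np.cross(m, h)
    damping = -gamma * alpha * np.cross(m, np.cross(m, h))
    m_new = m + dt * (precession + damping)
    return m_new / np.linalg.norm(m_new)  # re-normalize |m| = 1

# A spin tilted away from an effective field along z precesses
# and relaxes toward the field direction.
m = np.array([1.0, 0.0, 0.1])
m /= np.linalg.norm(m)
h = np.array([0.0, 0.0, 1.0])
for _ in range(20000):
    m = llg_step(m, h)
```

After the loop, the z-component of `m` approaches 1 as the spin aligns with the field; a full atomistic treatment couples many such spins, which is where the initial spin disorder highlighted by the abstract enters.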
Optical Flow Guided Feature: A Fast and Robust Motion Representation for Video Action Recognition
Motion representation plays a vital role in human action recognition in
videos. In this study, we introduce a novel compact motion representation for
video action recognition, named Optical Flow guided Feature (OFF), which
enables the network to distill temporal information through a fast and robust
approach. The OFF is derived from the definition of optical flow and is
orthogonal to the optical flow. The derivation also provides theoretical
support for using the difference between two frames. By directly calculating
pixel-wise spatiotemporal gradients of the deep feature maps, the OFF can be
embedded in any existing CNN-based video action recognition framework at only
a slight additional cost. It enables the CNN to extract spatial and temporal
information simultaneously, in particular the temporal information between
frames. This simple but powerful idea is validated by experimental results. The network
with OFF fed only by RGB inputs achieves a competitive accuracy of 93.3% on
UCF-101, which is comparable with the result obtained by two streams (RGB and
optical flow), but is 15 times faster in speed. Experimental results also show
that OFF is complementary to other motion modalities such as optical flow. When
the proposed method is plugged into the state-of-the-art video action
recognition framework, it achieves 96.0% and 74.2% accuracy on UCF-101 and HMDB-51,
respectively. The code for this project is available at
https://github.com/kevin-ssy/Optical-Flow-Guided-Feature.

Comment: CVPR 2018.
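The core idea of OFF, computing pixel-wise spatiotemporal gradients of feature maps, can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy (single-channel maps, `np.gradient` for spatial derivatives, and the hypothetical helper name `optical_flow_guided_feature`), not the paper's CNN implementation:

```python
import numpy as np

def optical_flow_guided_feature(f_t, f_t1):
    """Toy OFF for one single-channel feature map pair:
    concatenate the spatial gradients of the current map
    with the frame-to-frame difference (temporal gradient)."""
    # Spatial gradients along the y and x axes.
    fy, fx = np.gradient(f_t)
    # Temporal gradient: pixel-wise difference between frames.
    ft = f_t1 - f_t
    # Stack into a 3-channel motion representation, shape (H, W, 3).
    return np.stack([fx, fy, ft], axis=-1)

# Toy example: a bright blob shifting one pixel to the right.
f_t = np.zeros((5, 5)); f_t[2, 2] = 1.0
f_t1 = np.zeros((5, 5)); f_t1[2, 3] = 1.0
off = optical_flow_guided_feature(f_t, f_t1)
print(off.shape)  # (5, 5, 3)
```

In the actual method these gradients are taken on deep CNN feature maps rather than raw pixels, which is what makes OFF cheap to attach to an existing backbone.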