Attosecond Spectroscopy Probing Electron Correlation Dynamics
Electrons are the driving force behind every chemical reaction: the exchange, ionization, or relaxation of electrons underlies every bond broken or formed. According to the Bohr model of the atom, it takes an electron 150 as to orbit a proton[6]. With this as the unit time scale for an electron, it is clear that a pulse duration of several femtoseconds is not sufficient to resolve electron dynamics. Our work demonstrates both technical and scientific achievements that push the boundaries of attosecond dynamics. TDSE studies show that amplification of the high harmonic generation (HHG) yield may be possible with transverse confinement of the electron. XUV-pump–XUV-probe experiments show that the yield of an attosecond pulse train (APT) can be sufficient for two-photon double-ionization studies. A zero dead-time detection system allows the measurement of state-resolved double ionization for the first time. Exploiting attosecond angular streaking[7] probes sequential and non-sequential double ionization via electron-electron correlations with attosecond time resolution. Finally, using recoil-frame momentum correlation, the fast dissociation of CH3I reveals important orbital ionization dynamics of non-dissociative and dissociative, single and double ionization.
PolyDL: Polyhedral Optimizations for Creation of High Performance DL primitives
Deep Neural Networks (DNNs) have revolutionized many aspects of our lives.
The use of DNNs is becoming ubiquitous, including in software for image
recognition, speech recognition, speech synthesis, and language translation, to
name a few. The training of DNN architectures, however, is computationally
expensive. Once the model is created, its use in the intended application, the
inference task, is also computationally heavy, and inference needs to be fast
for real-time use. For obtaining high performance today, the norm is code for
Deep Learning (DL) primitives optimized for specific architectures by expert
programmers and exposed via libraries. However, given the constant
emergence of new DNN architectures, creating hand-optimized code is expensive,
slow, and not scalable.
To address this performance-productivity challenge, in this paper we present
compiler algorithms to automatically generate high performance implementations
of DL primitives that closely match the performance of hand optimized
libraries. We develop novel data reuse analysis algorithms using the polyhedral
model to derive efficient execution schedules automatically. In addition,
because most DL primitives use some variant of matrix multiplication at their
core, we develop a flexible framework where it is possible to plug in library
implementations of the same in lieu of a subset of the loops. We show that such
a hybrid compiler plus a minimal library-use approach results in
state-of-the-art performance. We also develop compiler algorithms that perform
operator fusion, reducing data movement through the memory hierarchy of the
computer system.

Comment: arXiv admin note: substantial text overlap with arXiv:2002.0214
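The hybrid compiler-plus-minimal-library idea above, where a library GEMM is plugged in for a subset of the loops, can be sketched as follows. This is an illustrative sketch only, not PolyDL's actual code: the function `conv1x1_blocked`, its tiling scheme, and the use of `np.dot` as a stand-in for a hand-optimized GEMM microkernel are assumptions made for the example.

```python
import numpy as np

def conv1x1_blocked(x, w, block=16):
    """1x1 convolution of x (N,H,W,Cin) with weights w (Cin,Cout),
    computed with explicitly blocked loops.

    The outer loops iterate over tiles; choosing their order and tile
    sizes is the kind of schedule a polyhedral data-reuse analysis
    would derive. The innermost per-tile computation is delegated to
    np.dot, standing in for a library GEMM microkernel.
    """
    N, H, W, Cin = x.shape
    Cin_w, Cout = w.shape
    assert Cin == Cin_w
    # A 1x1 convolution is a matrix multiply over flattened pixels.
    x2 = x.reshape(N * H * W, Cin)
    y2 = np.zeros((N * H * W, Cout))
    M = x2.shape[0]
    for m0 in range(0, M, block):          # tile over output rows
        for k0 in range(0, Cin, block):    # tile over the reduction dim
            # "Library GEMM" on one tile: y_tile += x_tile @ w_tile
            y2[m0:m0 + block] += np.dot(x2[m0:m0 + block, k0:k0 + block],
                                        w[k0:k0 + block])
    return y2.reshape(N, H, W, Cout)
```

A quick check against an unblocked reference confirms the tiled schedule computes the same result:

```python
x = np.random.rand(2, 4, 4, 8)
w = np.random.rand(8, 5)
ref = (x.reshape(-1, 8) @ w).reshape(2, 4, 4, 5)
assert np.allclose(conv1x1_blocked(x, w, block=3), ref)
```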