Time-dependent Aharonov-Bohm effect on the noncommutative space
We study the time-dependent Aharonov-Bohm effect on the noncommutative space.
Because there is no net Aharonov-Bohm phase shift in the time-dependent case on
commutative space, a tiny deviation from zero would indicate new physics. Based
on the Seiberg-Witten map, we obtain the gauge-invariant and Lorentz-covariant
Aharonov-Bohm phase shift in the general case on noncommutative space. We find
two kinds of contribution: momentum-dependent and
momentum-independent corrections. For the momentum-dependent correction, there
is a cancellation between the magnetic and electric phase shifts, just as on
commutative space. However, there is a non-trivial contribution in the
momentum-independent correction. This holds for both the time-independent and
time-dependent Aharonov-Bohm effects on noncommutative space. However, the
time-dependent Aharonov-Bohm effect is free of the overwhelming background that
is present in the time-independent effect on both commutative and
noncommutative space. Therefore, the time-dependent
Aharonov-Bohm effect can be sensitive to the spatial noncommutativity. The net
correction is proportional to the product of the magnetic fluxes through the
fundamental area represented by the noncommutative parameter θ and through the
surface enclosed by the trajectory of the charged particle. More interestingly,
there is an anti-collinear relation between the logarithms of the magnetic
field B and the averaged flux Φ̄/N (N is the number of fringes shifted). This
nontrivial relation can also provide a way to test the spatial
noncommutativity. For , our estimate of the experimental sensitivity shows
that it can reach the scale. This sensitivity can be enhanced by using a
stronger magnetic field, a larger magnetic flux, and higher experimental
precision on the phase shift.
Comment: 12 pages, 1 figure; v2, accepted version by PL
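A minimal worked reading of the two quantitative claims above, in LaTeX, under
assumed conventional notation: θ (the noncommutative parameter, an area), S
(the area enclosed by the trajectory), B (the magnetic field), and Φ̄ (the
averaged flux). These symbols were lost in extraction, so this is a sketch of
one consistent reading, not the paper's derivation.

    % Net noncommutative correction: product of two magnetic fluxes
    \Delta\phi_{\mathrm{NC}} \;\propto\; \Phi_\theta\,\Phi_S ,
    \qquad \Phi_\theta = B\,\theta , \qquad \Phi_S = B\,S .
    % Fixing the number of shifted fringes N (so \Delta\phi_{\mathrm{NC}} is
    % fixed) and using \bar{\Phi} \sim B S gives
    % B\,\bar{\Phi}/N \propto 1/\theta, hence
    \log B + \log\!\left(\bar{\Phi}/N\right) = \mathrm{const} .

On a log-log plot this is a straight line of slope -1, which appears to be
what the abstract calls an "anti-collinear" relation between the logarithms.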
Multi-channel Encoder for Neural Machine Translation
The attention-based encoder-decoder is an effective architecture for neural
machine translation (NMT); it typically relies on recurrent neural networks
(RNNs) to build the representations that are later attended to by the decoder.
This encoder design yields a relatively uniform composition of the source
sentence, despite the gating mechanism employed in the encoding RNN. On the
other hand, we often want the decoder to take pieces of the source sentence at
varying levels of composition, suiting its own linguistic structure: for
example, we may want to take an entity name in its raw form while treating an
idiom as a fully composed unit. Motivated by this demand, we propose
Multi-channel Encoder (MCE), which enhances encoding components with different
levels of composition. More specifically, in addition to the hidden state of
the encoding RNN, MCE takes 1) the original word embedding for raw encoding
with no composition, and 2) a particular design of external memory in the
style of a Neural Turing Machine (NTM) for more complex composition, while all
three encoding strategies
are properly blended during decoding. An empirical study on Chinese-English
translation shows that our model improves upon a strong open-source NMT
system, DL4MT, by 6.52 BLEU points. On the WMT14 English-French task, our
single shallow system achieves BLEU=38.8, comparable with state-of-the-art
deep models.
Comment: Accepted by AAAI-201
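To make the three-channel blending concrete, here is a minimal runnable sketch
in PyTorch. The class name, the softmax gate, and the self-attention stand-in
for the NTM-style memory channel are illustrative assumptions, not the paper's
implementation; the channels are also blended here at encoding time for
brevity, whereas the abstract describes blending during decoding.

    # Hypothetical sketch of a multi-channel encoder in the spirit of MCE.
    import torch
    import torch.nn as nn

    class MultiChannelEncoder(nn.Module):
        """Blends three encodings per source position:
        1) raw word embeddings (no composition),
        2) bidirectional GRU hidden states (gated composition),
        3) a simplified memory channel standing in for the NTM-style
           external memory described in the abstract.
        """

        def __init__(self, vocab_size: int, dim: int = 256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.rnn = nn.GRU(dim, dim // 2, bidirectional=True,
                              batch_first=True)
            # Stand-in memory channel: content-based self-attention over
            # the embeddings (a crude proxy for NTM read/write heads).
            self.memory = nn.MultiheadAttention(dim, num_heads=4,
                                                batch_first=True)
            # Per-position gate deciding how to mix the three channels.
            self.gate = nn.Linear(3 * dim, 3)

        def forward(self, tokens: torch.Tensor) -> torch.Tensor:
            emb = self.embed(tokens)                 # (B, T, D) raw channel
            rnn_out, _ = self.rnn(emb)               # (B, T, D) composed
            mem_out, _ = self.memory(emb, emb, emb)  # (B, T, D) memory
            # Softmax gate -> convex combination of the three channels.
            weights = torch.softmax(
                self.gate(torch.cat([emb, rnn_out, mem_out], dim=-1)),
                dim=-1)
            channels = torch.stack([emb, rnn_out, mem_out], dim=-1)
            return (channels * weights.unsqueeze(2)).sum(dim=-1)  # (B, T, D)

    # Usage: encode a toy batch of token ids.
    enc = MultiChannelEncoder(vocab_size=1000)
    out = enc(torch.randint(0, 1000, (2, 7)))  # -> torch.Size([2, 7, 256])

The softmax gate gives each position a convex combination of the raw,
composed, and memory channels, mirroring the abstract's idea of mixing
encoding strategies at different levels of composition.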