Observation of vacancy-induced suppression of electronic cooling in defected graphene
Previous studies of electron-phonon interaction in impure graphene have found
that static disorder can give rise to an enhancement of electronic cooling. We
investigate the effect of dynamic disorder and observe a suppression of
electronic cooling by more than an order of magnitude compared with clean
graphene. The effect is stronger in graphene with more vacancies, confirming
its vacancy-induced nature. The dependence of the coupling constant on the
phonon temperature implies a link to the dynamics of the disorder. Our study
highlights
the effect of disorder on electron-phonon interaction in graphene. In addition,
the suppression of electronic cooling holds great promise for improving the
performance of graphene-based bolometer and photodetector devices.
Comment: 13 pages, 4 figures
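For context, electron-phonon cooling in graphene is commonly parametrized with a power law. The form below is a convention from the literature, not necessarily the exact model fitted in this work, and the symbols are our notation:

```latex
% Common parametrization of the electron-phonon cooling power in graphene
% (literature convention; symbols are our notation, not the authors' fit).
% P: heat flow from electrons to phonons, A: coupling constant,
% T_e, T_ph: electron and phonon temperatures, \delta: exponent
% (\delta = 4 for clean graphene below the Bloch-Grueneisen temperature,
%  \delta = 3 for disorder-assisted "supercollision" cooling).
P = A \left( T_e^{\delta} - T_{\mathrm{ph}}^{\delta} \right)
```

In this picture, a coupling constant A that itself varies with the phonon temperature, as reported above, signals that the disorder is dynamic rather than static.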
Dual Long Short-Term Memory Networks for Sub-Character Representation Learning
Characters have commonly been regarded as the minimal processing unit in
Natural Language Processing (NLP). But many non-Latin languages have
logographic writing systems with large inventories of thousands of distinct
characters. Each character is composed of even smaller parts, which previous
work has often ignored. In this paper, we propose a novel
architecture employing two stacked Long Short-Term Memory Networks (LSTMs) to
learn sub-character representations and capture deeper semantic meaning. To
substantiate the effectiveness of our neural architecture, we take Chinese Word
Segmentation as a case study; among such languages, Chinese is typical, as
every character is composed of several components called radicals. Our networks
employ a
shared radical-level embedding to solve both Simplified and Traditional Chinese
Word Segmentation without an extra Traditional-to-Simplified conversion step;
this end-to-end design significantly simplifies word segmentation compared with
previous work. Radical-level embeddings can also
capture deeper semantic meaning below the character level and improve learning
performance. Tying radical and character embeddings together reduces the
parameter count while semantic knowledge is shared and transferred between the
two levels, substantially boosting performance. On 3 of the 4 Bakeoff 2005
datasets, our method surpasses state-of-the-art results by up to 0.4%. Our
results are reproducible; source code and corpora are available on GitHub.
Comment: Accepted & forthcoming at ITNG-201
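As a rough illustration of the architecture described above, the following is a minimal PyTorch sketch; it is not the authors' released code, and the class names, dimensions, and exact embedding-tying scheme are our assumptions. The first LSTM summarizes each character's radical sequence into a sub-character vector; the second LSTM reads the sentence using each character's embedding concatenated with that radical summary; a single embedding table over a joint radical-plus-character vocabulary stands in for the tied embeddings:

```python
# Hypothetical sketch of the dual-LSTM sub-character architecture described
# above. Names, dimensions, and the tying scheme are assumptions, not the
# authors' released code. Requires PyTorch.
import torch
import torch.nn as nn

class DualLSTMSegmenter(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_tags=4):
        super().__init__()
        # One shared table for both radicals and characters (joint vocabulary),
        # so semantic knowledge is shared and the parameter count is reduced.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # LSTM 1: reads each character's radical sequence -> sub-character vector.
        self.radical_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # LSTM 2: reads the sentence as [char embedding ; radical summary] per step.
        self.char_lstm = nn.LSTM(embed_dim + hidden_dim, hidden_dim,
                                 batch_first=True, bidirectional=True)
        self.tagger = nn.Linear(2 * hidden_dim, num_tags)  # e.g. BMES tags

    def forward(self, char_ids, radical_ids):
        # char_ids:    (batch, sent_len)          character IDs
        # radical_ids: (batch, sent_len, n_rad)   radicals of each character
        b, s, r = radical_ids.shape
        rad_emb = self.embed(radical_ids.reshape(b * s, r))   # (b*s, r, emb)
        _, (h, _) = self.radical_lstm(rad_emb)                # final hidden state
        rad_summary = h[-1].view(b, s, -1)                    # (b, s, hidden)
        char_emb = self.embed(char_ids)                       # shared table
        feats, _ = self.char_lstm(torch.cat([char_emb, rad_summary], dim=-1))
        return self.tagger(feats)                             # (b, s, num_tags)

# Toy usage: batch of 2 sentences, 5 characters, up to 3 radicals per character.
model = DualLSTMSegmenter(vocab_size=1000)
chars = torch.randint(1, 1000, (2, 5))
rads = torch.randint(1, 1000, (2, 5, 3))
print(model(chars, rads).shape)  # torch.Size([2, 5, 4])
```

Sharing one lookup table is only one way to realize the tying the abstract mentions; the paper may instead tie weights across two separate radical and character tables.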