Prediction of Stable Ground-State Lithium Polyhydrides under High Pressures
Hydrogen-rich compounds are important for understanding the dissociation of
dense molecular hydrogen, as well as for the search for room-temperature Bardeen-Cooper-Schrieffer (BCS) superconductors. A recent high-pressure
experiment reported the successful synthesis of novel insulating lithium
polyhydrides above 130 GPa. However, these results sharply contradict previous theoretical predictions with the PBE functional, according to which all lithium polyhydrides LiHn (n = 2-8) should be metallic in this pressure range. To
address this discrepancy, we perform unbiased structure searches with first-principles calculations, including the van der Waals interaction that was ignored in previous predictions, to identify the stable high-pressure structures of LiHn (n = 2-11, 13) up to 200 GPa. We reproduce the previously predicted
structures, and further find novel compositions that adopt more stable
structures. The van der Waals functional (vdW-DF) significantly alters the
relative stability of lithium polyhydrides, and predicts that the stable
stoichiometries for the ground-state should be LiH2 and LiH9 at 130-170 GPa,
and LiH2, LiH8, and LiH10 at 180-200 GPa. Accurate electronic structure calculations within the GW approximation indicate that LiH, LiH2, LiH7, and LiH9 remain insulating up to at least 208 GPa, while all other lithium polyhydrides are
metallic. The calculated vibron frequencies of these insulating phases are also
in accordance with the experimental infrared (IR) data. This reconciliation
with the experimental observations suggests that LiH2, LiH7, and LiH9 are possible candidates for the lithium polyhydrides synthesized in that experiment.
Our results reinstate the credibility of density functional theory in describing H-rich compounds, and demonstrate the importance of including the van der Waals interaction in this class of materials.
Comment: 34 pages, 15 figures
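The stability analysis described above (deciding which LiHn stoichiometries are ground states at a given pressure) conventionally reduces to a convex-hull construction over formation enthalpies versus composition: phases on the lower hull are stable, phases above it decompose. A minimal sketch of that construction, using hypothetical ΔH values rather than the paper's calculated ones:

```python
# Sketch: stable stoichiometries via the lower convex hull of formation
# enthalpy vs. H fraction. The ΔH numbers below are illustrative
# placeholders, NOT values from the paper.

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_hull(points):
    """Andrew's monotone chain, lower hull only; points are (x, y) tuples."""
    hull = []
    for p in sorted(points):
        # Pop while the last point sits on or above the new segment.
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

# LiHn: H fraction x = n / (n + 1); ΔH per atom in eV (hypothetical numbers).
phases = {
    "Li":   (0.0, 0.0),      # elemental endpoints sit at ΔH = 0
    "LiH":  (1 / 2, -0.10),
    "LiH2": (2 / 3, -0.12),
    "LiH6": (6 / 7, -0.03),
    "LiH9": (9 / 10, -0.06),
    "H":    (1.0, 0.0),
}
on_hull = set(lower_hull(phases.values()))
stable = [name for name, pt in phases.items() if pt in on_hull]
print(stable)  # LiH6 lies above the hull here, so it is only metastable
```

With these toy enthalpies, LiH, LiH2, and LiH9 land on the hull while LiH6 does not; swapping in vdW-DF versus PBE enthalpies is exactly what shifts phases on or off the hull in the abstract's analysis.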
Improving Person Re-identification by Attribute and Identity Learning
Person re-identification (re-ID) and attribute recognition share a common goal: learning pedestrian descriptions. They differ in granularity. Most existing re-ID methods take only the identity labels of
pedestrians into consideration. However, we find that attributes, which provide detailed local descriptions, help the re-ID model learn more discriminative feature representations. In this paper, based on the
complementarity of attribute labels and ID labels, we propose an
attribute-person recognition (APR) network, a multi-task network which learns a
re-ID embedding and at the same time predicts pedestrian attributes. We
manually annotate attribute labels for two large-scale re-ID datasets, and
systematically investigate how person re-ID and attribute recognition benefit
from each other. In addition, we re-weight the attribute predictions
considering the dependencies and correlations among the attributes. The
experimental results on two large-scale re-ID benchmarks demonstrate that by
learning a more discriminative representation, APR achieves competitive re-ID
performance compared with the state-of-the-art methods. We use APR to speed up
the retrieval process by ten times with a minor accuracy drop of 2.92% on
Market-1501. We also apply APR to the attribute recognition task and demonstrate improvement over the baselines.
Comment: Accepted to Pattern Recognition (PR)
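A multi-task objective of the kind described above is typically a weighted sum of an identity cross-entropy and the attribute cross-entropies. This is a minimal sketch with made-up logits and a hypothetical weight lam, not the APR network's actual loss implementation:

```python
# Sketch of a joint identity + attribute loss: one shared embedding feeds
# an identity classifier and several attribute classifiers, and their
# cross-entropy losses are combined. All numbers are toy illustrations.
import math

def softmax(logits):
    m = max(logits)                      # subtract max for stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, label):
    return -math.log(softmax(logits)[label])

def multitask_loss(id_logits, id_label, attr_logits, attr_labels, lam=1.0):
    """Identity CE plus lam times the mean of the attribute CEs."""
    l_id = cross_entropy(id_logits, id_label)
    l_attr = sum(cross_entropy(lg, lb)
                 for lg, lb in zip(attr_logits, attr_labels))
    return l_id + lam * l_attr / len(attr_labels)

# Toy example: 3 identities, two binary attributes (e.g. "backpack", "hat").
loss = multitask_loss(
    id_logits=[2.0, 0.1, -1.0], id_label=0,
    attr_logits=[[1.5, -0.5], [0.2, 0.3]], attr_labels=[0, 1],
)
print(round(loss, 3))
```

Averaging over attributes keeps the attribute term from dominating as more attributes are annotated; the re-weighting of attribute predictions mentioned in the abstract would adjust the per-attribute contributions rather than use this uniform mean.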
Dual Long Short-Term Memory Networks for Sub-Character Representation Learning
Characters have commonly been regarded as the minimal processing unit in Natural Language Processing (NLP). But many non-Latin languages have logographic writing systems with large character inventories, numbering in the tens of thousands. Each character is composed of even smaller parts, which previous work has often ignored. In this paper, we propose a novel
architecture employing two stacked Long Short-Term Memory Networks (LSTMs) to
learn sub-character level representation and capture deeper level of semantic
meanings. To substantiate the efficiency of our neural architecture in a concrete study, we take Chinese Word Segmentation as a case study. Chinese is a typical such language: every character contains several components called radicals. Our networks employ a
shared radical level embedding to solve both Simplified and Traditional Chinese
Word Segmentation, without an extra Traditional-to-Simplified Chinese conversion step; this end-to-end design significantly simplifies word segmentation compared to previous work. Radical-level embeddings can also
capture deeper semantic meaning below character level and improve the system
performance. Tying radical and character embeddings together reduces the parameter count while sharing and transferring semantic knowledge between the two levels, which boosts performance considerably. On 3 out of 4
Bakeoff 2005 datasets, our method surpassed state-of-the-art results by up to
0.4%. Our results are reproducible, source codes and corpora are available on
GitHub.
Comment: Accepted & forthcoming at ITNG-201
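The embedding-tying idea above can be illustrated by composing character vectors from a shared radical-level table, so characters that share a radical also share parameters. The decomposition table, dimension, and composition-by-summation below are illustrative assumptions, not the paper's exact scheme:

```python
# Sketch: character embeddings built from a shared radical embedding table.
# Both 海 and 梅 reuse the parameters of the radical 每, so the model stores
# one row per radical instead of one row per character.
import random

random.seed(0)
DIM = 4
radical_emb = {r: [random.uniform(-1, 1) for _ in range(DIM)]
               for r in ["氵", "每", "木"]}

# Toy decomposition table (real decompositions come from a radical lexicon).
decomp = {"海": ["氵", "每"], "梅": ["木", "每"]}

def char_embedding(ch):
    """Character vector = sum of its radical vectors (the tying step)."""
    vecs = [radical_emb[r] for r in decomp[ch]]
    return [sum(vals) for vals in zip(*vecs)]

v_hai, v_mei = char_embedding("海"), char_embedding("梅")
# Parameter count of the shared table: rows scale with radicals, not characters.
print(len(radical_emb) * DIM)
```

Because a gradient update to the 每 row affects every character containing that radical, semantic knowledge transfers between the radical and character levels, which is the mechanism behind the parameter reduction the abstract reports.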