First-Principles Simulations of Inelastic Electron Tunneling Spectroscopy of Molecular Junctions
A generalized Green's function theory is developed to simulate the inelastic
electron tunneling spectroscopy (IETS) of molecular junctions. Combined with
hybrid density functional theory calculations, it is applied to a realistic
molecular junction in which an octanedithiolate is embedded between two gold
contacts. The calculated spectra are in excellent agreement with recent
experimental results, and the strong temperature dependence of the
experimental IETS spectra is also reproduced. It is shown that the IETS is
extremely sensitive to the intra-molecular conformation and to the
molecule-metal contact geometry.
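For reference, the quantity conventionally reported as an IETS spectrum (its definition is assumed here, not spelled out in the abstract) is the normalized second derivative of the tunneling current,
$\mathrm{IETS}(V) = \dfrac{d^2 I/dV^2}{dI/dV},$
in which molecular vibrational modes appear as peaks near bias voltages $eV \approx \hbar\omega$.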
Bi-collinear antiferromagnetic order in the tetragonal α-FeTe
Based on first-principles electronic structure calculations, we find that the
ground state of PbO-type tetragonal α-FeTe is a bi-collinear
antiferromagnetic state, in which the Fe local moments are ordered
ferromagnetically along one diagonal direction and antiferromagnetically
along the other diagonal direction of the Fe square lattice. This bi-collinear
order results from the interplay among the nearest, next-nearest, and
next-next-nearest neighbor superexchange interactions $J_1$, $J_2$, and $J_3$,
mediated by the Te $5p$ band. In contrast, the ground state of α-FeSe is the
collinear antiferromagnetic order, similar to that in LaFeAsO and
BaFe$_2$As$_2$.
Comment: 5 pages and 5 figures
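As context, the competing couplings named above are commonly summarized by a $J_1$-$J_2$-$J_3$ Heisenberg model on the Fe square lattice; the standard form (written out here for convenience, not quoted from the abstract) is
$H = J_1 \sum_{\langle ij\rangle} \mathbf{S}_i \cdot \mathbf{S}_j + J_2 \sum_{\langle\langle ij\rangle\rangle} \mathbf{S}_i \cdot \mathbf{S}_j + J_3 \sum_{\langle\langle\langle ij\rangle\rangle\rangle} \mathbf{S}_i \cdot \mathbf{S}_j,$
where the three sums run over nearest-, next-nearest-, and next-next-nearest-neighbor Fe pairs; in such models a sizable $J_3$ is what typically tips the balance from the collinear stripe order toward the bi-collinear pattern.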
In-Process Global Interpretation for Graph Learning via Distribution Matching
Graph neural networks (GNNs) have emerged as powerful graph learning models
due to their superior capacity for capturing critical graph patterns. To gain
insight into the model mechanism for interpretable graph learning, previous
efforts focus on post-hoc local interpretation, extracting the data pattern
that a pre-trained GNN model uses to make an individual prediction. However,
recent works show that post-hoc methods are highly sensitive to model
initialization and that local interpretation can only explain the prediction
for a particular instance. In this work, we address these limitations by
answering an important question that has not yet been studied: how can we
provide a global interpretation of the model training procedure? We formulate
this problem as in-process global interpretation, which aims to distill the
high-level, human-intelligible patterns that dominate the training procedure
of GNNs. We further propose Graph Distribution Matching (GDM) to synthesize
interpretive graphs by matching the distributions of the original and
interpretive graphs in the feature space of the GNN as its training proceeds.
These few interpretive graphs exhibit the most informative patterns the model
captures during training. Extensive experiments on graph classification
datasets demonstrate multiple advantages of the proposed method, including
high explanation accuracy, time efficiency, and the ability to reveal
class-relevant structure.
Comment: Under Review
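To make the matching objective concrete, here is a minimal, illustrative sketch: a GNN encoder is treated as a feature extractor, and a small set of learnable synthetic (interpretive) graphs is optimized so that their mean embedding matches that of real graphs. The TinyGNN encoder, the dense adjacency/feature tensors, and the simple mean-embedding loss are assumptions made for illustration, not the paper's implementation; in the actual method the matching runs alongside GNN training rather than against a frozen encoder.

import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    """Minimal one-layer message-passing encoder over dense adjacency matrices
    (an illustrative stand-in for the GNN being interpreted)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, adj, x):
        # adj: (B, N, N) dense adjacency; x: (B, N, in_dim) node features
        h = torch.relu(self.lin(adj @ x))  # aggregate neighbor features, then transform
        return h.mean(dim=1)               # mean-pool nodes into a graph-level embedding

def distribution_matching_loss(gnn, real_adj, real_x, syn_adj, syn_x):
    """Match mean graph embeddings of real and synthetic (interpretive) graphs
    in the GNN's feature space -- a simple moment-matching surrogate."""
    with torch.no_grad():                      # real embeddings need no gradient
        real_emb = gnn(real_adj, real_x).mean(dim=0)
    syn_emb = gnn(syn_adj, syn_x).mean(dim=0)  # gradients flow into the synthetic graphs
    return ((real_emb - syn_emb) ** 2).sum()

if __name__ == "__main__":
    n_real, n_nodes, feat_dim, hid_dim, n_syn = 32, 10, 8, 16, 4
    gnn = TinyGNN(feat_dim, hid_dim)
    for p in gnn.parameters():                 # keep the encoder frozen in this sketch
        p.requires_grad_(False)

    real_adj = (torch.rand(n_real, n_nodes, n_nodes) > 0.7).float()  # random demo graphs
    real_x = torch.randn(n_real, n_nodes, feat_dim)

    # Learnable interpretive graphs: continuous edge logits plus node features.
    syn_adj_logits = torch.randn(n_syn, n_nodes, n_nodes, requires_grad=True)
    syn_x = torch.randn(n_syn, n_nodes, feat_dim, requires_grad=True)
    opt = torch.optim.Adam([syn_adj_logits, syn_x], lr=0.05)

    for step in range(200):
        syn_adj = torch.sigmoid(syn_adj_logits)  # relax edges to [0, 1]
        loss = distribution_matching_loss(gnn, real_adj, real_x, syn_adj, syn_x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final matching loss: {loss.item():.4f}")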