Classifying motion states of AUV based on graph representation for multivariate time series
Acknowledgement: This work is supported by the Natural Science Foundation of Shandong Province (ZR2020MF079) and the China Scholarship Council (CSC).
Self-Supervised Representation Learning with Cross-Context Learning between Global and Hypercolumn Features
Whilst contrastive learning yields powerful representations by matching
different augmented views of the same instance, it lacks the ability to capture
the similarities between different instances. One popular way to address this
limitation is by learning global features (after the global pooling) to capture
inter-instance relationships based on knowledge distillation, where the global
features of the teacher are used to guide the learning of the global features
of the student. Inspired by cross-modality learning, we extend this existing
framework that only learns from global features by encouraging the global
features and intermediate layer features to learn from each other. This leads
to our novel self-supervised framework: cross-context learning between global
and hypercolumn features (CGH), which enforces the consistency of instance
relations between low- and high-level semantics. Specifically, we stack the
intermediate feature maps to construct a hypercolumn representation so that we
can measure instance relations using two contexts (hypercolumn and global
feature) separately, and then use the relations of one context to guide the
learning of the other. This cross-context learning allows the model to learn
from the differences between the two contexts. The experimental results on
linear classification and downstream tasks show that our method outperforms the
state-of-the-art methods.
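The two-context mechanism described in the abstract can be sketched compactly: stack upsampled intermediate feature maps into a hypercolumn, compute an instance-relation distribution in each context, and let each context's relations guide the other. The helper names, the nearest-neighbor upsampling, the temperature value, and the symmetric cross-entropy pairing below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def hypercolumn(feature_maps, out_hw=8):
    """Stack intermediate feature maps into a hypercolumn.

    Each map (C_i, H_i, W_i) is nearest-neighbor upsampled (here by index
    selection, an assumed choice) to (C_i, out_hw, out_hw), then all maps
    are concatenated along the channel axis.
    """
    ups = []
    for f in feature_maps:
        _, H, W = f.shape
        ri = np.arange(out_hw) * H // out_hw   # row indices to sample
        ci = np.arange(out_hw) * W // out_hw   # column indices to sample
        ups.append(f[:, ri][:, :, ci])
    return np.concatenate(ups, axis=0)         # (sum_i C_i, out_hw, out_hw)

def relation_matrix(feats, tau=0.1):
    """Instance-relation distribution for one context.

    feats: (N, D) one feature vector per instance. Returns a row-stochastic
    matrix: softmax over temperature-scaled cosine similarities, with the
    self-similarity excluded.
    """
    z = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)             # drop self-pairs
    e = np.exp(sim - sim.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_context_loss(p_global, p_hyper):
    """Symmetric cross-entropy between the two relation distributions,
    so each context guides the learning of the other (assumed symmetric
    pairing for illustration)."""
    eps = 1e-12
    ce_gh = -np.sum(p_global * np.log(p_hyper + eps), axis=1)
    ce_hg = -np.sum(p_hyper * np.log(p_global + eps), axis=1)
    return 0.5 * (ce_gh + ce_hg).mean()
```

In a training loop, `p_global` would come from the pooled global features and `p_hyper` from the flattened hypercolumn features of the same batch; stopping gradients through the "teacher" side of each cross-entropy term is the usual design choice in such relation-distillation setups.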
Molecular Heat Engines: Quantum Coherence Effects
Recent developments in nanoscale experimental techniques made it possible to
utilize single molecule junctions as devices for electronics and energy
transfer with quantum coherence playing an important role in their
thermoelectric characteristics. Theoretical studies on the efficiency of
nanoscale devices usually employ rate (Pauli) equations, which do not account
for quantum coherence. Therefore, the question whether quantum coherence could
improve the efficiency of a molecular device cannot be fully addressed within
such considerations. Here, we employ a nonequilibrium Green function approach
to study the effects of quantum coherence and dephasing on the thermoelectric
performance of molecular heat engines. Within a generic bichromophoric
donor-bridge-acceptor junction model, we show that quantum coherence may
increase efficiency compared to quasi-classical (rate equation) predictions and
that pure dephasing and dissipation destroy this effect.
Comment: 21 pages, 4 figures