Experimental tests of the chiral anomaly magnetoresistance in the Dirac-Weyl semimetals Na3Bi and GdPtBi
In the Dirac/Weyl semimetal, the chiral anomaly appears as an "axial" current
arising from charge-pumping between the lowest (chiral) Landau levels of the
Weyl nodes, when an electric field E is applied parallel to a magnetic field
B. Evidence for the chiral anomaly was obtained from the longitudinal
magnetoresistance (LMR) in Na3Bi and GdPtBi. However, current jetting
effects (focusing of the current density J) have raised general concerns
about LMR experiments. Here we implement a litmus test that allows the
intrinsic LMR in Na3Bi and GdPtBi to be sharply distinguished from pure
current jetting effects (in pure Bi). Current jetting enhances J along the
mid-ridge (spine) of the sample while decreasing it at the edge. We measure the
distortion by comparing the local voltage drop at the spine (expressed as the
resistance R_spine) with that at the edge (R_edge). In Bi, R_spine
sharply increases with B but R_edge decreases (jetting effects are
dominant). However, in Na3Bi and GdPtBi, both R_spine and R_edge
decrease (jetting effects are subdominant). A numerical simulation allows the
jetting distortions to be removed entirely. We find that the intrinsic
longitudinal resistivity in Na3Bi decreases by a factor of
10.9 between B = 0 and 10 T. A second litmus test is obtained from the
parametric plot of the planar angular magnetoresistance. These results
considerably strengthen the evidence for the intrinsic nature of the
chiral-anomaly induced LMR. We briefly discuss how the squeeze test may be
extended to test ZrTe5.
Comment: 17 pages, 8 figures, new co-authors added, new Fig. 6a added. In press, PR
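
The squeeze test invites a compact numerical illustration. The sketch below,
which is not the paper's simulation, solves the anisotropic Laplace equation
div(sigma grad phi) = 0 on a 2D rectangular sample with point current contacts;
the conductivity ratio A = sigma_zz/sigma_xx stands in for the field-induced
anisotropy that drives current jetting. Grid size, anisotropy value, and probe
positions are all illustrative assumptions.

import numpy as np

nx, nz = 41, 81                 # grid: x transverse, z along the current
A = 25.0                        # anisotropy sigma_zz / sigma_xx (grows with B)
phi = np.zeros((nx, nz))
src, snk = (nx // 2, 0), (nx // 2, nz - 1)   # point contacts on the end faces

for _ in range(20000):          # relax div(sigma grad phi) = 0
    phi_old = phi.copy()
    phi[1:-1, 1:-1] = (phi[2:, 1:-1] + phi[:-2, 1:-1]
                       + A * (phi[1:-1, 2:] + phi[1:-1, :-2])) / (2 + 2 * A)
    phi[0, :], phi[-1, :] = phi[1, :], phi[-2, :]    # insulating sides
    phi[:, 0], phi[:, -1] = phi[:, 1], phi[:, -2]    # insulating ends...
    phi[src], phi[snk] = 1.0, -1.0                   # ...except the contacts
    if np.abs(phi - phi_old).max() < 1e-7:
        break

# Voltage drop between two probes along the spine vs along the edge:
# jetting concentrates J on the spine, so the spine drop dominates there.
z1, z2 = nz // 3, 2 * nz // 3
v_spine = phi[nx // 2, z1] - phi[nx // 2, z2]
v_edge = phi[1, z1] - phi[1, z2]
print(f"V_spine / V_edge = {v_spine / v_edge:.2f} (grows with A)")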
Attribute Graph Clustering via Learnable Augmentation
Contrastive deep graph clustering (CDGC) utilizes contrastive learning to
group nodes into different clusters. Better augmentation techniques improve the
quality of the contrastive samples, making augmentation one of the key factors
for improving performance. However, the augmented samples in existing methods are always
predefined by human experience and agnostic to the downstream clustering task,
leading to high labor costs and poor performance. To
this end, we propose an Attribute Graph Clustering method via Learnable
Augmentation (AGCLA), which introduces learnable augmentors to generate
high-quality and suitable augmented samples for CDGC. Specifically, we design
two learnable augmentors for attribute and structure information, respectively.
Besides, two refinement matrices, including the high-confidence pseudo-label
matrix and the cross-view sample similarity matrix, are generated to improve
the reliability of the learned affinity matrix. During the training procedure,
we notice that there exist differences between the optimization goals for
training learnable augmentors and contrastive learning networks. In other
words, we should guarantee both the consistency of the embeddings and
the diversity of the augmented samples. Thus, an adversarial learning mechanism
is designed in our method. Moreover, a two-stage training strategy is leveraged
for the high-confidence refinement matrices. Extensive experimental results
demonstrate the effectiveness of AGCLA on six benchmark datasets.
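
To make the adversarial mechanism concrete, here is a hedged PyTorch sketch
that alternates an encoder step enforcing cross-view consistency with an
augmentor step that opposes it to preserve diversity. The residual MLP
augmentor, the cosine consistency loss, and the plain alternating optimizers
are illustrative assumptions; AGCLA's actual method also includes a structure
augmentor and the two refinement matrices.

import torch
import torch.nn.functional as F

class MLPAugmentor(torch.nn.Module):
    """Learnable attribute augmentor: a residual perturbation of features X."""
    def __init__(self, dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, dim), torch.nn.ReLU(), torch.nn.Linear(dim, dim))
    def forward(self, x):
        return x + self.net(x)

def consistency_loss(z1, z2):
    # cross-view agreement: mean cosine distance between matched embeddings
    return 1 - F.cosine_similarity(z1, z2, dim=-1).mean()

def train_step(encoder, augmentor, X, opt_enc, opt_aug):
    # (1) encoder step: pull the two views together (consistency)
    opt_enc.zero_grad()
    loss_enc = consistency_loss(encoder(X), encoder(augmentor(X).detach()))
    loss_enc.backward()
    opt_enc.step()
    # (2) augmentor step: opposite sign, keep the augmented view diverse
    opt_aug.zero_grad()
    loss_aug = -consistency_loss(encoder(X).detach(), encoder(augmentor(X)))
    loss_aug.backward()
    opt_aug.step()
    return loss_enc.item(), loss_aug.item()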
Dink-Net: Neural Clustering on Large Graphs
Deep graph clustering, which aims to group the nodes of a graph into disjoint
clusters with deep neural networks, has achieved promising progress in recent
years. However, the existing methods fail to scale to large graphs with
millions of nodes. To solve this problem, a scalable deep graph clustering method
(Dink-Net) is proposed with the idea of dilation and shrink. First,
representations are learned in a self-supervised manner by discriminating
whether nodes have been corrupted by augmentations. Meanwhile, the cluster centres are
initialized as learnable neural parameters. Subsequently, the clustering
distribution is optimized by minimizing the proposed cluster dilation loss and
cluster shrink loss in an adversarial manner. With these settings, we unify the
two-step clustering, i.e., representation learning and clustering optimization,
into an end-to-end framework, guiding the network to learn clustering-friendly
features. Besides, Dink-Net scales well to large graphs since the designed loss
functions use mini-batch data to optimize the clustering distribution
without performance drops. Both experimental results and theoretical
analyses demonstrate the superiority of our method. Compared to the runner-up,
Dink-Net achieves 9.62% NMI improvement on the ogbn-papers100M dataset with 111
million nodes and 1.6 billion edges. The source code is released at
https://github.com/yueliu1999/Dink-Net. Besides, a collection (papers, codes,
and datasets) of deep graph clustering is shared at
https://github.com/yueliu1999/Awesome-Deep-Graph-Clustering.
Comment: 19 pages, 5 figures
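
As a concrete (and hedged) reading of the dilation-and-shrink idea, the
PyTorch sketch below treats cluster centres as learnable parameters, pushes
distinct centres apart with a margin-based dilation loss, and pulls each
mini-batch embedding toward its nearest centre with a shrink loss. The margin,
the normalization, and the nearest-centre assignment are illustrative choices,
not Dink-Net's exact losses.

import torch
import torch.nn.functional as F

class NeuralClustering(torch.nn.Module):
    def __init__(self, k, dim):
        super().__init__()
        self.centres = torch.nn.Parameter(torch.randn(k, dim))  # learnable centres

    def dilation_loss(self, margin=2.0):
        # push every pair of distinct centres at least `margin` apart
        d = torch.cdist(self.centres, self.centres)
        off_diag = ~torch.eye(self.centres.shape[0], dtype=torch.bool,
                              device=self.centres.device)
        return F.relu(margin - d[off_diag]).mean()

    def shrink_loss(self, z):
        # pull each mini-batch embedding toward its nearest centre
        d = torch.cdist(F.normalize(z, dim=-1), F.normalize(self.centres, dim=-1))
        return d.min(dim=1).values.mean()

    def forward(self, z):
        # the two losses act in opposite directions, as in the abstract;
        # only a mini-batch z is needed per step, which is what lets the
        # method scale to graphs with ~1e8 nodes
        return self.dilation_loss() + self.shrink_loss(z)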
Hard Sample Aware Network for Contrastive Deep Graph Clustering
Contrastive deep graph clustering, which aims to divide nodes into disjoint
groups via contrastive mechanisms, is a challenging research topic. Among
recent works, hard sample mining-based algorithms have attracted great attention
for their promising performance. However, we find that existing hard sample
mining methods suffer from two problems. 1) In the hardness measurement,
important structural information is overlooked in the similarity calculation,
degrading the representativeness of the selected hard negative samples. 2)
Previous works merely focus on the hard negative sample pairs while neglecting
the hard positive sample pairs. Nevertheless, samples within the same cluster
but with low similarity should also be carefully learned. To solve the
problems, we propose a novel contrastive deep graph clustering method dubbed
Hard Sample Aware Network (HSAN) by introducing a comprehensive similarity
measure criterion and a general dynamic sample weighting strategy. Concretely,
in our algorithm, the similarities between samples are calculated by
considering both the attribute embeddings and the structure embeddings, better
revealing sample relationships and assisting hardness measurement. Moreover,
under the guidance of the carefully collected high-confidence clustering
information, our proposed weight modulating function will first recognize the
positive and negative samples and then dynamically up-weight the hard sample
pairs while down-weighting the easy ones. In this way, our method can mine not
only the hard negative samples but also the hard positive samples, thus
further improving their discriminative capability. Extensive
experiments and analyses demonstrate the superiority and effectiveness of our
proposed method.
Comment: 9 pages, 6 figures
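
The following PyTorch sketch illustrates the two ingredients named above: a
similarity that mixes attribute and structure embeddings, and a modulating
function that up-weights hard pairs and down-weights easy ones under
high-confidence pseudo labels. The focal-style power form and the mixing
weight alpha are illustrative assumptions, not HSAN's exact formulation.

import torch
import torch.nn.functional as F

def pair_similarity(z_attr, z_struct, alpha=0.5):
    # similarity computed from both attribute and structure embeddings
    s_attr = F.normalize(z_attr, dim=-1) @ F.normalize(z_attr, dim=-1).T
    s_struct = F.normalize(z_struct, dim=-1) @ F.normalize(z_struct, dim=-1).T
    return alpha * s_attr + (1 - alpha) * s_struct

def modulated_weights(sim, same_cluster, gamma=2.0):
    # same_cluster: boolean (n, n) mask from high-confidence pseudo labels.
    # Hard positives (same cluster, low similarity) and hard negatives
    # (different cluster, high similarity) receive large weights; easy
    # pairs receive small ones.
    hardness = torch.where(same_cluster, 1 - sim, sim)
    return hardness.clamp(min=0) ** gamma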
Self-Supervised Temporal Graph Learning with Temporal and Structural Intensity Alignment
Temporal graph learning aims to generate high-quality representations for
graph-based tasks along with dynamic information, which has recently drawn
increasing attention. Unlike the static graph, a temporal graph is usually
organized in the form of node interaction sequences over continuous time
instead of an adjacency matrix. Most temporal graph learning methods model
current interactions by combining historical information over time. However,
such methods merely consider the first-order temporal information while
ignoring the important high-order structural information, leading to
sub-optimal performance. To solve this issue, by extracting both temporal and
structural information to learn more informative node representations, we
propose a self-supervised method termed S2T for temporal graph learning.
Specifically, the first-order temporal information and the high-order
structural information are combined with the initial node representations in
different ways to calculate two conditional intensities. An alignment loss
is then introduced to make the node representations more informative by
narrowing the gap between the two intensities. Concretely, besides modeling
temporal information using historical neighbor sequences, we further consider
the structural information from both local and global levels. At the local
level, we generate structural intensity by aggregating features from the
high-order neighbor sequences. At the global level, a global representation is
generated based on all nodes to adjust the structural intensity according to
the activity statuses of different nodes. Extensive experiments demonstrate that
the proposed method S2T achieves up to a 10.13% performance improvement
over the state-of-the-art competitors on several datasets.
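
A minimal sketch of the alignment idea, under illustrative assumptions: both
conditional intensities are computed from the same node representation, one
from the first-order historical neighbour sequence and one from high-order
neighbours rescaled by a global activity term, and the alignment loss narrows
the gap between them. The softplus intensity form and the dot-product
aggregators are assumptions, not the paper's exact model.

import torch
import torch.nn.functional as F

def temporal_intensity(z_u, z_hist):
    # z_hist: (n, d) embeddings of u's first-order historical neighbours
    return F.softplus((z_hist @ z_u).mean())

def structural_intensity(z_u, z_high, node_activity):
    # z_high: (m, d) high-order neighbour embeddings (local level);
    # node_activity: global-level scalar reflecting how active u is
    return F.softplus((z_high @ z_u).mean()) * node_activity

def alignment_loss(lam_t, lam_s):
    # narrow the gap between the two conditional intensities
    return (lam_t - lam_s) ** 2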
Reinforcement Graph Clustering with Unknown Cluster Number
Deep graph clustering, which aims to group nodes into disjoint clusters by
neural networks in an unsupervised manner, has attracted great attention in
recent years. Although the performance has been largely improved, the excellent
performance of the existing methods heavily relies on an accurately predefined
cluster number, which is not always available in the real-world scenario. To
enable the deep graph clustering algorithms to work without the guidance of the
predefined cluster number, we propose a new deep graph clustering method termed
Reinforcement Graph Clustering (RGC). In our proposed method, cluster number
determination and unsupervised representation learning are unified into a
single framework by the reinforcement learning mechanism. Concretely, the
discriminative node representations are first learned with the contrastive
pretext task. Then, to capture the clustering state accurately with both local
and global information in the graph, both node and cluster states are
considered. Subsequently, at each state, the qualities of different cluster
numbers are evaluated by the quality network, and the greedy action is executed
to determine the cluster number. To provide feedback to the actions, a
clustering-oriented reward function is proposed to enhance cohesion within
the same cluster and separation between different clusters. Extensive experiments
demonstrate the effectiveness and efficiency of our proposed method. The source
code of RGC is shared at https://github.com/yueliu1999/RGC and a collection
(papers, codes, and datasets) of deep graph clustering is shared at
https://github.com/yueliu1999/Awesome-Deep-Graph-Clustering on GitHub.
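
To ground the loop described above, here is a hedged sketch combining PyTorch
and scikit-learn: a quality network scores candidate cluster numbers for the
current state, a greedy action picks k, and a clustering-oriented reward
provides feedback. The state construction, the silhouette-based reward, and
the k-means assignment are illustrative stand-ins, not RGC's exact components.

import torch
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def greedy_action(quality_net, state, k_candidates):
    # the quality network scores each candidate cluster number for this state
    q = quality_net(state)                      # (len(k_candidates),) scores
    return k_candidates[int(torch.argmax(q))]   # greedy choice of k

def step(quality_net, state, z, k_candidates):
    # z: contrastively learned node embeddings as a numpy array
    k = greedy_action(quality_net, state, k_candidates)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(z)
    # silhouette rises with intra-cluster cohesion and inter-cluster
    # separation, a stand-in for the paper's clustering-oriented reward
    return k, labels, silhouette_score(z, labels)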