Polymorphic Aβ42 fibrils adopt similar secondary structure but differ in cross-strand side chain stacking interactions within the same β-sheet.
Formation of polymorphic amyloid fibrils is a common feature in neurodegenerative diseases involving protein aggregation. In Alzheimer's disease, different fibril structures may be associated with different clinical sub-types. The structural basis of fibril polymorphism is thus important for understanding the role of amyloid fibrils in the pathogenesis and progression of these diseases. Here we studied two types of Aβ42 fibrils prepared under quiescent and agitated conditions. Quiescent Aβ42 fibrils adopt a long and twisted morphology, while agitated fibrils are short and straight, forming large bundles via lateral association. EPR studies of these two types of Aβ42 fibrils show that the secondary structure is similar in both fibril polymorphs. At the same time, agitated Aβ42 fibrils show stronger interactions between spin labels across the full range of the Aβ42 sequence, suggesting a more tightly packed structure. Our data suggest that cross-strand side chain packing interactions within the same β-sheet may play a critical role in the formation of polymorphic fibrils.
Metamagnetic transitions and anomalous magnetoresistance in EuAgAs single crystal
In this paper, the magnetic and transport properties of EuAgAs single
crystals, which crystallize in a centrosymmetric trigonal CaCuP-type
structure, were systematically studied. Two magnetic transitions were
confirmed at 10 K and 15 K, respectively. With increasing field, both
transitions are noticeably driven to lower temperatures. At low
temperatures, applying an in-plane magnetic field induces two successive
metamagnetic transitions. For both field orientations, EuAgAs shows an
unexpectedly large positive magnetoresistance (up to 202%) at low fields for
temperatures below 10 K, and a large negative magnetoresistance (up to -78%)
at high fields and intermediate temperatures. Such an anomalous field
dependence of the magnetoresistance may find application in future magnetic
sensors. Finally, the magnetic phase diagrams of EuAgAs were constructed for
both field orientations.
Reference Matters: Benchmarking Factual Error Correction for Dialogue Summarization with Fine-grained Evaluation Framework
Factuality is important to dialogue summarization, and factual error
correction (FEC) of model-generated summaries is one way to improve it.
Current FEC evaluation, which relies on factuality metrics, is neither
reliable nor detailed enough. To address this problem, we are the first to
manually annotate an FEC dataset for dialogue summarization, containing 4000
items, and we propose FERRANTI, a fine-grained evaluation framework based on
reference correction that automatically evaluates the performance of FEC
models on different error categories. Using this framework, we conduct
extensive experiments with FEC approaches under a variety of settings and
identify the best training modes as well as significant differences in the
performance of existing approaches across factual error categories.
Comment: Accepted to ACL 2023 Main Conference
How Well Do Large Language Models Understand Syntax? An Evaluation by Asking Natural Language Questions
While recent advancements in large language models (LLMs) bring us closer to
achieving artificial general intelligence, the question persists: Do LLMs truly
understand language, or do they merely mimic comprehension through pattern
recognition? This study seeks to explore this question through the lens of
syntax, a crucial component of sentence comprehension. Adopting a natural
language question-answering (Q&A) scheme, we craft questions targeting nine
syntactic knowledge points that are most closely related to sentence
comprehension. Experiments conducted on 24 LLMs suggest that most have a
limited grasp of syntactic knowledge, exhibiting notable discrepancies across
different syntactic knowledge points. In particular, questions involving
prepositional phrase attachment pose the greatest challenge, whereas those
concerning adjectival modifiers and indirect objects are relatively easier for
LLMs to handle. Furthermore, a case study of the LLMs' training dynamics
reveals that the majority of syntactic knowledge is learned during the initial
stages of training, hinting that simply increasing the number of training
tokens may not be the 'silver bullet' for improving the comprehension ability
of LLMs.
Comment: 20 pages, 6 figures
Mining Density Contrast Subgraphs
Dense subgraph discovery is a key primitive in many graph mining
applications, such as detecting communities in social networks and mining gene
correlation from biological data. Most studies on dense subgraph mining only
deal with one graph. However, in many applications, we have more than one graph
describing relations among the same group of entities. In this paper, given two
graphs sharing the same set of vertices, we investigate the problem of
detecting subgraphs that contrast the most with respect to density. We call
such subgraphs Density Contrast Subgraphs, or DCS in short. Two widely used
graph density measures, average degree and graph affinity, are considered. For
both density measures, mining DCS is equivalent to mining the densest subgraph
from a "difference" graph, which may have both positive and negative edge
weights. Due to the existence of negative edge weights, existing dense subgraph
detection algorithms cannot identify the subgraph we need. We prove the
computational hardness of mining DCS under the two graph density measures and
develop efficient algorithms to find DCS. We also conduct extensive experiments
on several real-world datasets to evaluate our algorithms. The experimental
results show that our algorithms are both effective and efficient.
Comment: Full version of an ICDE'18 paper
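The reduction described above — subtracting one graph's edge weights from the other's and then searching for a dense subgraph in the resulting signed graph — can be illustrated with a minimal sketch. This is not the paper's algorithm: the function names are invented, and the greedy peeling step is only a heuristic (its approximation guarantee for average-degree density does not carry over to graphs with negative edge weights).

```python
def difference_graph(edges_a, edges_b):
    """Build the signed 'difference' graph: each edge gets weight
    w_A(u, v) - w_B(u, v). Edges are dicts keyed by (u, v) tuples,
    assumed to use a consistent vertex ordering in both graphs."""
    diff = {}
    for (u, v), w in edges_a.items():
        diff[(u, v)] = diff.get((u, v), 0.0) + w
    for (u, v), w in edges_b.items():
        diff[(u, v)] = diff.get((u, v), 0.0) - w
    return diff

def avg_degree_density(diff, nodes):
    """Average-degree density of a vertex subset: total signed edge
    weight inside the subset divided by the subset size."""
    nodes = set(nodes)
    total = sum(w for (u, v), w in diff.items()
                if u in nodes and v in nodes)
    return total / len(nodes) if nodes else 0.0

def greedy_peel(diff, nodes):
    """Illustrative heuristic: repeatedly remove the vertex with the
    smallest signed degree and return the best subset seen."""
    nodes = set(nodes)
    best_set, best_density = set(nodes), avg_degree_density(diff, nodes)
    while len(nodes) > 1:
        deg = {n: 0.0 for n in nodes}
        for (u, v), w in diff.items():
            if u in nodes and v in nodes:
                deg[u] += w
                deg[v] += w
        nodes.remove(min(deg, key=deg.get))  # peel lowest signed degree
        d = avg_degree_density(diff, nodes)
        if d > best_density:
            best_set, best_density = set(nodes), d
    return best_set, best_density
```

For example, if triangle {a, b, c} is dense in graph A but edge (a, b) also appears in graph B, the difference graph keeps only the edges unique to A, and peeling recovers the contrasting subset.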
Distantly-Supervised Named Entity Recognition with Adaptive Teacher Learning and Fine-grained Student Ensemble
Distantly-Supervised Named Entity Recognition (DS-NER) effectively alleviates
the data scarcity problem in NER by automatically generating training samples.
Unfortunately, distant supervision may introduce noisy labels, undermining the
robustness of the learned models and restricting their practical application.
To mitigate this problem, recent works adopt self-training
teacher-student frameworks to gradually refine the training labels and improve
the generalization ability of NER models. However, we argue that the
performance of the current self-training frameworks for DS-NER is severely
underestimated by their plain designs, including both inadequate student
learning and coarse-grained teacher updating. Therefore, in this paper, we make
the first attempt to alleviate these issues by proposing: (1) adaptive teacher
learning comprised of joint training of two teacher-student networks and
considering both consistent and inconsistent predictions between two teachers,
thus promoting comprehensive student learning. (2) fine-grained student
ensemble that updates each fragment of the teacher model with a temporal moving
average of the corresponding fragment of the student, which enhances consistent
predictions on each model fragment against noise. To verify the effectiveness
of our proposed method, we conduct experiments on four DS-NER datasets. The
experimental results demonstrate that our method significantly surpasses
previous SOTA methods.
Comment: Accepted at AAAI 202
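The fine-grained student ensemble — updating each fragment of the teacher with a temporal moving average of the corresponding student fragment — can be sketched as below. This is a minimal illustration, not the paper's implementation: the function name, the per-layer fragment granularity, and the use of plain Python lists in place of model tensors are all assumptions.

```python
def ema_update_fragments(teacher, student, momentum=0.99):
    """Fine-grained EMA update: each named fragment (here, a layer's
    parameter list) of the teacher is updated independently as
        teacher_f <- momentum * teacher_f + (1 - momentum) * student_f
    `teacher` and `student` map fragment names to lists of floats;
    real models would use parameter tensors instead (assumption)."""
    for name, s_params in student.items():
        t_params = teacher[name]
        teacher[name] = [momentum * t + (1.0 - momentum) * s
                        for t, s in zip(t_params, s_params)]
    return teacher
```

Updating each fragment separately (rather than the whole model at once) is what lets consistent predictions be reinforced per fragment while noisy updates elsewhere decay.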