
    Metamagnetic transitions and anomalous magnetoresistance in EuAg$_4$As$_2$ single crystals

    In this paper, the magnetic and transport properties of EuAg$_4$As$_2$ single crystals, which crystallize in the centrosymmetric trigonal CaCu$_4$P$_2$-type structure, were systematically studied. Two magnetic transitions were confirmed at $T_{N1}$ = 10 K and $T_{N2}$ = 15 K. With increasing field, both transitions are noticeably driven to lower temperatures. At low temperatures, applying a magnetic field in the $ab$ plane induces two successive metamagnetic transitions. For both $H \parallel ab$ and $H \parallel c$, EuAg$_4$As$_2$ shows an unexpectedly large positive magnetoresistance (up to 202%) at low fields below 10 K, and a large negative magnetoresistance (up to -78%) at high fields and intermediate temperatures. Such anomalous field dependence of the magnetoresistance may find application in future magnetic sensors. Finally, magnetic phase diagrams of EuAg$_4$As$_2$ were constructed for both $H \parallel ab$ and $H \parallel c$.
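
    For reference, the magnetoresistance percentages quoted above follow the conventional definition (a standard formula, not stated explicitly in the abstract), where $R(H)$ is the resistance in applied field $H$:

    ```latex
    % Conventional magnetoresistance ratio, in percent.
    % R(H): resistance in field H; R(0): zero-field resistance.
    \mathrm{MR}(H) = \frac{R(H) - R(0)}{R(0)} \times 100\%
    ```

    On this definition, MR = +202% means the low-field resistance roughly triples, while MR = -78% means the high-field resistance drops to about a fifth of its zero-field value.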

    High-order Joint Constituency and Dependency Parsing

    This work revisits the topic of jointly parsing constituency and dependency trees, i.e., producing compatible constituency and dependency trees simultaneously for input sentences, which is attractive given that the two types of trees are complementary in representing syntax. The original work of Zhou and Zhao (2019) performs joint parsing only at the inference phase. They train two separate parsers under the multi-task learning framework (i.e., one shared encoder and two independent decoders) and design an ad-hoc dynamic-programming decoding algorithm of $O(n^5)$ time complexity for finding optimal compatible tree pairs. Compared to their work, we make progress in three aspects: (1) adopting a much more efficient decoding algorithm of $O(n^4)$ time complexity; (2) exploring joint modeling at the training phase, instead of only at the inference phase; (3) proposing high-order scoring components to promote constituent-dependency interaction. We conduct experiments and analysis on seven languages, covering both rich-resource and low-resource scenarios. Results and analysis show that joint modeling leads to a modest overall performance boost over separate modeling, but substantially improves the complete matching ratio of whole trees, thanks to the explicit modeling of tree compatibility. Comment: LREC-COLING 2024
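
    To make the notion of "compatible tree pairs" concrete, the sketch below (a minimal illustration, not the authors' code; the span/head representations are assumptions) checks the usual compatibility condition: every constituent span must contain exactly one word whose dependency head lies outside the span.

    ```python
    # Minimal compatibility check, assuming:
    #  - constituents: list of (start, end) word spans, 0-indexed, end-exclusive
    #  - heads: heads[i] is the dependency parent of word i (-1 for the root)

    def is_compatible(constituents, heads):
        """A constituency/dependency pair is compatible if each constituent
        span has exactly one word whose head falls outside the span."""
        for start, end in constituents:
            external = [i for i in range(start, end)
                        if not (start <= heads[i] < end)]
            if len(external) != 1:
                return False
        return True

    # Toy example: "She saw stars" with constituents NP(0,1), VP(1,3), S(0,3)
    # and dependency heads: "saw" is the root; "She" and "stars" attach to it.
    print(is_compatible([(0, 1), (1, 3), (0, 3)], [1, -1, 1]))  # True
    ```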

    Reference Matters: Benchmarking Factual Error Correction for Dialogue Summarization with Fine-grained Evaluation Framework

    Factuality is important to dialogue summarization, and factual error correction (FEC) of model-generated summaries is one way to improve it. Current FEC evaluation, which relies on factuality metrics, is neither reliable nor detailed enough. To address this problem, we are the first to manually annotate an FEC dataset for dialogue summarization, containing 4000 items, and we propose FERRANTI, a fine-grained evaluation framework based on reference correction that automatically evaluates the performance of FEC models on different error categories. Using this evaluation framework, we conduct extensive experiments with FEC approaches under a variety of settings and find the best training modes as well as significant differences in the performance of existing approaches across factual error categories. Comment: Accepted to ACL 2023 Main Conference
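
    As a rough illustration of reference-based, per-category evaluation (a hypothetical sketch; FERRANTI's actual metrics and data format are not described in the abstract), one could score a correction model by checking, for each annotated error, whether the erroneous span is gone and its reference correction appears in the model output:

    ```python
    # Hypothetical sketch of per-category FEC scoring; the item format
    # (errors as (category, wrong_span, corrected_span)) is assumed.
    from collections import defaultdict

    def score_by_category(items, model_outputs):
        """Fraction of annotated errors fixed, per error category."""
        fixed, total = defaultdict(int), defaultdict(int)
        for item, output in zip(items, model_outputs):
            for category, wrong, corrected in item["errors"]:
                total[category] += 1
                if wrong not in output and corrected in output:
                    fixed[category] += 1
        return {c: fixed[c] / total[c] for c in total}

    items = [{"errors": [("entity", "Tom", "Jerry")]}]
    print(score_by_category(items, ["Jerry will attend the meeting."]))
    # {'entity': 1.0}
    ```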

    How Well Do Large Language Models Understand Syntax? An Evaluation by Asking Natural Language Questions

    While recent advancements in large language models (LLMs) bring us closer to achieving artificial general intelligence, the question persists: do LLMs truly understand language, or do they merely mimic comprehension through pattern recognition? This study explores this question through the lens of syntax, a crucial component of sentence comprehension. Adopting a natural language question-answering (Q&A) scheme, we craft questions targeting nine syntactic knowledge points that are most closely related to sentence comprehension. Experiments conducted on 24 LLMs suggest that most have a limited grasp of syntactic knowledge, exhibiting notable discrepancies across different syntactic knowledge points. In particular, questions involving prepositional phrase attachment pose the greatest challenge, whereas those concerning adjectival modifiers and indirect objects are relatively easier for LLMs to handle. Furthermore, a case study on the training dynamics of the LLMs reveals that the majority of syntactic knowledge is learned during the initial stages of training, hinting that simply increasing the number of training tokens may not be the 'silver bullet' for improving LLMs' comprehension ability. Comment: 20 pages, 6 figures
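
    To illustrate the natural-language Q&A probing scheme (a minimal sketch; the paper's exact prompts and grading are assumptions here, and `ask_llm` is a hypothetical client), a prepositional-phrase-attachment question might be posed and graded like this:

    ```python
    # Minimal sketch of a syntax probe, assuming a generic
    # ask_llm(prompt) -> str callable (substitute any LLM API).

    def pp_attachment_probe(ask_llm):
        """Ask a PP-attachment question and grade the free-text answer."""
        prompt = ('In the sentence "I saw the man with the telescope", does '
                  '"with the telescope" modify "saw" or "the man"? '
                  'If the sentence is ambiguous, answer: ambiguous.')
        answer = ask_llm(prompt).strip().lower()
        return "ambiguous" in answer  # the classic example is ambiguous

    # Usage with a stub in place of a real model:
    print(pp_attachment_probe(lambda p: "The attachment is ambiguous."))  # True
    ```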

    Mining Density Contrast Subgraphs

    Dense subgraph discovery is a key primitive in many graph mining applications, such as detecting communities in social networks and mining gene correlations from biological data. Most studies on dense subgraph mining deal with only one graph. However, in many applications, we have more than one graph describing relations among the same group of entities. In this paper, given two graphs sharing the same set of vertices, we investigate the problem of detecting subgraphs that contrast the most with respect to density. We call such subgraphs Density Contrast Subgraphs, or DCS for short. Two widely used graph density measures, average degree and graph affinity, are considered. For both density measures, mining DCS is equivalent to mining the densest subgraph from a "difference" graph, which may have both positive and negative edge weights. Due to the existence of negative edge weights, existing dense subgraph detection algorithms cannot identify the subgraph we need. We prove the computational hardness of mining DCS under the two graph density measures and develop efficient algorithms to find DCS. We also conduct extensive experiments on several real-world datasets to evaluate our algorithms. The experimental results show that our algorithms are both effective and efficient. Comment: Full version of an ICDE'18 paper
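
    To make the reduction concrete, the sketch below (not the authors' algorithm; a simple greedy-peeling heuristic under the average-degree measure, which loses its usual approximation guarantee once negative weights appear) builds the "difference" graph and peels it:

    ```python
    # Sketch of DCS via a "difference" graph; edge keys are frozensets {u, v},
    # and the two weight dicts over a shared vertex set are assumed inputs.

    def difference_graph(edges_a, edges_b):
        """w(e) = w_A(e) - w_B(e); edges may end up with negative weight."""
        keys = set(edges_a) | set(edges_b)
        return {e: edges_a.get(e, 0.0) - edges_b.get(e, 0.0) for e in keys}

    def weighted_degree(u, edges):
        return sum(w for e, w in edges.items() if u in e)

    def density(vertices, edges):
        """Densest-subgraph objective: total edge weight / |V|."""
        return sum(edges.values()) / len(vertices) if vertices else 0.0

    def greedy_peel(vertices, edges):
        """Heuristic densest subgraph: repeatedly drop the vertex of minimum
        (possibly negative) weighted degree; keep the densest set seen."""
        v, e = set(vertices), dict(edges)
        best, best_density = set(v), density(v, e)
        while len(v) > 1:
            worst = min(v, key=lambda u: weighted_degree(u, e))
            v.discard(worst)
            e = {ed: w for ed, w in e.items() if worst not in ed}
            if density(v, e) > best_density:
                best, best_density = set(v), density(v, e)
        return best, best_density

    # Toy usage: edge (1,2) is denser in A, edge (2,3) denser in B.
    A = {frozenset({1, 2}): 3.0, frozenset({2, 3}): 1.0}
    B = {frozenset({1, 2}): 1.0, frozenset({2, 3}): 2.0}
    print(greedy_peel({1, 2, 3}, difference_graph(A, B)))  # ({1, 2}, 1.0)
    ```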