5,531 research outputs found

    Strong Decays of the Orbitally Excited Scalar $D^*_0$ Mesons

    Full text link
    We calculate the two-body strong decays of the orbitally excited scalar mesons $D_0^*(2400)$ and $D_J^*(3000)$ by using the relativistic Bethe-Salpeter (BS) method. $D_J^*(3000)$ was observed recently by the LHCb Collaboration, and its quantum numbers have not yet been determined. In this paper, we assume that it is the $0^+(2P)$ state and obtain the transition amplitude by using the PCAC relation, the low-energy theorem, and the effective Lagrangian method. For the $1P$ state, the total widths of $D_0^*(2400)^0$ and $D_0^*(2400)^+$ are 226 MeV and 246 MeV, respectively. Under the assumption of the $0^+(2P)$ state, the widths of $D_J^*(3000)^0$ and $D_J^*(3000)^+$ are both about 131 MeV, which is close to the present experimental data. Therefore, $D_J^*(3000)$ is a strong candidate for the $2^3P_0$ state. Comment: 21 pages, 10 figures

    Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning

    Full text link
    Pre-trained language models have contributed significantly to relation extraction by demonstrating remarkable few-shot learning abilities. However, prompt tuning methods for relation extraction may still fail to generalize to rare or hard patterns. Note that the previous parametric learning paradigm can be viewed as memorization, regarding training data as a book and inference as a closed-book test. Those long-tailed or hard patterns can hardly be memorized in parameters given few-shot instances. To this end, we regard RE as an open-book examination and propose a new semiparametric paradigm of retrieval-enhanced prompt tuning for relation extraction. We construct an open-book datastore for retrieval, regarding prompt-based instance representations and corresponding relation labels as memorized key-value pairs. During inference, the model can infer relations by linearly interpolating the base output of the PLM with the non-parametric nearest-neighbor distribution over the datastore. In this way, our model not only infers relations through knowledge stored in the weights during training but also assists decision-making by unwinding and querying examples in the open-book datastore. Extensive experiments on benchmark datasets show that our method can achieve state-of-the-art results in both standard supervised and few-shot settings. Code is available at https://github.com/zjunlp/PromptKG/tree/main/research/RetrievalRE. Comment: Accepted by SIGIR 2022, short paper
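    The interpolation step described above — blending the PLM's base relation distribution with a nearest-neighbor distribution built from retrieved datastore entries — can be sketched as follows. This is a minimal illustration of the kNN-interpolation idea, not the paper's implementation; the function name, the distance-to-weight softmax, and the parameter `lam` are assumptions for the sketch.

```python
import numpy as np

def knn_interpolate(base_probs, neighbor_labels, neighbor_dists,
                    num_labels, lam=0.5, temp=1.0):
    """Blend a PLM's relation distribution with a kNN distribution
    derived from retrieved (representation, label) datastore entries."""
    # Convert distances to weights: closer neighbors get larger weight.
    w = np.exp(-np.asarray(neighbor_dists, dtype=float) / temp)
    w /= w.sum()
    # Accumulate neighbor weights per relation label.
    knn_probs = np.zeros(num_labels)
    for label, weight in zip(neighbor_labels, w):
        knn_probs[label] += weight
    # Linear interpolation of the two distributions.
    return lam * knn_probs + (1.0 - lam) * np.asarray(base_probs, dtype=float)
```

With `lam=0` this reduces to the plain parametric PLM prediction; increasing `lam` shifts probability mass toward relations seen among the retrieved neighbors, which is how long-tailed patterns can override a weak parametric prior.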

    Analysis of the Comprehensive Influence of Science and Technology of Cereals, Oils and Foods in the Past 10 Years

    Get PDF
    Through research on the academic, social, and industrial influence of Science and Technology of Cereals, Oils and Foods — focusing on the state of journal manuscripts, academic influence indicators, new media indicators, and industrial influence indicators — this paper systematically analyzed 17 items, including the structure and number of papers, downloads and citations, publishing timeliness, WeChat and official website operation, authors and distribution groups, domestic and foreign media attention, database inclusion, and awards, collected over the past 11 years and from new media since its establishment. The practical effects of the journal's reform measures since 2019 were comprehensively verified and objectively evaluated from multiple dimensions, capturing the development trend of the journal. These analyses could provide a data reference for adjusting the focus of journal work, optimizing reform measures, improving journal quality, expanding academic influence, and enhancing the status of the discipline.

    Review of the Development of Science and Technology of Cereals, Oils and Foods in the Past 30 Years

    Get PDF
    Cultural inheritance is the spiritual genealogy and life source for a journal's sustainable development. This paper took Science and Technology of Cereals, Oils and Foods (hereafter "this journal") as an example for review. The history of this journal is divided into four stages: the gestation period from 1991 to 1992, the early period from 1993 to 1996, the mature period from 1997 to 2018, and the new development period from 2019 to the present, in order to review its important performance, growth, and progress amid changing social times and the reform of the Chinese scientific research system over more than 30 years. By sorting out the development history and operating experience, the phased achievements and substantial progress in operating characteristics were summarized, and the historical contributions, hard work, and harvest were distilled. This could lay an important cornerstone for the journal to carry forward its fine traditions and promote the construction of its brand culture. This paper could also help the journal develop to a new level and provide a reference for peer journals sorting out their own historical development.

    Subtle Roles of Sb and S in Regulating the Thermoelectric Properties of N-Type PbTe to High Performance

    Full text link
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/138238/1/aenm201700099.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/138238/2/aenm201700099-sup-0001-S1.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/138238/3/aenm201700099_am.pdf

    LightNER: A Lightweight Tuning Paradigm for Low-resource NER via Pluggable Prompting

    Full text link
    Most NER methods rely on extensive labeled data for model training, which is a struggle in low-resource scenarios with limited training data. Existing dominant approaches usually suffer from the challenge that the target domain has a different label set than a resource-rich source domain, which can be summarized as class transfer and domain transfer. In this paper, we propose a lightweight tuning paradigm for low-resource NER via pluggable prompting (LightNER). Specifically, we construct a unified learnable verbalizer of entity categories to generate the entity span sequence and entity categories without any label-specific classifiers, thus addressing the class transfer issue. We further propose a pluggable guidance module that incorporates learnable parameters into the self-attention layer as guidance, which can re-modulate the attention and adapt pre-trained weights. Note that we tune only the inserted modules, with all parameters of the pre-trained language model fixed, making our approach lightweight and flexible for low-resource scenarios and better able to transfer knowledge across domains. Experimental results show that LightNER can obtain comparable performance in the standard supervised setting and outperform strong baselines in low-resource settings. Code is available at https://github.com/zjunlp/DeepKE/tree/main/example/ner/few-shot. Comment: Accepted by COLING 2022
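    The pluggable guidance idea above — learnable parameters inserted into self-attention while the pre-trained weights stay frozen — can be sketched as guidance vectors prepended to the keys and values of an attention layer. This is an illustrative single-head NumPy sketch of the general prefix/guidance mechanism, not LightNER's actual module; the function names and shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def guided_self_attention(q, k, v, guide_k, guide_v):
    """Single-head attention with learnable guidance vectors prepended to
    the keys/values. In a pluggable setup, only guide_k / guide_v would be
    trained; the projections producing q, k, v stay frozen."""
    k_ext = np.concatenate([guide_k, k], axis=0)  # (g + n, d)
    v_ext = np.concatenate([guide_v, v], axis=0)  # (g + n, d)
    scores = q @ k_ext.T / np.sqrt(q.shape[-1])   # (n, g + n)
    attn = softmax(scores, axis=-1)               # each query attends to
    return attn @ v_ext                           # guidance + real tokens
```

Because the guidance vectors only extend the attention context, the module can be plugged into (or removed from) a frozen model without touching its original parameters, which is what makes the approach lightweight.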