
    Charged charmoniumlike structures in the $e^+ e^- \to \psi(3686) \pi^+ \pi^-$ process based on the ISPE mechanism

    In 2017, the BESIII Collaboration announced the observation of a charged charmonium-like structure in the $\psi(3686)\pi^\pm$ invariant mass spectrum of the $e^+ e^- \to \psi(3686) \pi^+ \pi^-$ process at different energy points, which enables us to perform a precise study of this process based on the initial single pion emission (ISPE) mechanism. In this work, we perform a combined fit to the experimental data on the cross section of $e^+ e^- \to \psi(3686) \pi^+ \pi^-$ and the corresponding $\psi(3686)\pi^\pm$ and dipion invariant mass spectra. Our result shows that the observed charged charmonium-like structure in $e^+ e^- \to \psi(3686) \pi^+ \pi^-$ can be well reproduced by the ISPE mechanism, and that the corresponding dipion invariant mass spectrum and cross section can be described with the same parameters. This provides strong evidence that the ISPE mechanism can be the underlying mechanism behind such a novel phenomenon.
    Comment: 11 pages, 8 figures and 4 tables

    Regge-like relation and a universal description of heavy-light systems

    Using the Regge-like formula $(M-m_Q)^2=\pi\sigma L$ between hadron mass $M$ and angular momentum $L$, with heavy-quark mass $m_Q$ and string tension $\sigma$, we analyze all the heavy-light systems, i.e., $D/D_s/B/B_s$ mesons and charmed and bottom baryons. Numerical plots obtained for all the heavy-light mesons with experimental data show a slope nearly equal to 1/2 of that for light hadrons. Assuming that charmed and bottom baryons consist of one heavy quark and one light cluster of two light quarks (a diquark), we apply the formula to all the heavy-light baryons, including the recently discovered $\Omega_c$'s, and find that the experimentally measured baryons satisfy the above formula. We predict the average mass values of $B$, $B_s$, $\Lambda_b$, $\Sigma_c$, $\Xi_c$, and $\Omega_c$ with $L=2$ to be 6.01, 6.13, 6.15, 3.05, 3.07, and 3.34 GeV, respectively. Our results on baryons suggest that these baryons can safely be regarded as a heavy quark-light cluster configuration. We also find a universal description for all the heavy-light mesons as well as baryons: a single line suffices to describe both the charmed and the bottom heavy-light systems. Our results suggest that, instead of the mass itself, the gluon-flux energy is essential for obtaining a linear trajectory.
    Comment: 10 pages, 8 figures, 5 tables
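The Regge-like relation above can be solved directly for the hadron mass, $M = m_Q + \sqrt{\pi\sigma L}$. A minimal sketch of that inversion follows; the quark mass and string tension used here are generic illustrative values, not the fitted parameters of the paper:

```python
import math

def heavy_light_mass(m_Q, sigma, L):
    """Invert the Regge-like relation (M - m_Q)^2 = pi * sigma * L
    for the hadron mass M. Units: M, m_Q in GeV; sigma in GeV^2."""
    return m_Q + math.sqrt(math.pi * sigma * L)

# Illustrative (assumed) inputs: m_b ~ 4.8 GeV, sigma ~ 0.18 GeV^2.
# The trajectory is linear in L when plotted as (M - m_Q)^2 vs L.
for L in range(4):
    M = heavy_light_mass(4.8, 0.18, L)
    print(f"L={L}: M = {M:.3f} GeV, (M - m_Q)^2 = {(M - 4.8)**2:.3f} GeV^2")
```

The halved slope reported for heavy-light systems enters through $\sigma$: plotting $(M-m_Q)^2$ against $L$ gives a straight line of slope $\pi\sigma$.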

    Sparse Attention-Based Neural Networks for Code Classification

    Categorizing source code accurately and efficiently is a challenging problem in real-world programming education platform management. In recent years, model-based approaches utilizing abstract syntax trees (ASTs) have been widely applied to code classification tasks. In this paper, we introduce the Sparse Attention-based neural network for Code Classification (SACC). The approach involves two main steps. In the first step, source code undergoes syntax parsing and preprocessing; the generated abstract syntax tree is split into sequences of subtrees and then encoded using a recursive neural network to obtain a high-dimensional representation. This step simultaneously considers both the logical structure and the lexical-level information contained in the code. In the second step, the encoded sequences of subtrees are fed into a Transformer model that incorporates sparse attention mechanisms for classification. This efficiently reduces the computational cost of the self-attention mechanism, improving training speed while preserving effectiveness. Our work introduces a carefully designed sparse attention pattern tailored to the specific needs of code classification tasks; this design helps reduce the influence of redundant information and enhances the overall performance of the model. Finally, we also address problems in previous related research, such as incomplete classification labels and small dataset sizes: we annotated the CodeNet dataset, which contains a significantly larger amount of data, with algorithm-related labeling categories. Extensive comparative experimental results demonstrate the effectiveness and efficiency of SACC for code classification tasks.
    Comment: 2023 3rd International Conference on Digital Society and Intelligent Systems (DSInS 2023)
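The abstract does not specify SACC's exact sparse attention pattern, but the general idea — each position attends only to a local window plus a few global tokens, so the attention cost drops from quadratic to roughly linear in sequence length — can be sketched as follows. The window size, global-token count, and masking scheme here are illustrative assumptions, not the paper's design:

```python
import numpy as np

def sparse_attention_mask(seq_len, window=2, n_global=1):
    """Build a boolean mask for a generic sparse attention pattern:
    local banded attention plus a few globally attending tokens."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True       # local window around position i
    mask[:, :n_global] = True       # every position attends to global tokens
    mask[:n_global, :] = True       # global tokens attend everywhere
    return mask

def masked_attention(Q, K, V, mask):
    """Scaled dot-product attention with disallowed pairs masked out.
    (Dense compute with a sparse mask; a real implementation would
    skip the masked entries entirely to save the quadratic cost.)"""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores = np.where(mask, scores, -1e9)   # suppress masked positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Usage sketch: 8 encoded subtree vectors of dimension 4.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))
out = masked_attention(X, X, X, sparse_attention_mask(8))
print(out.shape)  # (8, 4)
```

With a fixed window and a constant number of global tokens, the number of allowed attention pairs grows linearly with sequence length rather than quadratically, which is the source of the training-speed improvement the abstract describes.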