
    The puzzle of anomalously large isospin violations in $\eta(1405/1475)\to 3\pi$

    The BES-III Collaboration recently reported the observation of anomalously large isospin violations in $J/\psi\to \gamma\eta(1405/1475) \to \gamma \pi^0 f_0(980)\to \gamma + 3\pi$, where the $f_0(980)$ in the $\pi\pi$ invariant mass spectrum appears to be much narrower ($\sim 10$ MeV) than the peak width ($\sim 50$ MeV) measured in other processes. We show that a mechanism known as the triangle singularity (TS) can produce a narrow enhancement between the charged and neutral $K\bar{K}$ thresholds, i.e., $2m_{K^\pm} \sim 2m_{K^0}$. It can also lead to different invariant mass spectra for $\eta(1405/1475)\to a_0(980)\pi$ and $K\bar{K}^*+c.c.$, which may explain the long-standing puzzle of why two nearby states, $\eta(1405)$ and $\eta(1475)$, appear to be needed in $\eta\pi\pi$ and $K\bar{K}\pi$, respectively. The TS could be a key to understanding the nature of the $\eta(1405/1475)$ and to advancing our knowledge of the mixing between $a_0(980)$ and $f_0(980)$.
    Comment: 4 pages and 7 eps figures; journal-matched version
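    To see why the TS enhancement must be narrow, note that it is confined to the gap between the two $K\bar{K}$ thresholds. A back-of-the-envelope check using PDG kaon masses (our numbers, not quoted from the paper):

    \begin{align*}
      2m_{K^\pm} &\approx 2 \times 493.68~\text{MeV} \approx 987.4~\text{MeV},\\
      2m_{K^0}   &\approx 2 \times 497.61~\text{MeV} \approx 995.2~\text{MeV},\\
      2m_{K^0} - 2m_{K^\pm} &\approx 7.9~\text{MeV},
    \end{align*}

    a window of roughly 8 MeV, consistent with the $\sim 10$ MeV $f_0(980)$ peak width reported by BES-III.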

    Rethinking Attention Mechanism in Time Series Classification

    Attention-based models have been widely used in many areas, such as computer vision and natural language processing. However, they have not yet been explored deeply in time series classification (TSC), so many TSC algorithms still suffer from general problems of the attention mechanism, such as quadratic complexity. In this paper, we improve the efficiency and performance of the attention mechanism by proposing flexible multi-head linear attention (FMLA), which enhances locality awareness through layer-wise interactions with deformable convolutional blocks and online knowledge distillation. Moreover, we propose a simple but effective mask mechanism that reduces the influence of noise in time series and decreases the redundancy of FMLA by proportionally masking some positions of each given series. To stabilize this mechanism, samples are forwarded through the model with random mask layers several times and their outputs are aggregated to teach the same model with regular mask layers. We conduct extensive experiments on 85 UCR2018 datasets to compare our algorithm with 11 well-known ones, and the results show that our algorithm achieves comparable top-1 accuracy. We also compare our model with three Transformer-based models with respect to floating-point operations per second and number of parameters, and find that our algorithm achieves significantly better efficiency with lower complexity.
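    For readers unfamiliar with linear attention, below is a minimal sketch of the kernelized linear-attention idea (the $\mathrm{elu}(x)+1$ feature map of Katharopoulos et al.) combined with a random position mask. This is an illustration under our own assumptions, not the authors' FMLA implementation; the tensor shapes, feature map, and mask rate are placeholders.

    ```python
    # Minimal sketch: kernelized linear attention (elu(x)+1 feature map) with
    # random position masking. Illustrative only -- shapes, feature map, and
    # mask rate are assumptions, not the paper's FMLA implementation.
    import torch
    import torch.nn.functional as F

    def linear_attention(q, k, v, eps=1e-6):
        """Attention in O(L*d^2) instead of O(L^2*d) via the kernel trick."""
        q, k = F.elu(q) + 1, F.elu(k) + 1        # positive feature maps
        kv = torch.einsum("bld,ble->bde", k, v)  # d x d summary of keys/values
        z = 1.0 / (torch.einsum("bld,bd->bl", q, k.sum(dim=1)) + eps)
        return torch.einsum("bld,bde,bl->ble", q, kv, z)

    def random_position_mask(x, mask_rate=0.2):
        """Zero out a proportion of time steps to suppress noisy positions."""
        keep = (torch.rand(x.shape[:2], device=x.device) > mask_rate).float()
        return x * keep.unsqueeze(-1)

    # Usage: a batch of 8 series of length 128, embedded in d = 64 dimensions.
    x = torch.randn(8, 128, 64)
    out = linear_attention(random_position_mask(x), x, x)
    print(out.shape)  # torch.Size([8, 128, 64])
    ```

    Because the keys and values are first collapsed into a fixed d x d summary, the cost grows linearly with the series length L, which is the efficiency property the abstract contrasts against quadratic self-attention.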