The puzzle of anomalously large isospin violations in η(1405/1475) → 3π
The BES-III Collaboration recently reported the observation of anomalously
large isospin violations in J/ψ → γη(1405/1475) → γπ⁰f₀(980) → γπ⁰π⁺π⁻, where
the f₀(980) in the π⁺π⁻ invariant mass spectrum appears to be much narrower
(about 10 MeV) than the f₀(980) peak width (about 50 MeV) measured in other
processes. We show that a mechanism, termed the triangle singularity (TS), can
produce a narrow enhancement between the charged and neutral KK̄ thresholds. It
can also lead to different invariant mass spectra for η(1405/1475) → KK̄π and
η(1405/1475) → ηππ, which can possibly explain the long-standing puzzle of the
need for two close states, η(1405) and η(1475), in KK̄π and ηππ, respectively.
The TS could be a key to our understanding of the nature of η(1405/1475) and
advance our knowledge about the mixing between a₀(980) and f₀(980).
Comment: 4 pages and 7 eps figures; journal-matched version
Rethinking Attention Mechanism in Time Series Classification
Attention-based models have been widely used in many areas, such as computer
vision and natural language processing. However, relevant applications in time
series classification (TSC) have not yet been explored deeply, so many TSC
algorithms still suffer from general problems of the attention mechanism, such
as quadratic complexity. In this paper, we improve the efficiency and
performance of the attention mechanism by proposing flexible multi-head linear
attention (FMLA), which enhances locality awareness through layer-wise
interactions with deformable convolutional blocks and online knowledge
distillation. Moreover, we propose a simple but effective mask mechanism that
reduces the influence of noise in time series and decreases the redundancy of
FMLA by proportionally masking positions of each given series. To stabilize
this mechanism, each sample is forwarded through the model with random mask
layers several times, and the aggregated outputs are used to teach the same
model with regular mask layers. We conduct extensive experiments on 85 UCR2018
datasets, comparing our algorithm with 11 well-known ones; the results show
that our algorithm achieves comparable top-1 accuracy. We also compare our
model with three Transformer-based models with respect to floating-point
operations per second and number of parameters, and find that our algorithm
achieves significantly better efficiency with lower complexity.
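The core idea behind linear attention is to replace the softmax kernel with a
feature map φ so that keys and values can be aggregated once, dropping the cost
from O(n²d) to O(nd²). A minimal single-head sketch follows; the elu(x)+1
feature map is a standard choice for kernelized attention, not a detail taken
from this abstract, and the multi-head split, deformable convolutions, and
distillation of FMLA are omitted:

```python
import numpy as np

def linear_attention(q, k, v):
    """Kernelized linear attention: phi(Q) (phi(K)^T V) with per-query
    normalization, instead of softmax(Q K^T) V."""
    def phi(x):
        # elu(x) + 1: positive, continuous feature map
        return np.where(x > 0, x + 1.0, np.exp(x))

    Q, K = phi(q), phi(k)          # (n, d), positive features
    kv = K.T @ v                   # (d, d_v): aggregate keys/values once
    z = Q @ K.sum(axis=0)          # (n,): normalizer per query
    return (Q @ kv) / z[:, None]   # (n, d_v)
```

Because the feature map is positive, each output row is a convex combination of
the value rows, mirroring the averaging behavior of softmax attention.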
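The masking-and-aggregation step can be sketched as follows. This is one
interpretation of the abstract's description (zeroing a fixed proportion of
time steps, then averaging several random-mask forward passes as the teacher
signal); the function names and the exact masking scheme are assumptions, not
the paper's implementation:

```python
import numpy as np

def random_position_mask(x, ratio, rng):
    """Zero out a fixed proportion of time steps of a (length, channels)
    series -- an assumed form of the proportional position masking."""
    n = x.shape[0]
    idx = rng.choice(n, size=int(ratio * n), replace=False)
    out = x.copy()
    out[idx] = 0.0
    return out

def aggregated_teacher_output(model, x, ratio, passes, rng):
    """Forward a sample through several random-mask passes and average the
    outputs; the average then teaches the regular-mask pass."""
    outs = [model(random_position_mask(x, ratio, rng)) for _ in range(passes)]
    return np.mean(outs, axis=0)
```

Averaging over several random masks smooths out the variance that any single
masked view introduces, which is what stabilizes the self-distillation target.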