Isospin and a possible interpretation of the newly observed X(1576)
Recently, the BES collaboration observed a broad resonant structure, the X(1576), with a large width of around 800 MeV, and assigned its quantum numbers. We show that the isospin of this resonant structure should be assigned to 1. This state might be a molecular state or a tetraquark state. We study the consequences of a possible molecular interpretation; in this scenario, the broad width can easily be understood. Using existing measurements, the branching ratios of two decay modes are further estimated in this molecular-state scenario. It is shown that one of these decay modes should have a much larger branching ratio than the other. As a consequence, this resonant structure should also be seen in the corresponding production processes, especially the former one. Carefully searching for this resonant structure in these decays should be important for understanding the structure of the X(1576).
Comment: 5 pages, ReVTeX4, 3 figures. Version accepted for publication as a brief report in Phys. Rev.
Contractor renormalization group theory of the SU() chains and ladders
The contractor renormalization group (CORE) method is applied to the SU() chain and to SU() ladders in this paper. In our designed schemes, we show that these two classes of systems return to their original Hamiltonian form after the CORE transformation. Successive iteration of the transformation leads to a fixed point, from which the ground-state energy and the energy gap above the ground state can be deduced. The result for the SU() chain is compared with the one obtained by the Bethe ansatz method. The transformation applied to spin-1/2 ladders shows, in an intuitive way, a finite gap between the excited energy spectrum and the ground state. The application to SU(3) ladders is also discussed.
Comment: 4 pages, 4 figures, submitted to Phys. Rev.
Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition
Recognizing irregular text in natural scene images is challenging due to the
large variance in text appearance, such as curvature, orientation and
distortion. Most existing approaches rely heavily on sophisticated model
designs and/or extra fine-grained annotations, which, to some extent, increase
the difficulty in algorithm implementation and data collection. In this work,
we propose an easy-to-implement strong baseline for irregular scene text
recognition, using off-the-shelf neural network components and only word-level
annotations. It is composed of a ResNet backbone, an LSTM-based
encoder-decoder framework, and a 2-dimensional attention module. Despite its
simplicity, the proposed method is robust and achieves state-of-the-art
performance on both regular and irregular scene text recognition benchmarks.
Code is available at: https://tinyurl.com/ShowAttendRead
Comment: Accepted to Proc. AAAI Conference on Artificial Intelligence 2019
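The 2-dimensional attention step described in this abstract can be sketched as follows. This is a minimal illustrative sketch, assuming an additive (Bahdanau-style) scoring function over the H x W feature map; the paper's exact formulation may differ, and all names (`attend_2d`, `W_f`, `W_h`, `v`) and shapes here are hypothetical.

```python
import numpy as np

def attend_2d(features, hidden, W_f, W_h, v):
    """Compute one attention glimpse over a 2-D feature map.

    features: (H, W, C) CNN feature map
    hidden:   (D,) decoder LSTM hidden state
    W_f: (C, A), W_h: (D, A), v: (A,) learned projections (assumed shapes)
    Returns (glimpse of shape (C,), attention weights of shape (H, W)).
    """
    H, W, C = features.shape
    # Additive attention score at every spatial location (H, W).
    scores = np.tanh(features @ W_f + hidden @ W_h) @ v
    # Softmax over all H*W positions so the weights sum to 1.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Glimpse: attention-weighted sum of the feature vectors.
    glimpse = (weights[..., None] * features).reshape(-1, C).sum(axis=0)
    return glimpse, weights

# Example with illustrative sizes: an 8x32 map with 64 channels,
# a 128-dim hidden state, and a 16-dim attention space.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 32, 64))
h = rng.standard_normal(128)
W_f = rng.standard_normal((64, 16)) * 0.1
W_h = rng.standard_normal((128, 16)) * 0.1
v = rng.standard_normal(16)

g, a = attend_2d(feats, h, W_f, W_h, v)
print(g.shape, a.shape)
```

At each decoding step the glimpse vector is what the LSTM decoder would consume, which is what lets such a model read along curved or rotated text: the attention map, not the feature layout, tracks the character positions.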