The Strong Decays of Orbitally Excited Mesons by Improved Bethe-Salpeter Method
We calculate the masses and the strong decays of orbitally excited states
, , and by the improved Bethe-Salpeter
method. The predicted masses of and are
, . We
calculate the isospin-symmetry-violating decay processes
and through mixing, and obtain small widths.
Considering the uncertainties of the masses, for and , we
also calculate the OZI-allowed decay channels: and
. For and , the OZI-allowed decay
channels , and are studied. In all the decay channels, the reduction formula, the PCAC relation,
and the low-energy theorem are used to estimate the decay widths. We also obtain
the strong coupling constants , ,
, , ,
, , ,
and .
Comment: 21 pages, 1 figure, 4 tables
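For orientation, the PCAC relation invoked above connects the divergence of the axial-vector current to the pion field; in the convention with f_pi of about 93 MeV it takes the textbook form (shown here as an illustration, not necessarily the exact normalization used in the paper):

\[ \partial^{\mu} A^{a}_{\mu}(x) = f_{\pi}\, m_{\pi}^{2}\, \phi^{a}_{\pi}(x). \]

Via the reduction formula and the low-energy theorem, a soft-pion emission amplitude can then be traded for a matrix element of the axial current, which is how decay widths of this kind are estimated.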
The rare semi-leptonic decays involving orbitally excited final mesons
The rare processes , where
stands for the final meson ,
, , ,
or , are studied within the Standard Model. The hadronic matrix
elements are evaluated in the Bethe-Salpeter approach, and the gauge-invariance
condition of the annihilation hadronic currents is discussed. Considering the
penguin, box, annihilation, color-favored cascade, and color-suppressed cascade
contributions, the observables , , and are calculated.
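As background for the penguin and box contributions mentioned above, rare semileptonic decays of this type are conventionally organized in an effective weak Hamiltonian. Taking a b to s l+ l- transition purely as an illustration (the specific transition and operator basis of the paper are not recoverable here), the standard form is

\[ \mathcal{H}_{\mathrm{eff}} = -\frac{4 G_F}{\sqrt{2}}\, V_{tb} V_{ts}^{*} \sum_{i} C_i(\mu)\, \mathcal{O}_i(\mu), \]

where the Wilson coefficients C_i encode the short-distance (penguin and box) physics and the operators O_i are evaluated between hadronic states, which is where the Bethe-Salpeter wave functions enter.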
LRMM: Learning to Recommend with Missing Modalities
Multimodal learning has shown promising performance in content-based
recommendation due to the auxiliary user and item information of multiple
modalities such as text and images. However, the problem of incomplete and
missing modalities is rarely explored, and most existing methods fail to learn
a recommendation model with missing or corrupted modalities. In this paper, we
propose LRMM, a novel framework that mitigates not only the problem of missing
modalities but also more generally the cold-start problem of recommender
systems. We propose modality dropout (m-drop) and a multimodal sequential
autoencoder (m-auto) to learn multimodal representations for complementing and
imputing missing modalities. Extensive experiments on real-world Amazon data
show that LRMM achieves state-of-the-art performance on rating prediction
tasks. More importantly, LRMM is more robust than previous methods in alleviating
data sparsity and the cold-start problem.
Comment: 11 pages, EMNLP 201
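A minimal sketch of the modality-dropout (m-drop) idea described above, assuming PyTorch tensors for the text and image features (the function name, shapes, and drop rate are hypothetical, not the authors' implementation): during training one modality is randomly zeroed so the sequential autoencoder (m-auto) must impute it from the other.

import random
import torch

def modality_dropout(text_feats: torch.Tensor,
                     image_feats: torch.Tensor,
                     p_drop: float = 0.3):
    # With probability p_drop, zero out one randomly chosen modality so the
    # downstream autoencoder learns to reconstruct it from the remaining one.
    if random.random() < p_drop:
        if random.random() < 0.5:
            text_feats = torch.zeros_like(text_feats)
        else:
            image_feats = torch.zeros_like(image_feats)
    return text_feats, image_feats

Under this scheme, an item with a genuinely missing modality can be handled at inference time by feeding zeros in its place and letting the trained model impute the representation.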
