
    Possible Deuteron-like Molecular States Composed of Heavy Baryons

    We perform a systematic study of the possible loosely bound states composed of two charmed baryons or a charmed baryon and an anti-charmed baryon within the framework of the one-boson-exchange (OBE) model. We consider not only the $\pi$ exchange but also the $\eta$, $\rho$, $\omega$, $\phi$ and $\sigma$ exchanges. The S-D mixing effects for the spin triplets are also taken into account. With the derived effective potentials, we calculate the binding energies and root-mean-square (RMS) radii for the systems $\Lambda_c\Lambda_c(\bar{\Lambda}_c)$, $\Xi_c\Xi_c(\bar{\Xi}_c)$, $\Sigma_c\Sigma_c(\bar{\Sigma}_c)$, $\Xi_c^\prime\Xi_c^\prime(\bar{\Xi}_c^\prime)$ and $\Omega_c\Omega_c(\bar{\Omega}_c)$. Our numerical results indicate that: (1) the H-dibaryon-like state $\Lambda_c\Lambda_c$ does not exist; (2) there may exist four loosely bound deuteron-like states $\Xi_c\Xi_c$ and $\Xi_c^\prime\Xi_c^\prime$ with small binding energies and large RMS radii. Comment: 17 pages, 32 figures
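    As a rough numerical illustration of the kind of calculation described above, the following minimal Python sketch feeds a single regularized $\sigma$-exchange Yukawa potential into the s-wave radial Schrödinger equation and checks whether a shallow bound state appears, reporting its binding energy and RMS radius. The coupling strength, cutoff, and the restriction to one central channel are assumptions made for this sketch; they are not the paper's full OBE potential, which also includes $\pi$, $\eta$, $\rho$, $\omega$ and $\phi$ exchanges and S-D mixing.

```python
# Minimal single-channel sketch: a regularized Yukawa (monopole form factor)
# for sigma exchange between two Xi_c baryons, solved on a radial grid.
# Coupling and cutoff are illustrative assumptions, not the paper's values.
import numpy as np

hbarc = 197.327          # MeV*fm
m_sigma = 600.0          # sigma mass (MeV)
M_xic = 2467.9           # Xi_c mass (MeV)
mu = M_xic / 2.0         # reduced mass of the Xi_c Xi_c system (MeV)
g2_over_4pi = 2.0        # illustrative coupling strength
cutoff = 1000.0          # monopole form-factor cutoff Lambda (MeV)

def V(r):
    """Attractive Yukawa with monopole form factors at the vertices (MeV, r in fm)."""
    m, L = m_sigma / hbarc, cutoff / hbarc            # masses in fm^-1
    regularized = (np.exp(-m * r) / r - np.exp(-L * r) / r
                   - (L**2 - m**2) / (2.0 * L) * np.exp(-L * r))
    return -g2_over_4pi * hbarc * regularized

# Finite-difference s-wave radial equation for the reduced wave function u(r),
# with u(0) = u(rmax) = 0 enforced by the grid boundaries.
N, rmax = 2000, 30.0
r = np.linspace(rmax / N, rmax, N)
h = r[1] - r[0]
kin = hbarc**2 / (2.0 * mu)                           # MeV*fm^2
H = (np.diag(2.0 * kin / h**2 + V(r))
     + np.diag(-kin / h**2 * np.ones(N - 1), 1)
     + np.diag(-kin / h**2 * np.ones(N - 1), -1))
E, U = np.linalg.eigh(H)

if E[0] < 0.0:
    u = U[:, 0] / np.sqrt(np.trapz(U[:, 0]**2, r))    # normalized reduced wave function
    rms = np.sqrt(np.trapz((r * u)**2, r))            # RMS radius in fm
    print(f"binding energy ~ {-E[0]:.2f} MeV, RMS radius ~ {rms:.2f} fm")
else:
    print("no bound state for these illustrative parameters")
```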

    Dynamic Alignment Mask CTC: Improved Mask-CTC with Aligned Cross Entropy

    By predicting all target tokens in parallel, non-autoregressive models greatly improve the decoding efficiency of speech recognition compared with traditional autoregressive models. In this work, we present Dynamic Alignment Mask CTC, which introduces two methods: (1) Aligned Cross Entropy (AXE), which finds the monotonic alignment that minimizes the cross-entropy loss through dynamic programming, and (2) Dynamic Rectification, which creates new training samples by replacing some masks with model-predicted tokens. AXE ignores the absolute positional alignment between the prediction and the ground-truth sentence and instead matches tokens in their relative order. Dynamic Rectification lets the model learn to handle unmasked but possibly wrong tokens, even when they are predicted with high confidence. Our experiments on the WSJ dataset demonstrate that both the AXE loss and the rectification method improve the WER of Mask CTC. Comment: Accepted by ICASSP 202
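    To make the AXE idea concrete, the sketch below implements a simplified monotonic-alignment dynamic program: each target token is aligned to a prediction slot, skipped prediction slots pay the cost of a special blank token, and the loss is the minimum total negative log-likelihood over all monotonic alignments. The cost definitions and the `BLANK` index are simplifying assumptions for illustration and differ in detail from the AXE loss used in the paper, which is also computed batch-wise on the GPU rather than with Python loops.

```python
# Simplified AXE-style monotonic alignment loss via dynamic programming.
# The three operations (align / skip prediction / skip target) and their costs
# are an illustrative approximation, not the paper's exact formulation.
import numpy as np

BLANK = 0  # hypothetical vocabulary index used when a prediction slot is skipped

def axe_like_loss(log_probs: np.ndarray, target: list) -> float:
    """log_probs: (T, V) per-slot log-probabilities; target: list of token ids."""
    T, _ = log_probs.shape
    L = len(target)
    # A[i, j] = minimal cost of aligning the first i target tokens
    # with the first j prediction slots, monotonically.
    A = np.full((L + 1, T + 1), np.inf)
    A[0, 0] = 0.0
    for i in range(L + 1):
        for j in range(T + 1):
            if i == 0 and j == 0:
                continue
            best = np.inf
            if i > 0 and j > 0:   # align target i with prediction slot j
                best = min(best, A[i - 1, j - 1] - log_probs[j - 1, target[i - 1]])
            if j > 0:             # skip prediction slot j (pay for the blank token)
                best = min(best, A[i, j - 1] - log_probs[j - 1, BLANK])
            if i > 0 and j > 0:   # skip target i (fold it into the current slot)
                best = min(best, A[i - 1, j] - log_probs[j - 1, target[i - 1]])
            A[i, j] = best
    return float(A[L, T])

# toy usage: 4 prediction slots, vocabulary of 5, target of length 3
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 5))
log_probs = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
print(axe_like_loss(log_probs, [2, 3, 1]))
```

    Because the alignment is monotonic but not tied to absolute positions, a prediction shifted by one slot is penalized far less than under position-wise cross entropy, which is the property the abstract highlights.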

    Contrastive Latent Space Reconstruction Learning for Audio-Text Retrieval

    Cross-modal retrieval (CMR) has been extensively applied in various domains, such as multimedia search engines and recommendation systems. Most existing CMR methods focus on image-to-text retrieval, whereas audio-to-text retrieval, a less explored domain, poses a great challenge due to the difficulty of uncovering discriminative features from audio clips and texts. Existing studies are restricted in two ways: 1) most researchers utilize contrastive learning to construct a common subspace where similarities among data can be measured, but they consider only the cross-modal transformation and neglect intra-modal separability; in addition, the temperature parameter is not adjusted adaptively with semantic guidance, which degrades performance; 2) these methods do not take latent representation reconstruction into account, which is essential for semantic alignment. This paper introduces a novel audio-text oriented CMR approach, termed Contrastive Latent Space Reconstruction Learning (CLSR). CLSR improves contrastive representation learning by taking intra-modal separability into account and adopting an adaptive temperature control strategy. Moreover, latent representation reconstruction modules are embedded into the CMR framework, which improves modal interaction. Experiments comparing CLSR with several state-of-the-art methods on two audio-text datasets validate its superiority. Comment: Accepted by the 35th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2023)
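    As a rough illustration of two of the ingredients named above, the sketch below combines a symmetric cross-modal InfoNCE loss with a learnable temperature and a simple latent-reconstruction term. The module layout, the linear decoders, and the choice of reconstruction target are assumptions made for this sketch rather than CLSR's actual architecture; the intra-modal separability and semantically guided temperature components are omitted for brevity.

```python
# Hedged sketch: contrastive audio-text loss with a learnable temperature plus a
# latent-reconstruction term. Names and design choices here are illustrative only.
import torch
import torch.nn.functional as F

class ContrastiveWithReconstruction(torch.nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        # learnable temperature, stored on a log scale for numerical stability
        self.log_temp = torch.nn.Parameter(torch.tensor(0.07).log())
        # small decoders that map one modality's latent toward the other's
        # (an illustrative stand-in for latent-reconstruction modules)
        self.audio_dec = torch.nn.Linear(dim, dim)
        self.text_dec = torch.nn.Linear(dim, dim)

    def forward(self, audio_emb, text_emb):
        a = F.normalize(audio_emb, dim=-1)
        t = F.normalize(text_emb, dim=-1)
        logits = a @ t.T / self.log_temp.exp()          # (B, B) similarity matrix
        labels = torch.arange(a.size(0), device=a.device)
        # symmetric InfoNCE: audio-to-text and text-to-audio directions
        contrastive = 0.5 * (F.cross_entropy(logits, labels)
                             + F.cross_entropy(logits.T, labels))
        # reconstruction: decode each latent toward its paired counterpart
        recon = (F.mse_loss(self.audio_dec(a), t.detach())
                 + F.mse_loss(self.text_dec(t), a.detach()))
        return contrastive + recon

# toy usage with random embeddings for a batch of 8 audio-text pairs
loss_fn = ContrastiveWithReconstruction(dim=512)
audio, text = torch.randn(8, 512), torch.randn(8, 512)
print(loss_fn(audio, text).item())
```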