
    Possible S-wave Dibaryons in SU(3) Chiral Quark Model

    In the framework of the SU(3) chiral quark model, the S-wave baryon-baryon bound states are investigated. It is found that, according to the symmetry character of the system and the contributions from the chiral fields, there are three types of bound states. States of the first type, such as $[\Omega\Omega]_{(0,0)}$ and $[\Xi^{*}\Omega]_{(0,1/2)}$, are deeply bound dibaryons with narrow widths. States of the second type, $[\Sigma^{*}\Delta]_{(0,5/2)}$, $[\Sigma^{*}\Delta]_{(3,1/2)}$, $[\Delta\Delta]_{(0,3)}$, and $[\Delta\Delta]_{(3,0)}$, are also bound, but with broad widths. $[\Xi\Omega-\Xi^{*}\Omega]_{(1,1/2)}$, $[\Xi\Xi]_{(0,1)}$, and $[N\Omega]_{(2,1/2)}$ are states of the third type: like the deuteron ($d$), they are weakly bound and exist only if the chiral fields can provide attraction between the baryons.
    Comment: LaTeX files, 1 figure
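    The abstract does not write out the interaction explicitly; as a hedged reference point, a coupling of the quark field to the scalar ($\sigma_a$) and pseudoscalar ($\pi_a$) chiral nonets commonly assumed in SU(3) chiral quark models takes the form

        % Assumed standard form, not quoted from this paper: \lambda_a are the
        % Gell-Mann matrices (\lambda_0 \propto I), F(q^2) a form factor,
        % g_{ch} the chiral coupling constant.
        H_{ch} = g_{ch}\, F(q^2)\, \bar{\psi}
            \left( \sum_{a=0}^{8} \sigma_a \lambda_a
                 + i \sum_{a=0}^{8} \pi_a \lambda_a \gamma_5 \right) \psi

    The quark-quark potentials induced by exchanging these fields are the "contributions from chiral fields" that decide whether a given channel binds.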

    Weakly-Supervised Neural Text Classification

    Deep neural networks are gaining popularity for the classic text classification task, thanks to their strong expressive power and reduced need for feature engineering. Despite this attractiveness, neural text classification models suffer from a lack of training data in many real-world applications. Although many semi-supervised and weakly-supervised text classification models exist, they cannot be easily applied to deep neural models and support only limited types of supervision. In this paper, we propose a weakly-supervised method that addresses the lack of training data in neural text classification. Our method consists of two modules: (1) a pseudo-document generator that leverages seed information to generate pseudo-labeled documents for model pre-training, and (2) a self-training module that bootstraps on real unlabeled data for model refinement. Our method is flexible enough to handle different types of weak supervision and can be easily integrated into existing deep neural models for text classification. Extensive experiments on three real-world datasets from different domains demonstrate that the proposed method achieves strong performance without requiring excessive training data and significantly outperforms baseline methods.
    Comment: CIKM 2018 Full Paper
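    The self-training module (2) can be read as a standard bootstrapping loop: train on the pseudo-labeled seed documents, predict on real unlabeled documents, absorb the confident predictions as new labels, and repeat. Below is a minimal sketch of such a loop, assuming a scikit-learn-style classifier with fit/predict_proba; the function name, the confidence threshold, and the hard-labeling step are illustrative and may differ from the paper's exact (e.g. soft-labeling) procedure.

        # Minimal self-training sketch (illustrative; not the paper's exact procedure).
        # Assumes: `clf` is any scikit-learn-style classifier with fit/predict_proba;
        # X_seed, y_seed are features/labels of the pseudo-labeled seed documents
        # produced by module (1); X_unlabeled holds real unlabeled documents.
        import numpy as np

        def self_train(clf, X_seed, y_seed, X_unlabeled, threshold=0.9, rounds=5):
            X, y = X_seed, y_seed
            for _ in range(rounds):
                clf.fit(X, y)                              # (re)train on current label set
                proba = clf.predict_proba(X_unlabeled)     # soft predictions on unlabeled docs
                keep = proba.max(axis=1) >= threshold      # keep only confident predictions
                if not keep.any():
                    break                                  # nothing confident left to absorb
                X = np.vstack([X_seed, X_unlabeled[keep]])                # grow training set
                y = np.concatenate([y_seed, proba[keep].argmax(axis=1)])  # with pseudo-labels
            return clf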

    Wasserstein Distance Guided Representation Learning for Domain Adaptation

    Domain adaptation aims to generalize a high-performance learner to a target domain by utilizing knowledge distilled from a source domain with a different but related data distribution. One approach to domain adaptation is to learn feature representations that are domain-invariant yet still discriminative for prediction. To learn such representations, domain adaptation frameworks usually combine a domain-invariant representation learning component, which measures and reduces the domain discrepancy, with a discriminator for classification. Inspired by Wasserstein GAN, in this paper we propose a novel approach to learning domain-invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL). WDGRL utilizes a neural network, called the domain critic, to estimate the empirical Wasserstein distance between source and target samples, and optimizes the feature extractor network to minimize this estimated distance in an adversarial manner. The theoretical advantages of the Wasserstein distance for domain adaptation lie in its gradient property and a promising generalization bound. Empirical studies on common sentiment and image classification adaptation datasets demonstrate that WDGRL outperforms state-of-the-art domain-invariant representation learning approaches.
    Comment: The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018)
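    Concretely, the adversarial scheme alternates between (a) training the domain critic to sharpen its estimate of the empirical Wasserstein distance between source and target features and (b) training the feature extractor to shrink that estimate while keeping source classification accurate. A hedged PyTorch sketch follows; layer sizes, optimizers, and hyperparameters are illustrative, and the gradient penalty the paper applies to the critic is omitted for brevity.

        # Hedged sketch of the WDGRL alternating objective (simplified).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        extractor = nn.Sequential(nn.Linear(100, 64), nn.ReLU())   # feature extractor
        critic = nn.Sequential(nn.Linear(64, 32), nn.ReLU(),
                               nn.Linear(32, 1))                   # domain critic
        classifier = nn.Linear(64, 2)                              # label predictor

        opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-4)
        opt_feat = torch.optim.Adam(
            list(extractor.parameters()) + list(classifier.parameters()), lr=1e-4)

        def wasserstein_gap(xs, xt):
            # Empirical Wasserstein estimate: mean critic-score gap between domains.
            return critic(extractor(xs)).mean() - critic(extractor(xt)).mean()

        def train_step(xs, ys, xt, n_critic=5, lam=1.0):
            # xs, xt: float tensors [B, 100] of source/target inputs; ys: long tensor [B].
            for _ in range(n_critic):
                opt_critic.zero_grad()
                (-wasserstein_gap(xs, xt)).backward()   # critic maximizes the gap
                opt_critic.step()
            opt_feat.zero_grad()   # discard gradients accumulated on the extractor above
            loss = F.cross_entropy(classifier(extractor(xs)), ys) \
                   + lam * wasserstein_gap(xs, xt)      # extractor shrinks the gap
            loss.backward()
            opt_feat.step()

    Here the critic plays the WGAN discriminator's role: its score gap approximates the Wasserstein distance, which the extractor minimizes adversarially while the classification loss keeps the learned representations discriminative.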