
    The rare semi-leptonic $B_c$ decays involving orbitally excited final mesons

    The rare processes $B_c \to D_{(s)J}^{(*)}\mu\bar{\mu}$, where $D_{(s)J}^{(*)}$ stands for the final meson $D_{s0}^*(2317)$, $D_{s1}(2460, 2536)$, $D_{s2}^*(2573)$, $D_0^*(2400)$, $D_1(2420, 2430)$, or $D_2^*(2460)$, are studied within the Standard Model. The hadronic matrix elements are evaluated in the Bethe-Salpeter approach, and a discussion of the gauge-invariance condition for the annihilation hadronic currents is presented. Taking into account the penguin, box, annihilation, color-favored cascade, and color-suppressed cascade contributions, the observables $\mathrm{d}Br/\mathrm{d}Q^2$, $A_{LPL}$, $A_{FB}$, and $P_L$ are calculated.

    On Multi-Relational Link Prediction with Bilinear Models

    We study bilinear embedding models for the task of multi-relational link prediction and knowledge graph completion. Bilinear models are among the most basic models for this task: they are comparatively efficient to train and use, and they can provide good prediction performance. The main goal of this paper is to explore the expressiveness of, and the connections between, various bilinear models proposed in the literature. In particular, a substantial number of models can be represented as bilinear models with certain additional constraints enforced on the embeddings. We explore whether or not these constraints lead to universal models, which can in principle represent every set of relations, and whether or not there are subsumption relationships between various models. We report results of an independent experimental study that evaluates recent bilinear models in a common experimental setup. Finally, we provide evidence that relation-level ensembles of multiple bilinear models can achieve state-of-the-art prediction performance.
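    To make the bilinear family concrete, here is a minimal sketch of a DistMult-style scorer, a diagonal special case of a bilinear model in which each relation is a diagonal matrix over entity embeddings. All names, dimensions, and data below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 5, 2, 4

# Entity embeddings and, for each relation, the diagonal of its relation matrix.
E = rng.normal(size=(n_entities, dim))
R = rng.normal(size=(n_relations, dim))

def score(h, r, t):
    """Bilinear (DistMult) score: e_h^T diag(m_r) e_t."""
    return float(np.sum(E[h] * R[r] * E[t]))

def rank_tails(h, r):
    """Rank all candidate tail entities for a query (h, r, ?), best first."""
    scores = E @ (E[h] * R[r])
    return np.argsort(-scores)
```

    Because the relation matrix is diagonal, this particular scorer is symmetric in head and tail, which is exactly the kind of expressiveness constraint (relative to the full bilinear model RESCAL) that the paper's subsumption analysis is concerned with.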

    LRMM: Learning to Recommend with Missing Modalities

    Multimodal learning has shown promising performance in content-based recommendation due to the auxiliary user and item information of multiple modalities, such as text and images. However, the problem of incomplete and missing modalities is rarely explored, and most existing methods fail to learn a recommendation model with missing or corrupted modalities. In this paper, we propose LRMM, a novel framework that mitigates not only the problem of missing modalities but also, more generally, the cold-start problem of recommender systems. We propose modality dropout (m-drop) and a multimodal sequential autoencoder (m-auto) to learn multimodal representations for complementing and imputing missing modalities. Extensive experiments on real-world Amazon data show that LRMM achieves state-of-the-art performance on rating prediction tasks. More importantly, LRMM is more robust than previous methods in alleviating data sparsity and the cold-start problem.
    Comment: 11 pages, EMNLP 201
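    The modality-dropout idea (m-drop) can be sketched as randomly zeroing out whole modality vectors during training, so the model is forced to produce useful representations even when a modality is absent at test time. This is a hypothetical illustration of the general idea; the actual LRMM training procedure may differ in its details:

```python
import numpy as np

rng = np.random.default_rng(42)

def modality_dropout(modalities, p_drop=0.3):
    """Sketch of m-drop: with probability p_drop, replace a whole
    modality vector with zeros, simulating a missing modality."""
    out = {}
    for name, vec in modalities.items():
        if rng.random() < p_drop:
            out[name] = np.zeros_like(vec)  # modality treated as missing
        else:
            out[name] = vec
    return out

# Illustrative sample with two modalities of different dimensionality.
sample = {"text": np.ones(8), "image": np.ones(16)}
dropped = modality_dropout(sample, p_drop=0.5)
```

    During training, the autoencoder (m-auto in the paper) would then be asked to reconstruct the full multimodal input from such partially zeroed views, which is what enables imputation of missing modalities at inference time.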