
    Thermal entanglement in a two-spin-qutrit system under a nonuniform external magnetic field

    The thermal entanglement of a two-spin-qutrit system, in which two spins are coupled by an exchange interaction and subject to a magnetic field in an arbitrary direction, is investigated. Negativity, a measure of entanglement, is calculated. We find that at any temperature the negativity is symmetric with respect to the magnetic field. The behavior of the negativity is presented for four different cases. The results show that, at different temperatures, different magnetic fields yield the maximum entanglement. Both the parallel and antiparallel magnetic field configurations are investigated qualitatively (not quantitatively) in detail; we find that the entanglement may be enhanced under an antiparallel magnetic field.
    Comment: 2 eps figures
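    The quantities in this abstract can be reproduced numerically. The sketch below is a minimal illustration, assuming a Heisenberg-type exchange H = J S1·S2 plus Zeeman terms b1·S1 + b2·S2 (the paper's exact Hamiltonian and conventions may differ); all function names here are illustrative. It computes the negativity of the Gibbs state and exhibits the symmetry under reversing the field.

    ```python
    import numpy as np

    # Spin-1 (qutrit) operators
    SX = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
    SY = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
    SZ = np.diag([1.0, 0.0, -1.0]).astype(complex)
    I3 = np.eye(3)

    def hamiltonian(J, b1, b2):
        """Assumed model: Heisenberg exchange J*S1.S2 plus fields b1.S1 + b2.S2."""
        H = J * sum(np.kron(S, S) for S in (SX, SY, SZ))
        for c, S in zip(b1, (SX, SY, SZ)):
            H = H + c * np.kron(S, I3)
        for c, S in zip(b2, (SX, SY, SZ)):
            H = H + c * np.kron(I3, S)
        return H

    def negativity(rho):
        """Sum of |negative eigenvalues| of the partial transpose over spin 2."""
        pt = rho.reshape(3, 3, 3, 3).transpose(0, 3, 2, 1).reshape(9, 9)
        ev = np.linalg.eigvalsh(pt)
        return float(-ev[ev < 0].sum())

    def thermal_negativity(J, b1, b2, T):
        """Negativity of the thermal state exp(-H/T)/Z (units with k_B = 1)."""
        evals, evecs = np.linalg.eigh(hamiltonian(J, b1, b2))
        w = np.exp(-(evals - evals.min()) / T)   # shift for numerical stability
        rho = (evecs * w) @ evecs.conj().T
        return negativity(rho / np.trace(rho))
    ```

    For the antiferromagnetic case (J > 0) at low temperature and zero field, the thermal state approaches the two-qutrit singlet and the negativity approaches 1; reversing the sign of the field leaves the negativity unchanged, consistent with the symmetry noted in the abstract.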

    Knowledge Graph Embedding with Iterative Guidance from Soft Rules

    Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Combining such an embedding model with logic rules has recently attracted increasing attention. Most previous attempts made a one-time injection of logic rules, ignoring the interactive nature between embedding learning and logical inference. Moreover, they focused only on hard rules, which always hold without exception and usually require extensive manual effort to create or validate. In this paper, we propose Rule-Guided Embedding (RUGE), a novel paradigm of KG embedding with iterative guidance from soft rules. RUGE enables an embedding model to learn simultaneously from 1) labeled triples that have been directly observed in a given KG, 2) unlabeled triples whose labels are to be predicted iteratively, and 3) soft rules with various confidence levels extracted automatically from the KG. In the learning process, RUGE iteratively queries rules to obtain soft labels for unlabeled triples, and integrates such newly labeled triples to update the embedding model. Through this iterative procedure, knowledge embodied in logic rules may be better transferred into the learned embeddings. We evaluate RUGE in link prediction on Freebase and YAGO. Experimental results show that: 1) with rule knowledge injected iteratively, RUGE achieves significant and consistent improvements over state-of-the-art baselines; and 2) despite their uncertainties, automatically extracted soft rules are highly beneficial to KG embedding, even those with moderate confidence levels. The code and data used for this paper can be obtained from https://github.com/iieir-km/RUGE.
    Comment: To appear in AAAI 201
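    The alternation described in the abstract (query soft rules for soft labels, then update the embeddings with both labeled and soft-labeled triples) can be sketched with a toy model. Everything below is an illustrative assumption, not the paper's actual code: a DistMult-style scorer stands in for the base embedding model, rules are simplified to the form r_body(x,y) => r_head(x,y) with a confidence, and the names `ToyKGE` and `ruge_train` are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 8

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class ToyKGE:
        """Minimal DistMult-style scorer: an illustrative stand-in for the
        base embedding model, not the authors' implementation."""
        def __init__(self, n_ent, n_rel):
            self.E = rng.normal(scale=0.5, size=(n_ent, DIM))
            self.R = rng.normal(scale=0.5, size=(n_rel, DIM))

        def score(self, h, r, t):
            # Truth score of triple (h, r, t) in [0, 1]
            return sigmoid(np.dot(self.E[h] * self.R[r], self.E[t]))

        def update(self, triples_with_labels, lr=0.2):
            """One SGD pass, cross-entropy loss toward (possibly soft) labels."""
            for (h, r, t), y in triples_with_labels:
                g = self.score(h, r, t) - y      # d(loss)/d(logit)
                eh, w, et = self.E[h].copy(), self.R[r].copy(), self.E[t].copy()
                self.E[h] -= lr * g * w * et
                self.R[r] -= lr * g * eh * et
                self.E[t] -= lr * g * eh * w

    def ruge_train(model, labeled, unlabeled, rules, iters=400):
        """Alternate between (1) querying soft rules for soft labels on
        unlabeled triples and (2) updating embeddings with both sets."""
        for _ in range(iters):
            soft = []
            for r_body, r_head, conf in rules:    # rule: r_body(x,y) => r_head(x,y)
                for h, r, t in unlabeled:
                    if r == r_head:
                        # Soft label = rule confidence * current premise score
                        soft.append(((h, r, t), conf * model.score(h, r_body, t)))
            model.update(labeled + soft)
        return model
    ```

    With an observed positive (e0, r0, e1), an observed negative, and a soft rule r0 => r1 with confidence 0.9, the unlabeled triple (e0, r1, e1) acquires an increasingly confident soft label as the premise's score rises, so rule knowledge flows into the embeddings over iterations rather than in a one-time injection.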