Work Distributions in 1-D Fermions and Bosons with Dual Contact Interactions
We extend the well-known static duality between 1-D bosons and 1-D fermions
[Girardeau 1960; Cheon & Shigehara 1999] to a dynamical version.
Utilizing this dynamical duality, we establish a duality of non-equilibrium
work distributions between interacting 1-D bosonic (Lieb-Liniger model) and
1-D fermionic (Cheon-Shigehara model) systems with dual contact interactions. As a
special case, the work distribution of the Tonks-Girardeau (TG) gas is
identical to that of the 1-D free fermionic system, even though their momentum
distributions are significantly different. In the classical limit, the work
distributions of Lieb-Liniger models (Cheon-Shigehara models) with arbitrary
coupling strength converge to that of 1-D noninteracting distinguishable
particles, although their elementary excitations (quasi-particles) obey
different statistics, e.g., Bose-Einstein, Fermi-Dirac, and fractional
statistics. We also present numerical results for the work distributions of
the Lieb-Liniger model at various coupling strengths, which demonstrate this
convergence in the classical limit.

Comment: 8 pages, 2 figures, 2 tables
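For orientation, the work distribution discussed above is conventionally
defined through the two-point-measurement scheme; the standard expression is
given here as background (it is an assumption that the paper adopts this
convention, though it is the usual one):

```latex
% Two-point-measurement work distribution (standard convention, assumed):
% energy is measured before (E_n^0) and after (E_m^tau) a driving protocol
% U(tau), starting from a thermal state at inverse temperature beta.
P(W) = \sum_{n,m} p_n^{0}\,
       \bigl|\langle m^{\tau}|\,\hat{U}(\tau)\,|n^{0}\rangle\bigr|^{2}\,
       \delta\bigl(W - (E_m^{\tau} - E_n^{0})\bigr),
\qquad
p_n^{0} = \frac{e^{-\beta E_n^{0}}}{Z_0}.
```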
Knowledge Graph Embedding with Iterative Guidance from Soft Rules
Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of
current research. Combining such an embedding model with logic rules has
recently attracted increasing attention. Most previous attempts injected logic
rules in a single pass, ignoring the interplay between embedding learning and
logical inference, and they considered only hard rules, which hold without
exception and usually require extensive manual effort to
create or validate. In this paper, we propose Rule-Guided Embedding (RUGE), a
novel paradigm of KG embedding with iterative guidance from soft rules. RUGE
enables an embedding model to learn simultaneously from 1) labeled triples that
have been directly observed in a given KG, 2) unlabeled triples whose labels
are going to be predicted iteratively, and 3) soft rules with various
confidence levels extracted automatically from the KG. In the learning process,
RUGE iteratively queries rules to obtain soft labels for unlabeled triples, and
integrates such newly labeled triples to update the embedding model. Through
this iterative procedure, knowledge embodied in logic rules may be better
transferred into the learned embeddings. We evaluate RUGE in link prediction on
Freebase and YAGO. Experimental results show that: 1) with rule knowledge
injected iteratively, RUGE achieves significant and consistent improvements
over state-of-the-art baselines; and 2) despite their uncertainties,
automatically extracted soft rules are highly beneficial to KG embedding, even
those with moderate confidence levels. The code and data used for this paper
can be obtained from https://github.com/iieir-km/RUGE.

Comment: To appear in AAAI 2018
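To make the iterative procedure concrete, here is a minimal, hypothetical
sketch of RUGE-style rule guidance. It is not the authors' implementation
(RUGE uses ComplEx embeddings and a joint soft-label optimization); this toy
uses a DistMult-style scorer, a single rule pattern r1(x, y) => r2(x, y), and
alternating labeling/update steps, with all names (score, soft_label, etc.)
illustrative:

```python
import numpy as np

# Hypothetical toy sketch of iterative rule guidance (not the RUGE codebase).
rng = np.random.default_rng(0)
DIM = 16
entities = {"a": rng.normal(size=DIM), "b": rng.normal(size=DIM)}
relations = {"r1": rng.normal(size=DIM), "r2": rng.normal(size=DIM)}

def score(h, r, t):
    # DistMult-style triple score squashed to [0, 1]; RUGE itself uses
    # ComplEx, but any differentiable scorer illustrates the loop.
    return 1.0 / (1.0 + np.exp(-np.sum(entities[h] * relations[r] * entities[t])))

labeled = [("a", "r1", "b", 1.0)]    # observed triples carry hard label 1
rules = [("r1", "r2", 0.9)]          # r1(x,y) => r2(x,y) with confidence 0.9
unlabeled = [("a", "r2", "b")]       # triples groundable by the soft rules

def soft_label(h, r, t):
    # Rule-query step: lift the model's own score by rule-propagated truth
    # (premise score times rule confidence, capped at 1).
    s = score(h, r, t)
    for body, head, conf in rules:
        if r == head:
            s = max(s, min(1.0, score(h, body, t) * conf))
    return s

lr = 0.05
for epoch in range(100):
    # 1) Query rules for soft labels on the unlabeled triples.
    soft = [(h, r, t, soft_label(h, r, t)) for (h, r, t) in unlabeled]
    # 2) Update embeddings toward both hard- and soft-labeled triples
    #    (gradient of squared error through the sigmoid).
    for h, r, t, y in labeled + soft:
        p = score(h, r, t)
        g = (p - y) * p * (1.0 - p)
        e_h, e_r, e_t = entities[h].copy(), relations[r].copy(), entities[t].copy()
        entities[h] -= lr * g * e_r * e_t
        relations[r] -= lr * g * e_h * e_t
        entities[t] -= lr * g * e_h * e_r

print("score of r2(a, b) after guidance:", round(score("a", "r2", "b"), 3))
```

Running the sketch, the score of the rule-implied triple r2(a, b) is pulled
toward its rule-derived soft label over the epochs, which is the intended
effect of the iterative labeling loop described in the abstract.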
