Learning from Ontology Streams with Semantic Concept Drift
Data stream learning has been largely studied for extracting knowledge
structures from continuous and rapid data records. In the Semantic Web, data is
interpreted in ontologies and its ordered sequence is represented as an
ontology stream. Our work exploits the semantics of such streams to tackle the
problem of concept drift, i.e., unexpected changes in the data distribution
that cause most models to become less accurate as time passes. To this end, we
revisit (i) semantic inference in the context of supervised stream learning,
and (ii) models with semantic embeddings. The experiments show accurate
prediction with data from Dublin and Beijing.
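To make the concept-drift problem concrete, the sketch below monitors a model's rolling accuracy on a stream and flags drift when recent accuracy falls well below the long-run average. This is a generic window-based detector, not the semantic method of the paper; the window size and threshold are illustrative assumptions.

```python
from collections import deque

def drift_detector(window=50, threshold=0.15):
    """Return an update() function that tracks per-example correctness
    and flags concept drift when accuracy over the most recent window
    drops more than `threshold` below the overall stream accuracy."""
    recent = deque(maxlen=window)   # sliding window of 0/1 outcomes
    history = []                    # full stream of 0/1 outcomes

    def update(correct):
        outcome = 1 if correct else 0
        recent.append(outcome)
        history.append(outcome)
        if len(recent) < window:    # not enough evidence yet
            return False
        recent_acc = sum(recent) / len(recent)
        overall_acc = sum(history) / len(history)
        return overall_acc - recent_acc > threshold

    return update
```

Feeding the detector a run of correct predictions followed by a run of errors makes it flag drift shortly after the simulated distribution shift.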
Knowledge-based Transfer Learning Explanation
Machine learning explanation can significantly boost machine learning's
application in decision making, but the usability of current methods is limited
for human-centric explanation, especially for transfer learning, an important
branch of machine learning that aims to utilize knowledge from one learning
domain (i.e., a pair of dataset and prediction task) to enhance prediction
model training in another learning domain. In this paper, we propose an
ontology-based approach for human-centric explanation of transfer learning.
Three kinds of knowledge-based explanatory evidence with different
granularities, namely general factors, particular narrators, and core
contexts, are first proposed and then inferred with both local ontologies and
external knowledge bases. The evaluation with US flight data and DBpedia
demonstrates their confidence and availability in explaining the
transferability of feature representations in flight departure delay
forecasting.
Comment: Accepted by International Conference on Principles of Knowledge
Representation and Reasoning, 201
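As a toy illustration of one kind of evidence the abstract names, the sketch below infers "general factors": concepts that generalize features occurring in both the source and the target learning domain, and so help explain why a feature representation transfers. The ontology entries and feature names are entirely hypothetical, and real inference would use DBpedia and local ontologies rather than a flat dictionary.

```python
# Hypothetical feature -> more-general-concept mapping standing in
# for an ontology's subsumption hierarchy.
ontology = {
    "departure_airport_ORD": "Airport",
    "departure_airport_PVG": "Airport",
    "carrier_UA": "Airline",
    "carrier_MU": "Airline",
    "weather_snow": "Weather",
}

source_features = {"departure_airport_ORD", "carrier_UA", "weather_snow"}
target_features = {"departure_airport_PVG", "carrier_MU"}

def general_factors(src, tgt, onto):
    """Concepts that subsume at least one feature in BOTH domains:
    shared abstractions that make knowledge transfer plausible."""
    src_concepts = {onto[f] for f in src if f in onto}
    tgt_concepts = {onto[f] for f in tgt if f in onto}
    return src_concepts & tgt_concepts
```

Here "Airport" and "Airline" would be reported as general factors, while "Weather" would not, since only the source domain exhibits a weather feature.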
Attention-Based Capsule Networks with Dynamic Routing for Relation Extraction
A capsule is a group of neurons whose activity vector represents the
instantiation parameters of a specific type of entity. In this paper, we
explore capsule networks for relation extraction in a multi-instance
multi-label learning framework and propose a novel neural approach based on
capsule networks with attention mechanisms. We evaluate our method on
different benchmarks and demonstrate that it improves the precision of the
predicted relations. In particular, we show that capsule networks improve
relation extraction for multiple entity pairs.
Comment: To be published in EMNLP 201
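For readers unfamiliar with capsules, the following pure-Python sketch implements standard routing-by-agreement between two capsule layers, as in the original dynamic-routing formulation; it is not the paper's attention-augmented variant, and the dimensions are toy-sized.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def squash(v):
    # Non-linearity that keeps vector orientation but maps length into [0, 1):
    # short vectors shrink toward 0, long vectors approach unit length.
    norm2 = sum(x * x for x in v)
    if norm2 == 0.0:
        return [0.0] * len(v)
    scale = norm2 / (1 + norm2) / math.sqrt(norm2)
    return [scale * x for x in v]

def dynamic_routing(u_hat, iterations=3):
    """u_hat[i][j] is the prediction vector from input capsule i to output
    capsule j; returns the output capsule vectors after routing."""
    n_in, n_out, dim = len(u_hat), len(u_hat[0]), len(u_hat[0][0])
    b = [[0.0] * n_out for _ in range(n_in)]          # routing logits
    v = []
    for _ in range(iterations):
        c = [softmax(row) for row in b]               # coupling coefficients
        v = []
        for j in range(n_out):
            s = [sum(c[i][j] * u_hat[i][j][k] for i in range(n_in))
                 for k in range(dim)]
            v.append(squash(s))
        # Agreement update: predictions that align with an output capsule
        # route more strongly to it next iteration.
        for i in range(n_in):
            for j in range(n_out):
                b[i][j] += sum(u_hat[i][j][k] * v[j][k] for k in range(dim))
    return v
```

When all input capsules agree on one output capsule's prediction, that capsule's activity vector grows long (entity present) while disagreed-upon capsules stay short.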
Long-tail Relation Extraction via Knowledge Graph Embeddings and Graph Convolution Networks
We propose a distantly supervised relation extraction approach for the
long-tailed, imbalanced data that is prevalent in real-world settings. Here,
the challenge is to learn accurate "few-shot" models for classes at the tail
of the class distribution, for which little data is available. Inspired by the
rich semantic correlations between classes at the long tail and those at the
head, we take advantage of the knowledge from data-rich classes at the head of
the distribution to boost the performance of the data-poor classes at the
tail. First, we propose to leverage implicit relational knowledge among class
labels from knowledge graph embeddings and to learn explicit relational
knowledge using graph convolution networks. Second, we integrate that
relational knowledge into the relation extraction model through a
coarse-to-fine knowledge-aware attention mechanism. Results on a large-scale
benchmark dataset show that our approach significantly outperforms the
baselines, especially for long-tail relations.
Comment: To be published in NAACL 201
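The following sketch shows the basic shape of knowledge-aware attention: sentences in a bag are weighted by their similarity to the relation's knowledge-graph embedding before pooling. It is a simplified single-level stand-in for the paper's coarse-to-fine mechanism, with toy vectors in place of learned embeddings.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def knowledge_aware_attention(sentence_vecs, relation_vec):
    """Weight each sentence vector in the bag by its dot-product
    similarity to the relation's KG embedding, then return the
    attention-pooled bag representation."""
    scores = [sum(s_k * r_k for s_k, r_k in zip(s, relation_vec))
              for s in sentence_vecs]
    weights = softmax(scores)
    dim = len(sentence_vecs[0])
    return [sum(w * s[k] for w, s in zip(weights, sentence_vecs))
            for k in range(dim)]
```

Sentences that align with the relation embedding dominate the pooled representation, which is how head-class knowledge encoded in the embedding can guide attention for data-poor tail relations.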
Semantic Web for data harmonization in Chinese medicine
Scientific studies investigating Chinese medicine alongside Western medicine have been generating a large amount of data that is best shared under a global data standard. This article provides an overview of the Semantic Web and identifies some representative Semantic Web applications in Chinese medicine. The Semantic Web is proposed as a standard for representing Chinese medicine data and facilitating their integration with Western medicine data.
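To illustrate the harmonization idea, the sketch below represents records from both traditions as RDF-style (subject, predicate, object) triples and queries them with a single pattern, mimicking a SPARQL basic graph pattern. All term names are invented for illustration; a real system would use standard vocabularies and an RDF store.

```python
# Hypothetical harmonized triples; "tcm:" and "wm:" mark Chinese- and
# Western-medicine terms sharing one schema ("ex:").
triples = {
    ("tcm:GinsengDecoction", "ex:treats", "ex:Fatigue"),
    ("wm:Modafinil", "ex:treats", "ex:Fatigue"),
    ("tcm:GinsengDecoction", "ex:containsHerb", "tcm:Ginseng"),
}

def query(s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard,
    like a variable in a SPARQL basic graph pattern."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]
```

A query for everything that treats fatigue then returns candidates from both traditions in one answer set, which is the integration benefit the abstract argues for.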
Iteratively Learning Embeddings and Rules for Knowledge Graph Reasoning
Reasoning is essential for the development of large knowledge graphs,
especially for completion, which aims to infer new triples based on existing
ones. Both rules and embeddings can be used for knowledge graph reasoning, and
each has its own advantages and difficulties. Rule-based reasoning is accurate
and explainable, but rule learning by searching over the graph suffers from
inefficiency due to the huge search space. Embedding-based reasoning is more
scalable and efficient because reasoning is conducted via computation between
embeddings, but it has difficulty learning good representations for sparse
entities, since a good embedding relies heavily on data richness. Based on
this observation, in this paper we explore how embedding learning and rule
learning can be combined so that each compensates for the other's difficulties
with its advantages. We propose IterE, a novel framework that iteratively
learns embeddings and rules: rules are learned from embeddings with a proper
pruning strategy, and embeddings are learned from existing triples together
with new triples inferred by the rules. Evaluations of the embedding quality
of IterE show that rules help improve the quality of sparse entity embeddings
and their link prediction results. We also evaluate the efficiency of rule
learning and the quality of rules from IterE compared with AMIE+, showing that
IterE generates high-quality rules more efficiently. Experiments show that
iteratively learning embeddings and rules benefits both during learning and
prediction.
Comment: This paper is accepted by WWW'1
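The iterative loop can be sketched as follows. This toy version mines rules of the form r1(x, y) -> r2(x, y) and applies them to densify the graph; real IterE derives rule candidates from the embedding space and relearns embeddings each round, whereas here rule confidence is estimated by simple counting purely for illustration.

```python
def iterative_learning(triples, rounds=2, conf_threshold=0.8):
    """Toy IterE-style loop: mine implication rules between relations
    from the current triples, add the triples they infer, repeat."""
    triples = set(triples)
    relations = {r for _, r, _ in triples}
    for _ in range(rounds):
        # Mine rules r1 -> r2 whose confidence (fraction of r1 pairs
        # that also hold for r2) clears the threshold.
        rules = []
        for r1 in relations:
            for r2 in relations:
                if r1 == r2:
                    continue
                body = {(h, t) for h, r, t in triples if r == r1}
                head = {(h, t) for h, r, t in triples if r == r2}
                if body and len(body & head) / len(body) >= conf_threshold:
                    rules.append((r1, r2))
        # Apply the mined rules to infer new triples.
        for r1, r2 in rules:
            for h, r, t in list(triples):
                if r == r1:
                    triples.add((h, r2, t))
    return triples
```

Each round the rules densify the graph, which is the mechanism by which (in the real framework) sparse entities gain extra training triples for their embeddings.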
Research on the Transport and Deposition of Nanoparticles in a Rotating Curved Pipe
A finite-volume code and the SIMPLE scheme are used to study the transport and deposition of nanoparticles in a rotating curved pipe for different angular velocities, Dean numbers, and Schmidt numbers. The results show that when the Schmidt number is small, the nanoparticle distribution is mostly determined by the axial velocity; when the Schmidt number is many orders of magnitude larger than 1, the secondary flow dominates the nanoparticle distribution. When the pipe corotates, the distribution of nanoparticle mass fraction is similar to that for the stationary case, and there is a “hot spot” deposition region near the outside edge of the bend. When the pipe counter-rotates, the Coriolis force pushes the region with a high nanoparticle mass fraction toward the inside edge of the bend, and the hot-spot deposition region appears near the inside edge. Particle deposition over the whole edge of the bend becomes uniform as the Dean number increases. Corotation of the pipe reduces the particle deposition efficiency, while strong counter-rotation only slightly affects it. When the two kinds of secondary flow coexist, the relative deposition efficiency is larger than in the stationary case.
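For reference, the two governing dimensionless groups can be computed from their standard definitions, which are textbook relations rather than anything specific to this study; the fluid properties and pipe geometry below are illustrative values only (roughly water in a 1 cm pipe).

```python
import math

def reynolds(density, velocity, diameter, viscosity):
    # Re = rho * u * d / mu  (inertial vs. viscous forces)
    return density * velocity * diameter / viscosity

def dean_number(re, pipe_radius, curvature_radius):
    # De = Re * sqrt(a / R): strength of the secondary (Dean) vortices
    # driven by the pipe's curvature.
    return re * math.sqrt(pipe_radius / curvature_radius)

def schmidt_number(kinematic_viscosity, diffusivity):
    # Sc = nu / D: momentum diffusion vs. Brownian diffusion of the
    # nanoparticles; large Sc means the secondary flow dominates transport.
    return kinematic_viscosity / diffusivity

# Illustrative, assumed values:
re = reynolds(density=998.0, velocity=0.1, diameter=0.01, viscosity=1.0e-3)
de = dean_number(re, pipe_radius=0.005, curvature_radius=0.05)
sc = schmidt_number(kinematic_viscosity=1.0e-6, diffusivity=1.0e-9)
```

With a nanoparticle diffusivity around 1e-9 m^2/s, Sc is on the order of 10^3, i.e., "many orders of magnitude larger than 1" as in the regime the abstract discusses.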