A Study on Neural Network Language Modeling
An exhaustive study of neural network language modeling (NNLM) is performed
in this paper. Different architectures of basic neural network language models
are described and examined. A number of different improvements over basic
neural network language models, including importance sampling, word classes, caching, and the bidirectional recurrent neural network (BiRNN), are studied separately, and the advantages and disadvantages of each technique are
evaluated. Then, the limits of neural network language modeling are explored
from the aspects of model architecture and knowledge representation. Part of the statistical information in a word sequence is lost when it is processed word by word in a fixed order, and the mechanism of training a neural network by updating weight matrices and vectors imposes severe restrictions on any significant enhancement of NNLM. For knowledge representation, the knowledge
represented by neural network language models is the approximate probabilistic
distribution of word sequences from a certain training data set rather than the
knowledge of a language itself or the information conveyed by word sequences in
a natural language. Finally, some directions for further improving neural network language modeling are discussed.
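
To make the "basic neural network language model" the abstract examines concrete, below is a minimal NumPy sketch of a Bengio-style feedforward NNLM forward pass: concatenated context embeddings feed a tanh hidden layer and a softmax over the vocabulary. All sizes, weights, and word ids here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of a feedforward NNLM forward pass (Bengio-style).
# Dimensions and random weights are toy placeholders.
rng = np.random.default_rng(0)
V, d, n, h = 10, 8, 3, 16    # vocab size, embedding dim, context length, hidden dim

C = rng.normal(scale=0.1, size=(V, d))       # shared word embedding table
H = rng.normal(scale=0.1, size=(n * d, h))   # context-to-hidden weights
U = rng.normal(scale=0.1, size=(h, V))       # hidden-to-output weights

def next_word_probs(context_ids):
    """P(w_t | w_{t-n}, ..., w_{t-1}) over the whole vocabulary."""
    x = C[context_ids].reshape(-1)           # concatenate context embeddings
    a = np.tanh(x @ H)                       # hidden layer
    logits = a @ U
    e = np.exp(logits - logits.max())        # numerically stable softmax
    return e / e.sum()

probs = next_word_probs([1, 4, 7])           # hypothetical context word ids
print(probs.sum())                           # ~1.0: a distribution over V words
```

The improvements the abstract lists (importance sampling, word classes, caching) all attack the cost of exactly this softmax over V, which dominates for large vocabularies.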
A Survey on Neural Network Language Models
As the core component of a Natural Language Processing (NLP) system, a Language Model (LM) can provide word representations and probability estimates for word sequences. Neural Network Language Models (NNLMs) overcome the curse of
dimensionality and improve the performance of traditional LMs. A survey on
NNLMs is performed in this paper. The structure of classic NNLMs is described
first, and then some major improvements are introduced and analyzed. We
summarize and compare corpora and toolkits of NNLMs. Further, some research
directions of NNLMs are discussed.
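
For concreteness, the sequence probability an LM provides is the chain-rule factorization P(w_1..w_T) = prod_t P(w_t | w_1..w_{t-1}). The sketch below computes it with a toy conditional table standing in for a neural model's outputs; the words and probabilities are made up for illustration.

```python
import math

# Chain-rule sequence probability. The conditional table below is a toy
# stand-in for the per-step distributions a neural LM would produce.
cond = {
    ("<s>",): {"the": 0.5, "a": 0.5},
    ("<s>", "the"): {"cat": 0.4, "dog": 0.6},
    ("<s>", "the", "cat"): {"sat": 0.7, "ran": 0.3},
}

def sequence_logprob(words):
    lp = 0.0
    history = ("<s>",)
    for w in words:
        lp += math.log(cond[history][w])     # log P(w | history)
        history = history + (w,)
    return lp

print(math.exp(sequence_logprob(["the", "cat", "sat"])))  # 0.5 * 0.4 * 0.7 = 0.14
```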
Joint Embedding Learning of Educational Knowledge Graphs
As an efficient model for knowledge organization, the knowledge graph has
been widely adopted in several fields, e.g., biomedicine, sociology, and
education. There is also a steady trend toward learning embedding representations of
knowledge graphs to facilitate knowledge graph construction and downstream
tasks. In general, knowledge graph embedding techniques aim to learn vectorized
representations that preserve the structural information of the graph. Conventional embedding learning models rely on structural relationships among
entities and relations. However, in educational knowledge graphs, structural
relationships are not the focus. Instead, rich literals of the graphs are more
valuable. In this paper, we focus on this problem and propose a novel model for
embedding learning of educational knowledge graphs. Our model considers both
structural and literal information and jointly learns embedding
representations. Three experimental graphs were constructed based on an
educational knowledge graph which has been applied in real-world teaching. We
conducted two experiments on the three graphs and other common benchmark
graphs. The experimental results demonstrate the effectiveness of our model and its superiority over the other baselines when processing educational knowledge graphs.
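
As a rough illustration of the "structural relationships" that conventional embedding models rely on, here is a minimal TransE-style scoring sketch with a toy literal term blended in. The entities, relation, stand-in literal encodings, and blending scheme are hypothetical and far simpler than the paper's actual joint model.

```python
import numpy as np

# TransE-style structural scoring plus a toy literal-similarity term.
# Embeddings are random placeholders; a real model would train them.
rng = np.random.default_rng(0)
dim = 16
entity   = {e: rng.normal(size=dim) for e in ("calculus", "derivative")}
relation = {"prerequisite_of": rng.normal(size=dim)}
literal  = {e: rng.normal(size=dim) for e in entity}  # stand-in text encodings

def structural_score(h, r, t):
    """TransE energy ||h + r - t||; lower means a more plausible triple."""
    return np.linalg.norm(entity[h] + relation[r] - entity[t])

def joint_score(h, r, t, alpha=0.5):
    """Blend structural distance with a literal-similarity distance."""
    lit = np.linalg.norm(literal[h] - literal[t])
    return alpha * structural_score(h, r, t) + (1 - alpha) * lit

print(joint_score("calculus", "prerequisite_of", "derivative"))
```

The point of the blend is the one the abstract makes: when structural links are sparse, as in educational graphs, the literal term carries most of the signal.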