Background Knowledge Based Multi-Stream Neural Network for Text Classification
As a foundational and typical task in natural language processing, text classification has been widely applied in many fields. However, most existing corpora used for text classification are imbalanced, which often causes classifiers to skew their performance toward the categories with more texts. In this paper, we propose a background knowledge based multi-stream neural network to compensate for the imbalance or insufficient information caused by the limitations of training corpora. The multi-stream network consists mainly of a basal stream, which retains the original sequence information, and background knowledge based streams. Background knowledge is composed of keywords and co-occurring words extracted from external corpora. The background knowledge based streams supply supplemental information and reinforce the basal stream. To better fuse the features extracted from the different streams, an early-fusion and two after-fusion strategies are employed. Results on both Chinese and English corpora demonstrate that the proposed background knowledge based multi-stream neural network performs well on classification tasks.
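The keyword and co-occurrence extraction described above might look like the following minimal sketch. The window size, the frequency-based keyword criterion, and the function names are illustrative assumptions, not the paper's actual procedure:

```python
from collections import Counter

def extract_background(corpus, top_k=3, window=5):
    """Sketch of building background knowledge from an external corpus:
    keywords = most frequent words; co-occurring words = pairs that
    appear together within a sliding window."""
    tokens = [doc.lower().split() for doc in corpus]
    # Keywords: the most frequent words in the external corpus.
    freq = Counter(w for doc in tokens for w in doc)
    keywords = [w for w, _ in freq.most_common(top_k)]
    # Co-occurring words: unordered pairs within a window of each other.
    cooc = Counter()
    for doc in tokens:
        for i, w in enumerate(doc):
            for v in doc[i + 1:i + window]:
                cooc[tuple(sorted((w, v)))] += 1
    return keywords, cooc

keywords, cooc = extract_background(
    ["the cat sat on the mat", "the cat chased the mouse"])
```

The keyword list and co-occurrence counts would then feed the supplementary streams alongside the basal sequence stream.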
Graph Convolutional Networks for Text Classification
Text classification is an important and classical problem in natural language
processing. There have been a number of studies that applied convolutional
neural networks (convolution on regular grid, e.g., sequence) to
classification. However, only a limited number of studies have explored the
more flexible graph convolutional neural networks (convolution on non-grid,
e.g., arbitrary graph) for the task. In this work, we propose to use graph
convolutional networks for text classification. We build a single text graph
for a corpus based on word co-occurrence and document-word relations, then
learn a Text Graph Convolutional Network (Text GCN) for the corpus. Our Text
GCN is initialized with one-hot representations for words and documents; it then
jointly learns the embeddings for both words and documents, as supervised by
the known class labels for documents. Our experimental results on multiple
benchmark datasets demonstrate that a vanilla Text GCN without any external
word embeddings or knowledge outperforms state-of-the-art methods for text
classification. On the other hand, Text GCN also learns predictive word and
document embeddings. In addition, experimental results show that the
improvement of Text GCN over state-of-the-art comparison methods becomes more
prominent as we lower the percentage of training data, suggesting the
robustness of Text GCN to limited training data in text classification.
Comment: Accepted by the 33rd AAAI Conference on Artificial Intelligence (AAAI 2019)
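The graph construction sketched below is a simplified reconstruction of the kind of text graph the abstract describes: document-word edges weighted by TF-IDF and word-word edges weighted by positive pointwise mutual information (PMI) over sliding windows. It is not the authors' code; the window size and weighting details are assumptions:

```python
import math
from collections import Counter

def build_text_graph(docs, window=3):
    """Build a single heterogeneous graph over documents and words:
    doc-word edges = TF-IDF, word-word edges = positive PMI."""
    tokens = [d.lower().split() for d in docs]
    vocab = sorted({w for t in tokens for w in t})
    n_docs = len(docs)
    df = Counter(w for t in tokens for w in set(t))   # document frequency
    edges = {}
    # Document-word edges weighted by TF-IDF.
    for i, t in enumerate(tokens):
        tf = Counter(t)
        for w, c in tf.items():
            edges[(f"doc{i}", w)] = (c / len(t)) * math.log(n_docs / df[w])
    # Word-word edges weighted by positive PMI over sliding windows.
    windows, w_count, pair_count = 0, Counter(), Counter()
    for t in tokens:
        for s in range(max(1, len(t) - window + 1)):
            win = set(t[s:s + window])
            windows += 1
            for w in win:
                w_count[w] += 1
            for a in win:
                for b in win:
                    if a < b:
                        pair_count[(a, b)] += 1
    for (a, b), c in pair_count.items():
        pmi = math.log((c / windows) /
                       ((w_count[a] / windows) * (w_count[b] / windows)))
        if pmi > 0:                                   # keep positive PMI only
            edges[(a, b)] = pmi
    return vocab, edges

vocab, edges = build_text_graph(
    ["deep learning", "deep learning models", "graph models"])
```

A GCN is then trained on this graph, supervised only by the document nodes' class labels.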
Neural Graph Transfer Learning in Natural Language Processing Tasks
Natural language is essential in our daily lives, as we rely on language to communicate and exchange information. A fundamental goal of natural language processing (NLP) is to enable machines to understand natural language so that they can help or replace human experts in mining knowledge and completing tasks. Many NLP tasks deal with sequential data. For example, a sentence is considered a sequence of words. Very recently, deep learning-based language models (e.g., BERT \citep{devlin2018bert}) have achieved significant improvements on many existing tasks, including text classification and natural language inference. However, not all tasks can be formulated using sequence models. Specifically, graph-structured data is also fundamental in NLP, including entity linking, entity classification, relation extraction, abstract meaning representation, and knowledge graphs \citep{santoro2017simple,hamilton2017representation,kipf2016semi}. In this scenario, BERT-based pretrained models may not be suitable. The Graph Convolutional Network (GCN) \citep{kipf2016semi} is a deep neural network model designed for graphs. It has shown great potential in text classification, link prediction, question answering, and so on. This dissertation presents novel graph models for NLP tasks, including text classification, prerequisite chain learning, and coreference resolution. We focus on different perspectives of graph convolutional network modeling: for text classification, a novel graph construction method is proposed that allows interpretability of the predictions; for prerequisite chain learning, we propose multiple aggregation functions that utilize neighbors for better information exchange; for coreference resolution, we study how graph pretraining can help when labeled data is limited. Moreover, an important branch of this work is applying pretrained language models to the mentioned tasks.
This dissertation therefore also focuses on transfer learning methods that generalize pretrained models to other domains, including medical, cross-lingual, and web data. Finally, we propose a new task, unsupervised cross-domain prerequisite chain learning, and study novel graph-based methods to transfer knowledge over graphs.
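The GCN building block referred to throughout this abstract can be summarized by the standard Kipf & Welling propagation rule. The sketch below is a generic single layer, not any model specific to this dissertation:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: add self-loops, symmetrically
    normalize the adjacency, aggregate neighbor features, then apply
    a linear map and ReLU (H' = ReLU(D^-1/2 (A+I) D^-1/2 H W))."""
    a_hat = adj + np.eye(adj.shape[0])            # self-loops
    d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt        # D^-1/2 (A+I) D^-1/2
    return np.maximum(0.0, norm @ feats @ weight)

# Two connected nodes with identity features and identity weights:
out = gcn_layer(np.array([[0., 1.], [1., 0.]]), np.eye(2), np.eye(2))
```

Stacking such layers lets each node aggregate information from increasingly distant neighbors, which is what the aggregation-function variants for prerequisite chain learning modify.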
Context sensitive optical character recognition using neural networks and hidden Markov models
This thesis investigates a method for using contextual information in text recognition. It is based on the premise that, while reading, humans recognize words with missing or garbled characters by examining the surrounding characters and then selecting the appropriate character. The correct character is chosen based on an inherent knowledge of the language and spelling conventions, and this behavior can be modeled statistically. The approach taken by this thesis is to combine feature extraction techniques, neural networks, and hidden Markov modeling. This method of character recognition involves a three-step process: pixel image preprocessing, neural network classification, and context interpretation. Pixel image preprocessing applies a feature extraction algorithm to the original bit-mapped images, producing feature vectors that are input to a neural network. The neural network performs the initial classification of the characters by producing ten weights, one for each character. The magnitude of a weight is translated into the confidence the network has in that choice: the greater the magnitude and separation, the more confident the neural network is of a given choice. The output of the neural network is the input to a context interpreter. The context interpreter uses hidden Markov modeling (HMM) techniques to determine the most probable classification for each character, based on the characters that precede it and character-pair statistics. The HMMs are built using a priori knowledge of the language: a statistical description of the probabilities of digrams. Experimentation and verification of this method combine the development and use of a preprocessor program, a Cascade Correlation neural network, and an HMM context interpreter program. Results from these experiments show the neural network successfully classified 88.2 percent of the characters.
Expanding this to the word level, 63 percent of the words were correctly identified. Adding hidden Markov modeling improved word recognition to 82.9 percent.
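The context-interpretation step described above amounts to Viterbi decoding: the network's per-character confidences serve as emission scores and the digram statistics as transition probabilities. The toy alphabet, probabilities, and function name below are illustrative, not the thesis's actual tables:

```python
import math

def viterbi_decode(nn_scores, bigram, chars):
    """Pick the most probable character sequence given per-position
    neural-network confidences (emissions) and digram statistics
    (transitions), using the Viterbi algorithm in log space."""
    # nn_scores: list of {char: confidence}, one dict per position.
    best = {c: math.log(nn_scores[0].get(c, 1e-9)) for c in chars}
    back = []
    for scores in nn_scores[1:]:
        new_best, ptr = {}, {}
        for c in chars:
            emit = math.log(scores.get(c, 1e-9))
            p, prev = max(
                (best[q] + math.log(bigram.get((q, c), 1e-9)) + emit, q)
                for q in chars)
            new_best[c], ptr[c] = p, prev
        back.append(ptr)
        best = new_best
    last = max(best, key=best.get)
    seq = [last]
    for ptr in reversed(back):        # follow backpointers
        seq.append(ptr[seq[-1]])
    return "".join(reversed(seq))

# The network is unsure at position 2; the digram prior a->b decides it:
pred = viterbi_decode(
    [{"a": 0.9, "b": 0.1}, {"a": 0.5, "b": 0.5}],
    {("a", "b"): 0.9, ("a", "a"): 0.1, ("b", "a"): 0.5, ("b", "b"): 0.5},
    "ab")
```

This is how language statistics can resolve characters the classifier alone cannot.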
Deep Learning for Technical Document Classification
In large technology companies, the requirements for managing and organizing
technical documents created by engineers and managers have increased
dramatically in recent years, which has led to a higher demand for more
scalable, accurate, and automated document classification. Prior studies have
only focused on processing text for classification, whereas technical documents
often contain multimodal information. To leverage multimodal information for
document classification and improve model performance, this paper presents a
novel multimodal deep learning architecture, TechDoc, which utilizes three
types of information, including natural language texts and descriptive images
within documents and the associations among the documents. The architecture
synthesizes the convolutional neural network, recurrent neural network, and
graph neural network through an integrated training process. We applied the
architecture to a large multimodal technical document database and trained the
model for classifying documents based on the hierarchical International Patent
Classification system. Our results show that TechDoc achieves higher
classification accuracy than unimodal methods and other state-of-the-art
benchmarks. The trained model can potentially be scaled to millions of
real-world multimodal technical documents, which is useful for data and
knowledge management in large technology companies and organizations.
Comment: 16 pages, 8 figures, 9 tables
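The fusion of the three information types might be sketched as a simple late-fusion classification head: embeddings from the text, image, and document-association branches are concatenated and passed through a softmax classifier. The shapes, names, and zero-weight example are illustrative assumptions, not the TechDoc implementation:

```python
import numpy as np

def late_fusion_classify(text_vec, image_vec, graph_vec, w, b):
    """Concatenate text (RNN), image (CNN), and document-association
    (GNN) embeddings, then apply a linear layer and softmax."""
    fused = np.concatenate([text_vec, image_vec, graph_vec])
    logits = w @ fused + b
    exp = np.exp(logits - logits.max())     # numerically stable softmax
    return exp / exp.sum()

# With zero weights every class gets equal probability:
probs = late_fusion_classify(
    np.ones(2), np.ones(2), np.ones(2), np.zeros((3, 6)), np.zeros(3))
```

In practice the three branch encoders and this head would be trained jointly, as the abstract's "integrated training process" suggests.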
Clinical Text Classification with Rule-based Features and Knowledge-guided Convolutional Neural Networks
Clinical text classification is an important problem in medical natural
language processing. Existing studies have conventionally focused on rule- or
knowledge-source-based feature engineering, but only a few have exploited the
effective feature-learning capability of deep learning methods. In this study,
we propose a novel approach which combines rule-based features and
knowledge-guided deep learning techniques for effective disease classification.
Critical steps of our method include identifying trigger phrases, predicting
classes with very few examples using trigger phrases and training a
convolutional neural network with word embeddings and Unified Medical Language
System (UMLS) entity embeddings. We evaluated our method on the 2008
Integrating Informatics with Biology and the Bedside (i2b2) obesity challenge.
The results show that our method outperforms state-of-the-art methods.
Comment: arXiv admin note: text overlap with arXiv:1806.04820 by other authors
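The trigger-phrase step described above could be sketched as a simple rule layer that fires before the learned model: if a class's trigger phrase appears in a note, predict that class directly, otherwise defer to the CNN. The phrases and labels below are invented examples, not terms from the i2b2 obesity challenge:

```python
def trigger_classify(note, triggers):
    """Rule-based pre-classifier: return every class whose trigger
    phrase occurs in the note, or a sentinel meaning 'use the CNN'."""
    text = note.lower()
    hits = [label for label, phrases in triggers.items()
            if any(p in text for p in phrases)]
    return hits or ["defer-to-cnn"]

# Hypothetical trigger table for one disease class:
triggers = {"obesity": ["morbid obesity", "bmi over 40"]}
```

This is how rules can cover classes with very few training examples while the CNN handles the rest.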
Retrieval-Augmented Meta Learning for Low-Resource Text Classification
Meta learning has achieved promising performance in low-resource text
classification, which aims to identify target classes with knowledge transferred
from source classes via sets of small tasks called episodes. However, due to
the limited training data in the meta-learning scenario and the inherent
properties of parameterized neural networks, poor generalization performance
has become a pressing problem that needs to be addressed. To deal with this
issue, we propose a meta-learning based method called Retrieval-Augmented Meta
Learning (RAML). It not only uses parameterization for inference but also
retrieves non-parametric knowledge from an external corpus to make inferences,
which greatly alleviates the problem of poor generalization performance caused
by the lack of diverse training data in meta-learning. This method differs from
previous models that solely rely on parameters, as it explicitly emphasizes the
importance of non-parametric knowledge, aiming to strike a balance between
parameterized neural networks and non-parametric knowledge. The model is
required to determine which knowledge to access and utilize during inference.
Additionally, our multi-view passages fusion network module can effectively and
efficiently integrate the retrieved information into the low-resource
classification task. Extensive experiments demonstrate that RAML
significantly outperforms current SOTA low-resource text classification models.
Comment: Under Review
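The retrieval-augmentation idea above might be sketched as follows: embed the query and an external corpus, fetch the top-k most similar passages by cosine similarity, and fuse them with the query representation before classification. Simple mean-pooling stands in for RAML's multi-view passage fusion network; all names and shapes are illustrative:

```python
import numpy as np

def retrieve_and_fuse(query_vec, corpus_vecs, top_k=2):
    """Retrieve the top-k most similar external passages (cosine
    similarity) and average them with the query representation."""
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    sims = c @ q                              # cosine similarities
    idx = np.argsort(sims)[::-1][:top_k]      # indices of best passages
    fused = np.mean(np.vstack([query_vec, corpus_vecs[idx]]), axis=0)
    return idx, fused

idx, fused = retrieve_and_fuse(
    np.array([1.0, 0.0]),
    np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 0.1]]))
```

The fused representation, rather than the query alone, is then scored against the episode's class prototypes, which is where the non-parametric knowledge enters the inference.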