Hybrid Model For Word Prediction Using Naive Bayes and Latent Information
Historically, the Natural Language Processing area has received considerable
attention from researchers. One of the main motivations behind this interest
is the word prediction problem: given a set of words in a sentence, recommend
the next word. In the literature, this problem is addressed by methods based
on syntactic or semantic analysis. On its own, neither kind of analysis
achieves practical results for end-user applications. For instance, Latent
Semantic Analysis can capture semantic features of text, but cannot suggest
words according to syntactic rules. On the other hand, there are models that
combine both analyses and achieve state-of-the-art results, e.g. Deep
Learning. These models can demand high computational effort, which can make
them infeasible for certain types of applications. With advances in technology
and mathematical models, it is possible to develop faster and more accurate
systems. This work proposes a hybrid word suggestion model, based on Naive
Bayes and Latent Semantic Analysis, that considers the neighbouring words
around unfilled gaps. Results show that this model achieves 44.2% accuracy on
the MSR Sentence Completion Challenge.
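The Naive Bayes half of such a hybrid can be sketched in a few lines: score each candidate word for a gap by its prior times the probability of the observed neighbouring words given it. This is a toy illustration on a made-up corpus, not the paper's model (the corpus, smoothing, and context window here are all assumptions):

```python
from collections import Counter, defaultdict

def train(sentences):
    """Count word frequencies and word/context co-occurrences."""
    word_counts = Counter()
    cooc = defaultdict(Counter)  # cooc[w][c] = times c appears in w's sentence
    for sent in sentences:
        for i, w in enumerate(sent):
            word_counts[w] += 1
            for j, c in enumerate(sent):
                if i != j:
                    cooc[w][c] += 1
    return word_counts, cooc

def suggest(context, word_counts, cooc, k=1):
    """Rank candidates by P(w) * prod_c P(c | w), with add-one smoothing."""
    total = sum(word_counts.values())
    vocab = len(word_counts)
    scored = []
    for w, n in word_counts.items():
        score = n / total
        for c in context:
            score *= (cooc[w][c] + 1) / (n + vocab)
        scored.append((score, w))
    return [w for _, w in sorted(scored, reverse=True)[:k]]

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
wc, co = train(corpus)
print(suggest(["sat", "on"], wc, co, k=3))
```

In the paper's hybrid, scores like these would be combined with LSA similarity over the gap's neighbourhood; here only the Bayesian scoring step is shown.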
Altitude Training: Strong Bounds for Single-Layer Dropout
Dropout training, originally designed for deep neural networks, has been
successful on high-dimensional single-layer natural language tasks. This paper
proposes a theoretical explanation for this phenomenon: we show that, under a
generative Poisson topic model with long documents, dropout training improves
the exponent in the generalization bound for empirical risk minimization.
Dropout achieves this gain much like a marathon runner who practices at
altitude: once a classifier learns to perform reasonably well on training
examples that have been artificially corrupted by dropout, it will do very well
on the uncorrupted test set. We also show that, under similar conditions,
dropout preserves the Bayes decision boundary and should therefore induce
minimal bias in high dimensions.
Comment: Advances in Neural Information Processing Systems (NIPS), 2014.
Topic Models Conditioned on Arbitrary Features with Dirichlet-multinomial Regression
Although fully generative models have been successfully used to model the
contents of text documents, they are often awkward to apply to combinations of
text data and document metadata. In this paper we propose a
Dirichlet-multinomial regression (DMR) topic model that includes a log-linear
prior on document-topic distributions that is a function of observed features
of the document, such as author, publication venue, references, and dates. We
show that by selecting appropriate features, DMR topic models can meet or
exceed the performance of several previously published topic models designed
for specific data.
Comment: Appears in Proceedings of the Twenty-Fourth Conference on Uncertainty
in Artificial Intelligence (UAI 2008).
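The log-linear prior at the heart of DMR can be written out directly: each document's Dirichlet parameters are exponentiated linear functions of its observed features. A minimal numeric sketch of just that prior (feature names, sizes, and the random lambda values are illustrative; in the real model lambda is learned):

```python
import numpy as np

rng = np.random.default_rng(0)
n_topics, n_features = 4, 3

# Per-topic regression weights over document features (learned in DMR).
lam = rng.normal(0, 0.5, size=(n_topics, n_features))

# Observed features of one document, e.g. [bias, venue indicator, author indicator].
x_d = np.array([1.0, 0.0, 1.0])

# Document-specific Dirichlet parameters: alpha_{d,t} = exp(x_d . lambda_t).
alpha_d = np.exp(lam @ x_d)

# Sample this document's topic distribution from its personalised prior.
theta_d = rng.dirichlet(alpha_d)
print(alpha_d, theta_d)
```

Because the exponential keeps every alpha positive, any real-valued features (dates, citation counts, one-hot authors) can be plugged in without constraint.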
Social Media Based Deep Auto-Encoder Model for Clinical Recommendation
One of the most actively studied topics in modern medicine is the use of deep learning and patient clinical data to make medication and adverse drug reaction (ADR) recommendations. However, the clinical community has yet to build a model that hybridises the recommendation system. This research proposes a social-media-learning-based deep auto-encoder model for clinical recommendation: a hybrid model that combines a deep auto-encoder with the information of the top-n most similar co-patients in a joint optimisation function (SAeCR). Implicit clinical information is extracted using network representation learning. Three experiments were conducted on two real-world social network data sets to assess the efficacy of the SAeCR model. The experiments demonstrate that the proposed model outperforms the other classification methods on a larger and sparser data set. In addition, social network data can help doctors determine the nature of a patient's relationship with a co-patient. The SAeCR model is more effective because it incorporates insights from network representation learning and social theory.
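The shape of such a joint objective, reconstruction error plus a term pulling a patient's hidden code toward the codes of their top-n co-patients, can be sketched roughly as follows. This is a toy illustration under assumed shapes and a single hidden layer, not the SAeCR implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_items, n_hidden = 6, 8, 3

X = rng.random((n_patients, n_items))            # patient-item interactions
W_enc = rng.normal(0, 0.1, (n_items, n_hidden))  # encoder weights
W_dec = rng.normal(0, 0.1, (n_hidden, n_items))  # decoder weights
neighbors = {0: [1, 2]}                          # top-n co-patients of patient 0

def joint_loss(X, W_enc, W_dec, neighbors, mu=0.5):
    """Auto-encoder reconstruction loss plus a co-patient similarity penalty."""
    H = np.tanh(X @ W_enc)                       # hidden codes per patient
    recon = np.mean((X - H @ W_dec) ** 2)        # reconstruction term
    social = sum(np.mean((H[p] - H[q]) ** 2)     # pull codes of co-patients together
                 for p, qs in neighbors.items() for q in qs)
    return recon + mu * social

print(joint_loss(X, W_enc, W_dec, neighbors))
```

Minimising an objective of this form jointly fits the auto-encoder and encourages patients with similar co-patient neighbourhoods to share representations, which is the hybridisation the abstract describes.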
A Winnow-Based Approach to Context-Sensitive Spelling Correction
A large class of machine-learning problems in natural language require the
characterization of linguistic context. Two characteristic properties of such
problems are that their feature space is of very high dimensionality, and their
target concepts refer to only a small subset of the features in the space.
Under such conditions, multiplicative weight-update algorithms such as Winnow
have been shown to have exceptionally good theoretical properties. We present
an algorithm combining variants of Winnow and weighted-majority voting, and
apply it to a problem in the aforementioned class: context-sensitive spelling
correction. This is the task of fixing spelling errors that happen to result in
valid words, such as substituting "to" for "too", "casual" for "causal", etc.
We evaluate our algorithm, WinSpell, by comparing it against BaySpell, a
statistics-based method representing the state of the art for this task. We
find: (1) When run with a full (unpruned) set of features, WinSpell achieves
accuracies significantly higher than BaySpell was able to achieve in either the
pruned or unpruned condition; (2) When compared with other systems in the
literature, WinSpell exhibits the highest performance; (3) The primary reason
that WinSpell outperforms BaySpell is that WinSpell learns a better linear
separator; (4) When run on a test set drawn from a different corpus than the
training set was drawn from, WinSpell is better able than BaySpell to adapt,
using a strategy we will present that combines supervised learning on the
training set with unsupervised learning on the (noisy) test set.
Comment: To appear in Machine Learning, Special Issue on Natural Language
Learning, 1999. 25 pages.
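The multiplicative weight-update rule the abstract refers to is easy to state: on a mistake, Winnow multiplies (promotes) or divides (demotes) the weights of the active features, rather than adding to them as a perceptron would. A minimal sketch on a toy task (the promotion factor, threshold, and data are illustrative choices, not the WinSpell configuration):

```python
def winnow_train(examples, n_features, alpha=2.0, threshold=None):
    """Basic Winnow: multiplicative updates on mistakes only."""
    if threshold is None:
        threshold = n_features / 2
    w = [1.0] * n_features
    for active, label in examples:       # active: indices of features that fired
        pred = sum(w[i] for i in active) >= threshold
        if pred and not label:           # false positive: demote active weights
            for i in active:
                w[i] /= alpha
        elif not pred and label:         # false negative: promote active weights
            for i in active:
                w[i] *= alpha
    return w

# Toy task: the label is true iff feature 0 is active.
examples = [([0, 3], True), ([1, 2], False), ([0, 5], True), ([2, 4], False)] * 5
w = winnow_train(examples, n_features=6)
print(w)  # feature 0's weight ends up above the irrelevant features'
```

Because irrelevant features are only ever touched on mistakes, Winnow's mistake bound grows logarithmically with the number of features, which is why it suits the sparse, high-dimensional spelling-context features the paper describes.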