Inference and Evaluation of the Multinomial Mixture Model for Text Clustering
In this article, we investigate the use of a probabilistic model for
unsupervised clustering in text collections. Unsupervised clustering has become
a basic module for many intelligent text processing applications, such as
information retrieval, text classification or information extraction. The model
considered in this contribution consists of a mixture of multinomial
distributions over the word counts, each component corresponding to a different
theme. We present and contrast various estimation procedures, which apply both
in supervised and unsupervised contexts. In supervised learning, this work
suggests a criterion for evaluating the posterior odds of new documents which
is more statistically sound than the "naive Bayes" approach. In an unsupervised
context, we propose measures to set up a systematic evaluation framework and
start with examining the Expectation-Maximization (EM) algorithm as the basic
tool for inference. We discuss the importance of initialization and the
influence of other features such as the smoothing strategy or the size of the
vocabulary, thereby illustrating the difficulties incurred by the high
dimensionality of the parameter space. We also propose a heuristic algorithm
based on iterative EM with vocabulary reduction to solve this problem. Using
the fact that the latent variables can be analytically integrated out, we
finally show that the Gibbs sampling algorithm is tractable and compares
favorably to the basic expectation-maximization approach.
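The EM procedure this abstract takes as its baseline can be sketched in a few lines. The toy corpus, the smoothing constant, and all variable names below are illustrative assumptions, not the paper's actual setup; the constant multinomial coefficient is dropped from the log-likelihood since it does not affect the responsibilities.

```python
import math, random

def em_multinomial_mixture(docs, k, n_iter=50, alpha=0.01, seed=0):
    """EM for a mixture of k multinomials over word-count vectors.
    alpha is a small smoothing constant for the word probabilities
    (the abstract notes that the smoothing strategy matters)."""
    v = len(docs[0])                          # vocabulary size
    rng = random.Random(seed)
    pi = [1.0 / k] * k                        # mixing weights
    theta = []                                # per-component word distributions
    for _ in range(k):                        # random initialization
        row = [rng.random() + 0.1 for _ in range(v)]
        s = sum(row)
        theta.append([p / s for p in row])
    for _ in range(n_iter):
        # E-step: responsibilities p(z | doc); the multinomial coefficient
        # is constant across components, so it is omitted.
        resp = []
        for x in docs:
            logp = [math.log(pi[z]) +
                    sum(c * math.log(theta[z][w]) for w, c in enumerate(x) if c)
                    for z in range(k)]
            m = max(logp)                     # log-sum-exp stabilization
            p = [math.exp(l - m) for l in logp]
            s = sum(p)
            resp.append([q / s for q in p])
        # M-step: re-estimate mixing weights and smoothed word probabilities
        pi = [sum(r[z] for r in resp) / len(docs) for z in range(k)]
        theta = []
        for z in range(k):
            counts = [alpha + sum(resp[i][z] * docs[i][w]
                                  for i in range(len(docs)))
                      for w in range(v)]
            s = sum(counts)
            theta.append([c / s for c in counts])
    return pi, theta, resp

# Toy corpus with two themes: docs 0-1 use words 0-1, docs 2-3 use words 2-3.
docs = [[5, 4, 0, 0], [6, 3, 0, 0], [0, 0, 4, 5], [0, 0, 3, 6]]
pi, theta, resp = em_multinomial_mixture(docs, k=2)
```

On such cleanly separated themes EM recovers the partition in a few iterations; the abstract's caution about initialization and vocabulary size becomes relevant with many components and a high-dimensional parameter space.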
Construction and evaluation of classifiers for forensic document analysis
In this study we illustrate a statistical approach to questioned document
examination. Specifically, we consider the construction of three classifiers
that predict the writer of a sample document based on categorical data. To
evaluate these classifiers, we use a data set with a large number of writers
and a small number of writing samples per writer. Since the resulting
classifiers were found to have near-perfect accuracy using leave-one-out
cross-validation, we propose a novel Bayesian-based cross-validation method for
evaluating the classifiers.
Comment: Published at http://dx.doi.org/10.1214/10-AOAS379 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
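The leave-one-out protocol mentioned above is simple to state in code: train on every sample but one, predict the held-out sample, and repeat. The nearest-neighbor classifier and the handwriting-feature data below are hypothetical stand-ins, not the paper's classifiers or data set.

```python
def loo_accuracy(X, y, predict):
    """Leave-one-out cross-validation: for each sample, fit on the
    remaining samples and score the prediction on the held-out one."""
    hits = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        hits += predict(train_X, train_y, X[i]) == y[i]
    return hits / len(X)

def nn_hamming(train_X, train_y, x):
    # 1-nearest-neighbor under Hamming distance on categorical features,
    # a hypothetical stand-in for the paper's classifiers.
    d = [sum(a != b for a, b in zip(row, x)) for row in train_X]
    return train_y[d.index(min(d))]

# Hypothetical categorical writing features per sample.
X = [("loop", "slant"), ("loop", "slant"), ("hook", "flat"), ("hook", "flat")]
y = ["writer_A", "writer_A", "writer_B", "writer_B"]
acc = loo_accuracy(X, y, nn_hamming)
```

With few samples per writer, near-perfect leave-one-out scores saturate quickly, which is the situation motivating the paper's alternative evaluation method.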
Comparing SVM and Naive Bayes classifiers for text categorization with Wikitology as knowledge enrichment
The activity of labeling documents according to their content is known as
text categorization. Many experiments have been carried out to enhance text
categorization by adding background knowledge to the document using knowledge
repositories such as WordNet, the Open Directory Project (ODP), Wikipedia and
Wikitology. In our previous work, we carried out intensive experiments by
extracting knowledge from Wikitology and evaluating the results with a Support
Vector Machine under 10-fold cross-validation. The results clearly indicate
that Wikitology is far better than the other knowledge bases. In this paper we
compare Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers under
text enrichment through Wikitology. We validated the results with 10-fold
cross-validation and show that NB gives an improvement of +28.78%, whereas
SVM gives an improvement of +6.36% over the baseline results. The Naïve Bayes
classifier is the better choice when enrichment is performed through an
external knowledge base.
Comment: 5 pages
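The NB side of the comparison can be sketched as a multinomial Naïve Bayes classifier over token lists, where knowledge enrichment simply appends extra tokens to each document. The toy documents, labels, and smoothing value below are illustrative assumptions, not the paper's Wikitology features.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels, alpha=1.0):
    """Multinomial Naive Bayes with Laplace smoothing over token lists.
    Enrichment from a knowledge base would just extend each token list."""
    vocab = {w for d in docs for w in d}
    counts = defaultdict(Counter)             # label -> word counts
    prior = Counter(labels)
    model = {}
    for d, y in zip(docs, labels):
        counts[y].update(d)
    for y in prior:
        total = sum(counts[y].values()) + alpha * len(vocab)
        model[y] = (math.log(prior[y] / len(docs)),                 # log prior
                    {w: math.log((counts[y][w] + alpha) / total)    # per-word
                     for w in vocab},
                    math.log(alpha / total))                        # unseen word
    return model

def predict_nb(model, doc):
    def score(y):
        logprior, logp, unk = model[y]
        return logprior + sum(logp.get(w, unk) for w in doc)
    return max(model, key=score)

# Hypothetical toy corpus; real inputs would be enriched term vectors.
docs = [["goal", "match"], ["goal", "team"],
        ["stock", "market"], ["bond", "market"]]
labels = ["sport", "sport", "finance", "finance"]
model = train_nb(docs, labels)
pred = predict_nb(model, ["match", "team"])
```

The same enriched token lists could be fed to an SVM over term-frequency vectors, which is how the two classifiers are put on equal footing in this kind of comparison.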
A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts
Sentiment analysis seeks to identify the viewpoint(s) underlying a text span;
an example application is classifying a movie review as "thumbs up" or "thumbs
down". To determine this sentiment polarity, we propose a novel
machine-learning method that applies text-categorization techniques to just the
subjective portions of the document. Extracting these portions can be
implemented using efficient techniques for finding minimum cuts in graphs; this
greatly facilitates incorporation of cross-sentence contextual constraints.
Comment: Data available at http://www.cs.cornell.edu/people/pabo/movie-review-data
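The minimum-cut construction can be sketched as a max-flow computation: each sentence gets an edge from a "subjective" source and to an "objective" sink weighted by individual scores, plus association edges between nearby sentences; the source side of a minimum cut is kept as the subjective extract. The scores and sentence names below are hypothetical, and Edmonds-Karp stands in for whatever min-cut routine the paper used.

```python
from collections import deque

def min_cut_source_side(edges, s, t):
    """Edmonds-Karp max flow on a directed graph given as
    {(u, v): capacity}; returns the source side of a minimum cut."""
    res = {}                                  # residual capacities
    for (u, v), c in edges.items():
        res.setdefault(u, {})[v] = res.get(u, {}).get(v, 0) + c
        res.setdefault(v, {}).setdefault(u, 0)
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        # Find the bottleneck and push flow along the path
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
    # Source side of the cut = nodes still reachable from s
    side, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v, c in res[u].items():
            if c > 0 and v not in side:
                side.add(v)
                q.append(v)
    return side

# Hypothetical per-sentence subjectivity scores (source/sink edges) plus a
# proximity edge tying sentences s1 and s2 together across sentences.
edges = {
    ("subj", "s1"): 8, ("s1", "obj"): 2,
    ("subj", "s2"): 5, ("s2", "obj"): 5,
    ("subj", "s3"): 1, ("s3", "obj"): 9,
    ("s1", "s2"): 4, ("s2", "s1"): 4,   # cross-sentence association
}
subjective = min_cut_source_side(edges, "subj", "obj") - {"subj"}
```

Here s2's individual scores are a tie, but the association edge to the strongly subjective s1 pulls it onto the subjective side of the cut, which is exactly the cross-sentence contextual effect the abstract highlights.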