Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval
This paper presents a new state-of-the-art for document image classification
and retrieval, using features learned by deep convolutional neural networks
(CNNs). In object and scene analysis, deep neural nets are capable of learning
a hierarchical chain of abstraction from pixel inputs to concise and
descriptive representations. The current work explores this capacity in the
realm of document analysis, and confirms that this representation strategy is
superior to a variety of popular hand-crafted alternatives. Experiments also
show that (i) features extracted from CNNs are robust to compression, (ii) CNNs
trained on non-document images transfer well to document analysis tasks, and
(iii) enforcing region-specific feature-learning is unnecessary given
sufficient training data. This work also makes available a new labelled subset
of the IIT-CDIP collection, containing 400,000 document images across 16
categories, useful for training new CNNs for document analysis.
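The transfer-learning recipe this abstract evaluates, namely treating a CNN trained on non-document images as a fixed feature extractor for document images, can be illustrated with a toy sketch. The random filter bank below merely stands in for pretrained convolutional filters, and the filter sizes and pooling choice are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def cnn_features(image, filters):
    """Fixed-length descriptor: convolve a filter bank over the image,
    apply ReLU, then global-average-pool each feature map."""
    h, w = image.shape
    k = filters.shape[1]
    feats = []
    for f in filters:
        # valid convolution via an explicit sliding window
        out = np.zeros((h - k + 1, w - k + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + k, j:j + k] * f)
        feats.append(np.maximum(out, 0).mean())  # ReLU + global average pool
    return np.array(feats)

rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 3, 3))  # stand-in for 8 pretrained 3x3 filters
doc = rng.random((16, 16))                # stand-in document image
vec = cnn_features(doc, filters)          # descriptor fed to a downstream classifier
```

In the paper's setting the descriptor would come from the upper layers of a network pretrained on natural images, and a simple classifier trained on these fixed features handles the 16 document categories.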
Multi-task CNN Model for Attribute Prediction
This paper proposes a joint multi-task learning algorithm to better predict
attributes in images using deep convolutional neural networks (CNN). We
consider learning binary semantic attributes through a multi-task CNN model,
where each CNN will predict one binary attribute. The multi-task learning
allows CNN models to simultaneously share visual knowledge among different
attribute categories. Each CNN will generate attribute-specific feature
representations, and then we apply multi-task learning on the features to
predict their attributes. In our multi-task framework, we propose a method to
decompose the overall model's parameters into a latent task matrix and
combination matrix. Furthermore, under-sampled classifiers can leverage shared
statistics from other classifiers to improve their performance. Natural
grouping of attributes is applied such that attributes in the same group are
encouraged to share more knowledge. Meanwhile, attributes in different groups
will generally compete with each other, and consequently share less knowledge.
We show the effectiveness of our method on two popular attribute datasets.
Comment: 11 pages, 3 figures, IEEE Transactions paper
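The decomposition described above, splitting the stacked per-attribute classifier weights into a latent task matrix and a combination matrix, can be sketched numerically. The SVD-based factorization below is only an illustration of the idea (shared latent tasks mixed per attribute); the paper learns the two factors jointly with the CNNs rather than computing them post hoc.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, k = 64, 10, 4               # feature dim, attribute tasks, latent tasks
W = rng.standard_normal((d, T))   # column t = classifier weights for attribute t

# Decompose W ~= L @ S: L holds k shared latent tasks, S mixes them per attribute.
U, sv, Vt = np.linalg.svd(W, full_matrices=False)
L = U[:, :k] * sv[:k]             # latent task matrix (d x k)
S = Vt[:k]                        # combination matrix (k x T)

# Under-sampled attributes benefit because their column of S reuses
# latent tasks estimated from better-sampled attributes.
err = np.linalg.norm(W - L @ S) / np.linalg.norm(W)
```

Group structure would enter as a penalty encouraging similar columns of S within a group; that regularizer is omitted here.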
CEAI: CCM based Email Authorship Identification Model
In this paper we present a model for email authorship identification (EAI)
that employs a Cluster-based Classification Model (CCM). Traditionally,
stylometric features have been successfully employed in various authorship
analysis tasks; we extend the traditional feature-set to include some more
interesting and effective features for email authorship identification (e.g.
the last punctuation mark used in an email, the tendency of an author to use
capitalization at the start of an email, or the punctuation after a greeting or
farewell). We also included Info Gain feature selection based content features.
It is observed that the use of such features in the authorship identification
process has a positive impact on the accuracy of the authorship identification
task. We performed experiments to justify our arguments and compared the
results with other base line models. Experimental results reveal that the
proposed CCM-based email authorship identification model, along with the
proposed feature set, outperforms the state-of-the-art support vector machine
(SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The
proposed model attains accuracies of 94%, 89%, and 81% for 10, 25, and 50
authors, respectively, on the Enron dataset, and 89.5% on a real email
dataset constructed by the authors. Notably, the Enron results are achieved
on a considerably larger number of authors than in the models proposed by
Iqbal et al. [1, 2].
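A few of the email-specific stylometric cues this abstract adds to the traditional feature set (last punctuation mark, capitalization at the start, punctuation after a greeting) can be extracted with simple string handling. The function and greeting word list below are illustrative assumptions, not the paper's exact feature definitions.

```python
import string

def email_style_features(email: str) -> dict:
    """Extract a few email-specific stylometric cues (illustrative names)."""
    lines = [l for l in email.splitlines() if l.strip()]
    body = email.strip()
    # last punctuation mark used anywhere in the email
    last_punct = next((c for c in reversed(body) if c in string.punctuation), "")
    # does the author capitalize the start of the email?
    starts_capitalized = body[0].isupper() if body else False
    # punctuation following a greeting line, if any
    greeting_punct = ""
    if lines and lines[0].split()[0].rstrip(",!").lower() in {"hi", "hello", "dear", "hey"}:
        tail = lines[0].strip()[-1]
        greeting_punct = tail if tail in string.punctuation else ""
    return {"last_punct": last_punct,
            "starts_capitalized": starts_capitalized,
            "greeting_punct": greeting_punct}

f = email_style_features("Hi John,\nPlease see attached.\nThanks!")
```

In the CCM pipeline, vectors like these (together with classic stylometric and Information-Gain-selected content features) are first clustered per author and the resulting clusters drive classification.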
Learning Convolutional Text Representations for Visual Question Answering
Visual question answering is a recently proposed artificial intelligence task
that requires a deep understanding of both images and texts. In deep learning,
images are typically modeled through convolutional neural networks, and texts
are typically modeled through recurrent neural networks. While the requirement
for modeling images is similar to traditional computer vision tasks, such as
object recognition and image classification, visual question answering raises a
different need for textual representation as compared to other natural language
processing tasks. In this work, we perform a detailed analysis on natural
language questions in visual question answering. Based on the analysis, we
propose to rely on convolutional neural networks for learning textual
representations. By exploring the various properties of convolutional neural
networks specialized for text data, such as width and depth, we present our
"CNN Inception + Gate" model. We show that our model improves question
representations and thus the overall accuracy of visual question answering
models. We also show that the text representation requirement in visual
question answering is more complicated and comprehensive than that in
conventional natural language processing tasks, making it a better task to
evaluate textual representation methods. Shallow models like fastText, which
can obtain comparable results with deep learning models in tasks like text
classification, are not suitable for visual question answering.
Comment: Conference paper at SDM 2018. https://github.com/divelab/sva
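The "CNN Inception + Gate" idea, parallel convolutions of several widths over the word embeddings, each modulated by a sigmoid gate, then pooled over time, can be sketched as below. Filter widths, dimensions, and the use of max-pooling are assumptions for illustration; the paper's exact configuration may differ.

```python
import numpy as np

def conv1d(x, w):
    """Valid 1-D convolution over the sequence axis: x is (T, d), w is (k, d, m)."""
    T, d = x.shape
    k, _, m = w.shape
    return np.stack([np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1]))
                     for t in range(T - k + 1)])  # (T-k+1, m)

def inception_gate(x, banks):
    """Parallel conv widths, each gated by a sigmoid branch, max-pooled
    over time and concatenated into one question representation."""
    pooled = []
    for w_content, w_gate in banks:
        a = conv1d(x, w_content)
        g = 1.0 / (1.0 + np.exp(-conv1d(x, w_gate)))  # sigmoid gate
        pooled.append((a * g).max(axis=0))            # max over time
    return np.concatenate(pooled)

rng = np.random.default_rng(2)
x = rng.standard_normal((12, 16))  # 12 tokens, 16-dim word embeddings
banks = [(rng.standard_normal((k, 16, 8)),
          rng.standard_normal((k, 16, 8))) for k in (2, 3, 4)]
q = inception_gate(x, banks)       # 24-dim question representation
```

The multiple widths capture n-gram-like patterns of different lengths (the "Inception" part), while the gates let the model suppress uninformative positions before pooling.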
Effective Unsupervised Author Disambiguation with Relative Frequencies
This work addresses the problem of author name homonymy in the Web of
Science. Aiming for an efficient, simple and straightforward solution, we
introduce a novel probabilistic similarity measure for author name
disambiguation based on feature overlap. Using the ResearcherID identifiers
available for
a subset of the Web of Science, we evaluate the application of this measure in
the context of agglomeratively clustering author mentions. We focus on a
concise evaluation that shows clearly for which problem setups and at which
time during the clustering process our approach works best. In contrast to most
other works in this field, we are sceptical about the performance of author
name disambiguation methods in general and compare our approach to the trivial
single-cluster baseline. We present results separately for each correct
clustering size because, when all cases are treated together, the trivial
baseline and more sophisticated approaches are hardly distinguishable in
terms of evaluation results. Our model shows state-of-the-art performance
for all correct clustering sizes without any discriminative training and with
tuning only one convergence parameter.
Comment: Proceedings of JCDL 201
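The core loop, a feature-overlap similarity that weights rare features more heavily, driving agglomerative clustering of author mentions, can be sketched as follows. The similarity formula here is an illustrative stand-in for the paper's relative-frequency measure, and the feature names and threshold are assumptions.

```python
from collections import Counter
from itertools import combinations

def overlap_similarity(a, b, freq, n_mentions):
    """Shared rare features (coauthors, journals, terms) count more than
    shared common ones; illustrative, not the paper's exact formula."""
    shared = a & b
    if not shared:
        return 0.0
    # a feature seen in few mentions is strong evidence of identity
    return sum(1.0 - freq[f] / n_mentions for f in shared) / len(a | b)

def agglomerate(mentions, threshold):
    """Single-link agglomerative clustering over author mentions."""
    freq = Counter(f for m in mentions for f in m)
    clusters = [{i} for i in range(len(mentions))]
    merged = True
    while merged:
        merged = False
        for ci, cj in combinations(range(len(clusters)), 2):
            sim = max(overlap_similarity(mentions[i], mentions[j],
                                         freq, len(mentions))
                      for i in clusters[ci] for j in clusters[cj])
            if sim >= threshold:
                clusters[ci] |= clusters[cj]
                del clusters[cj]
                merged = True
                break
    return clusters

mentions = [{"coauthor:smith", "journal:nature"},
            {"coauthor:smith", "term:graphene"},
            {"journal:jacs", "term:catalysis"}]
clusters = agglomerate(mentions, threshold=0.1)
```

Evaluating against the ResearcherID ground truth then reduces to comparing these clusters with the known identities; the threshold plays the role of the single convergence parameter mentioned above.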