Two-Level Text Classification Using Hybrid Machine Learning Techniques
Nowadays, documents are increasingly being associated with multi-level
category hierarchies rather than a flat category scheme. To access these
documents in real time, we need fast automatic methods to navigate these
hierarchies. Today’s vast data repositories such as the web also contain many
broad domains of data which are quite distinct from each other e.g. medicine,
education, sports and politics. Each domain constitutes a subspace of the data
within which the documents are similar to each other but quite distinct from the
documents in another subspace. The data within these domains is frequently
further divided into many subcategories.
Subspace learning is a technique popular in non-text domains such as image recognition, where it is used to increase speed and accuracy. Subspace analysis lends itself naturally to the idea of hybrid classifiers: each subspace can be processed by the classifier best suited to the characteristics of that particular subspace. Instead of using the complete set of full-space feature dimensions, classifier performance can be boosted by using only a subset of the dimensions.
This thesis presents a novel hybrid parallel architecture that uses separate classifiers, trained on separate subspaces, to improve two-level text classification. The classifier to be used on a particular input and the relevant feature subset to be extracted are determined dynamically by a novel method based on the maximum significance value. A novel vector representation that enhances the distinction between classes within the subspace is also developed. This system, the Hybrid Parallel Classifier, was compared against several single-classifier baselines, such as the Multilayer Perceptron, and was found to be faster and to achieve higher two-level classification accuracy. The improvement in performance was even greater for more complex category hierarchies.
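To make the architecture concrete, here is a minimal sketch assuming scikit-learn: one classifier per subspace, each trained on only that subspace's feature subset, with each input routed to the subspace whose score is maximal. The subspace partition, feature indices, classifier choices, and the centroid-similarity score, which stands in for the thesis's maximum significance value (not specified in this abstract), are all illustrative assumptions.

```python
# Minimal sketch of a hybrid parallel classifier; all data and choices
# below are illustrative assumptions, not the thesis's configuration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))         # toy documents, 50 features
y = rng.integers(0, 2, size=200)       # toy binary labels
domain = rng.integers(0, 2, size=200)  # toy subspace assignment

# One classifier per subspace, each seeing only its own feature subset.
subspaces = {
    0: {"features": np.arange(0, 25), "clf": LogisticRegression(max_iter=1000)},
    1: {"features": np.arange(25, 50), "clf": MLPClassifier(max_iter=500)},
}
centroids = {}
for s, spec in subspaces.items():
    mask = domain == s
    spec["clf"].fit(X[mask][:, spec["features"]], y[mask])
    centroids[s] = X[mask].mean(axis=0)  # used by the routing score below

def route_and_classify(x):
    """Pick the subspace with the highest score (here: cosine similarity to
    the subspace centroid, a stand-in for the significance value), then
    classify with that subspace's classifier on its feature subset only."""
    def score(s):
        c = centroids[s]
        return x @ c / (np.linalg.norm(x) * np.linalg.norm(c) + 1e-12)
    s = max(subspaces, key=score)
    spec = subspaces[s]
    return spec["clf"].predict(x[spec["features"]][None, :])[0]

print(route_and_classify(X[0]))
```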
Cross-Lingual Adaptation using Structural Correspondence Learning
Cross-lingual adaptation, a special case of domain adaptation, refers to the
transfer of classification knowledge between two languages. In this article we describe an extension of Structural Correspondence Learning (SCL), a recently proposed algorithm for domain adaptation, to the cross-lingual setting. The
proposed method uses unlabeled documents from both languages, along with a word
translation oracle, to induce cross-lingual feature correspondences. From these
correspondences a cross-lingual representation is created that enables the
transfer of classification knowledge from the source to the target language.
The main advantages of this method over existing approaches are its resource efficiency and task specificity.
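To give a feel for how SCL induces such a representation, the sketch below follows the general pivot recipe: features aligned by the translation oracle serve as pivots, linear predictors of each pivot are trained on the unlabeled documents of both languages, and an SVD of the stacked predictor weights yields a shared projection under which a source-trained classifier can be applied to the target language. The bag-of-words setup, the pivot index pairing, and all hyperparameters are assumptions for illustration, not the authors' exact configuration; a recent scikit-learn is assumed.

```python
# Hedged sketch of cross-lingual Structural Correspondence Learning.
# Data, pivot choices, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier, LogisticRegression

rng = np.random.default_rng(0)
V = 300  # size of the combined source+target feature space
Xs_unlab = (rng.random((500, V)) < 0.05).astype(float)  # unlabeled source docs
Xt_unlab = (rng.random((500, V)) < 0.05).astype(float)  # unlabeled target docs
X_unlab = np.vstack([Xs_unlab, Xt_unlab])

# Pivots: feature pairs aligned by the word-translation oracle
# (the index pairing here is a toy assumption).
pivots = [(i, V // 2 + i) for i in range(20)]

# For each pivot pair, predict its presence from all other features on the
# unlabeled data; the learned weights encode feature correspondences.
W = []
for src_i, tgt_i in pivots:
    label = ((X_unlab[:, src_i] + X_unlab[:, tgt_i]) > 0).astype(int)
    feats = X_unlab.copy()
    feats[:, [src_i, tgt_i]] = 0.0  # hide the pivot from its own predictor
    clf = SGDClassifier(loss="log_loss", max_iter=20, tol=None, random_state=0)
    clf.fit(feats, label)
    W.append(clf.coef_.ravel())
W = np.array(W).T  # shape (V, n_pivots)

# The top singular vectors of W define the cross-lingual projection.
U, _, _ = np.linalg.svd(W, full_matrices=False)
theta = U[:, :10]

# Train on labeled source documents in the induced representation; the same
# projection then applies unchanged to target-language documents.
Xs_lab = (rng.random((200, V)) < 0.05).astype(float)
ys = rng.integers(0, 2, size=200)  # toy source labels
final_clf = LogisticRegression(max_iter=1000).fit(Xs_lab @ theta, ys)
```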
We conduct experiments in cross-language topic and sentiment classification, with English as the source language and German, French, and Japanese as the target languages. The results show a significant improvement of the proposed method over a machine translation baseline, reducing the relative error due to cross-lingual adaptation by an average of 30% (topic classification) and 59% (sentiment classification). We further report on empirical analyses that reveal insights into the use of unlabeled data, the sensitivity to important hyperparameters, and the nature of the induced cross-lingual correspondences.
Uncovering Meanings of Embeddings via Partial Orthogonality
Machine learning tools often rely on embedding text as vectors of real
numbers. In this paper, we study how the semantic structure of language is
encoded in the algebraic structure of such embeddings. Specifically, we look at
a notion of "semantic independence" capturing the idea that, e.g., "eggplant" and "tomato" are independent given "vegetable". Although such
examples are intuitive, it is difficult to formalize such a notion of semantic
independence. The key observation here is that any sensible formalization
should obey a set of so-called independence axioms, and thus any algebraic
encoding of this structure should also obey these axioms. This leads us
naturally to use partial orthogonality as the relevant algebraic structure. We
develop theory and methods that allow us to demonstrate that partial
orthogonality does indeed capture semantic independence. Complementary to this,
we also introduce the concept of independence-preserving embeddings, in which embeddings preserve the conditional independence structure of a distribution, and we prove the existence of such embeddings and of approximations to them.
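As a minimal illustration of the algebraic notion involved, the sketch below assumes the usual definition of partial orthogonality: x and y are partially orthogonal given a set Z when their residuals, after projecting out span(Z), are orthogonal. The toy vectors mirror the eggplant/tomato example and are illustrative, not real word embeddings.

```python
# Partial orthogonality check in numpy: x and y are partially orthogonal
# given Z iff <(I - P_Z) x, (I - P_Z) y> = 0, where P_Z is the orthogonal
# projector onto the column span of Z. Vectors below are toy examples.
import numpy as np

def partially_orthogonal(x, y, Z, tol=1e-8):
    Q, _ = np.linalg.qr(Z)      # orthonormal basis of span(Z)
    rx = x - Q @ (Q.T @ x)      # residual of x after projecting out span(Z)
    ry = y - Q @ (Q.T @ y)      # residual of y
    return abs(rx @ ry) < tol

# Both toy vectors share a "vegetable" component plus mutually orthogonal
# remainders, so they are partially orthogonal given the vegetable direction
# even though they are not orthogonal outright.
vegetable = np.array([1.0, 0.0, 0.0])
eggplant = vegetable + np.array([0.0, 1.0, 0.0])
tomato = vegetable + np.array([0.0, 0.0, 1.0])
Z = vegetable[:, None]          # conditioning set as a column matrix

print(partially_orthogonal(eggplant, tomato, Z))  # True
print(abs(eggplant @ tomato) < 1e-8)              # False: not plain orthogonal
```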
- …