2,014 research outputs found
A Survey on Soft Subspace Clustering
Subspace clustering (SC) is a promising clustering technology to identify
clusters based on their associations with subspaces in high dimensional spaces.
SC can be classified into hard subspace clustering (HSC) and soft subspace
clustering (SSC). While HSC algorithms have been extensively studied and well
accepted by the scientific community, SSC algorithms are relatively new but
gaining more attention in recent years due to their better adaptability. In this
paper, a comprehensive survey of existing SSC algorithms and recent
developments is presented. The SSC algorithms are classified systematically
into three main categories, namely, conventional SSC (CSSC), independent SSC
(ISSC) and extended SSC (XSSC). The characteristics of these algorithms are
highlighted and the potential future development of SSC is also discussed.
Comment: This paper has been published in Information Sciences Journal in 201
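The core idea behind soft subspace clustering, as opposed to hard subspace clustering, is that each cluster assigns a continuous weight to every dimension rather than selecting a crisp subset. A minimal sketch of one common weighting scheme (an entropy-style softmax over per-dimension dispersion, in the spirit of weighted k-means variants; the function name, `gamma` parameter, and toy data below are illustrative, not from the survey):

```python
import math

def soft_subspace_weights(cluster_points, centroid, gamma=1.0):
    """Per-dimension soft weights for one cluster: dimensions along which
    the cluster is tight receive more weight.

    cluster_points: list of points (each a list of floats) in the cluster.
    centroid: the cluster centre.
    gamma: temperature controlling how peaked the weights are (illustrative).
    """
    dims = len(centroid)
    # Within-cluster dispersion along each dimension.
    disp = [sum((p[j] - centroid[j]) ** 2 for p in cluster_points)
            for j in range(dims)]
    # Softmax over negative dispersion: low-dispersion dimensions dominate.
    exps = [math.exp(-d / gamma) for d in disp]
    total = sum(exps)
    return [e / total for e in exps]

# Cluster is tight along dimension 0 but spread along dimension 1,
# so the soft weights should concentrate on dimension 0.
pts = [[0.0, -2.0], [0.1, 2.0], [-0.1, 0.0]]
centre = [0.0, 0.0]
w = soft_subspace_weights(pts, centre)
```

A hard subspace method would instead pick dimension 0 outright; the soft weights retain a small residual weight on dimension 1, which is the adaptability the abstract refers to.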
Topic-based mixture language modelling
This paper describes an approach for constructing a mixture of language models based on simple statistical notions of semantics using probabilistic models developed for information retrieval. The approach encapsulates corpus-derived semantic information and is able to model varying styles of text. Using such information, the corpus texts are clustered in an unsupervised manner and a mixture of topic-specific language models is automatically created. The principal contribution of this work is to characterise the document space resulting from information retrieval techniques and to demonstrate the approach for mixture language modelling.
A comparison is made between manual and automatic clustering in order to elucidate how the global content information is expressed in the space. We also compare (in terms of association with manual clustering and language modelling accuracy) alternative term-weighting schemes and the effect of singular value decomposition dimension reduction (latent semantic analysis). Test set perplexity results using the British National Corpus indicate that the approach can improve the potential of statistical language modelling. Using an adaptive procedure, the conventional model may be tuned to track text data with a slight increase in computational cost.
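The mixture-of-topic-models idea can be sketched with smoothed unigram models built per cluster and combined with fixed mixture weights, evaluated by perplexity as in the abstract. The toy "topic" clusters, vocabulary, and Laplace smoothing constant below are assumptions for illustration, not the paper's setup:

```python
import math
from collections import Counter

def unigram_model(docs, vocab, alpha=0.1):
    """Laplace-smoothed unigram probabilities from a list of token lists."""
    counts = Counter(tok for doc in docs for tok in doc)
    total = sum(counts.values()) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def mixture_perplexity(doc, models, lambdas):
    """Perplexity of a token list under a fixed-weight mixture of models."""
    log_prob = 0.0
    for tok in doc:
        p = sum(lam * m[tok] for lam, m in zip(lambdas, models))
        log_prob += math.log(p)
    return math.exp(-log_prob / len(doc))

# Two toy topic clusters and a held-out sport-like document.
sport = [["goal", "match", "team"], ["team", "goal"]]
finance = [["stock", "market", "bank"], ["bank", "market"]]
vocab = {"goal", "match", "team", "stock", "market", "bank"}
models = [unigram_model(sport, vocab), unigram_model(finance, vocab)]
test_doc = ["goal", "team", "match"]

# A mixture weighted toward the matching topic should score lower perplexity.
pp_sport_heavy = mixture_perplexity(test_doc, models, [0.9, 0.1])
pp_finance_heavy = mixture_perplexity(test_doc, models, [0.1, 0.9])
```

In the paper's setting the clusters come from unsupervised clustering of the corpus and the mixture weights can be adapted to track the text; here they are fixed by hand to keep the sketch short.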
Multi-mode partitioning for text clustering to reduce dimensionality and noises
Co-clustering in text mining has been proposed to partition words and documents simultaneously. Although the
main advantage of this approach is improved interpretability of the resulting clusters, there are still few proposals
for such methods, and one-way partitioning is still widely used in information retrieval. In contrast to
structured information, textual data suffer from high dimensionality and sparse matrices, so texts must be
pre-processed before clustering techniques can be applied. In this paper, we propose a new procedure to reduce the high
dimensionality of corpora and to remove noise from the unstructured data. We test two different processes
for treating the data by applying two co-clustering algorithms; based on the results, we present the procedure that provides
the best interpretation of the data.
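A standard way to shrink dimensionality and strip noise terms before clustering, of the general kind the abstract describes, is document-frequency pruning: drop terms that appear in almost no documents (noise) or in almost all of them (uninformative). The thresholds and data below are illustrative, not the paper's procedure:

```python
from collections import Counter

def prune_terms(docs, min_df=2, max_df_ratio=0.8):
    """Drop very rare terms (likely noise) and near-ubiquitous terms
    (uninformative), shrinking the term space before co-clustering.
    min_df and max_df_ratio are illustrative thresholds."""
    n_docs = len(docs)
    df = Counter()
    for doc in docs:
        for term in set(doc):          # document frequency, not raw counts
            df[term] += 1
    keep = {t for t, c in df.items()
            if c >= min_df and c / n_docs <= max_df_ratio}
    return [[t for t in doc if t in keep] for doc in docs]

docs = [
    ["the", "cat", "sat"],
    ["the", "cat", "ran"],
    ["the", "dog", "ran"],
    ["the", "zzyzx"],   # "zzyzx" occurs in one document: pruned as noise
]
pruned = prune_terms(docs)
```

After pruning, "the" (in every document) and the singleton terms disappear, leaving a denser, lower-dimensional word-document matrix for the co-clustering step.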
Using bag-of-concepts to improve the performance of support vector machines in text categorization
This paper investigates the use of concept-based representations for text categorization. We introduce a new approach to create concept-based text representations, and apply it to a standard text categorization collection. The representations are used as input to a Support Vector Machine classifier, and the results show that there are certain categories for which concept-based representations constitute a viable supplement to word-based ones. We also demonstrate how the performance of the Support Vector Machine can be improved by combining representations.
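The essence of a bag-of-concepts representation is mapping surface words onto coarser concepts before counting, so that synonyms collapse into a single feature for the classifier. A minimal sketch, assuming a hand-made word-to-concept lexicon (the paper's concept construction is more involved):

```python
def bag_of_concepts(tokens, concept_map):
    """Count concepts rather than words: tokens are mapped through a
    word->concept lexicon (hypothetical here) and unmapped tokens are
    dropped."""
    counts = {}
    for tok in tokens:
        concept = concept_map.get(tok)
        if concept is not None:
            counts[concept] = counts.get(concept, 0) + 1
    return counts

# Illustrative lexicon: "car" and "truck" share one concept feature.
concept_map = {"car": "VEHICLE", "truck": "VEHICLE", "apple": "FRUIT"}
vec = bag_of_concepts(["car", "truck", "apple", "car", "xyz"], concept_map)
```

Because "car" and "truck" land in the same VEHICLE bucket, documents using either word look alike to the downstream SVM, which is why such features can supplement sparse word-based ones.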
From Frequency to Meaning: Vector Space Models of Semantics
Computers understand very little of the meaning of human language. This
profoundly limits our ability to give instructions to computers, the ability of
computers to explain their actions to us, and the ability of computers to
analyse and process text. Vector space models (VSMs) of semantics are beginning
to address these limits. This paper surveys the use of VSMs for semantic
processing of text. We organize the literature on VSMs according to the
structure of the matrix in a VSM. There are currently three broad classes of
VSMs, based on term-document, word-context, and pair-pattern matrices, yielding
three classes of applications. We survey a broad range of applications in these
three categories and we take a detailed look at a specific open source project
in each category. Our goal in this survey is to show the breadth of
applications of VSMs for semantics, to provide a new perspective on VSMs for
those who are already familiar with the area, and to provide pointers into the
literature for those who are less familiar with the field.
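The first of the survey's three matrix classes, the term-document matrix, can be sketched directly: each document becomes a column of term counts, and semantic relatedness between documents is measured by cosine similarity between columns. The toy documents below are illustrative:

```python
import math
from collections import Counter

def term_document_matrix(docs):
    """Build a term-document count matrix as {term: [count per document]}."""
    vocab = sorted({t for d in docs for t in d})
    return {t: [Counter(d)[t] for d in docs] for t in vocab}

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [["wheat", "farm", "harvest"],
        ["farm", "harvest", "tractor"],
        ["stock", "bond", "market"]]
m = term_document_matrix(docs)
# Transpose rows (terms) into per-document column vectors.
cols = list(zip(*m.values()))
sim_farming = cosine(cols[0], cols[1])   # two farming documents share terms
sim_cross = cosine(cols[0], cols[2])     # farming vs. finance: disjoint terms
```

Word-context and pair-pattern matrices follow the same pattern with different row/column choices, which is exactly the organizing principle the survey uses.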
Crime incidents embedding using restricted Boltzmann machines
We present a new approach for detecting related crime series, by unsupervised
learning of latent feature embeddings from narratives of crime records via
Gaussian-Bernoulli Restricted Boltzmann Machines (RBMs). This is a
drastically different approach from prior work on crime analysis, which
typically considers only time and location and, at most, category information.
After the embedding, related cases are closer to each other in the Euclidean
feature space and unrelated cases are far apart, a desirable property that
enables subsequent analyses such as detection and clustering of related
cases. Experiments over several series of related crime incidents hand-labeled
by the Atlanta Police Department reveal the promise of our embedding methods.
Comment: 5 pages, 3 figures
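The embedding step of such a model can be sketched as the hidden-unit activation probabilities of a Gaussian-Bernoulli RBM, here with unit visible variance for simplicity; the weights below are hand-picked stand-ins for trained parameters, and the "narrative features" are illustrative, not the paper's:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rbm_embed(v, W, hbias):
    """Hidden-unit activation probabilities of a Gaussian-Bernoulli RBM,
    assuming unit visible variance; used as the feature embedding.
    W[i][j] couples visible unit i to hidden unit j (weights made up)."""
    return [sigmoid(hbias[j] + sum(v[i] * W[i][j] for i in range(len(v))))
            for j in range(len(hbias))]

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Toy "trained" weights: hidden unit 0 responds to feature 0, unit 1 to
# feature 1, so cases with similar feature profiles embed nearby.
W = [[4.0, 0.0], [0.0, 4.0]]
hbias = [-2.0, -2.0]
case_a = rbm_embed([1.0, 0.0], W, hbias)   # narrative emphasises feature 0
case_b = rbm_embed([0.9, 0.1], W, hbias)   # similar narrative
case_c = rbm_embed([0.0, 1.0], W, hbias)   # unrelated narrative
```

The property the abstract highlights is visible directly: the distance between the two related cases is much smaller than the distance to the unrelated one, so nearest-neighbour search or clustering in the embedding space can surface crime series.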