Parallelizing support vector machines for scalable image annotation
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Machine learning techniques have facilitated image retrieval by automatically classifying and annotating images with keywords. Among them, Support Vector Machines (SVMs) are used extensively due to their generalization properties. However, SVM training is a notably computationally intensive process, especially when the training dataset is large.
In this thesis, distributed computing paradigms are investigated to speed up SVM training by partitioning a large training dataset into small data chunks and processing each chunk in parallel, utilizing the resources of a cluster of computers. A resource-aware parallel SVM algorithm is introduced for large-scale image annotation on a cluster of computers. A genetic-algorithm-based load balancing scheme is designed to optimize the performance of the algorithm in heterogeneous computing environments.
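A minimal sketch of the chunk-and-train idea, assuming scikit-learn's SVC as the base learner: the thesis's resource-aware scheduling and genetic-algorithm load balancing are not reproduced here, and the per-chunk models are simply combined by majority vote (one simple option; the thesis's actual combination scheme may differ).

```python
# Sketch: partition the training set, fit one SVM per chunk in a process
# pool, and combine the chunk models by majority vote. Illustrative only.
import numpy as np
from multiprocessing import Pool
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def train_chunk(chunk):
    X, y = chunk
    return SVC(kernel="rbf", gamma="scale").fit(X, y)

if __name__ == "__main__":
    X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
    n_chunks = 4  # in a heterogeneous cluster, chunk sizes would be tuned per node
    chunks = list(zip(np.array_split(X, n_chunks), np.array_split(y, n_chunks)))
    with Pool(n_chunks) as pool:
        models = pool.map(train_chunk, chunks)      # train chunks in parallel
    votes = np.stack([m.predict(X) for m in models])  # (n_chunks, n_samples)
    y_pred = (votes.mean(axis=0) >= 0.5).astype(int)  # binary majority vote
    print((y_pred == y).mean())
```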
SVM was initially designed for binary classification. However, most classification problems arising in domains such as image annotation usually involve more than two classes. A resource-aware parallel multiclass SVM algorithm for large-scale image annotation on a cluster of computers is therefore introduced.
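For context, binary SVMs are conventionally extended to multiclass problems by one-vs-rest or one-vs-one decomposition, which also exposes natural parallelism (one binary model per class). A minimal scikit-learn sketch of one-vs-rest, as an illustration rather than the thesis's algorithm:

```python
# One-vs-rest decomposition: one binary SVM per class, trained in
# parallel via n_jobs. Illustrative only, not the thesis's algorithm.
from sklearn.datasets import load_digits
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)          # 10-class toy problem
ovr = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale"), n_jobs=-1)
ovr.fit(X, y)
print(len(ovr.estimators_))                  # one binary SVM per class
```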
Combining classifiers leads to substantial reductions in classification error in a wide range of applications. Among such combinations, SVM ensembles with bagging have been shown to outperform a single SVM in classification accuracy. However, training an SVM ensemble is notably computationally intensive, especially when the number of bootstrap replicates is large. A distributed SVM ensemble algorithm for image annotation is introduced which re-samples the training data by bootstrapping and trains an SVM on each sample in parallel using a cluster of computers.
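This bootstrap-and-train-in-parallel pattern can be sketched with scikit-learn's BaggingClassifier (the `estimator` argument name assumes scikit-learn >= 1.2); the thesis's distributed, cluster-level implementation is not reproduced here:

```python
# Bagged SVM ensemble: draw bootstrap replicates of the training data and
# fit one SVM per replicate, with replicates trained in parallel (n_jobs).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
ensemble = BaggingClassifier(
    estimator=SVC(kernel="rbf", gamma="scale"),
    n_estimators=10,      # number of bootstrap replicates
    max_samples=0.5,      # fraction of the data drawn per replicate
    bootstrap=True,
    n_jobs=-1,            # fit replicates in parallel
    random_state=0,
).fit(X, y)
print(ensemble.score(X, y))
```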
The above algorithms are evaluated in both experimental and simulation environments, showing that the distributed SVM algorithm, the distributed multiclass SVM algorithm, and the distributed SVM ensemble algorithm reduce training time significantly while maintaining a high level of classification accuracy.
Medical Image Classification via SVM using LBP Features from Saliency-Based Folded Data
Good results on image classification and retrieval using support vector machines (SVMs) with local binary patterns (LBPs) as features have been extensively reported in the literature, where an entire image is retrieved or classified. In contrast, in medical imaging, not all parts of the image may be equally significant or relevant to the image retrieval application at hand. For instance, in a lung x-ray image, the lung region may contain a tumour and hence be highly significant, whereas the surrounding area carries little information from a medical diagnosis perspective. In this paper, we propose to detect salient regions of images during training and fold the data to reduce the effect of irrelevant regions. As a result, smaller image areas are used for LBP feature calculation and, consequently, for classification by SVM. We use the IRMA 2009 dataset with 14,410 x-ray images to verify the performance of the proposed approach. The results demonstrate the benefits of the saliency-based folding approach, which delivers classification accuracies comparable with the state of the art but at lower computational cost and storage requirements, factors highly important for big data analytics.
Comment: To appear in proceedings of The 14th International Conference on Machine Learning and Applications (IEEE ICMLA 2015), Miami, Florida, USA, 2015
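A minimal sketch of the core LBP-then-SVM pipeline described above, assuming scikit-image for the LBP operator; the paper's saliency detection and data-folding steps are omitted, and the images and labels below are random placeholders:

```python
# Core LBP-then-SVM pipeline (the paper's saliency detection and data
# folding are omitted; images and labels below are random placeholders).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(image, P=8, R=1.0):
    """Pool uniform LBP codes into a normalized histogram feature."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
images = rng.random((100, 64, 64))     # placeholder grayscale images
labels = rng.integers(0, 2, size=100)  # placeholder class labels
X = np.array([lbp_histogram(img) for img in images])
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
```

Folding the data to salient regions would simply shrink the image arrays fed to `lbp_histogram`, which is where the reported savings in computation and storage come from.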
Statistical Learning Approaches to Information Filtering
Enabling computer systems to understand human thinking or behavior has long been an exciting challenge for computer scientists. In recent years one such topic, information filtering, has emerged to help users find desired information items (e.g. movies, books, news) from large amounts of available data, and has become crucial in many applications, such as product recommendation, image retrieval, spam email filtering, news filtering, and web navigation.
An information filtering system must be able to understand users' information needs. Existing approaches either infer a user's profile by exploring his/her connections to other users, i.e. collaborative filtering (CF), or by analyzing the content descriptions of liked or disliked examples annotated by the user, i.e. content-based filtering (CBF). These methods work well to some extent, but face difficulties due to a lack of insight into the problem.
This thesis studies a wide scope of information filtering technologies in depth. Novel and principled machine learning methods are proposed to model users' information needs. The work demonstrates that the uncertainty of user profiles and the connections between them can be effectively modelled using probability theory and Bayes' rule. As one major contribution, the thesis clarifies the "structure" of information filtering and gives rise to principled solutions. In summary, the work mainly covers the following three aspects:
Collaborative filtering: We develop a probabilistic model for memory-based collaborative filtering (PMCF), which has clear links with classical memory-based CF. Various heuristics to improve memory-based CF have been proposed in the literature; in contrast, extensions based on PMCF can be made in a principled probabilistic way. With PMCF, we describe a CF paradigm that interacts with users, instead of passively receiving data from them as in conventional CF, and actively chooses the most informative patterns to learn, thereby greatly reducing user effort and computational cost.
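For reference, the classical memory-based CF that PMCF builds on predicts a user's rating of an item as a similarity-weighted average of other users' ratings. A minimal sketch of that baseline (not the probabilistic PMCF model itself):

```python
# Classical user-based memory CF baseline: predict user u's rating of
# item i as a Pearson-similarity-weighted average of other users'
# ratings of i. PMCF replaces this heuristic with an explicit
# probabilistic model; this sketch is only the classical baseline.
import numpy as np

def predict(R, u, i, eps=1e-8):
    """R: users x items rating matrix with np.nan marking unrated entries."""
    raters = [v for v in range(R.shape[0]) if v != u and not np.isnan(R[v, i])]
    sims = []
    for v in raters:
        common = ~np.isnan(R[u]) & ~np.isnan(R[v])   # co-rated items
        if common.sum() < 2:
            sims.append(0.0)
            continue
        a, b = R[u, common], R[v, common]
        num = ((a - a.mean()) * (b - b.mean())).sum()
        den = np.sqrt(((a - a.mean()) ** 2).sum() * ((b - b.mean()) ** 2).sum())
        sims.append(num / (den + eps))               # Pearson correlation
    sims = np.array(sims)
    return float(sims @ R[raters, i] / (np.abs(sims).sum() + eps))

R = np.array([[5, 3, np.nan, 1],
              [4, np.nan, np.nan, 1],
              [1, 1, np.nan, 5],
              [np.nan, 1, 5, 4.0]])
print(predict(R, u=1, i=1))   # rating of item 1 predicted for user 1
```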
Content-based filtering: One major problem for CBF is the deficiency and high dimensionality of content-descriptive features. Information items (e.g. images or articles) are typically described by high-dimensional features with mixed types of attributes, which seem to have been developed independently but are intrinsically related. We derive a generalized principal component analysis to merge high-dimensional and heterogeneous content features into a low-dimensional continuous latent space. The derived features bring great convenience to CBF, because most existing algorithms easily cope with low-dimensional, continuous data and, more importantly, the extracted features highlight the intrinsic semantics of the original content features.
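As a rough stand-in for this idea (not the generalized PCA derived in the thesis), the following sketch one-hot-encodes categorical attributes, standardizes continuous ones, and projects the mixed features into a low-dimensional continuous space with ordinary PCA; the item columns are invented placeholders, and `sparse_output` assumes scikit-learn >= 1.2:

```python
# Rough stand-in (NOT the thesis's generalized PCA): encode mixed-type
# item descriptions and project them into a low-dimensional space.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
items = pd.DataFrame({
    "length": rng.random(200),                                 # continuous
    "score": rng.random(200),                                  # continuous
    "genre": rng.choice(["news", "movie", "book"], size=200),  # categorical
})
encode = ColumnTransformer([
    ("num", StandardScaler(), ["length", "score"]),
    ("cat", OneHotEncoder(sparse_output=False), ["genre"]),
])
latent = make_pipeline(encode, PCA(n_components=2)).fit_transform(items)
print(latent.shape)  # (200, 2): continuous low-dimensional features for CBF
```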
Hybrid filtering: How to combine CF and CBF in a "smart" way remains one of the most challenging problems in information filtering, and little principled work exists so far. This thesis reveals that people's information needs can be naturally modelled in a hierarchical Bayesian way, where each individual's data are generated from his/her own profile model, which is itself a sample from a common distribution over the population of user profiles. Users are thus connected to each other via this common distribution. Because such a distribution is complex in real-world applications, commonly applied parametric models are too restrictive, and we therefore introduce a nonparametric hierarchical Bayesian model based on the Dirichlet process. We derive effective and efficient algorithms to learn the model. In particular, the resulting hybrid filtering methods are surprisingly simple and intuitively understandable, offering clear insights into previous work on pure CF, pure CBF, and hybrid filtering.
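To give a flavor of the key nonparametric ingredient (an illustration, not the thesis's full hybrid model), a truncated Dirichlet-process mixture can be fit over user-profile vectors with scikit-learn's BayesianGaussianMixture, letting the effective number of profile clusters be inferred rather than fixed; the profile data below are placeholders:

```python
# Flavor of the nonparametric ingredient: a truncated Dirichlet-process
# mixture over user-profile vectors, where the effective number of
# clusters is inferred from data. Not the thesis's full hybrid model.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Placeholder user profiles drawn from three latent taste groups.
profiles = np.vstack([rng.normal(mu, 0.3, size=(50, 5)) for mu in (-2, 0, 2)])

dpmm = BayesianGaussianMixture(
    n_components=10,  # truncation level, an upper bound on active clusters
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(profiles)
print(np.round(dpmm.weights_, 2))  # surplus components collapse toward zero weight
```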
EnTri: Ensemble Learning with Tri-level Representations for Explainable Scene Recognition
Scene recognition based on deep learning has made significant progress, but there are still limitations in its performance due to challenges posed by inter-class similarities and intra-class dissimilarities. Furthermore, prior research has primarily focused on improving classification accuracy, yet it has given less attention to achieving interpretable, precise scene classification. Therefore, we are motivated to propose EnTri, an ensemble scene recognition framework that employs ensemble learning using a hierarchy of visual features. EnTri represents features at three distinct levels of detail: pixel level, semantic segmentation level, and object class and frequency level. By incorporating distinct feature encoding schemes of differing complexity and leveraging ensemble strategies, our approach aims to improve classification accuracy while enhancing transparency and interpretability via visual and textual explanations. To achieve interpretability, we devised an extension algorithm that generates both visual and textual explanations highlighting various properties of a given scene that contribute to the final prediction of its category. This includes information about objects, statistics, spatial layout, and textural details. Through experiments on benchmark scene classification datasets, EnTri has demonstrated superiority in terms of recognition accuracy, achieving competitive performance compared to state-of-the-art approaches, with an accuracy of 87.69%, 75.56%, and 99.17% on the MIT67, SUN397, and UIUC8 datasets, respectively.
Comment: Submitted to the Pattern Recognition journal
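A generic late-fusion sketch in the spirit of this multi-stream ensembling: class probabilities from one classifier per feature level are averaged into a single decision. The probability maps below are random placeholders, not outputs of EnTri's actual networks:

```python
# Generic late fusion: average per-stream class probabilities (optionally
# weighted) and take the argmax. Placeholder inputs, not EnTri's models.
import numpy as np

def fuse(probability_maps, weights=None):
    """probability_maps: list of (n_samples, n_classes) arrays."""
    stacked = np.stack(probability_maps)            # (n_streams, n, c)
    w = np.ones(len(stacked)) if weights is None else np.asarray(weights)
    fused = np.tensordot(w / w.sum(), stacked, axes=1)
    return fused.argmax(axis=1)                     # fused class decision

rng = np.random.default_rng(0)
pixel_p, segment_p, object_p = (rng.dirichlet(np.ones(8), size=16) for _ in range(3))
labels = fuse([pixel_p, segment_p, object_p], weights=[1.0, 1.0, 1.0])
print(labels)
```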