
    Parallelizing support vector machines for scalable image annotation

    Machine learning techniques have facilitated image retrieval by automatically classifying and annotating images with keywords. Among them, support vector machines (SVMs) are used extensively due to their generalization properties. However, SVM training is a notably computationally intensive process, especially when the training dataset is large. In this thesis, distributed computing paradigms are investigated to speed up SVM training by partitioning a large training dataset into small data chunks and processing each chunk in parallel, utilizing the resources of a cluster of computers. A resource-aware parallel SVM algorithm is introduced for large-scale image annotation on a cluster of computers, and a genetic-algorithm-based load-balancing scheme is designed to optimize the algorithm's performance in heterogeneous computing environments. SVM was initially designed for binary classification, but most classification problems arising in domains such as image annotation involve more than two classes, so a resource-aware parallel multiclass SVM algorithm for large-scale image annotation on a cluster of computers is also introduced. Combining classifiers leads to a substantial reduction of classification error in a wide range of applications; among such combinations, SVM ensembles with bagging are shown to outperform a single SVM in terms of classification accuracy. However, SVM ensemble training is notably computationally intensive, especially when the number of replicated samples generated by bootstrapping is large. A distributed SVM ensemble algorithm for image annotation is therefore introduced which re-samples the training data by bootstrapping and trains an SVM on each sample in parallel using a cluster of computers.
The above algorithms are evaluated in both experimental and simulation environments, showing that the distributed SVM algorithm, the distributed multiclass SVM algorithm, and the distributed SVM ensemble algorithm reduce the training time significantly while maintaining a high level of classification accuracy.
    EThOS - Electronic Theses Online Service, United Kingdom
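The chunk-and-train strategy described above can be sketched in a few lines. The snippet below is a minimal illustration, not the thesis's implementation: it substitutes a nearest-centroid classifier for the per-chunk SVM (to stay dependency-free) and combines the chunk models by simple majority vote; all function names are invented for the example.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def partition(data, n_chunks):
    """Split the training set into n_chunks roughly equal chunks."""
    return [data[i::n_chunks] for i in range(n_chunks)]

def train_chunk(chunk):
    """Stand-in for per-chunk SVM training: a nearest-centroid model."""
    sums, counts = {}, {}
    for x, label in chunk:
        if label not in sums:
            sums[label] = [0.0] * len(x)
        sums[label] = [s + v for s, v in zip(sums[label], x)]
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(models, x):
    """Combine the chunk models by majority vote."""
    votes = []
    for model in models:
        nearest = min(model,
                      key=lambda lbl: sum((a - b) ** 2
                                          for a, b in zip(model[lbl], x)))
        votes.append(nearest)
    return Counter(votes).most_common(1)[0][0]

data = [([0.0, 0.0], "a"), ([0.1, 0.0], "a"),
        ([1.0, 1.0], "b"), ([0.9, 1.1], "b")] * 3
chunks = partition(data, 3)
with ThreadPoolExecutor() as pool:   # one task per chunk, trained in parallel
    models = list(pool.map(train_chunk, chunks))
```

In the thesis the per-chunk learners are real SVMs trained across cluster nodes rather than threads, but the partition/train/combine shape is the same.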

    A Resource Aware MapReduce Based Parallel SVM for Large Scale Image Classifications

    Machine learning techniques have facilitated image retrieval by automatically classifying and annotating images with keywords. Among them, support vector machines (SVMs) are used extensively due to their generalization properties. However, SVM training is a notably computationally intensive process, especially when the training dataset is large. This paper presents RASMO, a resource-aware MapReduce-based parallel SVM algorithm for large-scale image classification which partitions the training dataset into smaller subsets and optimizes SVM training in parallel using a cluster of computers. A genetic-algorithm-based load-balancing scheme is designed to optimize the performance of RASMO in heterogeneous computing environments. RASMO is evaluated in both experimental and simulation environments; the results show that the parallel SVM algorithm reduces the training time significantly compared with the sequential SMO algorithm while maintaining a high level of classification accuracy.
    National Basic Research Program (973) of China under Grant 2014CB34040
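A genetic-algorithm load balancer of the kind the abstract describes can be sketched as follows. The encoding, operators, and fitness function here are illustrative assumptions, not RASMO's actual design: each chromosome assigns data chunks to compute nodes, and fitness is the makespan on nodes of differing speeds.

```python
import random

def makespan(assign, costs, speeds):
    """Completion time of the slowest node under a chunk-to-node assignment."""
    loads = [0.0] * len(speeds)
    for chunk, node in enumerate(assign):
        loads[node] += costs[chunk] / speeds[node]
    return max(loads)

def ga_balance(costs, speeds, pop_size=30, generations=60, seed=0):
    """Evolve chunk-to-node assignments that minimize the makespan."""
    rng = random.Random(seed)
    n, m = len(costs), len(speeds)
    population = [[rng.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda a: makespan(a, costs, speeds))
        survivors = population[: pop_size // 2]       # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)                 # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.3:                    # point mutation
                child[rng.randrange(n)] = rng.randrange(m)
            children.append(child)
        population = survivors + children
    return min(population, key=lambda a: makespan(a, costs, speeds))

costs = [4, 4, 2, 2, 1, 1]     # per-chunk processing cost (assumed)
speeds = [2.0, 1.0]            # node 0 is twice as fast as node 1
best = ga_balance(costs, speeds)
```

With heterogeneous speeds, the evolved assignment pushes more work onto the faster node than a naive round-robin split would.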

    Automatic Image Annotation Based on Particle Swarm Optimization and Support Vector Clustering

    With the progress of network technology, there are more and more digital images on the internet, but most are not semantically annotated, which makes them difficult to retrieve and use. In this paper, a new algorithm is proposed to automatically annotate images based on particle swarm optimization (PSO) and support vector clustering (SVC). The algorithm has two stages: first, the PSO algorithm is used to optimize the SVC; second, the trained SVC is used to annotate images automatically. In the experiments, three datasets are used to evaluate the algorithm, and the results show its effectiveness.
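The first stage, using PSO to tune a learner, can be illustrated with a toy swarm. The snippet below tunes a single stand-in hyperparameter (a kernel width q) against a synthetic validation-error curve rather than a real SVC objective, and the inertia and attraction constants are conventional defaults, not values from the paper.

```python
import random

def pso(objective, bounds, n_particles=20, iters=50, seed=1):
    """Minimize objective(q) over a 1-D interval with particle swarm optimization."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                                  # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]       # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # inertia + pull toward personal best + pull toward global best
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

# Stand-in for SVC validation error as a function of a kernel width q;
# the true optimum of this synthetic curve is q = 2.5.
error = lambda q: (q - 2.5) ** 2 + 0.1
best_q, best_err = pso(error, (0.0, 10.0))
```

In the paper's pipeline, `objective` would instead evaluate an SVC configuration on held-out data, and the optimized model would then annotate unseen images.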

    COMPARATIVE ANALYSIS OF NEURAL NETWORK MODELS FOR THE PROBLEM OF SPEAKER RECOGNITION

      The subject matter of the article is the neural network models designed or adapted for voice analysis in the context of speaker identification and verification tasks. The goal of this work is to perform a comparative analysis of relevant neural network models in order to determine the model(s) that best meet the formulated criteria: model type, programming language of the model's implementation, parallelization potential, binary or multiclass operation, accuracy, and computational complexity. Some of these criteria, such as accuracy and computational complexity, were chosen because of their universal importance regardless of the particular application; others were chosen due to the architecture and challenges of the scientific communication system described in the work, which performs speaker identification and verification tasks. The relevance of the paper lies in the prevalence of audio as a communication medium, which results in a wide range of practical applications of audio intelligence in various fields of human activity (business, law, the military), as well as in the necessity of enabling and encouraging an efficient environment for inward-facing, audio-based scientific communication among young scientists, helping them accelerate their research and acquire scientific communication skills. To achieve the goal, the following tasks were solved: criteria for judging the models were formulated based on the needs and challenges of the proposed system; the models designed for speaker identification and verification were reviewed according to the formulated criteria, with the results compiled into a comprehensive table; and optimal models were determined in accordance with those criteria. The following neural-network-based models were reviewed: SincNet, VGGVox, Jasper, TitaNet, SpeakerNet, ECAPA_TDNN. Conclusions.
For future research and a practical solution to the speaker authentication problem, it will be reasonable to use a convolutional neural network implemented in the Python programming language, as Python offers a wide variety of development tools and libraries to utilize.

    LEARNING WITH MORE DATA AND BETTER MODELS FOR VISUAL SIMILARITY AND DIFFERENTIATION

    This thesis studies machine learning problems involved in visual recognition across a variety of computer vision tasks. It attacks the challenge of scaling up learning to efficiently handle more training data in object recognition, more noise in brain activation patterns, and more capable visual similarity models. For learning similarity models, one challenge is to capture from data the subtle correlations that preserve the notion of similarity relevant to the task. Most previous work focused on improving feature learning and metric learning separately; instead, we propose a unified deep-learning framework that jointly optimizes the two through back-propagation. We model the feature mapping using a convolutional neural network and the metric function using a multi-layer fully connected network. Enabled by large datasets and a sampler that handles the intrinsic imbalance between positive and negative samples, we are able to learn such models efficiently. We apply this approach to patch-based image matching and cross-domain clothing-item matching. For analyzing activation patterns in images acquired using functional magnetic resonance imaging (fMRI), a technology widely used in neuroscience to study the human brain, the challenges are a small number of examples and a high level of noise. The common ways of increasing the signal-to-noise ratio include adding more repetitions, averaging trials, and analyzing statistical maps solved under a general linear model. In collaboration with neuroscientists, we developed a machine learning approach that allows us to analyze individual trials directly. This approach uses multi-voxel patterns over regions of interest as the feature representation and helps discover effects previous analyses missed. For multi-class object recognition, one challenge is learning a non-one-vs-all multi-class classifier with large numbers of categories, each with large numbers of examples.
A common approach is data parallelization in a synchronized fashion: evenly and randomly distribute the data into splits, learn a full model on each split, and average the models. We reformulate the overall learning problem in a consensus optimization framework and propose a more principled synchronized approach to distributed training. Moreover, we develop an efficient algorithm for solving the sub-problem by reducing it to a standard problem with warm start.
Doctor of Philosophy
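The split-and-average baseline that the consensus formulation improves on can be shown with a one-parameter linear model. This is only the naive baseline: the consensus (ADMM-style) version would add a coupling step between splits on each round, which is omitted here, and all names are illustrative.

```python
def fit_split(split):
    """Least-squares slope for y = w * x on one data split."""
    sxy = sum(x * y for x, y in split)
    sxx = sum(x * x for x, _ in split)
    return sxy / sxx

def average_models(data, n_splits):
    """Evenly distribute the data, fit a model per split, average the models."""
    splits = [data[i::n_splits] for i in range(n_splits)]
    return sum(fit_split(s) for s in splits) / n_splits

# Synthetic data lying exactly on y = 3x, so every split recovers w = 3
# and the averaged model matches the full-data fit.
data = [(float(x), 3.0 * x) for x in range(1, 13)]
w = average_models(data, 4)
```

On noisy, high-dimensional data the per-split models disagree, which is exactly the gap the consensus formulation is designed to close.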

    Optimisation Method for Training Deep Neural Networks in Classification of Non-functional Requirements

    Non-functional requirements (NFRs) are regarded as critical to a software system's success. The majority of NFR detection and classification solutions have relied on supervised machine learning models, which are hindered by the lack of labelled data for training and necessitate a significant amount of time spent on feature engineering. In this work, we explore emerging deep learning techniques to reduce the burden of feature engineering. The goal of this study is to develop an autonomous system that can classify NFRs into multiple classes based on a labelled corpus. In the first section of the thesis, we standardise the NFR ontology and annotations to produce a corpus based on five attributes: usability, reliability, efficiency, maintainability, and portability. In the second section, the design and implementation of four neural networks, including the artificial neural network, convolutional neural network, long short-term memory, and gated recurrent unit, are examined to classify NFRs. These models necessitate a large corpus. To overcome this limitation, we propose a new paradigm for data augmentation. This method uses a sort-and-concatenate strategy to combine two phrases from the same class, resulting in a two-fold increase in data size while keeping the domain vocabulary intact. We compared our method to a baseline (no augmentation) and to an existing approach, easy data augmentation (EDA), with pre-trained word embeddings. All training has been performed under two modifications to the data: augmentation of the entire dataset before the train/validation split versus augmentation of the train set only. Our findings show that, compared with EDA and the baseline, the NFR classification models improved greatly, and the CNN performed best when trained using our suggested technique in the first setting. However, we saw only a slight boost in the second experimental setup, with train-set augmentation alone.
As a result, we can determine that augmentation of the validation set is required in order to achieve acceptable results with our proposed approach. We hope that our ideas will inspire new data augmentation techniques, whether generic or task-specific. Furthermore, it would also be useful to apply this strategy to other languages.
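The sort-and-concatenate idea can be sketched directly. The pairing rule below (sort each class's phrases, then join adjacent pairs) is one plausible reading of the strategy, not the thesis's exact procedure, and the example NFR phrases are invented.

```python
def sort_concat_augment(samples):
    """Grow a labelled corpus by concatenating phrase pairs drawn from
    the same class, keeping the class vocabulary intact."""
    by_class = {}
    for text, label in samples:
        by_class.setdefault(label, []).append(text)
    augmented = list(samples)
    for label, texts in by_class.items():
        texts = sorted(texts)                      # deterministic pairing
        for first, second in zip(texts, texts[1:]):
            augmented.append((first + " " + second, label))
    return augmented

corpus = [("system shall be fast", "efficiency"),
          ("response under one second", "efficiency"),
          ("ui must be intuitive", "usability")]
augmented = sort_concat_augment(corpus)
```

Because both halves of each new sample come from the same class, no out-of-class vocabulary leaks in, which is the property the thesis highlights over generic augmentation such as EDA.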