
    Support Vector Machines in a real time tracking architecture

    The standard approach to tracking an object of interest in a video stream is to use an object detector, a classifier and a tracker in sequential order. This work investigates the use of Support Vector Machines (SVMs) as classifiers for real-time tracking systems, combining them with Kalman filter predictors. Support Vector Machines have proven successful in a variety of classification tasks, such as recognizing faces, cars and handwriting. However, their use has been hampered by the complexity and computational cost of the training and classification stages. In recent years, new methods and techniques for training and classifying Support Vector Machines have made their use in real-time applications possible. These methods have been explored and improved, resulting in a framework for fast prototyping and development of real-time tracking systems. New optimal and sub-optimal methods for parallel SVM training, based on biased and unbiased versions of the Sequential Minimal Optimization (SMO) algorithm, are presented; they provide a trade-off between time performance and accuracy. Time performance in the classification stage is significantly improved by reducing the number of support vectors with almost no loss in accuracy, and new methods that allow this reduction with different kernels are presented. The effectiveness of the approach is demonstrated on a face tracking problem in which the objective is to track the lips and eyes of a subject in a video stream in real time.
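
    As a rough illustration of the detector / classifier / predictor loop described above, the sketch below gates candidate windows with a Kalman filter prediction and lets an SVM accept the best-scoring one. It is only a minimal sketch under assumed interfaces (candidate_windows, features and a pre-trained svm are hypothetical stand-ins); it does not reproduce the parallel SMO training or support-vector reduction methods the thesis develops.

        # A minimal sketch, assuming numpy and scikit-learn; `candidate_windows`,
        # `features` and the pre-trained `svm` are hypothetical stand-ins, not the
        # thesis's framework. Windows are arrays (cx, cy, width, height).
        import numpy as np
        from sklearn.svm import SVC

        class ConstantVelocityKalman:
            """Minimal 2-D constant-velocity Kalman filter for the tracked region's centre."""
            def __init__(self, q=1e-2, r=1.0):
                self.x = np.zeros(4)                                   # state: [cx, cy, vx, vy]
                self.P = np.eye(4)
                self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0  # constant-velocity transition
                self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
                self.Q = q * np.eye(4)                                 # process noise
                self.R = r * np.eye(2)                                 # measurement noise

            def predict(self):
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                return self.x[:2].copy()                               # predicted centre

            def update(self, z):
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
                self.P = (np.eye(4) - K @ self.H) @ self.P

        def track(frames, candidate_windows, features, svm: SVC, gate=50.0):
            """Score candidate windows near the Kalman prediction with the SVM and
            feed the best accepted window back into the filter."""
            kf = ConstantVelocityKalman()
            for frame in frames:
                centre = kf.predict()
                near = [w for w in candidate_windows(frame)            # gating: keep candidates
                        if np.linalg.norm(w[:2] - centre) < gate]      # close to the prediction
                scored = [(svm.decision_function([features(frame, w)])[0], w) for w in near]
                if scored:
                    score, best = max(scored, key=lambda t: t[0])
                    if score > 0:                                      # accepted by the SVM classifier
                        kf.update(best[:2])
                yield centre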

    Support vector machines for image and electronic mail classification

    Support Vector Machines (SVMs) have demonstrated accuracy and efficiency in a variety of binary classification applications, including indoor/outdoor scene categorization of consumer photographs and distinguishing unsolicited commercial electronic mail from legitimate personal communications. This thesis examines a parallel implementation of the Sequential Minimal Optimization (SMO) method for training SVMs, which achieves multiprocessor speedup at the cost of a decrease in accuracy that depends on the data distribution and the number of processors. The SVM classification system was then applied to the image labeling and e-mail classification problems. A parallel implementation of the image classification system's color histogram, color coherence, and edge histogram feature extractors increased performance when using both non-caching and caching data distribution methods. The electronic mail classification application achieved an accuracy of 96.69% with a user-generated dictionary. An implementation of the electronic mail classifier as a Microsoft Outlook add-in provides immediate mail filtering capabilities to the average desktop user. While the parallel implementation of the SVM trainer was not supported for the classification applications, the parallel feature extractor improved image classification performance.
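
    As a toy counterpart to the dictionary-based e-mail classifier described above, the sketch below builds a bag-of-words representation restricted to a user-generated dictionary and trains a linear SVM on it. The messages and dictionary are invented for illustration only.

        # A minimal sketch, assuming scikit-learn; the toy messages and `user_dictionary`
        # below are invented for illustration, and the 96.69% accuracy reported above
        # refers to the thesis experiments, not to this example.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        user_dictionary = ["free", "winner", "meeting", "invoice", "viagra", "schedule"]

        emails = ["You are a winner, claim your free prize",
                  "Schedule the project meeting for Monday",
                  "Free viagra, limited offer",
                  "Please find the invoice attached"]
        labels = [1, 0, 1, 0]                                    # 1 = spam, 0 = legitimate

        clf = make_pipeline(
            CountVectorizer(vocabulary=user_dictionary),         # features restricted to the user dictionary
            SVC(kernel="linear"),                                # SMO-based SVM trainer
        )
        clf.fit(emails, labels)
        print(clf.predict(["Claim your free prize now"]))        # -> [1]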

    An ontology enhanced parallel SVM for scalable spam filter training

    Spam, in its many shapes and forms, continues to inflict increasing damage. Various approaches, including Support Vector Machine (SVM) techniques, have been proposed for spam filter training and classification. However, SVM training is a computationally intensive process. This paper presents a MapReduce-based parallel SVM algorithm for scalable spam filter training. By distributing, processing and optimizing subsets of the training data across multiple participating computer nodes, the parallel SVM reduces the training time significantly. Ontology semantics are employed to minimize the impact of accuracy degradation when distributing the training data among a number of SVM classifiers. Experimental results show that ontology-based augmentation improves the accuracy of the parallel SVM beyond that of the original sequential counterpart.
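
    The training pattern described above can be sketched as follows: each "map" task trains an SVM on one partition of the corpus and emits its support vectors, and the "reduce" step re-trains on their union. This is only an illustrative sketch of the idea, using a shared-memory pool as a stand-in for a real MapReduce cluster and omitting the ontology-based augmentation; it is not the paper's implementation.

        # A minimal sketch of the partition-train-merge pattern, assuming numpy,
        # scikit-learn and a shared-memory `multiprocessing` pool as a stand-in for a
        # real MapReduce cluster; the ontology-based augmentation is omitted.
        from multiprocessing import Pool
        import numpy as np
        from sklearn.svm import SVC

        def train_partition(args):
            """'Map' task: train an SVM on one partition and emit only its support vectors."""
            X_part, y_part = args
            svm = SVC(kernel="linear").fit(X_part, y_part)
            return X_part[svm.support_], y_part[svm.support_]

        def parallel_svm(X, y, n_partitions=4, seed=0):
            """X, y are numpy arrays; each random partition is assumed to contain both classes."""
            rng = np.random.default_rng(seed)
            parts = [(X[idx], y[idx])
                     for idx in np.array_split(rng.permutation(len(y)), n_partitions)]
            with Pool(n_partitions) as pool:
                results = pool.map(train_partition, parts)       # "map" phase
            X_sv = np.vstack([r[0] for r in results])            # "reduce" phase: merge support vectors
            y_sv = np.concatenate([r[1] for r in results])
            return SVC(kernel="linear").fit(X_sv, y_sv)          # final model on the merged support vectors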

    A Reduction of the Elastic Net to Support Vector Machines with an Application to GPU Computing

    The past years have witnessed many dedicated open-source projects that build and maintain implementations of Support Vector Machines (SVMs), parallelized for GPUs, multi-core CPUs and distributed systems. Up to this point, no comparable effort has been made to parallelize the Elastic Net, despite its popularity in many high-impact applications, including genetics, neuroscience and systems biology. The first contribution of this paper is theoretical: we establish a tight link between two seemingly different algorithms and prove that Elastic Net regression can be reduced to SVM classification with squared hinge loss. Our second contribution is a practical algorithm based on this reduction. The reduction enables us to leverage prior efforts in speeding up and parallelizing SVMs to obtain a highly optimized, parallel solver for the Elastic Net and the Lasso. With a simple wrapper, consisting of only 11 lines of MATLAB code, we obtain an Elastic Net implementation that naturally utilizes GPUs and multi-core CPUs. We demonstrate on twelve real-world data sets that our algorithm yields results identical to those of the popular (and highly optimized) glmnet implementation while being one or several orders of magnitude faster.
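
    For reference, the two objectives the reduction connects are the Elastic Net regression problem and the SVM with squared hinge loss (L2-SVM), in their standard forms below; the paper's precise mapping between the two is not reproduced here.

        % Elastic Net regression
        \min_{\beta \in \mathbb{R}^{d}} \; \tfrac{1}{2}\lVert y - X\beta \rVert_2^2
            + \lambda_1 \lVert \beta \rVert_1 + \tfrac{\lambda_2}{2} \lVert \beta \rVert_2^2

        % SVM with squared hinge loss (L2-SVM)
        \min_{w} \; \tfrac{1}{2}\lVert w \rVert_2^2
            + C \sum_{i=1}^{n} \max\bigl(0,\, 1 - y_i\, w^{\top} x_i \bigr)^2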

    Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training

    We consider the convex quadratic linearly constrained problem with bounded variables and a huge, dense Hessian matrix that arises in many applications, such as the training problem of bias Support Vector Machines. We propose a decomposition algorithmic scheme suitable for parallel implementation and prove global convergence under suitable conditions. Focusing on Support Vector Machine training, we outline how these assumptions can be satisfied in practice and suggest various specific implementations. Extensions of the theoretical results to general linearly constrained problems are provided. We include numerical results on Support Vector Machines with the aim of showing the viability and effectiveness of the proposed scheme.
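
    The decomposition idea, stripped of the paper's parallel scheme and convergence theory, can be sketched on the standard SVM dual: repeatedly pick a small working set, solve the low-dimensional subproblem while the remaining variables stay fixed, and iterate. The working-set selection and subproblem solver below (random selection, scipy's SLSQP) are simplifying assumptions, not the authors' method.

        # A minimal sketch, assuming numpy and scipy; Q is the dense matrix of the SVM
        # dual, Q[i, j] = y[i] * y[j] * K(x[i], x[j]).
        import numpy as np
        from scipy.optimize import minimize

        def decomposition_svm(Q, y, C, working_size=10, iters=50, seed=0):
            """Minimise 0.5*a@Q@a - sum(a) s.t. y@a = 0 and 0 <= a <= C by repeatedly
            optimising over a small working set B with the other variables fixed."""
            rng = np.random.default_rng(seed)
            n = len(y)
            a = np.zeros(n)                                  # feasible starting point
            for _ in range(iters):
                B = rng.choice(n, size=min(working_size, n), replace=False)
                N = np.setdiff1d(np.arange(n), B)
                lin = Q[np.ix_(B, N)] @ a[N] - 1.0           # linear term of the subproblem
                rhs = -y[N] @ a[N]                           # keeps y@a = 0 satisfied

                def sub_obj(aB):
                    return 0.5 * aB @ Q[np.ix_(B, B)] @ aB + lin @ aB

                res = minimize(sub_obj, a[B], method="SLSQP",
                               bounds=[(0.0, C)] * len(B),
                               constraints=[{"type": "eq",
                                             "fun": lambda aB: y[B] @ aB - rhs}])
                a[B] = res.x
            return a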