12 research outputs found

    Binarized support vector machines

    Get PDF
    The widely used Support Vector Machine (SVM) method has been shown to yield very good results in Supervised Classification problems. Other methods, such as Classification Trees, have become more popular among practitioners than SVM thanks to their interpretability, which is an important issue in Data Mining. In this work, we propose an SVM-based method that automatically detects the most important predictor variables and the role they play in the classifier. In particular, the proposed method is able to detect those values and intervals which are critical for the classification. The method involves the optimization of a Linear Programming problem with a large number of decision variables. The numerical experience reported shows that a rather direct use of the standard Column-Generation strategy leads to a classification method which, in terms of classification ability, is competitive against the standard linear SVM and Classification Trees. Moreover, the proposed method is robust, i.e., it is stable in the presence of outliers and invariant to changes of scale or measurement units of the predictor variables. When the complexity of the classifier is an important issue, a wrapper feature selection method is applied, yielding simpler, still competitive, classifiers.

    Keywords: Supervised classification, Binarization, Column generation, Support vector machines
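    As a rough illustration of the binarization idea (a sketch under stated assumptions, not the authors' LP formulation or their column-generation solver): cut each predictor into quantile intervals, encode one binary indicator per interval, and fit a sparse linear SVM; the nonzero weights then point at the intervals critical for classification. The dataset, bin count, and L1 penalty below are illustrative choices.

```python
# Minimal sketch of the binarization idea (not the authors' LP/column-generation
# method): one binary feature per value interval, then a sparse linear SVM so
# that surviving weights identify the critical intervals.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)  # illustrative dataset

# Interval indicators make the classifier invariant to changes of scale:
# only the ordering of values matters, not the measurement units.
binarizer = KBinsDiscretizer(n_bins=8, encode="onehot-dense", strategy="quantile")
Xb = binarizer.fit_transform(X)

# The L1 penalty stands in for the LP's sparsity pressure; nonzero weights
# mark the intervals that drive the classification.
svm = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=10000).fit(Xb, y)
print("intervals kept:", int(np.count_nonzero(svm.coef_)), "of", Xb.shape[1])
```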

    Visual Understanding via Multi-Feature Shared Learning with Global Consistency

    Full text link
    Image/video data is usually represented with multiple visual features, and fusing such multi-source information to establish the attributes of interest is a widely recognized strategy. Multi-feature visual recognition has recently received much attention in multimedia applications. This paper studies visual understanding via a newly proposed l_2-norm based multi-feature shared learning framework, which can simultaneously learn a global label matrix and multiple sub-classifiers from labeled multi-feature data. Additionally, a group graph manifold regularizer composed of Laplacian and Hessian graphs is proposed to better preserve the manifold structure of each feature, such that the label prediction power is much improved through semi-supervised learning with global label consistency. For convenience, we call the proposed approach the Global-Label-Consistent Classifier (GLCC). The merits of the proposed method include: 1) the manifold structure information of each feature is exploited in learning, resulting in a more faithful classification owing to the global label consistency; 2) a group graph manifold regularizer based on Laplacian and Hessian regularization is constructed; 3) an efficient alternating optimization method is introduced as a fast solver owing to the convex sub-problems. Experiments on several benchmark visual datasets for multimedia understanding, such as the 17-category Oxford Flower dataset, the challenging 101-category Caltech dataset, the YouTube & Consumer Videos dataset, and the large-scale NUS-WIDE dataset, demonstrate that the proposed approach compares favorably with state-of-the-art algorithms. An extensive experiment on deep convolutional activation features also shows the effectiveness of the proposed approach. The code is available at http://www.escience.cn/people/lei/index.html

    Comment: 13 pages, 6 figures; this paper is accepted for publication in IEEE Transactions on Multimedia
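    To make the manifold-regularization ingredient concrete, here is a minimal single-view sketch of Laplacian-smoothed label inference. This is a simplification under assumptions: the Hessian graph term, the multiple sub-classifiers, and GLCC's alternating solver are all omitted, and the function name and parameters are illustrative, not the paper's API.

```python
# Minimal sketch of manifold-regularized label inference (Laplacian term only;
# GLCC's Hessian graph and multi-feature alternating solver are omitted).
import numpy as np
from sklearn.neighbors import kneighbors_graph

def propagate_labels(X, Y, n_neighbors=10, alpha=1.0):
    """X: (n, d) features for ONE view; Y: (n, c) one-hot rows, zeros if unlabeled."""
    W = kneighbors_graph(X, n_neighbors, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T).toarray()            # symmetrize the kNN graph
    L = np.diag(W.sum(axis=1)) - W           # unnormalized graph Laplacian
    # Solve min_F ||F - Y||^2 + alpha * tr(F^T L F): labels vary smoothly
    # along the graph, so unlabeled points inherit labels from neighbors.
    F = np.linalg.solve(np.eye(len(Y)) + alpha * L, Y)
    return F.argmax(axis=1)
```

    In the multi-feature setting of the paper, one such graph is built per feature and the predictions are tied together through the shared global label matrix; the sketch above shows only the per-view smoothing step.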

    MKBoost: A framework of multiple kernel boosting

    Get PDF
    Ministry of Education, Singapore under its Academic Research Funding Tier 1, Tier

    A Linear-RBF Multikernel SVM to Classify Big Text Corpora

    Get PDF

    Fully corrective boosting with arbitrary loss and regularization

    Get PDF
    We propose a general framework for analyzing and developing fully corrective boosting-based classifiers. The framework accepts any convex objective function and allows any convex regularization term (for example, the l_p-norm, p ≥ 1). By placing the wide variety of existing fully corrective boosting-based classifiers on a common footing, and by considering the primal and dual problems together, the framework allows direct comparison between apparently disparate methods. By solving the primal rather than the dual, the framework is capable of generating efficient fully corrective boosting algorithms without recourse to sophisticated convex optimization processes. We show that a range of additional boosting-based algorithms can be incorporated into the framework despite not being fully corrective. Finally, we provide an empirical analysis of the performance of a variety of the most significant boosting-based classifiers on a few machine learning benchmark datasets.

    Chunhua Shen, Hanxi Li, Anton van den Hengel
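    A toy rendering of the "solve the primal" idea (my sketch, not the paper's algorithm or its choice of regularizers): after each weak learner is added, all ensemble weights are re-optimized against a convex loss, here logistic loss with an l_2 penalty, using an off-the-shelf solver. Labels are assumed to be in {-1, +1}, and the example-reweighting rule is an illustrative choice.

```python
# Sketch of fully corrective boosting by solving the primal: every round,
# refit ALL ensemble weights under a convex loss + convex regularizer.
import numpy as np
from scipy.optimize import minimize
from sklearn.tree import DecisionTreeClassifier

def fully_corrective_boost(X, y, rounds=20, reg=1e-2):
    """y in {-1, +1}. Returns the weak learners and their jointly refit weights."""
    n = len(y)
    sample_w = np.full(n, 1.0 / n)
    learners, H = [], np.empty((n, 0))
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=sample_w)
        H = np.column_stack([H, stump.predict(X)])
        learners.append(stump)

        # Fully corrective step: re-optimize every weight, not just the newest.
        def primal(w):  # logistic loss + l2 penalty (any convex pair would do)
            margins = y * (H @ w)
            return np.logaddexp(0.0, -margins).mean() + reg * np.dot(w, w)

        w = minimize(primal, np.zeros(H.shape[1]), method="L-BFGS-B").x
        # Reweight examples by the loss gradient magnitude to steer the next stump.
        sample_w = 1.0 / (1.0 + np.exp(y * (H @ w)))
        sample_w /= sample_w.sum()
    return learners, w
```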

    Totally corrective boosting algorithm and application to face recognition

    Get PDF
    Boosting is one of the most well-known learning methods for building highly accurate classifiers or regressors from a set of weak classifiers. Much effort has been devoted to understanding boosting algorithms, yet questions about why boosting succeeds remain open. In this thesis, we study boosting algorithms from a new perspective. We started our research by empirically comparing the LPBoost and AdaBoost algorithms. The results and the corresponding analysis show that, besides the minimum margin, which is directly and globally optimized in LPBoost, the margin distribution plays a more important role. Inspired by this observation, we theoretically prove that the Lagrange dual problems of AdaBoost, LogitBoost, and soft-margin LPBoost with generalized hinge loss are all entropy maximization problems. By looking at the dual problems of these boosting algorithms, we show that the success of boosting can be understood in terms of maintaining a better margin distribution by maximizing margins while at the same time controlling the margin variance. We further point out that AdaBoost approximately maximizes the average margin, rather than the minimum margin.

    The duality formulation also enables us to develop column-generation based optimization algorithms, which are totally corrective. The new algorithm, termed AdaBoost-CG, exhibits classification results almost identical to those of standard stage-wise additive boosting algorithms, but with much faster convergence. Therefore, fewer weak classifiers are needed to build the ensemble using our proposed optimization technique. The significance of the margin distribution motivates us to design a new column-generation based algorithm that directly maximizes the average margin while minimizing the margin variance. We term this novel method MDBoost and show its superiority over other boosting-like algorithms.

    Moreover, considering the primal and dual problems together leads to important new insights into the characteristics of boosting algorithms. We then propose a general framework that can be used to design new boosting algorithms. A wide variety of machine learning problems essentially minimize a regularized risk functional. We show that the proposed boosting framework, termed AnyBoostTc, can accommodate various loss functions and different regularizers in a totally corrective optimization fashion. A large body of totally corrective boosting algorithms can be solved very efficiently by solving the primal rather than the dual, with no need for sophisticated convex optimization solvers. We also demonstrate that some boosting algorithms, such as AdaBoost, can be interpreted in our framework even though their optimization is not totally corrective.

    We conclude our study by applying the totally corrective boosting algorithm to a long-standing computer vision problem: face recognition. Linear regression face recognizers, constrained by two categories of locality, are selected and combined within both the traditional and the totally corrective boosting frameworks. To our knowledge, this is the first time that linear-representation classifiers have been boosted for face recognition. The instance-based weak classifiers bring several advantages, which are theoretically or empirically demonstrated in our work. Benefiting from the robust weak learner and the advanced learning framework, our algorithms achieve the best reported recognition rates on face recognition benchmark datasets.
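    To illustrate the margin-distribution objective described above, here is a hypothetical direct solver that maximizes the average margin while penalizing the margin variance over a fixed pool of weak-learner outputs. This is an assumption-laden reading of the abstract: MDBoost instead grows the pool by column generation and uses its own QP machinery, and the trade-off parameter and solver choice here are arbitrary.

```python
# Toy rendering of a margin-distribution objective: maximize average margin,
# penalize margin variance, over simplex-constrained ensemble weights.
import numpy as np
from scipy.optimize import minimize

def md_weights(H, y, lam=1.0):
    """H: (n, T) weak-learner outputs in {-1, +1}; y: (n,) labels in {-1, +1}."""
    n, T = H.shape
    M = y[:, None] * H                       # per-example, per-learner margins

    def objective(w):                        # minimize -avg margin + variance term
        m = M @ w
        return -m.mean() + 0.5 * lam * m.var()

    cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}   # weights sum to 1
    res = minimize(objective, np.full(T, 1.0 / T),
                   bounds=[(0.0, None)] * T, constraints=cons, method="SLSQP")
    return res.x
```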