99 research outputs found
Asymmetric Totally-corrective Boosting for Real-time Object Detection
Real-time object detection is one of the core problems in computer vision.
The cascade boosting framework proposed by Viola and Jones has become the
standard for this problem. In this framework, the learning goal for each node
is asymmetric, which is required to achieve a high detection rate and a
moderate false positive rate. We develop new boosting algorithms to address
this asymmetric learning problem. We show that our methods explicitly optimize
asymmetric loss objectives in a totally corrective fashion. The methods are
totally corrective in the sense that the coefficients of all selected weak
classifiers are updated at each iteration. In contrast, conventional boosting
algorithms such as AdaBoost are stage-wise, in that only the current weak
classifier's coefficient is updated. At the heart of totally corrective boosting is the
column generation technique. Experiments on face detection show that our
methods outperform the state-of-the-art asymmetric boosting methods.
Comment: 14 pages, published in Asian Conf. Computer Vision 201
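To make the totally corrective scheme concrete, here is a minimal sketch of column-generation boosting. For simplicity it uses a symmetric exponential loss rather than the paper's asymmetric objectives, and the function names, decision-stump weak learners, and gradient-descent refit are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stump_predict(x, feat, thresh, sign):
    """Decision stump: sign * (+1 if x[feat] > thresh else -1)."""
    return sign * np.where(x[:, feat] > thresh, 1.0, -1.0)

def totally_corrective_boost(X, y, n_rounds=3, n_inner=200, lr=0.5):
    """Each round adds the stump with the largest weighted edge (column
    generation), then re-fits ALL coefficients jointly (totally
    corrective) by gradient descent on the exponential loss."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(n_rounds):
        # column generation: pick the stump maximizing the weighted edge
        best, best_edge = None, -np.inf
        for feat in range(X.shape[1]):
            for thresh in np.unique(X[:, feat]):
                for sign in (1.0, -1.0):
                    edge = np.sum(w * y * stump_predict(X, feat, thresh, sign))
                    if edge > best_edge:
                        best, best_edge = (feat, thresh, sign), edge
        stumps.append(best)
        alphas.append(0.0)
        # totally corrective refit of every coefficient so far
        H = np.column_stack([stump_predict(X, *s) for s in stumps])
        a = np.array(alphas)
        for _ in range(n_inner):
            margins = y * (H @ a)
            g = -(H * (y * np.exp(-margins))[:, None]).sum(axis=0) / n
            a -= lr * g
        alphas = list(a)
        w = np.exp(-y * (H @ a))   # re-weight examples for the next round
        w /= w.sum()
    return stumps, np.array(alphas)
```

In contrast, a stage-wise method would freeze all previous coefficients and only set the newest one.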
Totally Corrective Multiclass Boosting with Binary Weak Learners
In this work, we propose a new optimization framework for multiclass boosting
learning. In the literature, AdaBoost.MO and AdaBoost.ECC are the two
successful multiclass boosting algorithms, which can use binary weak learners.
We explicitly derive these two algorithms' Lagrange dual problems based on
their regularized loss functions. We show that the Lagrange dual formulations
enable us to design totally-corrective multiclass algorithms by using the
primal-dual optimization technique. Experiments on benchmark data sets suggest
that our multiclass boosting achieves generalization capability comparable to the
state of the art, while converging much faster than stage-wise gradient descent
boosting. In other words, the new totally corrective algorithms can maximize the
margin more aggressively.
Comment: 11 page
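To illustrate how binary weak learners handle a multiclass problem via output coding, here is a heavily simplified sketch in the spirit of AdaBoost.ECC: one binary stump is trained per column of a fixed code matrix and decoding picks the most-aligned codeword. The real algorithm runs boosting rounds with adaptive weights and code construction; all names, the fixed code matrix, and the single-stump-per-column choice are illustrative assumptions.

```python
import numpy as np

def train_stump(X, y, w):
    """Return the decision stump (feature, threshold, sign) with the
    smallest weighted 0/1 error on binary labels y."""
    best, best_err = None, np.inf
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for s in (1.0, -1.0):
                pred = s * np.where(X[:, f] > t, 1.0, -1.0)
                err = np.sum(w[pred != y])
                if err < best_err:
                    best, best_err = (f, t, s), err
    return best

def ecc_multiclass(X, labels, code):
    """Output-coding sketch: each column of the code matrix relabels
    the multiclass data as a binary problem for one weak learner."""
    stumps = []
    n = len(labels)
    for col in range(code.shape[1]):
        y = code[labels, col].astype(float)   # binary relabeling
        w = np.full(n, 1.0 / n)
        stumps.append(train_stump(X, y, w))
    return stumps

def ecc_predict(X, stumps, code):
    preds = np.column_stack(
        [s * np.where(X[:, f] > t, 1.0, -1.0) for f, t, s in stumps])
    # Hamming-style decoding: the class whose codeword agrees most wins
    return np.argmax(preds @ code.T, axis=1)
```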
Totally corrective boosting algorithm and application to face recognition
Boosting is one of the most well-known learning methods for building highly accurate classifiers or regressors from a set of weak classifiers. Much effort has been devoted to understanding boosting algorithms. However, questions about the success of boosting remain open.
In this thesis, we study boosting algorithms from a new perspective. We started our research by empirically comparing the LPBoost and AdaBoost algorithms. The results and the corresponding analysis show that, besides the minimum margin, which is directly and globally optimized in LPBoost, the margin distribution plays a more important role. Inspired by this observation, we theoretically prove that the Lagrange dual problems of AdaBoost, LogitBoost and soft-margin LPBoost with generalized hinge loss are all entropy maximization problems. By looking at the dual problems of these boosting algorithms, we show that the success of boosting can be understood in terms of maintaining a better margin distribution by maximizing margins and at the same time controlling the margin variance. We further point out that AdaBoost approximately maximizes the average margin, instead of the minimum margin. The duality formulation also enables us to develop column-generation based optimization algorithms, which are totally corrective. The new algorithm, termed AdaBoost-CG, exhibits almost identical classification results to those of standard stage-wise additive boosting algorithms, but with much faster convergence rates. Therefore, fewer weak classifiers are needed to build the ensemble using our proposed optimization technique.
The significance of the margin distribution motivates us to design a new column-generation based algorithm that directly maximizes the average margin while minimizing the margin variance. We term this novel method MDBoost and show its superiority over other boosting-like algorithms. Moreover, considering the primal and dual problems together leads to important new insights into the characteristics of boosting algorithms. We then propose a general framework that can be used to design new boosting algorithms. A wide variety of machine learning problems essentially minimize a regularized risk functional. We show that the proposed boosting framework, termed AnyBoostTc, can accommodate various loss functions and different regularizers in a totally corrective optimization manner. A large body of totally corrective boosting algorithms can be solved very efficiently, with no sophisticated convex optimization solvers needed, by solving the primal rather than the dual problem. We also demonstrate that some boosting algorithms, such as AdaBoost, can be interpreted in our framework, even though their optimization is not totally corrective.
We conclude our study by applying the totally corrective boosting algorithm to a long-standing computer vision problem: face recognition. Linear regression face recognizers, constrained by two categories of locality, are selected and combined within both the traditional and the totally corrective boosting frameworks. To our knowledge, this is the first time that linear-representation classifiers have been boosted for face recognition. The instance-based weak classifiers bring some advantages, which we demonstrate theoretically or empirically in our work. Benefiting from the robust weak learner and the advanced learning framework, our algorithms achieve the best reported recognition rates on face recognition benchmark datasets.
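The MDBoost idea of maximizing the average margin while controlling its variance can be written schematically as follows; the exact regularization constant and normalization are as given in the thesis, so this is a sketch rather than the precise objective:

```latex
\max_{\mathbf{w}\ge 0,\;\mathbf{1}^{\top}\mathbf{w}=1}\;
\frac{1}{n}\sum_{i=1}^{n}\rho_i
\;-\;\frac{\lambda}{2n}\sum_{i=1}^{n}\bigl(\rho_i-\bar{\rho}\bigr)^{2},
\qquad
\rho_i = y_i\sum_{j} w_j\, h_j(\mathbf{x}_i),
```

where $\rho_i$ is the margin of example $i$, $\bar{\rho}$ the average margin, $w_j$ the coefficient of weak classifier $h_j$, and $\lambda$ trades off average margin against margin variance.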
Fast and robust object detection using asymmetric totally-corrective boosting
Boosting based object detection has received significant attention recently. In this work, we propose totally-corrective asymmetric boosting algorithms for real-time object detection. Our algorithms differ from the Viola-Jones detection framework in two respects. Firstly, our boosting algorithms explicitly optimize asymmetric loss objectives, while AdaBoost, used by Viola and Jones, optimizes a symmetric loss. Secondly, by carefully deriving the Lagrange duals of the optimization problems, we design more efficient boosting algorithms in which the coefficients of the selected weak classifiers are updated in a totally-corrective fashion, in contrast to the stage-wise optimization commonly used by most boosting algorithms. Column generation is employed to solve the proposed optimization problems. Unlike conventional boosting, the proposed algorithms are able to de-select irrelevant weak classifiers in the ensemble while training a classification cascade. This results in improved detection performance as well as fewer weak classifiers in the learned strong classifier. Compared with the AsymBoost of Viola and Jones [1], our proposed asymmetric boosting is non-heuristic and the training procedure is much simpler. Experiments on face and pedestrian detection demonstrate that our methods achieve superior detection performance compared with some of the state-of-the-art object detectors.
Peng Wang, Chunhua Shen, Nick Barnes, and Hong Zheng
http://ieee-cis.org/pubs/tnn
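A minimal sketch of what "asymmetric loss" means in this setting: misclassified positives (missed detections) are penalized more heavily than misclassified negatives (false positives). The cost factor k, the exponential loss form, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def asym_exp_loss(y, scores, k=4.0):
    """Asymmetric exponential loss: an error on a positive example
    (missed detection) costs k times more than an error on a negative
    example (false positive). Illustrative only."""
    cost = np.where(y > 0, k, 1.0)
    return np.mean(cost * np.exp(-y * scores))

def asym_weights(y, scores, k=4.0):
    """Example weights induced by the asymmetric loss: misclassified
    positives receive disproportionately large weight, steering the
    weak-learner selection toward a high detection rate."""
    w = np.where(y > 0, k, 1.0) * np.exp(-y * scores)
    return w / w.sum()
```

This asymmetry is what lets each cascade node target a very high detection rate while tolerating a moderate false positive rate.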
On the Dual Formulation of Boosting Algorithms
We study boosting algorithms from a new perspective. We show that the
Lagrange dual problems of AdaBoost, LogitBoost and soft-margin LPBoost with
generalized hinge loss are all entropy maximization problems. By looking at the
dual problems of these boosting algorithms, we show that the success of
boosting algorithms can be understood in terms of maintaining a better margin
distribution by maximizing margins and at the same time controlling the margin
variance. We also theoretically prove that, approximately, AdaBoost maximizes
the average margin, instead of the minimum margin. The duality formulation also
enables us to develop column generation based optimization algorithms, which
are totally corrective. We show that they exhibit almost identical
classification results to those of standard stage-wise additive boosting
algorithms, but with much faster convergence rates. Therefore, fewer weak
classifiers are needed to build the ensemble using our proposed optimization
technique.
Comment: 16 pages. Published in IEEE Transactions on Pattern
Analysis and Machine Intelligence, 201
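Schematically, the entropy-maximization duals referred to above take the following form; the constant r and the exact constraints are a simplified sketch of the paper's derivation, not a verbatim reproduction:

```latex
\max_{\mathbf{w}}\; -\sum_{i=1}^{n} w_i \log w_i
\quad\text{s.t.}\quad
\sum_{i=1}^{n} w_i\, y_i\, h_j(\mathbf{x}_i) \le r \;\;\forall j,
\qquad
\sum_{i=1}^{n} w_i = 1,\; w_i \ge 0,
```

where $w_i$ is the weight on training example $i$ and each constraint bounds the edge of one weak classifier $h_j$: the dual seeks the most uniform (maximum-entropy) example weighting under which no weak classifier retains a large edge.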
Down syndrome detection using modified adaboost algorithm
In the human body, genetic codes are stored in genes. All of our inherited traits are associated with these genes, which are grouped into structures called chromosomes. In typical cases, each cell contains 23 pairs of chromosomes, with each parent contributing half of each pair. If a person has a partial or full extra copy of chromosome 21, the condition is called Down syndrome. It results in intellectual disability, reading impairment, developmental delay, and other medical abnormalities. There is no specific treatment for Down syndrome; thus, early detection and screening of this disability are the best strategies for prevention. In this work, Down syndrome is recognized from a set of facial expression images. A solid geometric descriptor is employed to extract facial features from the image set. An AdaBoost method is applied to assemble the required data sets and to perform the categorization. The extracted information is then used to train a neural network with the backpropagation algorithm. This work records that the presented model meets the requirement with 98.67% accuracy.
Pattern Recognition Using AdaBoost
This paper deals with the AdaBoost algorithm, which builds a strong classification function from a number of weak classifiers. We also introduce modifications of AdaBoost, namely Real AdaBoost, WaldBoost, FloatBoost and TCAcu, which improve some of the properties of the AdaBoost algorithm. We discuss some properties of features and weak classifiers, and show a class of tasks for which the AdaBoost algorithm is applicable. We describe the implementation of a library containing the mentioned methods and present some tests performed on the implemented library.
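The basic algorithm underlying all of the variants above can be sketched as a minimal discrete AdaBoost with decision stumps; this is an illustrative implementation, not the library described in the paper.

```python
import numpy as np

def adaboost(X, y, n_rounds=10):
    """Minimal discrete AdaBoost with decision stumps on labels in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        # pick the stump with the smallest weighted error
        best, best_err = None, np.inf
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                for s in (1.0, -1.0):
                    pred = s * np.where(X[:, f] > t, 1.0, -1.0)
                    err = np.sum(w[pred != y])
                    if err < best_err:
                        best, best_err = (f, t, s), err
        eps = max(best_err, 1e-10)
        if eps >= 0.5:          # no weak learner better than chance
            break
        alpha = 0.5 * np.log((1 - eps) / eps)
        f, t, s = best
        pred = s * np.where(X[:, f] > t, 1.0, -1.0)
        w *= np.exp(-alpha * y * pred)   # stage-wise re-weighting
        w /= w.sum()
        ensemble.append((alpha, best))
    return ensemble

def predict(X, ensemble):
    score = sum(a * s * np.where(X[:, f] > t, 1.0, -1.0)
                for a, (f, t, s) in ensemble)
    return np.sign(score)
```

Note that only the newest coefficient alpha is set each round; earlier coefficients are never revisited, which is exactly the stage-wise behavior the totally corrective variants above replace.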
Computer Graphics and Video Features for Speaker Recognition
We describe a non-traditional method for speaker recognition that uses features and algorithms employed mainly in computer vision. The necessary theoretical background of computer recognition is summarized first. The Boosted Binary Features (BBF), an already proposed method with roots in computer vision, are described and explored as an application of graphical features to speaker recognition. This method is evaluated on the standard speech databases TIMIT and NIST SRE 2010. Experimental results are summarized and compared with standard methods. Possible directions for future work are proposed at the end.