
    Adaptive Conjoint Wavelet-Support Vector Classifiers

Combined wavelet-large margin classifiers succeed in solving difficult signal classification problems where a large margin classifier alone, such as the Support Vector Machine, may fail. This thesis investigates the problem of jointly designing both classifier stages to achieve the most effective classifier architecture. In particular, the wavelet features should be adapted to the Support Vector classifier and to the specific classification problem. Three different approaches towards this goal are considered.

First, classifier performance depends strongly on the wavelet, or filter, used for feature extraction. To choose this wavelet optimally with respect to the subsequent Support Vector classification, appropriate criteria may be used. The radius-margin Support Vector Machine error bound is proven to be computable by solving two standard Support Vector problems. Computationally more efficient criteria may suffice for filter adaptation; several such criteria, rating feature sets obtained from various orthogonal filter banks, are examined for classification by a Support Vector Machine. An adaptive search algorithm is devised that, once the criterion is fixed, efficiently finds the optimal wavelet filter.

Second, to extract shift-invariant wavelet features, Kingsbury's dual-tree complex wavelet transform is examined. The dual-tree filter bank construction leads to wavelets with vanishing negative frequency parts. An enhanced transform is established in the frequency domain for standard wavelet filters, without special filter design. The translation and rotational invariance is improved compared with the common wavelet transform, as shown for various standard wavelet filters, so the framework applies well to adapted signal classification.

Third, wavelet adaptation for signal classification is a special case of feature selection.
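As a rough illustration of criterion-driven filter selection, the sketch below ranks two orthogonal wavelet filters by a simple Fisher-style separability score on one-level wavelet energy features. The filter set, the synthetic two-class signals, and the score itself are illustrative assumptions, not the specific criteria developed in the thesis.

```python
import numpy as np

# Candidate orthogonal lowpass filters (standard Haar and Daubechies-2
# coefficients); the choice of candidates here is purely illustrative.
FILTERS = {
    "haar": np.array([1.0, 1.0]) / np.sqrt(2.0),
    "db2": np.array([0.4829629, 0.8365163, 0.2241439, -0.1294095]),
}

def wavelet_energies(x, h):
    """One-level analysis: lowpass h, highpass g via alternating flip."""
    g = h[::-1] * (-1.0) ** np.arange(len(h))
    approx = np.convolve(x, h)[::2]   # downsampled lowpass band
    detail = np.convolve(x, g)[::2]   # downsampled highpass band
    return np.array([np.sum(approx ** 2), np.sum(detail ** 2)])

def fisher_score(F0, F1):
    """Between-class over within-class scatter, summed over features."""
    m0, m1 = F0.mean(0), F1.mean(0)
    v0, v1 = F0.var(0), F1.var(0)
    return float(np.sum((m0 - m1) ** 2 / (v0 + v1 + 1e-12)))

rng = np.random.default_rng(0)
t = np.arange(64)
# Two synthetic classes: slowly vs rapidly oscillating noisy signals.
class0 = [np.sin(2 * np.pi * t / 32) + 0.1 * rng.standard_normal(64)
          for _ in range(20)]
class1 = [np.sin(2 * np.pi * t / 4) + 0.1 * rng.standard_normal(64)
          for _ in range(20)]

scores = {}
for name, h in FILTERS.items():
    F0 = np.array([wavelet_energies(x, h) for x in class0])
    F1 = np.array([wavelet_energies(x, h) for x in class1])
    scores[name] = fisher_score(F0, F1)

best = max(scores, key=scores.get)
print(best, scores)
```

In the same spirit as the adaptive search described above, a real implementation would evaluate the chosen criterion over a structured space of filters rather than a fixed candidate list.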
Feature selection is an important combinatorial optimisation problem in the context of supervised pattern classification. Four novel continuous feature selection approaches that directly optimise classifier performance are presented; in particular, they cover linear and nonlinear Support Vector classifiers. The key ideas of the approaches are additional regularisation and embedded nonlinear feature selection. The resulting optimisation problems are solved by difference of convex functions (DC) programming, a general framework for non-convex continuous optimisation. This framework, which may also be of interest for other applications, solves the problems robustly and hence yields more powerful feature selection methods.
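The DC programming idea can be sketched on a toy problem. The DC algorithm (DCA) writes the objective as f(x) = g(x) − h(x) with g, h convex, then repeatedly linearises h at the current iterate and solves the resulting convex subproblem. The toy objective below (g(x) = x⁴, h(x) = x²) is an assumption for illustration only; it is unrelated to the thesis's actual feature selection objectives.

```python
# DCA sketch for f(x) = x**4 - x**2, a difference of the convex
# functions g(x) = x**4 and h(x) = x**2.  Each iteration linearises
# h at x_k and solves the convex subproblem
#     min_x  x**4 - h'(x_k) * x,   h'(x_k) = 2 * x_k,
# whose stationarity condition 4*x**3 = 2*x_k gives a closed form.

def dca_step(x):
    # argmin of x**4 - (2 * x) * x_new  =>  x_new = (x / 2) ** (1/3)
    return (x / 2.0) ** (1.0 / 3.0)

x = 1.0
for _ in range(50):
    x = dca_step(x)

print(x)  # converges to a critical point of f, here 1 / sqrt(2)
```

Each subproblem is convex even though f is not, which is what makes the framework robust for the non-convex feature selection problems above.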