    Self-tuned Visual Subclass Learning with Shared Samples: An Incremental Approach

    Computer vision tasks are traditionally defined and evaluated using semantic categories. However, it is well known in the field that a semantic class does not necessarily correspond to a single visual class (e.g. the inside and the outside of a car). Furthermore, many of the feasible learning techniques at hand cannot model a visual class that appears consistent to the human eye. These problems have motivated the use of: 1) unsupervised or supervised clustering as a preprocessing step to identify the visual subclasses used in a mixture-of-experts learning regime; 2) latent-variable models, such as the part-based model of Felzenszwalb et al., in which the mixture assignment is optimized during learning; 3) highly non-linear classifiers, which are inherently capable of modelling a multi-modal input space but are inefficient at test time. In this work, we promote an incremental view of the recognition of semantic classes with varied appearances. We propose an optimization technique that incrementally finds maximal visual subclasses in a regularized risk minimization framework. Our approach unifies the clustering and classification steps in a single algorithm. Its importance lies in the fact that, unlike pre-processing clustering methods, it does not need to know a priori the number of clusters, the representation, or the similarity measures. Following this approach, we show significant results both qualitatively and quantitatively. We show that the visual subclasses exhibit a long-tail distribution. Finally, we show that state-of-the-art object detection methods (e.g. DPM) are unable to use the tail of this distribution, which comprises 50% of the training samples. In fact, we show that DPM performance increases slightly on average when this half of the data is removed.
    Comment: Updated ICCV 2013 submission
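
    A minimal sketch of the kind of incremental latent-subclass loop the abstract describes (not the authors' released code): one linear classifier per visual subclass, positives reassigned to the subclass that scores them highest, and a new subclass spawned for positives that no current subclass explains well. The function name, thresholds and the choice of LinearSVC are illustrative assumptions.

```python
# Hedged sketch of incremental visual-subclass learning; all names and
# hyperparameters are assumptions, not the paper's exact algorithm.
import numpy as np
from sklearn.svm import LinearSVC

def incremental_subclass_learning(X_pos, X_neg, spawn_threshold=0.0,
                                  max_subclasses=10, n_iters=5):
    """Greedy latent-subclass training: one LinearSVC per visual subclass."""
    X_all = np.vstack([X_pos, X_neg])
    y_all = np.r_[np.ones(len(X_pos)), -np.ones(len(X_neg))]
    subclasses = [LinearSVC(C=1.0).fit(X_all, y_all)]   # start with a single subclass

    for _ in range(n_iters):
        # Score every positive sample under every current subclass model.
        scores = np.stack([clf.decision_function(X_pos) for clf in subclasses], axis=1)
        assign = scores.argmax(axis=1)
        best = scores.max(axis=1)

        # Positives that no subclass explains well seed a new subclass (incremental growth).
        unexplained = best < spawn_threshold
        if unexplained.any() and len(subclasses) < max_subclasses:
            assign[unexplained] = len(subclasses)
            subclasses.append(LinearSVC(C=1.0))

        # Retrain each subclass on its assigned positives against all negatives.
        for k, clf in enumerate(subclasses):
            pos_k = X_pos[assign == k]
            if len(pos_k) == 0:
                continue
            Xk = np.vstack([pos_k, X_neg])
            yk = np.r_[np.ones(len(pos_k)), -np.ones(len(X_neg))]
            clf.fit(Xk, yk)
    return subclasses
```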

    Boosting Classifiers for Drifting Concepts

    This paper proposes a boosting-like method for training a classifier ensemble from data streams. The method adapts naturally to concept drift and makes it possible to quantify the drift in terms of its base learners. The algorithm is shown empirically to outperform learning algorithms that ignore concept drift. It performs no worse than advanced adaptive time-window and example-selection strategies that store all of the data and are therefore not suited to mining massive streams.
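
    A hedged sketch of a boosting-like streaming ensemble in the spirit of the abstract (an assumption, not the paper's exact algorithm): each incoming chunk is reweighted by the current ensemble's errors before a new base learner is trained, and member weights are refreshed on the newest chunk so the ensemble tracks concept drift. Class and parameter names are illustrative.

```python
# Hedged sketch: chunk-based, boosting-like ensemble for drifting streams.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class DriftBoostEnsemble:
    def __init__(self, max_members=10):
        self.max_members = max_members
        self.members, self.weights = [], []

    def _ensemble_predict(self, X):
        votes = sum(w * clf.predict(X) for clf, w in zip(self.members, self.weights))
        return np.sign(votes)

    def partial_fit(self, X_chunk, y_chunk):
        """Process one chunk of the stream; labels are assumed to be in {-1, +1}."""
        # Boosting-style reweighting: emphasize examples the current ensemble gets wrong.
        if self.members:
            wrong = self._ensemble_predict(X_chunk) != y_chunk
            sample_weight = np.where(wrong, 2.0, 1.0)
        else:
            sample_weight = np.ones(len(y_chunk))

        clf = DecisionTreeClassifier(max_depth=3)
        clf.fit(X_chunk, y_chunk, sample_weight=sample_weight)
        self.members.append(clf)

        # Re-weight every member by accuracy on the newest chunk, so recent concepts dominate.
        self.weights = [max(1e-3, (m.predict(X_chunk) == y_chunk).mean())
                        for m in self.members]

        # Drop the weakest member once the ensemble is full (forgets the drifted concept).
        if len(self.members) > self.max_members:
            drop = int(np.argmin(self.weights))
            self.members.pop(drop)
            self.weights.pop(drop)

    def predict(self, X):
        return self._ensemble_predict(X)
```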

    Classification of Human Ventricular Arrhythmia in High Dimensional Representation Spaces

    We studied the classification of human ECGs labelled as normal sinus rhythm, ventricular fibrillation and ventricular tachycardia by means of support vector machines in different representation spaces, using different observation lengths. ECG waveform segments of duration 0.5-4 s, their Fourier magnitude spectra, and lower-dimensional projections of the Fourier magnitude spectra were used for classification. All of the representations considered were of much higher dimension than in published studies. Classification accuracy improved with segment duration up to 2 s, with 4 s providing little further improvement. We found that it is possible to discriminate between ventricular tachycardia and ventricular fibrillation with the present approach using much shorter runs of ECG (2 s, with a minimum per-class sensitivity of 86%) than previously imagined. Ensembles of classifiers acting on 1 s segments taken over 5 s observation windows gave the best results, with detection sensitivities exceeding 93% for all classes.
    Comment: 9 pages, 2 tables, 5 figures
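
    An illustrative sketch of the kind of pipeline the abstract describes (not the study's actual code): fixed-length ECG segments are represented by their Fourier magnitude spectra, optionally projected to a lower dimension with PCA, and classified with an SVM. The sampling rate, segment length, label names and hyperparameters are assumptions.

```python
# Hedged sketch: ECG rhythm classification from Fourier magnitude spectra with an SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 250              # assumed sampling rate (Hz)
SEGMENT_SECONDS = 2   # segment duration the abstract found sufficient

def fourier_magnitude_features(segments):
    """segments: array of shape (n_segments, FS * SEGMENT_SECONDS) of raw ECG samples."""
    return np.abs(np.fft.rfft(segments, axis=1))   # one-sided magnitude spectrum

def build_classifier(n_components=50):
    # Standardize the spectra, project with PCA to a lower-dimensional space, feed an RBF SVM.
    return make_pipeline(StandardScaler(), PCA(n_components=n_components),
                         SVC(kernel="rbf", C=10.0))

# Usage sketch: X_raw holds 2 s ECG segments, y holds labels such as
# "NSR", "VT", "VF" (sinus rhythm, ventricular tachycardia, fibrillation).
# X = fourier_magnitude_features(X_raw)
# clf = build_classifier().fit(X, y)
# predictions = clf.predict(fourier_magnitude_features(new_segments))
```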