
    Incremental Training of a Detector Using Online Sparse Eigen-decomposition

    The ability to efficiently and accurately detect objects plays a crucial role in many computer vision tasks. Recently, offline object detectors have shown tremendous success. However, one major drawback of offline techniques is that a complete set of training data has to be collected beforehand. In addition, once learned, an offline detector cannot make use of newly arriving data. To alleviate these drawbacks, online learning has been adopted with the following objectives: (1) the technique should be computationally and storage efficient; (2) the updated classifier must maintain its high classification accuracy. In this paper, we propose an effective and efficient framework for learning an adaptive online greedy sparse linear discriminant analysis (GSLDA) model. Unlike many existing online boosting detectors, which usually apply exponential or logistic loss, our online algorithm makes use of LDA's learning criterion, which not only aims to maximize the class-separation criterion but also incorporates the asymmetrical property of training data distributions. We provide a better alternative to online boosting algorithms in the context of training a visual object detector. We demonstrate the robustness and efficiency of our methods on handwritten digit and face data sets. Our results confirm that object detection tasks benefit significantly when trained in an online manner. Comment: 14 pages
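    To make the class-separation criterion mentioned above concrete, the following is a minimal batch sketch of greedy feature selection driven by the Fisher/LDA criterion tr(S_w^{-1} S_b). It is not the paper's online sparse eigen-decomposition and omits its asymmetric weighting; the fisher_score helper, the ridge term, and the n_select budget are illustrative assumptions.

```python
# Illustrative sketch only: batch greedy feature selection scored by the
# Fisher/LDA class-separation criterion (between-class vs. within-class scatter).
# Not the paper's online sparse eigen-decomposition; names are hypothetical.
import numpy as np

def fisher_score(X, y, idx):
    """Fisher criterion tr(S_w^-1 S_b) restricted to the feature subset idx.
    X: 2-D array (n_samples, n_features), y: class labels."""
    y = np.asarray(y)
    Xs = X[:, idx]
    mean_all = Xs.mean(axis=0)
    S_w = np.zeros((len(idx), len(idx)))
    S_b = np.zeros((len(idx), len(idx)))
    for c in np.unique(y):
        Xc = Xs[y == c]
        mc = Xc.mean(axis=0)
        S_w += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        S_b += len(Xc) * (diff @ diff.T)
    # small ridge keeps S_w invertible when a subset is (near-)degenerate
    return np.trace(np.linalg.solve(S_w + 1e-6 * np.eye(len(idx)), S_b))

def greedy_sparse_lda(X, y, n_select=10):
    """Greedily add the feature whose inclusion most improves class separation."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(min(n_select, X.shape[1])):
        best = max(remaining, key=lambda j: fisher_score(X, y, selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected
```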

    A Robust Online Method for Face Recognition under Illumination Invariant Conditions

    In the case of incremental inputs to an online face recognition system with illumination-invariant face samples, the classifier should not only maximize the class-separation criterion but also incorporate the asymmetrical property of the training data distributions. In this paper we alleviate this problem with an incremental learning algorithm that effectively adjusts a boosted strong classifier with domain-partitioning weak hypotheses to online samples, adopting a novel approach to efficient estimation of the training losses received from offline samples. An illumination-invariant face representation is obtained by extracting local binary pattern (LBP) features from NIR images, and the AdaBoost procedure is used to learn a powerful face recognition engine based on this invariant representation. We use incremental linear discriminant analysis (ILDA) for the sparse case, together with an active near-infrared (NIR) imaging system that is able to produce face images of good quality regardless of the visible light in the environment, so that accuracy is maintained under changes in environmental illumination. The experiments show convincing results of our incremental method on challenging face detection under extreme illumination.
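    The abstract names LBP features over NIR images as the illumination-invariant representation. Below is a minimal sketch of the basic 8-neighbour LBP code and its histogram, assuming a plain 2-D grayscale NumPy array as input; practical systems typically use circular or uniform LBP computed over image blocks with concatenated histograms, and the function names here are hypothetical.

```python
# Minimal sketch of the basic 8-neighbour local binary pattern (LBP) descriptor.
# img: 2-D NumPy array (grayscale); names and parameters are illustrative only.
import numpy as np

def lbp_codes(img):
    """Per-pixel 8-bit LBP code: compare each interior pixel with its 8 neighbours."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= ((n >= c).astype(np.uint8) << bit)   # set one bit per neighbour
    return codes

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes, usable as an illumination-robust feature."""
    hist, _ = np.histogram(lbp_codes(img), bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)
```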

    Incremental and Regularized Linear Discriminant Analysis

    Ph.D. (Doctor of Philosophy)

    Hierarchical Classification in High Dimensional, Numerous Class Cases

    As progress in new sensor technology continues, increasingly high resolution imaging sensors are being developed. HIRIS, the High Resolution Imaging Spectrometer, for example, will gather data simultaneously in 102 spectral bands in the 0.4 - 2.5 micrometer wavelength region at 30 m spatial resolution. AVIRIS, the Airborne Visible and Infrared Imaging Spectrometer, covers the 0.4 - 2.5 micrometer region in 224 spectral bands. These sensors give more detailed and complex data for each picture element and greatly increase the dimensionality of data over past systems. In applying pattern recognition methods to remote sensing problems, an inherent limitation is that there is almost always only a small number of training samples with which to design the classifier. Both the growth in dimensionality and the growth in the number of classes are likely to aggravate the already significant limitation on training samples. Thus ways must be found for future data analysis which can perform effectively in the face of large numbers of classes without unduly aggravating the limitations on training. A valid list of classes for remote sensing data must satisfy two requirements: each class must be of informational value (i.e., useful in a pragmatic sense), and the classes must be spectrally or otherwise separable (i.e., distinguishable based on the available data). Therefore, a means to simultaneously reconcile a property of the data (being separable) and a property of the application (informational value) is important in developing the new approach to classifier design. In this work we propose decision tree classifiers which have the potential to be more efficient and accurate in this situation of high dimensionality and large numbers of classes. In particular, we discuss three methods for designing a decision tree classifier: a top-down approach, a bottom-up approach, and a hybrid approach. Also, remote sensing systems which perform pattern recognition tasks on high dimensional data with small training sets require efficient methods for feature extraction and for predicting the optimal number of features to achieve minimum classification error. Three feature extraction techniques are implemented: canonical and extended canonical techniques are mainly dependent upon the mean difference between two classes, while an autocorrelation technique is dependent upon the correlation differences. The mathematical relationship between sample size, dimensionality, and risk value is derived. It is shown that the incremental error is simultaneously affected by two factors, dimensionality and separability. For predicting the optimal number of features, it is concluded that in a transformed coordinate space it is best to use only the single best feature when only small numbers of samples are available. Empirical results indicate that a reasonable sample size is six to ten times the dimensionality.
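    As a hedged illustration of the bottom-up design idea mentioned above, the sketch below builds a binary tree over classes by repeatedly merging the two classes whose mean signatures are closest. The Euclidean distance between class means stands in for a proper separability measure (such as divergence or Bhattacharyya distance), and the function name and toy class list are hypothetical, not taken from the paper.

```python
# Hedged illustration of a bottom-up class decision tree: repeatedly merge the
# two least separable classes into an internal node, yielding a binary tree.
# Mean-vector distance is a stand-in for a true separability measure.
import numpy as np
from itertools import combinations

def bottom_up_class_tree(class_means):
    """class_means: dict {class_label: 1-D mean feature vector}."""
    nodes = {label: label for label in class_means}          # leaves: key == value
    means = {label: np.asarray(m, float) for label, m in class_means.items()}
    while len(nodes) > 1:
        # find the pair of current nodes with the smallest mean distance
        a, b = min(combinations(nodes, 2),
                   key=lambda p: np.linalg.norm(means[p[0]] - means[p[1]]))
        merged = (nodes.pop(a), nodes.pop(b))                 # new internal node
        nodes[merged] = merged
        means[merged] = (means.pop(a) + means.pop(b)) / 2.0   # unweighted merge
    return next(iter(nodes))                                  # nested tuple = tree

# Example: four hypothetical spectral classes with 2-band mean signatures
tree = bottom_up_class_tree({"water": [0.1, 0.2], "soil": [0.6, 0.5],
                             "corn": [0.55, 0.8], "wheat": [0.5, 0.75]})
print(tree)
```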

    Separability-Oriented Subclass Discriminant Analysis
