
    Incremental Training of a Detector Using Online Sparse Eigen-decomposition

    The ability to efficiently and accurately detect objects plays a crucial role in many computer vision tasks. Recently, offline object detectors have shown tremendous success. However, one major drawback of offline techniques is that a complete set of training data has to be collected beforehand. In addition, once learned, an offline detector cannot make use of newly arriving data. To alleviate these drawbacks, online learning has been adopted with the following objectives: (1) the technique should be computationally and storage efficient; (2) the updated classifier must maintain its high classification accuracy. In this paper, we propose an effective and efficient framework for learning an adaptive online greedy sparse linear discriminant analysis (GSLDA) model. Unlike many existing online boosting detectors, which usually apply an exponential or logistic loss, our online algorithm uses LDA's learning criterion, which not only maximizes the class-separation criterion but also incorporates the asymmetry of the training data distributions. We provide a better alternative to online boosting algorithms in the context of training a visual object detector. We demonstrate the robustness and efficiency of our methods on handwritten digit and face data sets. Our results confirm that object detection tasks benefit significantly when trained in an online manner. Comment: 14 pages
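
    As a loose illustration of the online-update idea only (not the authors' GSLDA implementation; the binary labels, Welford-style statistics and per-feature Fisher scoring below are assumptions), here is a sketch of maintaining per-class statistics from a stream and ranking features by a class-separation criterion that naturally reflects class imbalance:

```python
# Minimal sketch (not the authors' GSLDA): maintain per-class running means
# and variances online, then score features by a Fisher-style criterion.
import numpy as np

class OnlineLDAStats:
    def __init__(self, n_features):
        self.mean = {0: np.zeros(n_features), 1: np.zeros(n_features)}
        self.m2 = {0: np.zeros(n_features), 1: np.zeros(n_features)}  # sums of squared deviations
        self.count = {0: 0, 1: 0}

    def update(self, x, y):
        """Welford-style online update of the class mean and variance."""
        self.count[y] += 1
        delta = x - self.mean[y]
        self.mean[y] += delta / self.count[y]
        self.m2[y] += delta * (x - self.mean[y])

    def fisher_scores(self):
        """Per-feature ratio of between-class to within-class variance."""
        var0 = self.m2[0] / max(self.count[0] - 1, 1)
        var1 = self.m2[1] / max(self.count[1] - 1, 1)
        between = (self.mean[0] - self.mean[1]) ** 2
        return between / (var0 + var1 + 1e-12)

# usage: stream samples one at a time, then rank features greedily
stats = OnlineLDAStats(n_features=64)
rng = np.random.default_rng(0)
for _ in range(1000):
    y = int(rng.random() < 0.1)            # asymmetric class distribution
    x = rng.normal(loc=0.5 * y, size=64)
    stats.update(x, y)
print(np.argsort(stats.fisher_scores())[::-1][:5])  # top-5 discriminative features
```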

    Incremental Linear Discriminant Analysis for Classification of Data Streams

    This paper presents a constructive method for deriving an updated discriminant eigenspace for classification when bursts of data containing new classes are added to an initial discriminant eigenspace in random chunks. We propose an incremental linear discriminant analysis (ILDA) in two forms: a sequential ILDA and a chunk ILDA. In experiments, we have tested ILDA on datasets with a small number of classes and low-dimensional features, as well as datasets with a large number of classes and high-dimensional features. We have compared the proposed ILDA against traditional batch LDA in terms of discriminability, execution time and memory usage as the volume of added data increases. The results show that the proposed ILDA can effectively evolve a discriminant eigenspace over a fast, large data stream and extract features with superior discriminability for classification compared with other methods. © 2005 IEEE
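
    A minimal sketch of the chunk-wise idea, assuming per-class sufficient statistics and a pseudo-inverse-based eigen-solution rather than the paper's exact update rules (all names below are illustrative):

```python
# Sketch of a chunk-style incremental LDA: keep per-class sufficient
# statistics, fold in each new chunk (which may introduce new classes),
# and re-derive the discriminant eigenspace from the updated scatters.
import numpy as np

class ChunkILDA:
    def __init__(self, n_features):
        self.d = n_features
        self.sums = {}      # class label -> sum of samples
        self.sqsums = {}    # class label -> sum of outer products
        self.counts = {}

    def partial_fit(self, X, y):
        """Absorb a chunk of samples; chunks may contain unseen classes."""
        for c in np.unique(y):
            Xc = X[y == c]
            self.sums[c] = self.sums.get(c, np.zeros(self.d)) + Xc.sum(axis=0)
            self.sqsums[c] = self.sqsums.get(c, np.zeros((self.d, self.d))) + Xc.T @ Xc
            self.counts[c] = self.counts.get(c, 0) + len(Xc)

    def eigenspace(self, n_components=2, reg=1e-6):
        """Recompute Sw, Sb from the statistics and return the top directions."""
        n = sum(self.counts.values())
        global_mean = sum(self.sums.values()) / n
        Sw = np.zeros((self.d, self.d))
        Sb = np.zeros((self.d, self.d))
        for c, nc in self.counts.items():
            mc = self.sums[c] / nc
            Sw += self.sqsums[c] - nc * np.outer(mc, mc)
            diff = mc - global_mean
            Sb += nc * np.outer(diff, diff)
        evals, evecs = np.linalg.eig(np.linalg.pinv(Sw + reg * np.eye(self.d)) @ Sb)
        order = np.argsort(-evals.real)[:n_components]
        return evecs[:, order].real

# usage: absorb chunks as they arrive, then extract the eigenspace
ilda = ChunkILDA(n_features=16)
rng = np.random.default_rng(0)
for chunk in range(5):
    X = rng.normal(size=(30, 16))
    y = rng.integers(0, 3 + chunk, size=30)   # later chunks bring new classes
    ilda.partial_fit(X, y)
print(ilda.eigenspace(n_components=2).shape)  # (16, 2)
```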

    Online Person Identification based on Multitask Learning

    In the digital world, everything is digitized and data are generated continuously over time. To deal with this situation, incremental learning plays an important role, and one important application that needs it is person identification. At the same time, passwords and codes are no longer the only way to prevent unauthorized access to information, and they tend to be forgotten. Therefore, biometric identification systems have been introduced to address these problems. However, recognition based on a single biometric may not be effective; thus, multitask learning is needed. To this end, incremental learning is applied to person identification based on multitask learning. Since the complete data cannot be collected at one time, online learning is adopted to update the system accordingly. Linear Discriminant Analysis (LDA) is used to create a feature space, while Incremental LDA (ILDA) is adopted to update it. Through multitask learning, not only face images but also fingerprint images are trained in order to improve performance. The performance of the system is evaluated using 50 datasets that include both male and female subjects. Experimental results demonstrate that the learning time of ILDA is faster than that of LDA. The learning accuracy, evaluated with K-Nearest Neighbor (KNN), exceeds 80% for most of the simulation results. In the future, the system could be improved by using better sensors for all biometrics and by improving incremental feature extraction to deal with other online learning problems.
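
    For illustration only, a sketch of an LDA-plus-KNN identification pipeline on fused face and fingerprint features, using synthetic stand-in data and scikit-learn rather than the paper's system:

```python
# Illustrative sketch (not the paper's system): fuse face and fingerprint
# feature vectors, project them with LDA, and classify identities with KNN.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, n_per_subject = 50, 10
labels = np.repeat(np.arange(n_subjects), n_per_subject)
face = rng.normal(labels[:, None] * 0.05, 1.0, size=(len(labels), 128))   # stand-in face features
finger = rng.normal(labels[:, None] * 0.05, 1.0, size=(len(labels), 64))  # stand-in fingerprint features
X = np.hstack([face, finger])                                             # simple feature-level fusion

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          stratify=labels, random_state=0)
lda = LinearDiscriminantAnalysis(n_components=n_subjects - 1)
knn = KNeighborsClassifier(n_neighbors=3).fit(lda.fit_transform(X_tr, y_tr), y_tr)
print("KNN accuracy on LDA features:", knn.score(lda.transform(X_te), y_te))
```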

    Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid

    Deep neural networks have been widely adopted in recent years, exhibiting impressive performance in several application domains. It has, however, been shown that they can be fooled by adversarial examples, i.e., images altered by barely perceivable adversarial noise carefully crafted to mislead classification. In this work, we aim to evaluate the extent to which robot-vision systems embodying deep-learning algorithms are vulnerable to adversarial examples, and propose a computationally efficient countermeasure to mitigate this threat, based on rejecting classification of anomalous inputs. We then provide a clearer understanding of the safety properties of deep networks through an intuitive empirical analysis, showing that the mapping learned by such networks essentially violates the smoothness assumption of learning algorithms. We finally discuss the main limitations of this work, including the creation of real-world adversarial examples, and sketch promising research directions. Comment: Accepted for publication at the ICCV 2017 Workshop on Vision in Practice on Autonomous Robots (ViPAR)
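
    A minimal sketch of a reject-option classifier in the spirit of refusing to classify anomalous inputs; the centroid-distance rule, the threshold and the toy features below are assumptions, not the paper's exact countermeasure:

```python
# Sketch of a reject-option defense: refuse to classify a sample whose
# feature representation is too far from every class centroid.
import numpy as np

def classify_with_reject(feature, centroids, threshold):
    """Return the predicted class index, or -1 ("reject") if the nearest
    class centroid is farther away than the threshold."""
    dists = np.linalg.norm(centroids - feature, axis=1)
    nearest = int(np.argmin(dists))
    return nearest if dists[nearest] <= threshold else -1

# usage with toy 2-D "features"
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
print(classify_with_reject(np.array([0.2, -0.1]), centroids, threshold=1.0))  # -> 0
print(classify_with_reject(np.array([2.5, 2.6]), centroids, threshold=1.0))   # -> -1 (rejected)
```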

    A Robust Online Method for Face Recognition under Illumination Invariant Conditions

    In the case of incremental inputs to an online face recognition system with illumination-invariant face samples, the learning criterion should not only maximize the class-separation criterion but also incorporate the asymmetry of the training data distributions. In this paper, we address this problem with an incremental learning algorithm that effectively adjusts a boosted strong classifier with domain-partitioning weak hypotheses to online samples, adopting a novel approach to efficiently estimate the training losses received from offline samples. An illumination-invariant face representation is obtained by extracting local binary pattern (LBP) features from NIR images. The AdaBoost procedure is used to learn a powerful face recognition engine based on this invariant representation. We use incremental linear discriminant analysis (ILDA) with a sparse function for an active near-infrared (NIR) imaging system that produces face images of good quality regardless of the visible lighting in the environment, so that accuracy is not degraded by changes in environmental illumination. The experiments show convincing results of our incremental method on challenging face detection under extreme illumination.
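
    A sketch of the LBP feature step only: a generic block-histogram LBP descriptor, not necessarily the paper's exact configuration (the grid size, number of neighbours and radius are assumptions):

```python
# Compute a uniform LBP histogram per image block and concatenate the
# blocks into one illumination-robust face descriptor.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_descriptor(gray_image, grid=(4, 4), P=8, R=1):
    lbp = local_binary_pattern(gray_image, P, R, method="uniform")
    n_bins = P + 2                                  # uniform patterns + one "non-uniform" bin
    h, w = lbp.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                        j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
    return np.concatenate(hists)

# usage on a synthetic stand-in for an NIR face crop
face = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(np.uint8)
print(lbp_descriptor(face).shape)  # (4 * 4 * 10,) = (160,)
```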

    Incremental and Decremental Nonparametric Discriminant Analysis for Face Recognition

    Nonparametric Discriminant Analysis (NDA) possesses inherent advantages over Linear Discriminant Analysis (LDA), such as capturing the boundary structure of samples and avoiding matrix inversion. In this paper, we present a novel method for constructing an updated NDA model for face recognition. The proposed method is applicable to scenarios where bursts of data samples are added to the existing model in random chunks, and where samples that degrade the performance of the model need to be removed. For these two problems, we propose incremental NDA (INDA) and decremental NDA (DNDA), respectively. Experimental results on four publicly available datasets, viz. AR, PIE, ORL and Yale, show the efficacy of the proposed method. The proposed method also requires less computation time than batch NDA, which makes it suitable for real-time applications.
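
    To make the nonparametric scatter concrete, here is a batch-form sketch of the between-class scatter that NDA builds from other-class nearest neighbours (two classes, without the distance-based weighting); the paper's incremental and decremental variants update such a model as chunks are added or removed:

```python
# Nonparametric between-class scatter: each sample is compared with the
# mean of its k nearest neighbours in the *other* class, which captures
# boundary structure instead of relying on class means.
import numpy as np

def nda_between_scatter(X, y, k=3):
    d = X.shape[1]
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        own, other = X[y == c], X[y != c]
        for x in own:
            dists = np.linalg.norm(other - x, axis=1)
            nn_mean = other[np.argsort(dists)[:k]].mean(axis=0)
            diff = x - nn_mean
            Sb += np.outer(diff, diff)
    return Sb / len(X)

# usage on toy data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(2, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
print(nda_between_scatter(X, y).shape)  # (5, 5)
```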

    Temporal Model Adaptation for Person Re-Identification

    Person re-identification is an open and challenging problem in computer vision. The majority of efforts have been spent either on designing the best feature representation or on learning the optimal matching metric, and most approaches have neglected the problem of adapting the selected features or the learned model over time. To address this problem, we propose a temporal model adaptation scheme with a human in the loop. We first introduce a similarity-dissimilarity learning method that can be trained incrementally by means of a stochastic alternating direction method of multipliers (ADMM) optimization procedure. Then, to achieve temporal adaptation with limited human effort, we exploit a graph-based approach to present the user with only the most informative probe-gallery matches that should be used to update the model. Results on three datasets show that our approach performs on par with or even better than state-of-the-art approaches while reducing the manual pairwise labeling effort by about 80%.
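
    As a simplified stand-in for the human-in-the-loop selection step (the paper uses a graph-based criterion; here "most informative" is approximated by score uncertainty, which is an assumption), a sketch of picking the probe-gallery pairs to show an annotator:

```python
# Pick the probe-gallery pairs whose similarity score lies closest to the
# decision threshold, i.e. the matches the current model is least sure about.
import numpy as np

def select_pairs_for_labeling(scores, threshold=0.0, budget=10):
    """scores: (n_probes, n_gallery) similarity matrix from the current model.
    Returns `budget` (probe, gallery) index pairs to show to the annotator."""
    uncertainty = -np.abs(scores - threshold)           # higher = closer to the threshold
    flat = np.argsort(uncertainty, axis=None)[::-1][:budget]
    return [np.unravel_index(i, scores.shape) for i in flat]

# usage: ask for labels on the 5 most ambiguous matches, then update the model
scores = np.random.default_rng(0).normal(size=(30, 50))
for probe, gallery in select_pairs_for_labeling(scores, budget=5):
    print(f"ask annotator: probe {probe} vs gallery {gallery}")
```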