14 research outputs found

    Discriminative Hessian Eigenmaps for face recognition

    Get PDF
    Dimension reduction algorithms have attracted a lot of attention in face recognition because they can select a subset of effective and efficient discriminative features from face images. However, most dimension reduction algorithms cannot model both the intra-class geometry and the inter-class discrimination simultaneously. In this paper, we introduce Discriminative Hessian Eigenmaps (DHE), a novel dimension reduction algorithm that addresses this problem. DHE encodes the geometric and the discriminative information in a local patch by improved Hessian Eigenmaps and margin maximization, respectively. Empirical studies on public face databases thoroughly demonstrate that DHE is superior to popular dimension reduction algorithms, e.g., FLDA, LPP, MFA and DLA. © 2010 IEEE. The 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Dallas, TX, 14-19 March 2010. In IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, 2010, p. 5586-558

    Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction

    Full text link
    It is difficult to find the optimal sparse solution of a manifold learning based dimensionality reduction algorithm. The lasso or elastic net penalized manifold learning based dimensionality reduction is not directly a lasso penalized least squares problem, and thus least angle regression (LARS) (Efron et al. \cite{LARS}), one of the most popular algorithms in sparse learning, cannot be applied. Therefore, most current approaches take indirect routes or impose strict settings, which can be inconvenient in applications. In this paper, we propose the manifold elastic net, or MEN for short. MEN combines the merits of manifold learning based and sparse learning based dimensionality reduction. Through a series of equivalent transformations, we show that MEN is equivalent to a lasso penalized least squares problem, so LARS can be adopted to obtain the optimal sparse solution of MEN. In particular, MEN has the following advantages for subsequent classification: 1) the local geometry of samples is well preserved in the low dimensional data representation; 2) both margin maximization and classification error minimization are considered when calculating the sparse projection; 3) the projection matrix of MEN improves computational parsimony; 4) the elastic net penalty reduces over-fitting; and 5) the projection matrix of MEN can be interpreted psychologically and physiologically. Experimental evidence on face recognition over various popular datasets suggests that MEN is superior to leading dimensionality reduction algorithms. Comment: 33 pages, 12 figures
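    The equivalence the abstract relies on can be illustrated concretely. A minimal sketch (not MEN itself, and with all names hypothetical): the standard Zou-Hastie trick rewrites an elastic net problem as a pure lasso problem on augmented data, which a lasso solver such as coordinate descent (standing in here for LARS) can then handle.

```python
import numpy as np

def elastic_net_as_lasso(X, y, lam1, lam2, n_iter=200):
    """Solve min ||y - Xw||^2 + lam1*||w||_1 + lam2*||w||^2 by
    rewriting it as a lasso problem on augmented data, then running
    coordinate descent with soft-thresholding."""
    n, d = X.shape
    # Augmentation: X* = [X; sqrt(lam2)*I], y* = [y; 0] absorbs the
    # ridge term, leaving a pure lasso problem.
    Xa = np.vstack([X, np.sqrt(lam2) * np.eye(d)])
    ya = np.concatenate([y, np.zeros(d)])
    w = np.zeros(d)
    for _ in range(n_iter):
        for j in range(d):
            r = ya - Xa @ w + Xa[:, j] * w[j]      # partial residual without w_j
            rho = Xa[:, j] @ r
            z = Xa[:, j] @ Xa[:, j]
            # Soft-thresholding update for the lasso subproblem in w_j.
            w[j] = np.sign(rho) * max(abs(rho) - lam1 / 2, 0) / z
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
w_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ w_true + 0.01 * rng.standard_normal(50)
w = elastic_net_as_lasso(X, y, lam1=0.5, lam2=0.1)
```

    The recovered `w` is sparse (the penalized coordinates are driven to zero), which is the parsimony property advantages 3) and 4) above refer to.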

    Cross-domain web image annotation

    Get PDF
    In recent years, cross-domain learning algorithms have attracted much attention as a way to address the problem of insufficient labeled data. However, these cross-domain learning algorithms cannot be applied to subspace learning, which plays a key role in multimedia tasks, e.g., web image annotation. This paper envisions cross-domain discriminative subspace learning and provides an effective solution to it. In particular, we propose cross-domain discriminative Hessian Eigenmaps, or CDHE for short. CDHE connects the training and the testing samples by minimizing the quadratic distance between the distribution of the training samples and that of the testing samples, so a common subspace for data representation can be preserved. We expect the discriminative information that separates the concepts in the training set to also separate the concepts in the testing set, and thus we have a chance to address the above cross-domain problem. Margin maximization is adopted in CDHE so the discriminative information for separating different classes can be well preserved. Finally, CDHE encodes the local geometry of each training class in the local tangent space, which is locally isometric to the data manifold and thus can locally preserve the intra-class geometry. Experimental evidence on real world image datasets demonstrates the effectiveness of CDHE for cross-domain web image annotation. © 2009 IEEE. The IEEE International Conference on Data Mining Workshops (ICDMW) 2009, Miami, FL., 6 December 2009. In Proceedings of the IEEE International Conference on Data Mining, 2009, p. 184-18
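    The "quadratic distance between distributions" idea can be sketched in its simplest form: penalize the squared distance between the projected means of the two domains, so a projection with a small penalty places source and target samples in a shared region of the subspace. This is an assumed stand-in for CDHE's actual term, with all names hypothetical.

```python
import numpy as np

def mean_distribution_distance(Xs, Xt, W):
    """Squared distance between the means of the projected source and
    target samples: the simplest quadratic discrepancy a cross-domain
    subspace method might minimize (richer terms match higher moments)."""
    mu_s = (Xs @ W).mean(axis=0)
    mu_t = (Xt @ W).mean(axis=0)
    return float(np.sum((mu_s - mu_t) ** 2))

rng = np.random.default_rng(1)
# Source samples are shifted along feature 0; target samples are not.
Xs = rng.standard_normal((100, 4)) + np.array([2.0, 0.0, 0.0, 0.0])
Xt = rng.standard_normal((100, 4))
W_keep = np.eye(4)[:, :1]   # projection keeping the mismatched axis
W_drop = np.eye(4)[:, 1:2]  # projection keeping a matched axis
d_keep = mean_distribution_distance(Xs, Xt, W_keep)
d_drop = mean_distribution_distance(Xs, Xt, W_drop)
```

    A subspace learner minimizing this penalty would prefer `W_drop`, the direction along which the two domains already agree.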

    Linear discriminant analysis using rotational invariant L-1 norm

    Get PDF
    Linear discriminant analysis (LDA) is a well-known scheme for supervised subspace learning. It has been widely used in the applications of computer vision and pattern recognition. However, an intrinsic limitation of LDA is the sensitivity to the presence of outliers, due to using the Frobenius norm to measure the inter-class and intra-class distances. In this paper, we propose a novel rotational invariant L-1 norm (i.e., R-1 norm) based discriminant criterion (referred to as DCL1), which better characterizes the intra-class compactness and the inter-class separability by using the rotational invariant L-1 norm instead of the Frobenius norm. Based on the DCL1, three subspace learning algorithms (i.e., 1DL1, 2DL1, and TDL1) are developed for vector-based, matrix-based, and tensor-based representations of data, respectively. They are capable of reducing the influence of outliers substantially, resulting in a robust classification. Theoretical analysis and experimental evaluations demonstrate the promise and effectiveness of the proposed DCL1 and its algorithms. © 2010 Elsevier B.V. All rights reserved.
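    The robustness argument is easy to see numerically. A minimal sketch (an illustration of the R-1 norm idea, not the paper's DCL1 criterion; helper names are hypothetical): under a Frobenius-norm-squared scatter each sample contributes quadratically, so one outlier dominates, while under the R-1 norm (sum of row-wise L2 norms) each sample contributes only linearly, and rotations of the data leave the value unchanged.

```python
import numpy as np

def frob_sq(D):
    """Sum of squared row norms: the Frobenius-norm-squared scatter
    that standard LDA-style criteria effectively use."""
    return float(np.sum(D ** 2))

def r1_norm(D):
    """Rotational invariant L1 (R1) norm: sum of the L2 norms of the
    rows. Each sample contributes linearly, not quadratically."""
    return float(np.sum(np.linalg.norm(D, axis=1)))

# Nine ordinary samples plus one large outlier row.
D = np.vstack([np.ones((9, 3)), [[100.0, 0.0, 0.0]]])
ratio_frob = frob_sq(D[-1:]) / frob_sq(D)  # outlier's share, Frobenius^2
ratio_r1 = r1_norm(D[-1:]) / r1_norm(D)    # outlier's share, R1

# Rotational invariance: an orthogonal Q leaves the R1 norm unchanged.
Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((3, 3)))
rot_ok = abs(r1_norm(D @ Q) - r1_norm(D)) < 1e-9
```

    Here the outlier accounts for over 99% of the Frobenius-squared scatter but a noticeably smaller share of the R1 scatter, which is the mechanism behind the claimed robustness.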

    A Survey on Clustering Techniques for Multi-Valued Data Sets

    Get PDF
    The complexity of the attributes in some particular domains is higher than in standard domains because of their internal variation and structure. Representing them requires more complex data, called multi-valued data, which is introduced in this paper. For this reason, the standard data examination techniques (for example characterization, discrimination, association analysis, classification, clustering, outlier analysis, evaluation analysis) need to be extended to multi-valued data so that more exact and consolidated multi-valued data sets can be obtained. We say that multi-valued data analysis is an extension of the standard data analysis techniques. The objects of multi-valued data sets are represented by multi-valued attributes, which contain more than one value for a single entry in the database. An example of this type of attribute is “languages known”: this attribute may contain more than one value for the corresponding object, because one person may know more than one language
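    The “languages known” example can be made concrete. A minimal sketch (not from the survey; the Jaccard distance is one standard, assumed choice of dissimilarity): a multi-valued attribute is naturally a set per record, and a set-based distance is what lets standard clustering techniques extend to such data.

```python
def jaccard_distance(a, b):
    """A natural dissimilarity for set-valued (multi-valued)
    attributes: 1 - |a intersect b| / |a union b|."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# Each record's "languages known" attribute holds several values.
records = {
    "alice": {"english", "french"},
    "bob": {"english", "german", "french"},
    "carol": {"mandarin"},
}
d_ab = jaccard_distance(records["alice"], records["bob"])  # overlap: 2 of 3
d_ac = jaccard_distance(records["alice"], records["carol"])  # no overlap
```

    Any distance-based clustering algorithm can then operate on these pairwise dissimilarities instead of single-valued attributes.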

    Multi-class pairwise linear dimensionality reduction using heteroscedastic schemes

    Get PDF
    Linear dimensionality reduction (LDR) techniques have been increasingly important in pattern recognition (PR) due to the fact that they permit a relatively simple mapping of the problem onto a lower-dimensional subspace, leading to simple and computationally efficient classification strategies. Although the field has been well developed for the two-class problem, the corresponding issues encountered when dealing with multiple classes are far from trivial. In this paper, we argue that, as opposed to the traditional LDR multi-class schemes, if we are dealing with multiple classes, it is not expedient to treat it as a multi-class problem per se. Rather, we shall show that it is better to treat it as an ensemble of Chernoff-based two-class reductions onto different subspaces, whence the overall solution is achieved by resorting to either Voting, Weighting, or to a Decision Tree strategy. The experimental results obtained on benchmark datasets demonstrate that the proposed methods are not only efficient, but that they also yield accuracies comparable to those obtained by the optimal Bayes classifier
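    The ensemble-of-pairwise-reductions idea can be sketched with the Voting combiner. This is an illustrative stand-in, not the paper's method: each class pair here gets a trivial nearest-mean decision rather than its own Chernoff-based subspace, and all names are hypothetical.

```python
import numpy as np
from itertools import combinations

def pairwise_vote_predict(X, y, X_test):
    """Multi-class prediction as an ensemble of two-class decisions:
    one nearest-mean classifier per class pair, combined by majority
    vote (in the paper, each pair would use its own Chernoff-based
    two-class reduction instead)."""
    classes = np.unique(y)
    votes = np.zeros((len(X_test), len(classes)))
    for a, b in combinations(range(len(classes)), 2):
        mu_a = X[y == classes[a]].mean(axis=0)
        mu_b = X[y == classes[b]].mean(axis=0)
        closer_a = (np.linalg.norm(X_test - mu_a, axis=1)
                    < np.linalg.norm(X_test - mu_b, axis=1))
        votes[closer_a, a] += 1       # this pair votes for class a
        votes[~closer_a, b] += 1      # or for class b
    return classes[np.argmax(votes, axis=1)]

rng = np.random.default_rng(3)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.concatenate([c + 0.3 * rng.standard_normal((30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)
accuracy = float((pairwise_vote_predict(X, y, X) == y).mean())
```

    With k classes this builds k(k-1)/2 two-class problems; the Weighting and Decision Tree strategies mentioned above differ only in how the pairwise decisions are combined.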

    On the application of domain adaptation in structural health monitoring

    Get PDF
    The application of machine learning within Structural Health Monitoring (SHM) has been widely successful in a variety of applications. However, most techniques are built upon the assumption that the training and test data are drawn from the same underlying distribution. This means that unless the test data were obtained from the same system under the same operating conditions, machine learning inferences from the training data will not provide accurate predictions on the test data. Therefore, to train a robust predictor conventionally, new training data and labels must be recollected for every new structure considered, which is significantly expensive and often impossible in an SHM context. Transfer learning, in the form of domain adaptation, offers a novel solution to these problems by providing a method for mapping the feature and label distributions of different structures, a labelled source and an unlabelled target, onto the same space. As a result, classifiers trained on a labelled structure in the source domain will generalise to a different, unlabelled target structure. Furthermore, the contexts in which domain adaptation is applicable are discussed holistically, specifically for population-based SHM. Three domain adaptation techniques are demonstrated on four case studies, providing new frameworks for approaching the problem of SHM
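    The mapping-onto-the-same-space idea can be sketched with the crudest possible alignment: standardizing each domain's features separately so their first two moments match. This is an assumed toy stand-in, not one of the paper's three techniques (methods such as TCA or CORAL align richer statistics); all names are hypothetical.

```python
import numpy as np

def standardise_per_domain(X):
    """Map one domain's features to zero mean / unit variance; matching
    the first two moments is the simplest distribution alignment."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

rng = np.random.default_rng(2)
# Source structure: two damage states separated along one feature.
Xs = np.concatenate([rng.normal(-1, 0.3, (50, 1)), rng.normal(1, 0.3, (50, 1))])
ys = np.array([0] * 50 + [1] * 50)
# Target structure: same physics, but a shifted and rescaled sensor response.
Xt = 5.0 * Xs + 10.0

# Nearest-mean classifier trained on raw source features fails on the target...
pred_raw = (np.abs(Xt - Xs[ys == 1].mean())
            < np.abs(Xt - Xs[ys == 0].mean())).ravel().astype(int)
accuracy_raw = float((pred_raw == ys).mean())

# ...but works once both domains are aligned to the same space.
Zs, Zt = standardise_per_domain(Xs), standardise_per_domain(Xt)
mu0, mu1 = Zs[ys == 0].mean(), Zs[ys == 1].mean()
pred = (np.abs(Zt - mu1) < np.abs(Zt - mu0)).ravel().astype(int)
accuracy = float((pred == ys).mean())
```

    The raw classifier collapses because the target sensor's offset moves every sample toward one class mean; after alignment the source-trained decision rule transfers, which is the generalisation claim made above.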

    A Flexible Semi-supervised Feature Extraction Method for Image Classification

    Full text link
    This paper proposes a novel discriminant semi-supervised feature extraction method for generic classification and recognition tasks. The paper has two main contributions. First, we propose a flexible linear semi-supervised feature extraction method that seeks a non-linear subspace that is close to a linear one. The proposed method is based on a criterion that simultaneously exploits the discrimination information provided by the labeled samples, maintains the graph-based smoothness associated with all samples, regularizes the complexity of the linear transform, and minimizes the discrepancy between the unknown linear regression and the unknown non-linear projection. Second, we provide extensive experiments on four benchmark databases in order to study the performance of the proposed method. These experiments demonstrate significant improvement over state-of-the-art algorithms that are based either on label propagation or on semi-supervised graph-based embedding.
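    The graph-based smoothness term in the criterion above has a standard closed form worth spelling out: f^T L f, where L is the graph Laplacian, equals half the W-weighted sum of squared differences of f over the sample graph. A minimal sketch of just that term (not the paper's full criterion; the graph here is hypothetical):

```python
import numpy as np

def laplacian_smoothness(W_adj, f):
    """Graph smoothness penalty f^T L f, with L = D - W the graph
    Laplacian. Equals sum over edges of (f_i - f_j)^2, so it is small
    when f varies slowly across neighbouring samples; this is how
    unlabeled samples influence a semi-supervised criterion."""
    L = np.diag(W_adj.sum(axis=1)) - W_adj
    return float(f @ L @ f)

# Tiny 4-node chain graph: 0 - 1 - 2 - 3.
W_adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    W_adj[i, j] = W_adj[j, i] = 1.0

smooth = laplacian_smoothness(W_adj, np.array([0.0, 0.1, 0.2, 0.3]))
rough = laplacian_smoothness(W_adj, np.array([0.0, 1.0, 0.0, 1.0]))
```

    A projection whose outputs behave like the first vector (slowly varying over the graph) is preferred to one behaving like the second, independently of the labels.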