    From Projection Pursuit and CART to Adaptive Discriminant Analysis

    Get PDF
    Abstract—While many efforts have been put into the development of nonlinear approximation theory and its applications to signal and image compression, encoding and denoising, there seem to be very few theoretical developments of adaptive discriminant representations in the area of feature extraction, selection and signal classification. In this paper, we advocate the idea that such developments and efforts are worthwhile, based on the theoretical study of a data-driven discriminant analysis method on a simple—yet instructive—example. We consider the problem of classifying a signal drawn from a mixture of two classes, using its projections onto low-dimensional subspaces. Unlike the linear discriminant analysis (LDA) strategy, which selects subspaces that do not depend on the observed signal, we consider an adaptive sequential selection of projections, in the spirit of nonlinear approximation and classification and regression trees (CART): at each step, the subspace is enlarged in a direction that maximizes the mutual information with the unknown class. We derive explicit characterizations of this adaptive discriminant analysis (ADA) strategy in two situations. When the two classes are Gaussian with the same covariance matrix but different means, the adaptive subspaces are actually nonadaptive and can be computed with an algorithm similar to orthonormal matching pursuit. When the classes are centered Gaussians with different covariances, the adaptive subspaces are spanned by eigenvectors of an operator given by the covariance matrices (just as could be predicted by regular LDA); however, we prove that the order of observation of the components along these eigenvectors actually depends on the observed signal. Numerical experiments on synthetic data illustrate how data-dependent features can be used to outperform LDA on a classification task, and we discuss how our results could be applied in practice.
    Index Terms—Classification and regression trees (CART), classification tree, discriminant analysis, mutual information, nonlinear approximation, projection pursuit, sequential testing.
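
    The sequential step described above (enlarging the subspace in the direction that carries the most mutual information about the unknown class) can be sketched as a greedy search over candidate directions. The following Python sketch is a hypothetical illustration under assumed choices, not the paper's exact ADA algorithm: the candidate set (eigenvectors of the pooled covariance), the plug-in MI estimator from scikit-learn, and the function names are all assumptions.

    # Hypothetical sketch: greedily pick projection directions whose 1-D
    # projections carry the most estimated mutual information with the
    # class label. Illustration only, not the paper's exact algorithm.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    def greedy_mi_directions(X, y, candidates, k):
        """Select k columns of `candidates` maximizing estimated MI with y."""
        chosen, remaining = [], list(range(candidates.shape[1]))
        for _ in range(k):
            scores = [mutual_info_classif((X @ candidates[:, j]).reshape(-1, 1),
                                          y, random_state=0)[0]
                      for j in remaining]
            best = remaining[int(np.argmax(scores))]
            chosen.append(best)
            remaining.remove(best)
        return candidates[:, chosen]

    # Toy data: two centered Gaussians with different diagonal covariances,
    # echoing the paper's second setting; candidates are eigenvectors of
    # the pooled sample covariance.
    rng = np.random.default_rng(0)
    d, n = 10, 500
    X = np.vstack([rng.multivariate_normal(np.zeros(d), np.diag(np.linspace(1, 2, d)), n),
                   rng.multivariate_normal(np.zeros(d), np.diag(np.linspace(2, 1, d)), n)])
    y = np.array([0] * n + [1] * n)
    _, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    W = greedy_mi_directions(X, y, vecs, k=3)  # three discriminant directions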

    A Family of Maximum Margin Criterion for Adaptive Learning

    Full text link
    In recent years, pattern analysis has played an important role in data mining and recognition, and many variants have been proposed to handle complicated scenarios. High dimensionality of data samples has long been familiar in the literature, and both high dimensionality and large data volumes have become the norm in real-world applications. In this work, an improved maximum margin criterion (MMC) method is first introduced. Building on the new definition of MMC, several variants, including random MMC, layered MMC, and 2D^2 MMC, are designed to make adaptive learning applicable. In particular, an MMC network is developed to learn deep features of images in the spirit of simple deep networks. Experimental results on a diversity of data sets demonstrate that the proposed MMC methods have sufficient discriminant ability to be adopted in complicated application scenarios.
    Comment: 14 pages
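
    For context, the classical maximum margin criterion that these variants build on selects a projection W maximizing tr(W^T (S_b - S_w) W), i.e. the top eigenvectors of the difference between the between-class and within-class scatter matrices. A minimal numpy sketch of that base criterion follows (only the classical MMC, not the paper's improved, random, layered or 2D^2 variants).

    # Classical maximum margin criterion: project onto the top
    # eigenvectors of S_b - S_w (between-class minus within-class scatter).
    import numpy as np

    def mmc_projection(X, y, k):
        """Return a (d, k) basis maximizing tr(W^T (S_b - S_w) W)."""
        mean = X.mean(axis=0)
        d = X.shape[1]
        Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
        for c in np.unique(y):
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            diff = (mc - mean)[:, None]
            Sb += len(Xc) * (diff @ diff.T)     # between-class scatter
            Sw += (Xc - mc).T @ (Xc - mc)       # within-class scatter
        # S_b - S_w is symmetric, so eigh applies; keep the k largest modes.
        vals, vecs = np.linalg.eigh(Sb - Sw)
        return vecs[:, np.argsort(vals)[::-1][:k]]

    Unlike Fisher's criterion, this avoids inverting S_w, which is the usual motivation for MMC in small-sample settings.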

    Target Contrastive Pessimistic Discriminant Analysis

    Full text link
    Domain-adaptive classifiers learn from a source domain and aim to generalize to a target domain. If the classifier's assumptions on the relationship between domains (e.g. covariate shift) are valid, then it will usually outperform a non-adaptive source classifier. Unfortunately, it can perform substantially worse when its assumptions are invalid. Validating these assumptions requires labeled target samples, which are usually not available. We argue that, in order to make domain-adaptive classifiers more practical, it is necessary to focus on robust methods; robust in the sense that the model still achieves a particular level of performance without making strong assumptions on the relationship between domains. With this objective in mind, we formulate a conservative parameter estimator that only deviates from the source classifier when a lower or equal risk is guaranteed for all possible labellings of the given target samples. We derive the corresponding estimator for a discriminant analysis model, and show that its risk is actually strictly smaller than that of the source classifier. Experiments indicate that our classifier outperforms state-of-the-art classifiers for geographically biased samples.
    Comment: 9 pages, no figures, 2 tables. arXiv admin note: substantial text overlap with arXiv:1706.0808
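
    One standard way to formalize the conservative estimator described above (the notation here is assumed, not copied from the paper) is as a minimax over soft labelings q of the m unlabeled target samples z_1, ..., z_m:

    % Hedged formalization; q_{ik} is the soft assignment of target
    % sample i to class k, theta_S the source-trained parameters.
    \hat{\theta} \;=\; \arg\min_{\theta}\; \max_{q \in \Delta_m^{K-1}}
      \Big[\, \hat{R}(\theta \mid z, q) \;-\; \hat{R}(\theta_S \mid z, q) \,\Big],
    \qquad
    \hat{R}(\theta \mid z, q) \;=\; \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K}
      q_{ik}\, \ell(\theta; z_i, k).

    Since the contrast vanishes at theta = theta_S, the inner maximum is never positive at the optimum, so the estimate matches or improves on the source classifier for every possible labeling, which is the robustness property the abstract claims.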

    Centrifugal instability of Stokes layers in crossflow: the case of a forced cylinder wake

    Get PDF
    The wake flow around a circular cylinder at Re ≈ 100 performing rotatory oscillations has been thoroughly discussed in the literature, mostly focusing on the modifications to the natural Bénard-von Kármán vortex street that result from the forced shedding modes locked to the rotatory oscillation frequency. The usual experimental and theoretical frameworks at these Reynolds numbers are quasi-two-dimensional, since the secondary instabilities bringing a three-dimensional structure to the cylinder wake flow occur only at higher Reynolds numbers. In the present paper we show that a three-dimensional structure can appear below the usual three-dimensionalization threshold, when forcing with frequencies lower than the natural vortex shedding frequency, at high amplitudes, as a result of a previously unreported mechanism: a pulsed centrifugal instability of the oscillating Stokes layer at the wall of the cylinder. The present numerical investigation thus lets us propose a physical explanation for the turbulence-like features reported in the recent experimental study of D'Adamo et al. (2011).
    Comment: 18 pages, 13 figures. To appear in Proc. Roy. Soc. A. For supplementary video material, see http://vimeo.com/12315202
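
    As background for the mechanism invoked above, two standard fluid-mechanics facts (general results, not taken from the paper) make the statement concrete: the oscillating Stokes layer driven at forcing pulsation \omega_f in a fluid of kinematic viscosity \nu has characteristic thickness

    \delta \;=\; \sqrt{2\nu/\omega_f},

    and a centrifugal instability of the near-wall azimuthal flow u_\theta(r) is expected where the Rayleigh discriminant is negative,

    \Phi(r) \;=\; \frac{1}{r^3}\,\frac{d}{dr}\,\big(r\, u_\theta\big)^2 \;<\; 0,

    i.e. where the squared angular momentum decreases outward during part of the forcing cycle, hence the "pulsed" character of the instability.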

    Localized Regression

    Get PDF
    The main problem with localized discriminant techniques is the curse of dimensionality, which seems to restrict their use to the case of few variables. This restriction does not hold if localization is combined with a reduction of dimension. In particular, it is shown that localization yields powerful classifiers even in higher dimensions when combined with locally adaptive selection of predictors. A robust localized logistic regression (LLR) method is developed for which all tuning parameters are chosen data-adaptively. In an extended simulation study we evaluate the potential of the proposed procedure for various types of data and compare it to other classification procedures. In addition, we demonstrate that automatic choice of localization, predictor selection and penalty parameters based on cross-validation works well. Finally, the method is applied to real data sets and its real-world performance is compared to alternative procedures.
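
    The localization idea lends itself to a compact sketch: fit a logistic model around each query point with sample weights from a kernel, so the decision boundary adapts locally. The helper below is hypothetical (assumed names, a Gaussian kernel, and a fixed bandwidth h; in the paper's spirit, h and the other tuning parameters would be chosen by cross-validation).

    # Hypothetical localized logistic regression: weight training points
    # by their kernel distance to the query, then fit a weighted logistic
    # model and classify the query with it.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def llr_predict(X_train, y_train, x_query, h=1.0):
        """Classify x_query with a locally weighted logistic fit."""
        d2 = np.sum((X_train - x_query) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * h ** 2))            # Gaussian kernel weights
        clf = LogisticRegression()
        clf.fit(X_train, y_train, sample_weight=w)  # localization via weights
        return clf.predict(x_query[None, :])[0]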