
    Learning Incoherent Subspaces: Classification via Incoherent Dictionary Learning

    In this article we present the supervised iterative projections and rotations (s-ipr) algorithm, a method for learning discriminative incoherent subspaces from data. We derive s-ipr as a supervised extension of our previously proposed iterative projections and rotations (ipr) algorithm for incoherent dictionary learning, and we employ it to learn incoherent subspaces that model signals belonging to different classes. We test our method as a feature transform for supervised classification, first by visualising transformed features from a synthetic dataset and from the ‘iris’ dataset, then by using the resulting features in a classification experiment.
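
    The following is a minimal sketch of the kind of pipeline the abstract describes, not the authors' s-ipr implementation: it fits one PCA subspace per class, reports the mutual coherence between the class bases (the quantity s-ipr iteratively reduces via projections and rotations), and uses per-class projection energies as a feature transform. The choice of PCA and all function names are illustrative assumptions.

```python
# Hedged sketch, NOT the authors' s-ipr: per-class PCA subspaces,
# coherence measurement, and a projection-energy feature transform.
import numpy as np

def fit_class_subspaces(X, y, n_components=2):
    """Return an orthonormal basis (d x k) for each class via PCA."""
    bases = {}
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        # Left singular vectors of the centered data span the class subspace.
        U, _, _ = np.linalg.svd(Xc.T, full_matrices=False)
        bases[c] = U[:, :n_components]
    return bases

def mutual_coherence(bases):
    """Largest |inner product| between basis vectors of different classes."""
    classes = list(bases)
    mu = 0.0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            G = bases[classes[i]].T @ bases[classes[j]]
            mu = max(mu, np.abs(G).max())
    return mu

def transform(X, bases):
    """Feature transform: energy of each sample in each class subspace."""
    return np.column_stack([np.linalg.norm(X @ B, axis=1) for B in bases.values()])

# Toy usage on random two-class data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.repeat([0, 1], 50)
bases = fit_class_subspaces(X, y)
print("coherence:", mutual_coherence(bases))
print("features:", transform(X, bases)[:2])
```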

    Deep Multi-view Learning to Rank

    We study the problem of learning to rank from multiple information sources. Although multi-view learning and learning to rank have each been studied extensively, leading to a wide range of applications, multi-view learning to rank as a synergy of the two topics has received little attention. The aim of this paper is to propose a composite ranking method that simultaneously maintains a close correlation with the individual rankings. We present a generic framework for multi-view subspace learning to rank (MvSL2R) and introduce two novel solutions under it. The first solution captures information of feature mappings both within each view and across views using autoencoder-like networks; novel feature embedding methods are formulated in the optimization of multi-view unsupervised and discriminant autoencoders. The second is an end-to-end solution that learns towards both the joint ranking objective and the individual rankings: it enhances the joint ranking with minimum view-specific ranking loss, so that it achieves maximum global view agreement in a single optimization process. The proposed method is evaluated on three different ranking problems, i.e. university ranking, multi-view lingual text ranking and image data ranking, providing superior results compared to related methods.

    Comment: Published at IEEE TKD
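
    As a rough illustration of the composite objective described above (not the published MvSL2R code), the sketch below fuses per-view scores into a joint score and sums a joint pairwise hinge ranking loss with view-specific ranking losses. The linear scoring functions, averaging fusion, hinge loss, and all names are assumptions, and the paper's autoencoder-based feature embeddings are omitted.

```python
# Hedged sketch of a joint-plus-view-specific ranking objective;
# a simplification, not the MvSL2R implementation.
import numpy as np

def pairwise_hinge(scores, pairs, margin=1.0):
    """Mean hinge loss over (i, j) pairs where item i should outrank item j."""
    i, j = pairs[:, 0], pairs[:, 1]
    return np.maximum(0.0, margin - (scores[i] - scores[j])).mean()

def composite_ranking_loss(views, weights, pairs, lam=0.5):
    """Joint ranking loss plus lambda-weighted view-specific losses.

    views   -- list of (n_items, d_v) feature matrices, one per view
    weights -- list of (d_v,) linear scoring weights, one per view
    """
    view_scores = [X_v @ w_v for X_v, w_v in zip(views, weights)]
    joint = np.mean(view_scores, axis=0)            # simple average fusion
    loss = pairwise_hinge(joint, pairs)             # joint ranking objective
    loss += lam * sum(pairwise_hinge(s, pairs) for s in view_scores)
    return loss

# Toy usage: two views of five items, preference pairs (better, worse).
rng = np.random.default_rng(1)
views = [rng.normal(size=(5, 3)), rng.normal(size=(5, 4))]
weights = [rng.normal(size=3), rng.normal(size=4)]
pairs = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
print("composite loss:", composite_ranking_loss(views, weights, pairs))
```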

    Comparison between Constrained Mutual Subspace Method and Orthogonal Mutual Subspace Method – From the viewpoint of orthogonalization of subspaces –

    This paper compares the performance of the constrained mutual subspace method (CMSM) and the orthogonal mutual subspace method (OMSM), as well as that of their nonlinear extensions, kernel CMSM (KCMSM) and kernel OMSM (KOMSM). Although the principles of feature extraction in these methods differ, their effectiveness commonly derives from the orthogonalization of subspaces, which is widely used to measure the performance of subspace-based methods. CMSM makes the relation between class subspaces close to an orthogonal relation by projecting the class subspaces onto the generalized difference subspace; KCMSM is based on the same projection in the nonlinear feature space. OMSM, on the other hand, orthogonalizes the class subspaces directly by whitening their distribution; KOMSM applies this orthogonalization in the nonlinear feature space. The experimental results show that the performance of both kernel methods (KCMSM and KOMSM) is very high compared to that of the linear methods (CMSM and OMSM), and that their performance levels are of the same order despite their different principles of orthogonalization.
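
    A small numpy sketch of the two orthogonalization routes being compared may help: CMSM-style projection onto a generalized difference subspace (spanned by the minor eigenvectors of the sum of class projection matrices) versus OMSM-style whitening of that sum. The construction follows the standard CMSM/OMSM recipes, but the dimensions, the number of kept components, and the helper names are illustrative assumptions.

```python
# Hedged sketch of the two orthogonalization routes; illustrative, not the
# paper's code.
import numpy as np

def sum_projection(bases):
    """S = sum of projection matrices B_c B_c^T over all class subspaces."""
    d = bases[0].shape[0]
    S = np.zeros((d, d))
    for B in bases:
        S += B @ B.T
    return S

def cmsm_difference_subspace(bases, n_keep):
    """CMSM route: eigenvectors of S with the SMALLEST eigenvalues span the
    generalized difference subspace; projecting class subspaces onto it
    makes their relation close to orthogonal."""
    vals, vecs = np.linalg.eigh(sum_projection(bases))  # ascending eigenvalues
    return vecs[:, :n_keep]

def omsm_whitening(bases):
    """OMSM route: whiten with O = Lambda^{-1/2} U^T so that the basis
    vectors of all classes become orthogonal directly."""
    vals, vecs = np.linalg.eigh(sum_projection(bases))
    return np.diag(1.0 / np.sqrt(np.clip(vals, 1e-10, None))) @ vecs.T

# Toy usage: three random 2-D class subspaces in R^6.
rng = np.random.default_rng(2)
bases = [np.linalg.qr(rng.normal(size=(6, 2)))[0] for _ in range(3)]
D = cmsm_difference_subspace(bases, n_keep=4)   # project each basis as D.T @ B
O = omsm_whitening(bases)                       # apply O @ B to each basis
print("difference subspace shape:", D.shape, "whitening shape:", O.shape)
```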