25 research outputs found

    Tensor Canonical Correlation Analysis for Multi-View Dimension Reduction

    © 2015 IEEE. Canonical correlation analysis (CCA) has proven an effective tool for two-view dimension reduction due to its profound theoretical foundation and success in practical applications. For multi-view learning, however, it is limited to handling data represented by two views of features, while in many real-world applications the number of views is frequently much larger. Although the ad hoc strategy of simultaneously exploring all possible pairs of features can numerically deal with multi-view data, it ignores the high-order statistics (correlation information) that can only be discovered by simultaneously exploring all features. Therefore, in this work, we develop tensor CCA (TCCA), which straightforwardly yet naturally generalizes CCA to handle data with an arbitrary number of views by analyzing the covariance tensor of the different views. TCCA aims to directly maximize the canonical correlation of multiple (more than two) views. Crucially, we prove that the multi-view canonical correlation maximization problem is equivalent to finding the best rank-1 approximation of the data covariance tensor, which can be solved efficiently using the well-known alternating least squares (ALS) algorithm. As a consequence, the high-order correlation information contained in the different views is exploited, and thus a more reliable common subspace shared by all features can be obtained. In addition, a non-linear extension of TCCA is presented. Experiments on various challenging tasks, including large-scale biometric structure prediction, internet advertisement classification, and web image annotation, demonstrate the effectiveness of the proposed method.
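
    The key reduction in the abstract, multi-view canonical correlation maximization recast as a best rank-1 approximation of the cross-view covariance tensor solved by ALS, can be illustrated with a minimal sketch. The code below is an illustrative three-view reading of that idea, not the authors' implementation; it assumes the views are already centred and whitened, and all function names are ours.

```python
import numpy as np

def tcca_rank1_als(views, n_iter=100, tol=1e-6, seed=0):
    # Rank-1 ALS on the third-order cross-view covariance tensor
    # (illustrative sketch for exactly three views).
    rng = np.random.default_rng(seed)
    X, Y, Z = [v - v.mean(axis=0) for v in views]   # centre each view
    n = X.shape[0]
    # C[i, j, k] = E[x_i * y_j * z_k]
    C = np.einsum('ni,nj,nk->ijk', X, Y, Z) / n

    def unit(a):
        return a / np.linalg.norm(a)

    u, v, w = (unit(rng.standard_normal(d))
               for d in (X.shape[1], Y.shape[1], Z.shape[1]))
    for _ in range(n_iter):
        # Each factor update fixes the other two modes (ALS step).
        u_new = unit(np.einsum('ijk,j,k->i', C, v, w))
        v_new = unit(np.einsum('ijk,i,k->j', C, u_new, w))
        w_new = unit(np.einsum('ijk,i,j->k', C, u_new, v_new))
        done = max(np.linalg.norm(u - u_new),
                   np.linalg.norm(v - v_new),
                   np.linalg.norm(w - w_new)) < tol
        u, v, w = u_new, v_new, w_new
        if done:
            break
    rho = np.einsum('ijk,i,j,k->', C, u, v, w)   # canonical correlation estimate
    return (u, v, w), rho
```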

    ResumeNet: A Learning-based Framework for Automatic Resume Quality Assessment

    Recruitment of appropriate people for certain positions is critical for any company or organization. Manually screening large numbers of resumes to select appropriate candidates is exhausting and time-consuming. However, there is no public tool that can be directly used for automatic resume quality assessment (RQA). This motivates us to develop a method for automatic RQA. Since there is also no public dataset for model training and evaluation, we build a dataset for RQA by collecting around 10K resumes, which are provided by a private resume management company. By investigating the dataset, we identify some factors or features that could be useful to discriminate good resumes from bad ones, e.g., the consistency between different parts of a resume. Then a neural-network model is designed to predict the quality of each resume, where some text processing techniques are incorporated. To deal with the label deficiency issue in the dataset, we propose several variants of the model by either utilizing a pair/triplet-based loss or introducing semi-supervised learning techniques to make use of the abundant unlabeled data. Both the presented baseline model and its variants are general and easy to implement. Various popular criteria, including the receiver operating characteristic (ROC) curve, F-measure and ranking-based average precision (AP), are adopted for model evaluation. We compare the different variants with our baseline model. Since there is no public algorithm for RQA, we further compare our results with those obtained from a website that scores resumes. Experimental results in terms of different criteria demonstrate the effectiveness of the proposed method. We foresee that our approach would transform future human resources management. Comment: ICD
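
    The pair/triplet-based loss mentioned above can be sketched as a margin ranking objective over resume representations. The snippet below is a hypothetical PyTorch illustration; the scorer architecture, feature dimensions and margin are our assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn

class ResumeScorer(nn.Module):
    # Hypothetical scoring head: maps a fixed-size resume feature vector
    # to a scalar quality score (architecture assumed, not from the paper).
    def __init__(self, in_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def triplet_ranking_loss(scorer, good, mid, bad, margin=1.0):
    # A 'good' resume should score above a 'mid' one by at least `margin`,
    # and a 'mid' one above a 'bad' one; relative labels are enough,
    # no absolute quality score is required.
    s_good, s_mid, s_bad = scorer(good), scorer(mid), scorer(bad)
    return (torch.relu(margin - (s_good - s_mid)) +
            torch.relu(margin - (s_mid - s_bad))).mean()

# Usage with random stand-in features (real inputs would be text-derived):
scorer = ResumeScorer(in_dim=300)
good, mid, bad = (torch.randn(8, 300) for _ in range(3))
loss = triplet_ranking_loss(scorer, good, mid, bad)
loss.backward()
```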

    Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images

    In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features and morphological properties, to improve performance, e.g., the image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, so such concatenation does not efficiently exploit the complementary properties among the different features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional representation of the original multiple features is still a challenging task. To address these issues, we propose a novel feature learning framework, i.e., a simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.
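
    One common way to realize simultaneous selection and extraction of this kind is a joint projection with a row-sparsity (L2,1) penalty, so that unselected original features receive all-zero projection rows. The sketch below illustrates that general idea under our own assumptions (least-squares label fitting, proximal gradient updates); it is not the paper's exact algorithm.

```python
import numpy as np

def l21_prox(W, t):
    # Row-wise soft-thresholding (proximal operator of the L2,1 norm);
    # rows shrunk to zero correspond to de-selected original features.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

def joint_select_project(X, Y_onehot, dim, lam=0.1, lr=1e-3, n_iter=500, seed=0):
    # Illustrative sketch: learn a projection W from concatenated
    # spectral-spatial features X (n x d) to a dim-dimensional subspace,
    # fitted to label indicators Y_onehot (n x c) through B, with an
    # L2,1 penalty on W so only a subset of original features is used.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = np.zeros((d, dim))                       # feature -> latent subspace
    B = 0.01 * rng.standard_normal((dim, Y_onehot.shape[1]))
    for _ in range(n_iter):
        Z = X @ W                                # latent representation
        R = Z @ B - Y_onehot                     # regression residual
        W = l21_prox(W - lr * (X.T @ (R @ B.T)) / n, lr * lam)
        B -= lr * (Z.T @ R) / n
    selected = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-8)
    return W, B, selected
```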

    Data Fusion for MaaS: Opportunities and Challenges

    © 2018 IEEE. Computer Supported Cooperative Work (CSCW) in design is an essential facilitator for the development and implementation of smart cities, where modern cooperative transportation and integrated mobility are in high demand. Owing to the greater availability of different data sources, the data fusion problem in intelligent transportation systems (ITS) has become very challenging, and machine learning models and approaches are promising candidates to offer an important yet comprehensive solution. In this paper, we provide an overview of recent advances in data fusion for Mobility as a Service (MaaS), including the basics of data fusion theory and the related machine learning methods. We also highlight the opportunities and challenges of MaaS, and discuss potential future directions of research on integrated mobility modelling.

    An efficient data masking for securing medical data using DNA encoding and chaotic system

    Data security is of utmost importance for ubiquitous computing of medical/diagnostic data or images, and the privacy of patients must also be preserved. Recently, deoxyribonucleic acid (DNA) sequences and chaotic sequences have been used jointly to build efficient data masking models. However, the state-of-the-art models are not robust against noise and cropping attacks (CA), since in existing models most digits of each pixel are not altered. This work presents an efficient data masking (EDM) method using chaos- and DNA-based encryption for securing health-care data. To overcome these research challenges, an effective bit-scrambling method is required. Firstly, this work presents efficient bit scrambling using a logistic sine map and a pseudorandom sequence generated by a chaotic system. Then, DNA substitution is performed among them to resist differential attacks (DA), statistical attacks (SA) and CA. Experiments are conducted on diverse standard images. The outcomes achieved show that the proposed model is efficient when compared to existing models.
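
    The masking pipeline described above can be illustrated with a small sketch combining a logistic-sine keystream with a DNA-coded XOR substitution. The snippet below uses one standard 2-bit-to-base encoding rule and a common hybrid logistic-sine map; the actual parameters, rule selection and bit-scrambling permutation in the paper may differ, so treat this purely as an assumption-laden illustration of the substitution step.

```python
import numpy as np

# One standard 2-bit -> base encoding rule; the paper may use another
# rule or switch rules dynamically.
BITS2BASE = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
BASE2BITS = {b: k for k, b in BITS2BASE.items()}

def dna_xor(a, b):
    # XOR of the two bases' underlying bit pairs under the rule above.
    return BITS2BASE[format(int(BASE2BITS[a], 2) ^ int(BASE2BITS[b], 2), '02b')]

def logistic_sine_keystream(length, x0=0.4, r=3.99):
    # Hybrid logistic-sine map x_{n+1} = (r x (1-x) + (4-r) sin(pi x)/4) mod 1,
    # quantised to bytes; x0 and r play the role of the secret key.
    xs, x = np.empty(length), x0
    for i in range(length):
        x = (r * x * (1.0 - x) + (4.0 - r) * np.sin(np.pi * x) / 4.0) % 1.0
        xs[i] = x
    return (xs * 256).astype(np.uint8)

def mask_image(img, x0=0.4, r=3.99):
    # Substitution step only: DNA-coded XOR of each pixel byte with the
    # chaotic keystream (the operation is its own inverse).
    flat = img.astype(np.uint8).ravel()
    key = logistic_sine_keystream(flat.size, x0, r)
    out = np.empty_like(flat)
    for i, (p, k) in enumerate(zip(flat, key)):
        pb, kb = format(int(p), '08b'), format(int(k), '08b')
        bases = [dna_xor(BITS2BASE[pb[j:j + 2]], BITS2BASE[kb[j:j + 2]])
                 for j in range(0, 8, 2)]
        out[i] = int(''.join(BASE2BITS[b] for b in bases), 2)
    return out.reshape(img.shape)
```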