
    Different Subspace Classification

    We introduce the idea of Characteristic Regions to solve classification problems. By identifying regions in which classes are both dense (i.e. containing many observations) and relevant (i.e. useful for discrimination), we can characterize the different classes. These Characteristic Regions are then used to generate a classification rule. The result can be visualized, giving the user insight into the data and an easily interpretable rule.
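
    The abstract does not spell out the construction, so the following is only a minimal sketch of one plausible reading, in Python with NumPy: scan axis-aligned bins of each feature and keep those that are dense for one class and dominated by it. The `n_bins`, `purity`, and `density_factor` thresholds are illustrative assumptions, not values from the paper.

        import numpy as np

        def characteristic_regions(X, y, n_bins=10, purity=0.8, density_factor=2.0):
            # Keep bins that are dense for one class (holding well above a
            # uniform share of that class's points) and relevant (dominated
            # by that class); both thresholds are assumed, not the paper's.
            regions = []
            for j in range(X.shape[1]):
                edges = np.histogram_bin_edges(X[:, j], bins=n_bins)
                idx = np.digitize(X[:, j], edges[1:-1])  # bin index in 0..n_bins-1
                for b in range(n_bins):
                    mask = idx == b
                    if not mask.any():
                        continue
                    labels, counts = np.unique(y[mask], return_counts=True)
                    k = counts.argmax()
                    dense = counts[k] >= density_factor * (y == labels[k]).sum() / n_bins
                    relevant = counts[k] / counts.sum() >= purity
                    if dense and relevant:
                        regions.append((j, edges[b], edges[b + 1], labels[k]))
            return regions

    A new point can then be classified, for instance, by voting over the regions it falls into, and the resulting (feature, interval, class) triples can be plotted directly, which is what makes such a rule easy to inspect.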

    Multiple pattern classification by sparse subspace decomposition

    A robust classification method is developed on the basis of sparse subspace decomposition. The method decomposes a mixture of subspaces of unlabeled data (queries) into as few class subspaces as possible. Each query is then classified into the class whose subspace contributes most significantly to the decomposed subspace, so multiple queries from different classes can be classified simultaneously into their respective classes. A practical greedy algorithm for the sparse subspace decomposition is designed for this classification. The method achieves a high recognition rate and robust performance by exploiting joint sparsity.
    Comment: 8 pages, 3 figures, 2nd IEEE International Workshop on Subspace Methods, Workshop Proceedings of ICCV 200
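
    The abstract names a greedy algorithm without giving its steps, so here is a hedged Python sketch of one natural block-greedy variant: repeatedly add the class subspace that best explains the remaining energy of the query matrix, then label each query by its strongest selected subspace. The residual-energy gain and the `tol` stopping rule are illustrative assumptions, not the paper's exact criterion.

        import numpy as np

        def greedy_subspace_decompose(Q, class_bases, tol=1e-2):
            # Q: (d x m) matrix of query vectors; class_bases: list of (d x k)
            # orthonormal bases, one per class. Greedily select as few class
            # subspaces as possible to explain Q, then classify each column.
            d, m = Q.shape
            selected, R = [], Q.copy()
            while (np.linalg.norm(R) > tol * np.linalg.norm(Q)
                   and len(selected) < len(class_bases)):
                gains = [np.linalg.norm(U.T @ R) if c not in selected else -np.inf
                         for c, U in enumerate(class_bases)]
                selected.append(int(np.argmax(gains)))
                # Re-project Q onto the span of all selected bases jointly.
                B = np.hstack([class_bases[c] for c in selected])
                P = np.linalg.qr(B)[0]
                R = Q - P @ (P.T @ Q)
            labels = []
            for i in range(m):  # strongest selected subspace wins the query
                energies = [np.linalg.norm(class_bases[c].T @ Q[:, i]) for c in selected]
                labels.append(selected[int(np.argmax(energies))])
            return selected, labels

    Selecting subspaces jointly over the whole batch of queries is what exploits the joint sparsity the abstract mentions: a class subspace is only added if it helps explain the queries as a group.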

    Comparative study for broadband direction of arrival estimation techniques

    This paper reviews and compares three linear algebraic signal subspace techniques for broadband direction of arrival estimation: (i) the coherent signal subspace approach, (ii) eigenanalysis of the parameterised spatial correlation matrix, and (iii) a polynomial version of the multiple signal classification (MUSIC) algorithm. Simulation results comparing the accuracy of these methods are presented.
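
    For readers unfamiliar with the third technique, the narrowband MUSIC spectrum that the polynomial variant generalizes can be sketched in a few lines of Python. The uniform-linear-array geometry and half-wavelength `spacing` below are assumptions for the example, not details from the paper.

        import numpy as np

        def music_spectrum(X, n_sources, angles_deg, spacing=0.5):
            # X: (sensors x snapshots) narrowband array data. Eigendecompose the
            # sample covariance, keep the noise subspace, and scan steering
            # vectors; peaks of the pseudo-spectrum mark directions of arrival.
            M = X.shape[0]
            Rxx = X @ X.conj().T / X.shape[1]          # sample spatial covariance
            eigvecs = np.linalg.eigh(Rxx)[1]           # columns sorted ascending
            En = eigvecs[:, :M - n_sources]            # noise-subspace basis
            spectrum = []
            for theta in np.deg2rad(angles_deg):
                a = np.exp(-2j * np.pi * spacing * np.arange(M) * np.sin(theta))
                spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
            return np.asarray(spectrum)

    Roughly speaking, the polynomial version replaces this scalar covariance matrix with a polynomial space-time covariance matrix so that broadband delays can be represented; the sketch above covers only the narrowband core.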

    Cross Language Text Classification via Subspace Co-Regularized Multi-View Learning

    In many multilingual text classification problems, the documents in different languages often share the same set of categories. To reduce the labeling cost of training a classification model for each individual language, it is important to transfer label knowledge gained in one language to another by performing cross-language classification. In this paper we develop a novel subspace co-regularized multi-view learning method for cross-language text classification. The method is built on parallel corpora produced by machine translation. It jointly minimizes the training error of the classifier in each language while penalizing the distance between the subspace representations of parallel documents. Our empirical study on a large set of cross-language text classification tasks shows that the proposed method consistently outperforms a number of inductive methods, domain adaptation methods, and multi-view learning methods.
    Comment: Appears in Proceedings of the 29th International Conference on Machine Learning (ICML 2012).
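
    The paper's exact formulation is not reproduced in the abstract, so the sketch below only illustrates the shape of such a joint objective in Python: per-language training losses for classifiers acting in learned subspaces, plus a co-regularizer penalizing the distance between the subspace codes of parallel documents. All names and the squared-loss choice are assumptions for illustration.

        import numpy as np

        def coregularized_objective(U1, U2, w, X1, y1, X2, y2, P1, P2, lam=1.0, mu=0.1):
            # U1, U2: (d_v x k) subspace maps per language; w: shared classifier.
            # X1, X2 / y1, y2: labeled documents per language; P1, P2: parallel
            # (machine-translated) document pairs, row-aligned across languages.
            train = (np.sum((X1 @ U1 @ w - y1) ** 2) +
                     np.sum((X2 @ U2 @ w - y2) ** 2))
            parallel = lam * np.sum((P1 @ U1 - P2 @ U2) ** 2)   # co-regularizer
            ridge = mu * (np.sum(U1 ** 2) + np.sum(U2 ** 2) + np.sum(w ** 2))
            return train + parallel + ridge

    Minimizing an objective of this kind (e.g. by alternating or gradient-based updates over U1, U2, and w) is what lets label knowledge in one language constrain the classifier in the other.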

    What Are We Doing to the Children?: An Essay on Juvenile (In)justice

    The nonlinear conjugate gradient method is a powerful tool in the search for Bayes-error-optimal linear subspaces for classification problems. In this report, techniques for finding linear subspaces in which the classification error is minimized are surveyed. Summary-statistics models of normal populations are used to form smooth, non-convex objective functions of a linear transformation that reduces the dimensionality. Objective functions based on the Mahalanobis or Bhattacharyya distances, which are closely related to the probability of misclassification, are derived, along with their subspace gradients. Different approaches to minimizing these objective functions are investigated: Householder and Givens parameterizations as well as steepest descent and conjugate gradient methods. The methods are evaluated on experimental data with respect to convergence rate and subspace classification accuracy.
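
    As a concrete anchor for the Bhattacharyya-based objective, the Python sketch below evaluates the standard Bhattacharyya distance between two Gaussian class models after projection onto a subspace, and maximizes it with SciPy's general-purpose conjugate gradient routine. The report derives analytic subspace gradients and Householder/Givens parameterizations; this sketch substitutes numerical gradients and an unconstrained parameterization for brevity, and the populations below are synthetic stand-ins.

        import numpy as np
        from scipy.optimize import minimize

        def bhattacharyya_in_subspace(w, mu1, S1, mu2, S2):
            # Bhattacharyya distance between N(mu1, S1) and N(mu2, S2) after
            # projecting onto the columns of W; a larger distance corresponds
            # to a smaller bound on the probability of misclassification.
            W = w.reshape(mu1.size, -1)
            dm = W.T @ (mu1 - mu2)
            C1, C2 = W.T @ S1 @ W, W.T @ S2 @ W
            C = 0.5 * (C1 + C2)
            term1 = 0.125 * dm @ np.linalg.solve(C, dm)
            term2 = 0.5 * np.log(np.linalg.det(C) /
                                 np.sqrt(np.linalg.det(C1) * np.linalg.det(C2)))
            return term1 + term2

        # Search for a one-dimensional subspace by nonlinear CG with
        # numerical gradients on synthetic class statistics.
        rng = np.random.default_rng(0)
        d = 5
        mu1, mu2 = rng.normal(size=d), rng.normal(size=d)
        A1, A2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
        S1, S2 = A1 @ A1.T + np.eye(d), A2 @ A2.T + np.eye(d)
        res = minimize(lambda w: -bhattacharyya_in_subspace(w, mu1, S1, mu2, S2),
                       rng.normal(size=d), method="CG")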