643 research outputs found

    Automated design of robust discriminant analysis classifier for foot pressure lesions using kinematic data

    Get PDF
    In recent years, the use of motion tracking systems for the acquisition of functional biomechanical gait data has received increasing interest due to the richness and accuracy of the measured kinematic information. However, costs frequently restrict the number of subjects employed, which makes the dimensionality of the collected data far higher than the number of available samples. This paper applies discriminant analysis algorithms to the classification of patients with different types of foot lesions, in order to establish an association between foot motion and lesion formation. With primary attention to small-sample-size situations, we compare different types of Bayesian classifiers and evaluate their performance with various dimensionality reduction techniques for feature extraction, as well as search methods for the selection of raw kinematic variables. Finally, we propose a novel integrated method which fine-tunes the classifier parameters and selects the most relevant kinematic variables simultaneously. Performance comparisons use robust resampling techniques such as the Bootstrap .632+ and k-fold cross-validation. Results from experiments with lesion subjects suffering from pathological plantar hyperkeratosis show that the proposed method can lead to ~96% correct classification rates with less than 10% of the original features.
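The resampling protocols named above are straightforward to sketch. Below is a minimal, self-contained illustration of k-fold cross-validation with a toy nearest-mean classifier; the fold construction and the classifier are illustrative assumptions, not the paper's Bayesian discriminant pipeline.

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    """Split sample indices into k roughly equal, shuffled folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(fit, data, labels, k=5):
    """Estimate accuracy: train on k-1 folds, test on the held-out fold."""
    folds = k_fold_indices(len(data), k)
    accuracies = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        model = fit([data[j] for j in train_idx], [labels[j] for j in train_idx])
        correct = sum(model(data[j]) == labels[j] for j in test_idx)
        accuracies.append(correct / len(test_idx))
    return sum(accuracies) / k

def nearest_mean_classifier(train_x, train_y):
    """Toy 1-D classifier: assign a sample to the class with the closest mean."""
    groups = {}
    for x, y in zip(train_x, train_y):
        groups.setdefault(y, []).append(x)
    centroids = {y: sum(v) / len(v) for y, v in groups.items()}
    return lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))
```

The fold-averaged accuracy is what comparisons like those in the paper report; the Bootstrap .632+ estimator follows the same train/test discipline but resamples with replacement and reweights the apparent and out-of-bag error.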

    Determining the number of clusters and distinguishing overlapping clusters in data analysis

    Get PDF
    Clustering builds a collection of objects (clusters) that are similar within the same group and dissimilar when they belong to different groups. This thesis addresses two major problems in data analysis: 1) the automatic determination of the number of clusters in a dataset about whose underlying structures nothing is known; 2) the phenomenon of overlap between clusters. Most clustering algorithms suffer from the problem of determining the number of clusters, which is often left to the user. The classical approach to determining the number of clusters is based on an iterative process that minimizes an objective function called a validity index. Our goal is to: 1) develop a new validity index for measuring the quality of a partition produced by a clustering algorithm; 2) propose a new fuzzy clustering algorithm that automatically determines the number of clusters. An application of the new algorithm to feature selection in a database is presented. The phenomenon of overlap between clusters is one of the difficult problems in statistical pattern recognition, and most clustering algorithms have difficulty distinguishing clusters that overlap. In this thesis, we develop a theory that formally characterizes the overlap phenomenon between clusters in a Gaussian mixture model. Building on this theory, we develop a new algorithm that computes the degree of overlap between clusters in the multidimensional case, and we study the factors that affect the theoretical value of the degree of overlap. We demonstrate how this theory can be used to generate valid, concrete test data for an objective evaluation of validity indices with respect to their ability to distinguish overlapping clusters. Finally, the theory is applied to color image segmentation using a hierarchical clustering algorithm.
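The degree of overlap between two clusters can be illustrated in one dimension as the shared area under the two component densities. The sketch below uses a plain Riemann sum over two univariate Gaussians; it is an illustrative stand-in under that simplification, not the thesis's multidimensional algorithm.

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of a univariate Gaussian at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def overlap_degree(mu1, s1, mu2, s2, steps=20000):
    """Area under min(p1, p2): near 0 for well-separated clusters,
    near 1 for identical ones. Computed with a left Riemann sum over
    the union of the two 6-sigma supports."""
    lo = min(mu1 - 6 * s1, mu2 - 6 * s2)
    hi = max(mu1 + 6 * s1, mu2 + 6 * s2)
    dx = (hi - lo) / steps
    return sum(min(gauss_pdf(lo + i * dx, mu1, s1),
                   gauss_pdf(lo + i * dx, mu2, s2)) * dx
               for i in range(steps))
```

Sliding the second mean away from the first drives the measure from 1 toward 0, which is the behavior a validity index must be sensitive to when clusters overlap.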

    Localized Feature Selection For Unsupervised Learning

    Get PDF
    Clustering is the unsupervised classification of data objects into groups (clusters) such that objects in one group are similar to each other and dissimilar from objects in other groups. Feature selection for unsupervised learning is a technique that chooses the best feature subset for clustering. In general, unsupervised feature selection algorithms conduct feature selection in a global sense by producing a common feature subset for all the clusters. This, however, can be invalid in clustering practice, where the local intrinsic properties of the data matter more, which implies that localized feature selection is more desirable. In this dissertation, we focus on cluster-wise feature selection for unsupervised learning. We first propose a Cross-Projection method to achieve localized feature selection. The proposed algorithm computes adjusted and normalized scatter separability for individual clusters. A sequential backward search is then applied to find the optimal (perhaps local) feature subsets for each cluster. Our experimental results show the need for feature selection in clustering and the benefits of selecting features locally. We also present an approach based on maximum likelihood with a Gaussian mixture: we introduce a probabilistic model in which the feature relevance for an individual cluster is treated as a probability, represented by a localized feature saliency and estimated through the Expectation-Maximization (EM) algorithm during the clustering process. In addition, the number of clusters is determined by integrating a Minimum Message Length (MML) criterion. Experiments carried out on both synthetic and real-world datasets illustrate the performance of the approach in finding embedded clusters. A third approach, based on a Bayesian framework, is also implemented. We place prior distributions over the parameters of the Gaussian mixture model, and maximize the marginal log-likelihood given the mixing coefficients and feature saliencies. The parameters are estimated by Bayesian variational learning. This approach computes the feature saliency for each cluster and detects the number of clusters simultaneously.
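The EM estimation underlying these models can be sketched for the simplest case: a two-component univariate Gaussian mixture, without the feature-saliency and MML extensions described above. This is a minimal sketch of the standard algorithm, not the dissertation's method.

```python
import math

def em_gmm_1d(xs, n_iter=200):
    """EM for a two-component 1-D Gaussian mixture.
    E-step: posterior responsibility of each component for each point.
    M-step: responsibility-weighted updates of weights, means, variances."""
    mu = [min(xs), max(xs)]          # crude but deterministic initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step
        resp = []
        for x in xs:
            p = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return w, mu, var
```

The feature-saliency variants add, per feature and per cluster, a latent relevance variable whose posterior is updated in the same E-step/M-step rhythm.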

    Hierarchical Classification in High Dimensional, Numerous Class Cases

    Get PDF
    As progress in new sensor technology continues, increasingly high resolution imaging sensors are being developed. HIRIS, the High Resolution Imaging Spectrometer, for example, will gather data simultaneously in 102 spectral bands in the 0.4 - 2.5 micrometer wavelength region at 30 m spatial resolution. AVIRIS, the Airborne Visible and Infrared Imaging Spectrometer, covers the 0.4 - 2.5 micrometer region in 224 spectral bands. These sensors give more detailed and complex data for each picture element and greatly increase the dimensionality of data over past systems. In applying pattern recognition methods to remote sensing problems, an inherent limitation is that there is almost always only a small number of training samples with which to design the classifier. Growth in both the dimensionality and the number of classes is likely to aggravate the already significant limitation on training samples. Thus ways must be found for future data analysis to perform effectively in the face of large numbers of classes without unduly aggravating the limitations on training. A valid list of classes for remote sensing data must satisfy two requirements: each class must be of informational value (i.e., useful in a pragmatic sense), and the classes must be spectrally or otherwise separable (i.e., distinguishable based on the available data). Therefore, a means to simultaneously reconcile a property of the data (separability) and a property of the application (informational value) is important in developing a new approach to classifier design. In this work we propose decision tree classifiers, which have the potential to be more efficient and accurate in this situation of high dimensionality and large numbers of classes. In particular, we discuss three methods for designing a decision tree classifier: a top-down approach, a bottom-up approach, and a hybrid approach.
Also, remote sensing systems which perform pattern recognition tasks on high dimensional data with small training sets require efficient methods for feature extraction and for predicting the optimal number of features to achieve minimum classification error. Three feature extraction techniques are implemented. The canonical and extended canonical techniques depend mainly on the mean difference between two classes, while an autocorrelation technique depends on the correlation differences. The mathematical relationship between sample size, dimensionality, and risk value is derived. It is shown that the incremental error is simultaneously affected by two factors, dimensionality and separability. For predicting the optimal number of features, it is concluded that in a transformed coordinate space it is best to use only the single best feature when very few samples are available. Empirical results indicate that a reasonable sample size is six to ten times the dimensionality.
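The empirical sample-size guideline in the last sentence translates directly into a rule of thumb. The helper below is a hypothetical convenience, not from the paper; the factors 6 and 10 are the paper's reported bounds.

```python
def recommended_training_size(dimensionality, factor_low=6, factor_high=10):
    """Rule-of-thumb range of training samples, per the empirical finding
    that a reasonable sample size is 6 to 10 times the dimensionality."""
    return dimensionality * factor_low, dimensionality * factor_high
```

For a 224-band AVIRIS scene this rule asks for on the order of 1344 to 2240 training samples, which illustrates why feature extraction and class hierarchies are needed before classification.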

    Distances in evidence theory: Comprehensive survey and generalizations

    Get PDF
    The purpose of the present work is to survey the dissimilarity measures defined so far in the mathematical framework of evidence theory, and to propose a classification of these measures based on their formal properties. This research is motivated by the fact that while dissimilarity measures have been widely studied and surveyed in the fields of probability theory and fuzzy set theory, no comprehensive survey is yet available for evidence theory. The main results presented herein include a synthesis of the properties of the measures defined so far in the scientific literature; the generalizations proposed naturally lead to additions to the body of the previously known measures, leading to the definition of numerous new measures. Building on this analysis, we highlight the fact that Dempster's conflict cannot be considered a genuine dissimilarity measure between two belief functions and propose an alternative based on a cosine function. Other original results include the justification of the use of two-dimensional indexes as (cosine, distance) couples and a general formulation for this class of new indexes. We base our exposition on a geometrical interpretation of evidence theory and show that most of the dissimilarity measures published so far are based on inner products, in some cases degenerate. Experimental results based on Monte Carlo simulations illustrate interesting relationships between existing measures.
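The cosine-based alternative to Dempster's conflict can be illustrated by treating two basic belief assignments as vectors over a common ordered list of focal elements and taking a plain inner product. The survey's measures use evidence-theoretic (possibly weighted) inner products, so the sketch below is only a minimal geometric illustration.

```python
import math

def cosine_dissimilarity(m1, m2):
    """1 - cos(angle) between two mass vectors defined over the same
    ordered set of focal elements: 0 for identical assignments, 1 when
    the assignments share no focal element."""
    dot = sum(a * b for a, b in zip(m1, m2))
    n1 = math.sqrt(sum(a * a for a in m1))
    n2 = math.sqrt(sum(b * b for b in m2))
    return 1 - dot / (n1 * n2)
```

Pairing such a cosine term with a distance term gives the two-dimensional (cosine, distance) indexes discussed above: the distance captures magnitude differences and the cosine captures directional (focal-element) disagreement.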

    K-means based clustering and context quantization

    Get PDF

    Inverse Projection Representation and Category Contribution Rate for Robust Tumor Recognition

    Full text link
    Sparse representation based classification (SRC) methods have achieved remarkable results. SRC, however, still suffers from the need for sufficient training samples, insufficient use of test samples, and instability of representation. In this paper, a stable inverse projection representation based classification (IPRC) is presented to tackle these problems by effectively using test samples. The IPR is first proposed and its feasibility and stability are analyzed. A classification criterion named the category contribution rate is constructed to match the IPR and complete the classification. Moreover, a statistical measure is introduced to quantify the stability of representation-based classification methods. Based on the IPRC technique, a robust tumor recognition framework is presented by interpreting microarray gene expression data, where a two-stage hybrid gene selection method is introduced to select informative genes. Finally, a functional analysis of candidate pathogenicity-related genes is given. Extensive experiments on six public tumor microarray gene expression datasets demonstrate that the proposed technique is competitive with state-of-the-art methods. Comment: 14 pages, 19 figures, 10 tables
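Representation-based classification in general can be sketched as follows: represent the test sample in terms of each class and assign it to the class whose representation reconstructs it best. The toy below projects onto a single per-class prototype vector; it is a generic stand-in for the family of methods discussed, not the authors' inverse projection representation, and its residual comparison only loosely mirrors the category contribution rate.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def class_residual(x, prototype):
    """Residual norm after projecting x onto the line spanned by a
    class prototype (least-squares fit with a single atom)."""
    alpha = dot(x, prototype) / dot(prototype, prototype)
    return math.sqrt(sum((xi - alpha * pi) ** 2
                         for xi, pi in zip(x, prototype)))

def classify(x, prototypes):
    """Assign x to the class whose prototype reconstructs it with the
    smallest residual; also return all residuals for inspection."""
    residuals = {c: class_residual(x, p) for c, p in prototypes.items()}
    return min(residuals, key=residuals.get), residuals
```

Full SRC-style methods replace the single prototype with a dictionary of training samples per class and a sparse or regularized solver, but the decision rule, smallest class-wise reconstruction residual, is the same.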