5,594 research outputs found

    Differential prioritization between relevance and redundancy in correlation-based feature selection techniques for multiclass gene expression data

    BACKGROUND: Due to the large number of genes in a typical microarray dataset, feature selection looks set to play an important role in reducing noise and computational cost in gene expression-based tissue classification while improving accuracy at the same time. Surprisingly, this does not appear to be the case for all multiclass microarray datasets. The reason is that many feature selection techniques applied to microarray datasets are either rank-based, and hence do not take correlations between genes into account, or wrapper-based, which incurs high computational cost and often yields difficult-to-reproduce results. In studies where correlations between genes are considered, attempts to establish the merit of the proposed techniques are hampered by evaluation procedures that are less than meticulous, resulting in overly optimistic estimates of accuracy. RESULTS: We present two realistically evaluated correlation-based feature selection techniques which incorporate, in addition to the two existing criteria involved in forming a predictor set (relevance and redundancy), a third criterion called the degree of differential prioritization (DDP). The DDP functions as a parameter that strikes the balance between relevance and redundancy, providing our techniques with the novel ability to differentially prioritize the optimization of relevance against redundancy (and vice versa). This ability proves useful in producing optimal classification accuracy with reasonably small predictor sets for nine well-known multiclass microarray datasets. CONCLUSION: For multiclass microarray datasets, especially the GCM and NCI60 datasets, the DDP enables our filter-based techniques to produce accuracies better than those reported in previous studies that employed similarly realistic evaluation procedures.
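
    A minimal sketch of how a DDP-style criterion might trade relevance against redundancy during greedy predictor-set growth. The F-statistic relevance measure, the correlation-based redundancy measure, and the exponent form `relevance**alpha * antiredundancy**(1 - alpha)` are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.stats import f_oneway

def ddp_score(X, y, subset, alpha):
    """Illustrative DDP-style score: relevance**alpha * antiredundancy**(1 - alpha).

    alpha = 1 prioritizes relevance only; alpha = 0 prioritizes
    antiredundancy only; intermediate values differentially prioritize.
    """
    # Relevance: mean F-statistic of the subset's genes across the classes.
    rel = np.mean([f_oneway(*(X[y == c][:, j] for c in np.unique(y))).statistic
                   for j in subset])
    # Redundancy: mean absolute pairwise correlation within the subset.
    if len(subset) > 1:
        corr = np.abs(np.corrcoef(X[:, subset], rowvar=False))
        red = (corr.sum() - len(subset)) / (len(subset) * (len(subset) - 1))
    else:
        red = 0.0
    return rel ** alpha * (1.0 - red) ** (1.0 - alpha)

def greedy_ddp_selection(X, y, k, alpha=0.5):
    """Grow a predictor set one gene at a time, maximizing the DDP-style score."""
    selected, candidates = [], set(range(X.shape[1]))
    for _ in range(k):
        best = max(candidates, key=lambda j: ddp_score(X, y, selected + [j], alpha))
        selected.append(best)
        candidates.remove(best)
    return selected
```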

    Multi-model fitting based on minimum spanning tree

    This paper presents a novel approach to the computation of primitive geometrical structures, where no prior knowledge about the visual scene is available and a high level of noise is expected. We base our work on the grouping principles of proximity and similarity, applied to points and preliminary models. The former is realized using Minimum Spanning Trees (MST), to which we apply stable alignment and goodness-of-fit criteria; for the latter, we use spectral clustering of preliminary models. The algorithm generalizes to various model fitting settings without tuning of run parameters. Experiments demonstrate a significant improvement in the localization accuracy of models in plane, homography, and motion segmentation examples. Unlike most algorithms in the field, its efficiency does not depend on fine-tuning of run parameters.
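
    A minimal sketch of the proximity-grouping idea via an MST: build the tree over pairwise point distances and cut edges much longer than the mean MST edge to obtain preliminary groups. The edge-cutting rule and the `edge_factor` parameter are illustrative assumptions; the paper's actual criteria (stable alignment, goodness of fit, spectral clustering of preliminary models) are not reproduced here.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_proximity_groups(points, edge_factor=2.0):
    """Group points by proximity: build an MST over pairwise distances,
    then cut edges much longer than the mean MST edge (illustrative rule)."""
    dist = squareform(pdist(points))
    mst = minimum_spanning_tree(dist).toarray()
    threshold = edge_factor * mst[mst > 0].mean()
    mst[mst > threshold] = 0.0  # sever 'inconsistent' long edges
    # The surviving connected components are the preliminary groups.
    n_groups, labels = connected_components(mst, directed=False)
    return n_groups, labels
```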

    Towards more reliable feature evaluations for classification

    In this thesis we study feature subset selection and feature weighting algorithms. Our aim is to make their output more stable and more useful when used to train a classifier. We begin by defining the concept of stability and selecting a measure to assess the output of the feature selection process. We then study different sources of instability and propose modifications of classic algorithms that improve their stability. We propose a modification of wrapper algorithms that takes otherwise unused information into account to overcome an intrinsic source of instability for these algorithms: the feature assessment being a random variable that depends on the particular training subsample. Our version accumulates the evaluation results of each feature at each iteration to average out the effect of the randomness. Another novel proposal is to make wrappers evaluate the remaining set of features at each step to overcome another source of instability: the randomness of the algorithms themselves. By evaluating the non-selected set of features, the initial choice of variables is more educated. These modifications bring little computational overhead and deliver better results, both in terms of stability and predictive power. We finally tackle another source of instability: the differential contribution of the instances to feature assessment. We present a framework to combine almost any instance weighting algorithm with any feature weighting one. Our combination of algorithms delivers more stable results for the various feature weighting algorithms we have tested. Finally, we present a deeper integration of instance weighting with feature weighting by modifying the Simba algorithm, which delivers even better results in terms of stability.

    The focus of this thesis is to measure, study, and improve the stability of feature subset selection and feature weighting algorithms in a supervised learning context. The general purpose of feature selection in a classification context is to improve prediction accuracy; we argue that there is another major challenge in feature selection and feature weighting: the stability of the results. After choosing a stability measure from among those studied, we propose improvements to a very popular algorithm: Relief. We analyze several distance measures besides the original one and study their effect on accuracy, redundancy detection, and stability. We also test different ways of using the weights computed at each step to influence the distance calculation, in a manner similar to another feature weighting algorithm: Simba. We further improve stability by increasing the contribution of the feature weights to the distance calculation as time advances, to minimize the impact of the random selection of the first instances. As for wrapper algorithms, we modify them to take into account information that was previously ignored, overcoming an intrinsic source of instability: the fact that feature evaluation is a random variable that depends on the data subsample used. Our version accumulates the results of each iteration to compensate for this random effect, whereas the original versions discard all the information gathered about each feature in a given iteration and start over in the next, leading to less stable results. Another proposal is to have these wrappers evaluate the non-selected subset of features at each iteration to avoid another source of instability. These modifications do not entail a large increase in computational cost, and their results are more stable and more useful for a classifier. Finally, we propose weighting the contribution of each instance to feature evaluation. There may be atypical observations that should not be taken into account as much as the others: if we are trying to predict cancer from genetic analyses, we should give less credibility to data obtained from people exposed to high levels of radiation, even when we have no information about that exposure. Instance weighting methods aim to identify such cases and assign them lower weights. Several authors have worked on instance weighting schemes to improve feature selection, but there is no previous work on combining instance weighting with feature weighting. We present a framework for combining instance weighting algorithms with feature weighting ones, and we also propose a new instance weighting algorithm based on the concept of decision margin used by some feature weighting algorithms. Within this framework, we have tested the modifications against the original versions using several datasets from the UCI repository, DNA microarray datasets, and those used in the NIPS-2003 feature selection challenge. Our combinations of instance weighting and feature weighting algorithms yield more stable results for the various feature weighting algorithms we have studied. Finally, we present a deeper integration of instance weighting with the Simba feature selection algorithm, using the instance weights to weight the distance calculation, which obtains even better results in terms of stability. The main contributions of this thesis are: (i) a framework for combining instance weighting with feature weighting, (ii) a review of stability measures for feature selection, (iii) several modifications of feature selection and feature weighting algorithms that improve their stability and the predictive power of the selected feature subset without a significant increase in computational cost, (iv) a theoretical definition of feature importance, and (v) a study of the relationship between feature selection stability and feature redundancy.
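
    A minimal sketch of the accumulation idea for wrappers, assuming a scikit-learn-style estimator: per-feature scores from resampled evaluations are summed across iterations instead of being discarded, and all non-selected features are re-evaluated at every step. The forward-selection loop, subsample size, and cross-validation settings are illustrative choices, not the thesis's exact procedure.

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_score

def accumulating_forward_wrapper(estimator, X, y, k, n_iters=10, seed=None):
    """Forward-selection wrapper that keeps a running sum of every feature's
    evaluations across resampled iterations instead of discarding them."""
    rng = np.random.default_rng(seed)
    cumulative = np.zeros(X.shape[1])  # accumulated per-feature scores
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        for _ in range(n_iters):
            # Scores are random variables of the training subsample ...
            idx = rng.choice(len(X), size=len(X) // 2, replace=False)
            for j in remaining:  # ... so re-evaluate all non-selected features
                cols = selected + [j]
                cumulative[j] += cross_val_score(
                    clone(estimator), X[idx][:, cols], y[idx], cv=3).mean()
        best = max(remaining, key=lambda j: cumulative[j])
        selected.append(best)
        remaining.remove(best)
    return selected
```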

    A Novel Hybrid Dimensionality Reduction Method using Support Vector Machines and Independent Component Analysis

    Due to the increasing demand for high-dimensional data analysis in applications such as electrocardiogram signal analysis and gene expression analysis for cancer detection, dimensionality reduction has become a viable process for extracting essential information from data, so that high-dimensional data can be represented in a more condensed form with much lower dimensionality, both improving classification accuracy and reducing computational complexity. Conventional dimensionality reduction methods can be categorized into stand-alone and hybrid approaches. A stand-alone method utilizes a single criterion, either supervised or unsupervised; a hybrid method integrates both. Compared with the variety of stand-alone dimensionality reduction methods, the hybrid approach is promising because it simultaneously takes advantage of the supervised criterion for better classification accuracy and the unsupervised criterion for better data representation. However, several issues challenge the efficiency of the hybrid approach, including (1) the difficulty of finding a subspace that seamlessly integrates both criteria in a single hybrid framework, (2) the robustness of the performance on noisy data, and (3) nonlinear data representation capability. This dissertation presents a new hybrid dimensionality reduction method that seeks a projection through optimization of both structural risk (the supervised criterion, from the Support Vector Machine (SVM)) and data independence (the unsupervised criterion, from Independent Component Analysis (ICA)). The projection from the SVM directly contributes to classification performance in a supervised perspective, whereas maximizing independence among features via ICA constructs a projection that indirectly improves classification accuracy through better intrinsic data representation in an unsupervised perspective. For the linear dimensionality reduction model, I introduce orthogonality to interrelate the projections from the SVM and ICA, while a redundancy removal process eliminates part of the projection vectors from the SVM, leading to more effective dimensionality reduction. The orthogonality-based linear hybrid method is then extended to an uncorrelatedness-based algorithm with nonlinear data representation capability, in which the SVM and ICA are integrated into a single framework through an uncorrelated subspace based on a kernel implementation. Experimental results show that the proposed approaches give higher classification performance, with better robustness, in relatively lower dimensions than conventional methods for high-dimensional datasets.
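
    A minimal sketch of the orthogonality idea for the linear case, assuming binary labels: take the SVM hyperplane normal as the supervised direction, take ICA unmixing directions as the unsupervised ones, and orthogonalize the latter against the former, dropping near-redundant vectors. The Gram-Schmidt loop and the use of LinearSVC/FastICA are illustrative stand-ins for the dissertation's formulation.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import LinearSVC

def hybrid_svm_ica_projection(X, y, n_components=10):
    """Project data onto a basis mixing one supervised direction (the SVM
    hyperplane normal) with unsupervised ICA directions, made orthogonal."""
    # Supervised criterion: the direction that separates the two classes.
    svm_dir = LinearSVC(dual=False).fit(X, y).coef_.ravel()
    svm_dir /= np.linalg.norm(svm_dir)
    # Unsupervised criterion: directions of maximal statistical independence.
    ica = FastICA(n_components=n_components, whiten="unit-variance").fit(X)
    basis = [svm_dir]
    for w in ica.components_:
        for b in basis:          # Gram-Schmidt against kept directions
            w = w - (w @ b) * b
        norm = np.linalg.norm(w)
        if norm > 1e-8:          # near-redundant directions are removed
            basis.append(w / norm)
    W = np.array(basis)          # rows span the reduced subspace
    return X @ W.T
```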

    Supervised clustering of genes

    BACKGROUND: We focus on microarray data where experiments monitor gene expression in different tissues and where each experiment is equipped with an additional response variable such as a cancer type. Although the number of measured genes is in the thousands, it is assumed that only a few marker components of gene subsets determine the type of a tissue. Here we present a new method for finding such groups of genes by directly incorporating the response variables into the grouping process, yielding a supervised clustering algorithm for genes. RESULTS: An empirical study on eight publicly available microarray datasets shows that our algorithm identifies gene clusters with excellent predictive potential, often superior to classification with state-of-the-art methods based on single genes. Permutation tests and bootstrapping provide evidence that the output is reasonably stable and more than a noise artifact. CONCLUSIONS: In contrast to other methods such as hierarchical clustering, our algorithm identifies several gene clusters whose expression levels clearly distinguish the different tissue types. The identification of such gene clusters is potentially useful for medical diagnostics and may at the same time reveal insights into functional genomics.
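
    A minimal sketch of supervised gene clustering, assuming binary tissue labels: genes are greedily added to a cluster whenever they improve a discrimination score of the cluster's mean expression profile. The separation score and the sign-flipping step are illustrative choices, not the paper's exact scoring functions.

```python
import numpy as np

def separation(profile, y):
    """Gap between class means of a mean-expression profile, scaled by
    pooled spread (an illustrative stand-in for the paper's scores)."""
    a, b = profile[y == 0], profile[y == 1]
    return abs(a.mean() - b.mean()) / (a.std() + b.std() + 1e-12)

def grow_gene_cluster(X, y, max_size=20):
    """Greedily add the gene whose inclusion most improves the
    discriminative power of the cluster's mean expression profile."""
    # Flip genes negatively correlated with the response so that
    # averaging expression values does not cancel their signal.
    corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    Xs = X * np.where(corr >= 0, 1.0, -1.0)
    scores = [separation(Xs[:, j], y) for j in range(X.shape[1])]
    cluster = [int(np.argmax(scores))]
    while len(cluster) < max_size:
        profile = Xs[:, cluster].mean(axis=1)
        base = separation(profile, y)
        best, best_gain = None, 0.0
        for j in set(range(X.shape[1])) - set(cluster):
            trial = (profile * len(cluster) + Xs[:, j]) / (len(cluster) + 1)
            gain = separation(trial, y) - base
            if gain > best_gain:
                best, best_gain = j, gain
        if best is None:  # no gene improves the score: stop growing
            break
        cluster.append(best)
    return cluster
```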

    Holistic interpretation of visual data based on topology: semantic segmentation of architectural facades

    The work presented in this dissertation is a step towards effectively incorporating contextual knowledge into the task of semantic segmentation. To date, the use of context has been confined to the genre of the scene, with few exceptions in the field; research has instead been directed towards enhancing appearance descriptors. While this is unarguably important, recent studies show that computer vision has reached a near-human level of performance when relying on these descriptors, provided objects have stable, distinctive surface properties and imaging conditions are proper. When these conditions are not met, humans exploit their knowledge of the intrinsic geometric layout of the scene to make local decisions; computer vision lags behind when it comes to this asset. For this reason, we aim to bridge the gap by presenting algorithms for semantic segmentation of building facades that make use of topological aspects of the scene. We provide a classification scheme that carries out segmentation and recognition simultaneously. The algorithm solves a single optimization function and yields a semantic interpretation of facades, relying on the modeling power of probabilistic graphs and efficient discrete combinatorial optimization tools. We also tackle semantic facade segmentation with a neural network approach, attaining accuracy figures on par with the state of the art in a fully automated pipeline: pixelwise classifications obtained via Convolutional Neural Networks (CNN) are structurally validated through a cascade of Restricted Boltzmann Machines (RBM) and a Multi-Layer Perceptron (MLP) that regenerates the most likely layout. In the domain of architectural modeling, we address geometric multi-model fitting, introducing a novel guided sampling algorithm based on Minimum Spanning Trees (MST) which surpasses other propagation techniques in robustness to noise. We make a number of additional contributions, such as a measure of model deviation that captures variations among fitted models.
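
    A minimal sketch of the structural-validation step, assuming facades rasterized to fixed-size label maps: a Restricted Boltzmann Machine fitted on ground-truth layouts regenerates a plausible layout from a noisy pixelwise classification via Gibbs sampling. The single sklearn BernoulliRBM, the 32x32x5 shapes, and the denoising loop stand in for the dissertation's full RBM cascade and MLP.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

H, W, N_CLASSES = 32, 32, 5   # hypothetical raster and label-set sizes

def one_hot(label_map):
    """Flatten an H x W label map into a binary vector for the RBM."""
    return np.eye(N_CLASSES)[label_map].reshape(-1)

def fit_layout_rbm(ground_truth_maps):
    """Fit the RBM on clean layouts so its hidden units capture
    regularities such as aligned windows and repeated floor bands."""
    V = np.stack([one_hot(m) for m in ground_truth_maps])
    return BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=30).fit(V)

def regenerate_layout(rbm, noisy_label_map, n_gibbs=20):
    """Run a noisy pixelwise classification through the RBM's Gibbs
    chain so it settles onto a structurally plausible facade layout."""
    v = one_hot(noisy_label_map).reshape(1, -1)
    for _ in range(n_gibbs):
        v = rbm.gibbs(v).astype(float)
    # Decode back to per-pixel labels by argmax over class channels.
    return v.reshape(H, W, N_CLASSES).argmax(axis=-1)
```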

    Simultaneous feature selection and classification using kernel-penalized support vector machines

    We introduce an embedded method that simultaneously selects relevant features during classifier construction by penalizing each feature's use in the dual formulation of support vector machines (SVM). This approach, called kernel-penalized SVM (KP-SVM), optimizes the shape of an anisotropic RBF kernel, eliminating features that have low relevance for the classifier. Additionally, KP-SVM employs an explicit stopping condition, avoiding the elimination of features that would negatively affect the classifier's performance. We performed experiments on four real-world benchmark problems, comparing our approach with well-known feature selection techniques. KP-SVM outperformed the alternative approaches and consistently identified fewer relevant features.
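
    A minimal sketch of the anisotropic kernel at the heart of KP-SVM and of the feature-elimination step it enables; the threshold-based pruning shown here is an illustrative simplification of the penalized optimization and explicit stopping condition described above.

```python
import numpy as np

def anisotropic_rbf(X1, X2, w):
    """K(x, z) = exp(-sum_j w_j * (x_j - z_j)**2): the per-feature
    widths w shape the kernel, so driving w_j to zero removes feature j
    from the classifier's view of the data."""
    diff2 = (X1[:, None, :] - X2[None, :, :]) ** 2
    return np.exp(-(diff2 * w).sum(axis=-1))

def prune_features(X, w, threshold=1e-3):
    """Elimination step (sketch): drop features whose kernel weight has
    been penalized below a threshold; they no longer affect the kernel."""
    keep = w > threshold
    return X[:, keep], w[keep]
```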

    Distributed Spacing Stochastic Feature Selection and its Application to Textile Classification

    Many situations require the ability to quickly and accurately locate dismounted individuals in a variety of environments. In conjunction with other dismount detection techniques, being able to detect and classify clothing (textiles) provides a more comprehensive and complete dismount characterization capability. Because textile classification depends on distinguishing between different material types, hyperspectral data, which consists of several hundred spectral channels sampled from a continuous electromagnetic spectrum, is used as a data source. However, a hyperspectral image generates vast amounts of information and can be computationally intractable to analyze. A primary means of reducing the computational complexity is feature selection: identifying a reduced set of features that effectively represents a specific class. While many feature selection methods exist, applying them to continuous data results in closely clustered, highly redundant feature sets that fail in the presence of noise. This dissertation presents a novel feature selection method that limits feature redundancy and improves classification. The method uses a stochastic search algorithm in conjunction with a heuristic that combines measures of distance and dependence to select features. Comparison testing between the presented method and existing methods uses hyperspectral data and image wavelet decompositions. The presented method produces feature sets with an average correlation of 0.40-0.54, significantly lower than the 0.70-0.99 of the existing feature selection methods. In terms of classification accuracy, the feature sets produced outperform those of other methods, to a significance of 0.025, and show greater robustness under noise representative of a hyperspectral imaging system.
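
    A minimal sketch of combining dependence and distance when selecting spectral bands: mutual information with the class supplies the dependence term, the index gap to already-selected bands supplies the distance term, and a stochastic softmax step replaces a deterministic argmax. The `min_gap` rule and the sampling scheme are illustrative assumptions, not the dissertation's heuristic.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def spaced_band_selection(X, y, k, min_gap=10, seed=None):
    """Stochastically select spectral bands, rewarding dependence on the
    class while penalizing bands close (in index) to those already chosen,
    so the selected features are not closely clustered."""
    rng = np.random.default_rng(seed)
    mi = mutual_info_classif(X, y)  # dependence of each band on the class
    selected = []
    while len(selected) < k:
        # Distance term: gap (in band index) to the nearest selected band.
        gap = np.array([min((abs(j - s) for s in selected), default=min_gap)
                        for j in range(X.shape[1])])
        score = mi * np.minimum(gap, min_gap) / min_gap
        score[selected] = -np.inf
        # Stochastic step: sample proportionally instead of taking the argmax.
        p = np.exp(score - score.max())
        p[selected] = 0.0
        selected.append(int(rng.choice(X.shape[1], p=p / p.sum())))
    return selected
```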
