
    Reduced hyperBF networks: practical optimization, regularization, and applications in bioinformatics.

    A hyper basis function network (HyperBF) is a generalized radial basis function (RBF) network in which the activation function is a radial function of a weighted distance. The local weighting of the distance accounts for the variation in local scaling and discriminative power along each feature. This generalization makes HyperBF networks capable of interpolating decision functions with high accuracy, but the added complexity also makes them susceptible to overfitting. Moreover, training a HyperBF network requires the weights, centers, and local scaling factors to be optimized simultaneously; for a relatively large dataset with a large network structure, this optimization becomes computationally challenging.

    In this work, a new regularization method that performs soft local dimension reduction and weight decay is presented. The regularized HyperBF (Reduced HyperBF) network is shown to provide classification accuracy comparable to Support Vector Machines (SVMs) while requiring a significantly smaller network structure. Furthermore, the soft local dimension reduction is shown to be informative for ranking features based on their localized discriminative power. In addition, a practical training approach for constructing HyperBF networks is presented. This approach uses hierarchical clustering to initialize neurons, followed by gradient optimization using a scaled Rprop algorithm with a localized partial backtracking step (iSRprop). Experimental results on a number of datasets show faster and smoother convergence than the regular Rprop algorithm.

    The proposed Reduced HyperBF network is applied to two problems in bioinformatics. The first is the detection of transcription start sites (TSS) in human DNA. A novel method is proposed for improving the TSS recognition accuracy of recently published methods; it incorporates a new metric feature based on oligonucleotide positional frequencies. The second application is the accurate classification of microarray samples. A new feature selection algorithm based on a Reduced HyperBF network is proposed, applied to two microarray datasets, and shown to select a minimal subset of features with high discriminative information. Compared against two widely used methods, it provides competitive results. In both applications, the final Reduced HyperBF network is used for higher-level analysis: significant neurons can indicate subpopulations, while locally active features provide insight into the characteristics of the subpopulation in particular and the whole class in general.
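    To make the weighted distance concrete, the following is a minimal NumPy sketch of a Gaussian HyperBF unit with a diagonal scaling matrix. The names (hyperbf_unit, scales) and the Gaussian choice of radial function are illustrative assumptions, not taken from the thesis.

        import numpy as np

        def hyperbf_unit(x, center, scales):
            # Weighted squared distance: feature j contributes
            # scales[j] * (x[j] - center[j])**2, so a scale near zero
            # softly removes that feature locally -- the effect a soft
            # local dimension reduction regularizer would exploit.
            d2 = np.sum(scales * (x - center) ** 2)
            return np.exp(-d2)  # Gaussian radial function (one common choice)

        def hyperbf_network(x, centers, scales, weights):
            # Network output: weighted sum of unit activations.
            return sum(w * hyperbf_unit(x, c, s)
                       for c, s, w in zip(centers, scales, weights))

    An ordinary RBF network is recovered when every scales vector is the same constant multiple of the all-ones vector; per-neuron, per-feature scales are what distinguish the HyperBF generalization.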

    Supervised Learning for Binary Classification on US Adult Income

    In this project, various binary classification methods are used to predict US adult income level from social factors including age, gender, education, and marital status. We first explore descriptive statistics for the dataset and handle missing values. We then examine several widely used classification methods, including logistic regression, discriminant analysis, support vector machines, random forests, and boosting, and provide suitable R functions to demonstrate their application. Metrics such as ROC curves, accuracy, recall, and F-measure are computed to compare the performance of these models. We find that boosting performs best on our data, achieving the highest AUC and the highest prediction accuracy. Among the predictor variables, we also identify the three with the largest impact on US adult income level.
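    The project itself provides R functions; as a rough Python stand-in, the model-comparison loop it describes looks like the scikit-learn sketch below. The model settings are illustrative defaults, not the paper's choices.

        from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
        from sklearn.model_selection import train_test_split

        def compare_models(X, y):
            # Hold out a test set, fit each candidate, and report the
            # metrics named in the abstract (AUC, accuracy, F-measure).
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.3, random_state=0, stratify=y)
            models = {
                "logistic regression": LogisticRegression(max_iter=1000),
                "random forest": RandomForestClassifier(n_estimators=300),
                "boosting": GradientBoostingClassifier(),
            }
            for name, model in models.items():
                model.fit(X_tr, y_tr)
                pred = model.predict(X_te)
                proba = model.predict_proba(X_te)[:, 1]
                print(f"{name:20s} AUC={roc_auc_score(y_te, proba):.3f}  "
                      f"acc={accuracy_score(y_te, pred):.3f}  "
                      f"F1={f1_score(y_te, pred):.3f}")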

    Vote-boosting ensembles

    Vote-boosting is a sequential ensemble learning method in which the individual classifiers are built on different weighted versions of the training data. To build a new classifier, the weight of each training instance is determined in terms of the degree of disagreement among the current ensemble predictions for that instance. For low class-label noise levels, especially when simple base learners are used, emphasis should be made on instances for which the disagreement rate is high. When more flexible classifiers are used and as the noise level increases, the emphasis on these uncertain instances should be reduced. In fact, at sufficiently high levels of class-label noise, the focus should be on instances on which the ensemble classifiers agree. The optimal type of emphasis can be automatically determined using cross-validation. An extensive empirical analysis using the beta distribution as emphasis function illustrates that vote-boosting is an effective method to generate ensembles that are both accurate and robust
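    A sketch of the emphasis mechanism just described, assuming binary base classifiers and a beta-pdf emphasis over the per-instance disagreement rate; the parameterization (a, b) and all names here are illustrative assumptions, not the paper's exact formulation.

        import numpy as np
        from scipy.stats import beta

        def disagreement_rate(predictions):
            # predictions: (n_classifiers, n_instances) array of 0/1 votes.
            # Disagreement = fraction of members voting against the majority,
            # so it lies in [0, 0.5]. Note it uses no class labels, which is
            # what makes the emphasis robust to label noise.
            share_for_1 = predictions.mean(axis=0)
            return 1.0 - np.maximum(share_for_1, 1.0 - share_for_1)

        def vote_boosting_weights(predictions, a=2.0, b=2.0):
            # Beta-pdf emphasis over the disagreement rate: on [0, 0.5],
            # a > b shifts weight toward uncertain (high-disagreement)
            # instances, a < b toward instances the ensemble agrees on.
            d = np.clip(disagreement_rate(predictions), 1e-6, 1 - 1e-6)
            w = beta.pdf(d, a, b)
            return w / w.sum()  # normalized instance weights

    Selecting between these emphasis regimes by cross-validation, as the abstract suggests, would then amount to searching over (a, b) pairs.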