12,787 research outputs found

    Statistical Sources of Variable Selection Bias in Classification Tree Algorithms Based on the Gini Index

    Get PDF
    Evidence from the literature for variable selection bias in classification tree algorithms based on the Gini Index is reviewed and embedded into a broader explanatory scheme: such bias can be caused not only by the statistical effect of multiple comparisons, but also by the increasing estimation bias and variance of the splitting criterion when plug-in estimates of entropy measures like the Gini Index are employed. The relevance of these sources of bias in the different simulation study designs is examined. Variable selection bias due to the explored sources applies to all classification tree algorithms based on empirical entropy measures, such as the Gini Index, Deviance and Information Gain, and to both binary and multiway splitting algorithms.
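
    The multiple-comparisons source of this bias is easy to reproduce. Below is a minimal simulation sketch (our illustration, not the paper's code): the class label is generated independently of both predictors, yet an exhaustive-search empirical Gini criterion prefers the predictor that offers more candidate split points.

```python
# Minimal sketch (not from the paper): under the null of no association,
# the predictor with more candidate splits wins more often.
import numpy as np

rng = np.random.default_rng(0)

def gini(y):
    p = np.bincount(y, minlength=2) / len(y)
    return 1.0 - np.sum(p ** 2)

def best_gini_gain(x, y):
    """Best empirical Gini gain over all threshold splits on coded levels."""
    base, best = gini(y), 0.0
    for level in np.unique(x)[:-1]:
        left, right = y[x <= level], y[x > level]
        gain = base - (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        best = max(best, gain)
    return best

n, runs, wins = 100, 2000, np.zeros(2)
for _ in range(runs):
    y = rng.integers(0, 2, n)    # class label, independent of both predictors
    x2 = rng.integers(0, 2, n)   # binary predictor: 1 candidate split
    x8 = rng.integers(0, 8, n)   # 8-level predictor: 7 candidate splits
    wins[np.argmax([best_gini_gain(x2, y), best_gini_gain(x8, y)])] += 1

# Unbiased selection would give ~0.5 each; the multiple-comparisons effect
# makes the 8-level predictor win far more often.
print("selection frequency (binary, 8-level):", wins / runs)
```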

    Modeling students' background and academic performance with missing values using classification tree

    Get PDF
    Students' academic performance is a prime concern to higher education institutions, since it reflects on the performance of the institution. The differences in academic performance among students are a topic that has drawn the interest of many academic researchers and of society. One of the biggest challenges in university decision making and planning today is to predict the performance of students at an early stage, prior to their admission. We address the application of inferring the degree classification of students from their background data, using a dataset obtained from one of the higher education institutions in Malaysia. We present the results of a detailed statistical analysis relating the final degree classification obtained at the end of their studies to their backgrounds. The classification tree model produces the highest accuracy in predicting students' degree classification from their background data, compared to the Bayesian network and naive Bayes. The significance of the prediction depends closely on the quality of the database and on the sample chosen for model training and testing. Missing values, whether in predictor or in response variables, are a very common problem in statistics and data mining. Cases with missing values are often ignored, which results in loss of information and possible bias. Surrogate splits in standard classification trees are a possible choice for handling missing values in large datasets containing at most ten percent missing values. However, for datasets containing more than ten percent missing values, there is an adverse impact on both the structure of the classification tree and its accuracy. In this thesis, we propose a classification tree with an imputation model to handle missing values in the dataset. We investigate the application of the classification tree, the Bayesian network and naive Bayes as imputation techniques for handling missing values in the classification tree model. The investigation covers all three types of missing-value mechanism: missing completely at random (MCAR), missing at random (MAR) and missing not at random (MNAR). Imputation using a classification tree outperforms imputation using the Bayesian network and naive Bayes under MCAR, MAR and MNAR alike. We also compare the classification tree with imputation against surrogate splits in classification and regression trees (CART). Fifteen percent of the students' background data are eliminated, and the classification tree with imputation is used to predict the students' degree classification. The classification tree with imputation produces a more accurate model than surrogate splits.
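
    The tree-based imputation idea can be sketched with off-the-shelf tools. The following is a minimal illustration (assuming scikit-learn and numerically encoded predictors; the column name is hypothetical, and this is not the thesis implementation): a classification tree is fitted on the complete cases to predict the column with missing values, and its predictions fill the gaps before the final model is trained.

```python
# Minimal sketch (not the thesis code) of tree-based imputation for a
# categorical predictor with missing values.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def impute_with_tree(df: pd.DataFrame, target_col: str) -> pd.DataFrame:
    """Impute missing values in `target_col` from the other columns.

    Assumes the other columns are complete and numerically encoded.
    """
    df = df.copy()
    missing = df[target_col].isna()
    features = df.columns.drop(target_col)
    imputer = DecisionTreeClassifier(random_state=0)
    imputer.fit(df.loc[~missing, features], df.loc[~missing, target_col])
    df.loc[missing, target_col] = imputer.predict(df.loc[missing, features])
    return df

# Hypothetical usage: 'parental_income' has 15% of its values removed.
# completed = impute_with_tree(students, 'parental_income')
# final_model = DecisionTreeClassifier().fit(
#     completed.drop(columns='degree_class'), completed['degree_class'])
```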

    Multiclass Cancer Classification by Using Fuzzy Support Vector Machine and Binary Decision Tree With Gene Selection

    Get PDF
    We investigate the problem of multiclass cancer classification with gene selection from gene expression data. Two multiclass classifiers with gene selection are proposed: a fuzzy support vector machine (FSVM) with gene selection, and a binary classification tree based on SVM (BCT-SVM) with gene selection. Using the F test and SVM-based recursive feature elimination (SVM-RFE) as gene selection methods, we test BCT-SVM with the F test, BCT-SVM with SVM-RFE, and FSVM with SVM-RFE in our experiments. To accelerate computation, the strongest genes are also preselected. The proposed techniques are applied to breast cancer data, small round blue-cell tumor data, and acute leukemia data. Compared to existing multiclass cancer classifiers and to BCT-SVM with the F test or with SVM-RFE, FSVM with SVM-RFE can find the most important genes affecting certain types of cancer with high recognition accuracy.
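
    The SVM-RFE gene-selection step can be illustrated with standard tooling. Here is a hedged sketch (scikit-learn with synthetic data, not the authors' code): a linear SVM is fitted repeatedly, and the genes with the smallest weight magnitudes are discarded until a target number of genes remains.

```python
# Illustrative SVM-RFE sketch on synthetic expression data.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))    # 60 samples x 500 preselected genes
y = rng.integers(0, 3, size=60)   # three hypothetical tumor classes

selector = RFE(
    estimator=LinearSVC(C=1.0, max_iter=5000),
    n_features_to_select=20,      # number of genes to keep
    step=0.1,                     # fraction of features removed per iteration
)
selector.fit(X, y)
print(np.flatnonzero(selector.support_))   # indices of the selected genes
```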

    Sensitivity of missing values in classification tree for large sample

    Get PDF
    Missing values, whether in predictor or in response variables, are a very common problem in statistics and data mining. Cases with missing values are often ignored, which results in loss of information and possible bias. The objective of our research was to investigate the sensitivity of the classification tree model to missing data for large samples. Data were obtained from one of the higher education institutions in Malaysia. Students' background data were randomly eliminated, and a classification tree was used to predict the students' degree classification. The results showed that, for large samples, the structure of the classification tree was sensitive to missing values, especially for samples containing more than ten percent missing values.
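
    The experimental design translates into a few lines of code. Below is a hedged sketch (synthetic data standing in for the student records; not the study's code) that deletes predictor values completely at random at increasing rates, drops the incomplete cases, and refits the tree, so that accuracy and tree size can be tracked as missingness grows.

```python
# Sketch of an MCAR sensitivity experiment on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for rate in (0.0, 0.05, 0.10, 0.20, 0.30):
    mask = rng.random(X_tr.shape) < rate      # MCAR missingness
    complete = ~mask.any(axis=1)              # listwise deletion
    tree = DecisionTreeClassifier(random_state=0)
    tree.fit(X_tr[complete], y_tr[complete])
    print(f"{rate:.0%} missing -> accuracy {tree.score(X_te, y_te):.3f}, "
          f"leaves {tree.get_n_leaves()}")
```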

    Unraveling the Significance of the Classification Tree Algorithm in Machine Learning: A Literature Review

    Get PDF
    Machine learning, an integral component of Artificial Intelligence (AI), empowers systems to improve their performance autonomously through experience. This paper presents a comprehensive overview of the pivotal role of the Classification Tree Algorithm in machine learning. The algorithm categorizes new instances into predefined classes based on their attributes, and it has established itself as a cornerstone among classification techniques. The paper examines the concepts, terminology, principles and ideas surrounding the Classification Tree Algorithm, giving readers a clearer and more profound understanding of its inner workings. By synthesizing existing research, it contributes to the discourse on classification tree algorithms. In summary, the Classification Tree Algorithm plays a fundamental role in machine learning, facilitating data classification and supporting decision making across domains. Its adaptability, together with emerging variations and techniques, ensures its continued relevance in the evolving landscape of artificial intelligence and data analysis.
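
    The workflow the review describes is compact in practice. A minimal illustration (not drawn from the review; scikit-learn's tree implementation stands in for the algorithm): a tree is learned from labelled examples and then assigns new instances to predefined classes based on their attributes.

```python
# Minimal fit/predict illustration of a classification tree.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(export_text(tree))                      # human-readable split rules
print("test accuracy:", tree.score(X_te, y_te))
```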

    A new restructuring algorithm for the classification-tree method

    Get PDF
    The classification-tree method developed by Grochtmann and Grimm facilitates the identification of test cases from functional specifications via the construction of classification trees. Their method has been enhanced by Chen and Poon through classification-tree construction and restructuring methodologies. We find, however, that the restructuring algorithm by Chen and Poon is applicable only to certain types of classification trees. We introduce a new tree-restructuring algorithm to supplement their work.
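
    For context, the core idea of the classification-tree method can be sketched in a few lines (the classifications below are hypothetical, and this is not Chen and Poon's restructuring algorithm): each input aspect is partitioned into disjoint classes, and candidate test cases are combinations of one class per classification.

```python
# Sketch of test-case derivation in the classification-tree method.
from itertools import product

# Hypothetical classifications for a file-transfer function.
classifications = {
    "file_size": ["empty", "small", "large"],
    "permissions": ["readable", "read_only_violation"],
    "connection": ["stable", "intermittent"],
}

test_cases = [dict(zip(classifications, combo))
              for combo in product(*classifications.values())]
print(len(test_cases))   # 3 * 2 * 2 = 12 candidate test cases
print(test_cases[0])
```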

    Classification Tree Pruning Under Covariate Shift

    Full text link
    We consider the problem of pruning a classification tree, that is, selecting a suitable subtree that balances bias and variance, in common situations with inhomogeneous training data. Namely, assuming access to mostly data from a distribution $P_{X,Y}$, but little data from a desired distribution $Q_{X,Y}$ with different $X$-marginals, we present the first efficient procedure for optimal pruning in such situations, when cross-validation and other penalized variants are grossly inadequate. Optimality is derived with respect to a notion of average discrepancy $P_X \to Q_X$ (averaged over $X$ space) which significantly relaxes a recent notion, termed transfer-exponent, shown to tightly capture the limits of classification under such a distribution shift. Our relaxed notion can be viewed as a measure of relative dimension between distributions, as it relates to existing notions of information such as the Minkowski and Rényi dimensions.
    Comment: 38 pages, 8 figures
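
    To make the setup concrete, here is a naive baseline sketch (invented data; NOT the paper's procedure): grow a tree on plentiful $P$-data, enumerate its cost-complexity pruning path, and score each candidate subtree on the scarce $Q$-sample. The abstract's point is precisely that such naive selection, like cross-validation on $P$, can be inadequate under distribution shift.

```python
# Naive pruning baseline under covariate shift (illustration only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def label(X):  # shared conditional distribution of Y given X
    return (X[:, 0] + 0.3 * rng.normal(size=len(X)) > 0).astype(int)

X_p = rng.normal(0.0, 1.0, size=(5000, 5))   # plentiful source data (P)
X_q = rng.normal(1.0, 0.5, size=(80, 5))     # scarce target data (Q), shifted X
y_p, y_q = label(X_p), label(X_q)

base = DecisionTreeClassifier(max_depth=8, random_state=0)
path = base.cost_complexity_pruning_path(X_p, y_p)

scores = []
for alpha in path.ccp_alphas:
    tree = DecisionTreeClassifier(max_depth=8, ccp_alpha=alpha,
                                  random_state=0).fit(X_p, y_p)
    scores.append(tree.score(X_q, y_q))      # evaluate each subtree on Q

best = path.ccp_alphas[int(np.argmax(scores))]
print(f"chosen ccp_alpha={best:.4f}, Q-accuracy={max(scores):.3f}")
```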