6 research outputs found

    Review on the Application of Machine Learning to Cancer Research

    This study reviews the application of machine learning, through different algorithms, to cancer research. In recent years, machine learning has emerged as an exciting tool that enhances cancer research, improving on traditional statistical methods and considerably speeding up both fundamental and applied research. Machine learning is applied to predict future events and outcomes from available datasets. Each year, up to 14 million new cancer patients are diagnosed by pathologists around the world, and these are people whose conditions are uncertain. Traditionally, the diagnosis and prognosis of cancer have been performed by pathologists. Research on machine learning flourished in the 1980s and 1990s as information became digitalized and artificial network connectivity and computational power improved.

    Data-dependent margin-based generalization bounds for classification

    No full text
    We derive new margin-based inequalities for the probability of error of classifiers. The main feature of these bounds is that they can be calculated using the training data and therefore may be effectively used for model selection purposes. In particular, the bounds involve empirical complexities measured on the training data (such as the empirical fat-shattering dimension) as opposed to their worst-case counterparts traditionally used in such analyses. Our bounds also appear to be sharper and more general than recent results involving empirical complexity measures. In addition, we develop an alternative data-based bound for the generalization error of classes of convex combinations of classifiers, involving an empirical complexity measure that is easier to compute than the empirical covering number or fat-shattering dimension. We also show examples of efficient computation of the new bounds.
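    Margin-based bounds of the kind this abstract describes typically have the following shape. The inequality below is an illustrative generic form only (constants, the specific complexity term, and the margin scaling are assumptions, not the paper's exact statement): with probability at least $1-\delta$ over an i.i.d. training sample of size $n$,

    ```latex
    % Illustrative generic margin bound (not the paper's exact result):
    % misclassification probability bounded by the empirical margin error
    % plus a complexity term, where d is a fat-shattering dimension at scale
    % proportional to the margin gamma.
    \Pr\bigl[\, y f(x) \le 0 \,\bigr]
      \;\le\; \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\{\, y_i f(x_i) < \gamma \,\}
      \;+\; O\!\left(\sqrt{\frac{d \,\log^2 n + \log(1/\delta)}{n}}\right),
    \qquad d = \mathrm{fat}_{\mathcal{F}}(c\gamma).
    ```

    The contribution highlighted in the abstract is that the complexity term $d$ is measured empirically on the training data rather than taken as a worst-case quantity over the whole input space, which makes the bound computable and usable for model selection.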
