611 research outputs found

    The Role of the Superior Order GLCM in the Characterization and Recognition of the Liver Tumors from Ultrasound Images

    Hepatocellular carcinoma (HCC) is the most frequent malignant liver tumor. It often has a visual appearance similar to that of the cirrhotic parenchyma on which it evolves and to that of benign liver tumors. The gold standard for HCC diagnosis is needle biopsy, but this is an invasive, potentially dangerous procedure. We aim to develop computerized, noninvasive techniques for the automatic diagnosis of HCC based on information obtained from ultrasound images. Texture is an important property of internal organ tissue, able to provide subtle information about the pathology. We previously defined the textural model of HCC, consisting of the exhaustive set of textural features relevant for HCC characterization, together with the specific values of these features. In this work, we analyze the role that superior-order Grey Level Co-occurrence Matrices (GLCM) and their associated parameters play in improving HCC characterization and automatic diagnosis. We also determine the spatial relations between pixels that lead to the highest performance for the third-, fifth- and seventh-order GLCM. The following classes are considered: HCC, the cirrhotic liver parenchyma on which it evolves, and benign liver tumors.
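
    As a rough illustration of the idea behind superior-order co-occurrence (a sketch, not the authors' implementation: the displacement vectors, the 16-level quantization and the energy feature shown are assumptions), a third-order GLCM counts grey-level triplets formed by a pixel and two displaced neighbors:

        # Sketch of a third-order grey-level co-occurrence matrix: count triplets
        # of grey levels at a pixel and at two displaced positions, then normalize
        # to joint probabilities. Displacements and grey-level count are illustrative.
        import numpy as np

        def third_order_glcm(image, d1=(0, 1), d2=(1, 0), levels=16):
            """Count co-occurring grey-level triplets (p, p + d1, p + d2)."""
            # Quantize the image to a small number of grey levels, as is usual for GLCM.
            img = (image.astype(np.float64) / (image.max() + 1e-9) * (levels - 1)).astype(int)
            glcm = np.zeros((levels, levels, levels), dtype=np.int64)
            rows, cols = img.shape
            for r in range(rows):
                for c in range(cols):
                    r1, c1 = r + d1[0], c + d1[1]
                    r2, c2 = r + d2[0], c + d2[1]
                    if 0 <= r1 < rows and 0 <= c1 < cols and 0 <= r2 < rows and 0 <= c2 < cols:
                        glcm[img[r, c], img[r1, c1], img[r2, c2]] += 1
            return glcm / max(glcm.sum(), 1)   # normalize to a joint probability distribution

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            patch = rng.integers(0, 256, size=(64, 64))   # stand-in for an ultrasound ROI
            p = third_order_glcm(patch)
            print("third-order energy:", np.sum(p ** 2))  # a Haralick-style feature

    Fifth- and seventh-order matrices extend the same counting to four and six displacement vectors, at the cost of a much larger and sparser matrix.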

    Toward a General-Purpose Heterogeneous Ensemble for Pattern Classification

    We perform an extensive study of the performance of different classification approaches on twenty-five datasets (fourteen image datasets and eleven UCI data mining datasets). The aim is to find General-Purpose (GP) heterogeneous ensembles (requiring little to no parameter tuning) that perform competitively across multiple datasets. The state-of-the-art classifiers examined in this study include the support vector machine, Gaussian process classifiers, random subspace of AdaBoost, random subspace of rotation boosting, and deep learning classifiers. We demonstrate that a heterogeneous ensemble based on the simple fusion by sum rule of different classifiers performs consistently well across all twenty-five datasets. The most important result of our investigation is demonstrating that some very recent approaches, including the heterogeneous ensemble we propose in this paper, are capable of outperforming an SVM classifier (implemented with LibSVM), even when both kernel selection and SVM parameters are carefully tuned for each dataset.
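
    A minimal sketch of sum-rule fusion in the spirit of the ensemble described above; the scikit-learn base learners and the digits toy dataset are stand-ins, not the classifiers or datasets used in the study:

        # Sum-rule fusion: add the class-posterior matrices of several heterogeneous
        # classifiers and predict the class with the largest summed score.
        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.naive_bayes import GaussianNB

        X, y = load_digits(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Heterogeneous pool: each classifier must expose class-posterior estimates.
        pool = [SVC(probability=True, random_state=0),
                RandomForestClassifier(n_estimators=200, random_state=0),
                GaussianNB()]

        proba_sum = np.zeros((len(X_te), len(np.unique(y))))
        for clf in pool:
            clf.fit(X_tr, y_tr)
            proba_sum += clf.predict_proba(X_te)

        # Digits labels are 0-9, so the arg-max column index equals the class label.
        y_pred = proba_sum.argmax(axis=1)
        print("sum-rule accuracy:", (y_pred == y_te).mean())

    The appeal of the sum rule is that it needs no fusion-level training: any classifier that outputs calibrated (or at least comparable) posteriors can be dropped into the pool.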

    Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

    We present a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of the various classes while identifying and filtering noisy training data. The noise-free data are then used to learn models for other classifiers such as GMM and SVM. A weight-learning method is then introduced that learns per-class weights for the different classifiers to construct an ensemble. For this purpose, we apply a genetic algorithm to search for the weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets and compared with standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace methods. Experimental results show the superiority of the proposed ensemble method over its competitors, especially in the presence of class label noise and imbalanced classes.
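
    The class-level weight search can be sketched roughly as follows; the base classifiers, the toy dataset and the genetic-algorithm settings (population size, mutation noise, number of generations) are illustrative assumptions rather than the paper's configuration:

        # Each candidate is a (n_classifiers x n_classes) weight matrix; fitness is
        # the validation accuracy of the weighted combination of class posteriors.
        import numpy as np
        from sklearn.datasets import load_wine
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.naive_bayes import GaussianNB

        def weighted_predict(probas, W):
            # probas: list of (n_samples, n_classes); W: (n_classifiers, n_classes)
            score = sum(p * w for p, w in zip(probas, W))
            return score.argmax(axis=1)

        X, y = load_wine(return_X_y=True)
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0, stratify=y)

        pool = [SVC(probability=True, random_state=0), KNeighborsClassifier(), GaussianNB()]
        probas = [clf.fit(X_tr, y_tr).predict_proba(X_val) for clf in pool]
        n_clf, n_cls = len(pool), len(np.unique(y))

        rng = np.random.default_rng(0)
        pop = rng.random((30, n_clf, n_cls))           # initial population of weight matrices

        def fitness(W):
            return (weighted_predict(probas, W) == y_val).mean()

        for generation in range(50):
            scores = np.array([fitness(W) for W in pop])
            elite = pop[np.argsort(scores)[-10:]]      # keep the 10 best candidates
            # Crossover: mix two random elite parents; mutation: small Gaussian noise.
            parents = elite[rng.integers(0, 10, size=(30, 2))]
            mask = rng.random((30, n_clf, n_cls)) < 0.5
            pop = np.where(mask, parents[:, 0], parents[:, 1])
            pop = np.clip(pop + rng.normal(scale=0.05, size=pop.shape), 0.0, 1.0)

        best = max(pop, key=fitness)
        print("ensemble validation accuracy:", fitness(best))

    Assigning weights per class (rather than one weight per classifier) lets the ensemble trust different classifiers for different classes, which is the point of the distributed confidence scheme.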

    Coupling different methods for overcoming the class imbalance problem

    Many classification problems must deal with imbalanced datasets, where one class (the majority class) outnumbers the other classes. Standard classification methods do not provide accurate predictions in this setting since classification is generally biased towards the majority class. The minority classes are oftentimes the ones of interest (e.g., when they are associated with pathological conditions in patients), so methods for handling imbalanced datasets are critical. Using several different datasets, this paper evaluates the performance of state-of-the-art classification methods for handling the imbalance problem in both binary and multi-class datasets. Different strategies are considered, including the one-class and dimension reduction approaches, as well as their fusions. Moreover, some ensembles of classifiers are tested, in addition to stand-alone classifiers, to assess the effectiveness of ensembles in the presence of imbalance. Finally, a novel ensemble of ensembles is designed specifically to tackle the problem of class imbalance: the proposed ensemble does not need to be tuned separately for each dataset and outperforms all the other tested approaches. To validate our classifiers we resort to the KEEL-dataset repository, whose data partitions (training/test) are publicly available and have already been used in the open literature: as a consequence, it is possible to report a fair comparison among different approaches in the literature. Our best approach (MATLAB code and datasets not easily accessible elsewhere) will be available at https://www.dei.unipd.it/node/2357
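
    A generic building block for imbalance-aware ensembles, shown here only to make the setting concrete (this is bagging over rebalanced bootstraps, not the specific ensemble of ensembles proposed in the paper; the synthetic dataset and base learner are assumptions):

        # Train each base learner on a bootstrap in which every class is undersampled
        # to the size of the rarest class, then combine the learners by majority vote.
        import numpy as np
        from sklearn.base import clone
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import balanced_accuracy_score

        X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        def balanced_bootstrap(X, y, rng):
            """Resample every class (with replacement) down to the rarest class size."""
            n_min = np.bincount(y).min()
            idx = np.concatenate([rng.choice(np.flatnonzero(y == c), n_min, replace=True)
                                  for c in np.unique(y)])
            return X[idx], y[idx]

        rng = np.random.default_rng(0)
        base = DecisionTreeClassifier(random_state=0)
        ensemble = []
        for _ in range(25):                       # 25 rebalanced base learners
            Xb, yb = balanced_bootstrap(X_tr, y_tr, rng)
            ensemble.append(clone(base).fit(Xb, yb))

        votes = np.stack([clf.predict(X_te) for clf in ensemble])
        y_pred = (votes.mean(axis=0) >= 0.5).astype(int)   # majority vote (binary labels)
        print("balanced accuracy:", balanced_accuracy_score(y_te, y_pred))

    Balanced accuracy (rather than plain accuracy) is the natural metric here, since a classifier that always predicts the majority class would otherwise score 95% on this data.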

    7—Learning vector quantization

    The course includes a set of materials collected for the OER project funded by the Colorado OER Council Grant (AY 2019-20). Course materials include sessions for FIN 670. Title supplied by instructor Tianyang Wang. Funded by the Colorado Open Educational Resources (OER) Grant 2018-2019.