
    Novel Diagnostic Algorithm for the Floating and Sunken Pulse Qualities and Its Clinical Test

    We propose a novel classification algorithm for the floating pulse and the sunken pulse using a newly defined coefficient (Cfs). To examine the validity of the proposed algorithm, we carried out a clinical test in which 12 oriental medical doctors made pairwise diagnoses of the pulses of volunteering subjects. 169 subjects were simultaneously diagnosed by paired doctors; the diagnoses in 121 subjects were concordant, yielding an accuracy of 72% and a Matthews correlation coefficient of 0.42, which indicates reasonable agreement between doctors. Two-sample t-tests showed that subjects in the sunken pulse group had significantly higher BMI and Cfs (P < .05) than those in the floating pulse group. The pulse classification by the algorithm agreed with the diagnoses of paired doctors with an accuracy of up to 69%. With these results, we confirm the validity of the novel classification algorithm for the floating and sunken pulses.
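    The reported accuracy and Matthews correlation coefficient both follow from a 2x2 agreement table. A minimal Python sketch, using a hypothetical split of the 169 pairwise diagnoses (the per-cell counts below are assumptions chosen only to reproduce the reported totals, not figures from the paper):

```python
from math import sqrt

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient for a 2x2 confusion matrix."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical cell counts: 121 concordant pairs out of 169, as reported.
tp, tn, fp, fn = 70, 51, 24, 24
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(round(accuracy, 2), round(mcc(tp, tn, fp, fn), 2))  # 0.72 0.42
```

    Unlike raw accuracy, MCC stays near zero for a classifier that ignores the minority class, which is why it is a stronger agreement measure here.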

    Wavelet-Based Kernel Construction for Heart Disease Classification

    © 2019 Advances in Electrical and Electronic Engineering. Heart disease classification plays an important role in clinical diagnosis. Improving the performance of an electrocardiogram (ECG) classifier is therefore of great relevance, but it is also a challenging task. This paper proposes a novel classification algorithm using the kernel method. A kernel is constructed from wavelet coefficients of heartbeat signals to build a high-performance classifier. In particular, a wavelet packet decomposition algorithm is applied to heartbeat signals to obtain the approximation and detail coefficients, which are used to calculate the parameters of the kernel. A principal component analysis algorithm with the wavelet-based kernel is employed to select the main features of the heartbeat signals as the input of the classifier. In addition, a neural network with three hidden layers is used to classify five types of heart disease. Electrocardiogram signals from nine patients in the MIT-BIH database are used to test the proposed classifier. To evaluate its performance, a multi-class confusion matrix is used to produce performance indexes including accuracy, recall, precision, and F1 score. The experimental results show that the proposed method gives good results for the classification of the five mentioned types of heart disease.
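    The approximation and detail coefficients at the heart of the kernel construction come from a wavelet transform. A minimal sketch of one Haar decomposition level in plain Python (the signal is a toy stand-in for a heartbeat segment; the paper applies a full wavelet packet decomposition, not a single Haar step):

```python
from math import sqrt

def haar_step(signal):
    """One Haar decomposition level: approximation and detail coefficients."""
    s = sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

beat = [0.0, 0.1, 0.9, 1.2, 0.3, 0.1, 0.0, 0.0]  # toy "heartbeat" segment
a, d = haar_step(beat)
# The transform is orthonormal, so signal energy is preserved exactly.
energy_in = sum(x * x for x in beat)
energy_out = sum(x * x for x in a + d)
print(abs(energy_in - energy_out) < 1e-12)  # True
```

    A wavelet packet decomposition simply recurses this step on both the approximation and the detail branches, giving the richer coefficient tree the kernel parameters are computed from.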

    "Virus hunting" using radial distance weighted discrimination

    Motivated by the challenge of using DNA-seq data to identify viruses in human blood samples, we propose a novel classification algorithm called "Radial Distance Weighted Discrimination" (or Radial DWD). This classifier is designed for binary classification in which one class surrounds the other in very diverse radial directions, which is seen to be typical for our virus detection data. This separation of the two classes in multiple radial directions naturally motivates the development of Radial DWD. While classical machine learning methods such as the Support Vector Machine and linear Distance Weighted Discrimination can sometimes give reasonable answers for a given data set, their generalizability is severely compromised by the linear separating boundary. Radial DWD addresses this challenge by using a spherical separating boundary, which is more appropriate in this setting. Simulations show that for appropriate radial contexts this gives much better generalizability than linear methods, and also much better than conventional kernel-based (nonlinear) Support Vector Machines, because the latter spend much of the information in the data on determining the shape of the separating boundary. The effectiveness of Radial DWD is demonstrated for real virus detection.
    Comment: Published at http://dx.doi.org/10.1214/15-AOAS869 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
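    The spherical boundary can be illustrated with a toy stand-in: call a point "inner" if it falls within a learned radius of the inner-class centroid. This is a deliberate simplification for illustration; Radial DWD optimizes the boundary with a DWD-style criterion rather than a plain radius threshold:

```python
from math import dist  # Python 3.8+

def fit_radial(inner_points):
    """Centroid of the inner class and its maximum within-class distance."""
    n = len(inner_points)
    center = tuple(sum(p[i] for p in inner_points) / n
                   for i in range(len(inner_points[0])))
    radius = max(dist(center, p) for p in inner_points)
    return center, radius

def classify(point, center, radius):
    # Spherical separating boundary: inside the sphere -> inner class.
    return "inner" if dist(center, point) <= radius else "outer"

inner = [(0.0, 0.1), (0.2, -0.1), (-0.1, 0.0), (0.1, 0.2)]
center, radius = fit_radial(inner)
print(classify((0.05, 0.0), center, radius),   # inner
      classify((3.0, -2.5), center, radius))   # outer
```

    No single hyperplane can separate a class surrounded on all sides, which is exactly the failure mode of the linear methods discussed above.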

    Novel classification algorithm for ballistic target based on HRRP frame

    Nowadays the identification of ballistic missile warheads in a cloud of decoys and debris is essential for defence systems in order to optimize the use of ammunition resources, avoiding expending all the available interceptors in vain. This paper introduces a novel solution for the classification of ballistic targets based on computing the inverse Radon transform of the target signatures, represented by a high resolution range profile (HRRP) frame acquired over an entire period of the target's main rotation. Specifically, precession is considered for warheads and tumbling for decoys. Next, the pseudo-Zernike moments of the resulting transform are evaluated as the final feature vector for the classifier. The extracted features are robust with respect to the target's dimensions, its rotation velocity, and the initial phase of its motion. Classification results on simulated data are shown for different polarizations of the electromagnetic radar waveform and for various operational conditions, confirming the validity of the algorithm.
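    The rotation robustness of (pseudo-)Zernike-style features rests on a simple fact: rotating the input by an angle θ multiplies a complex moment of order n by e^{inθ}, so the moment's magnitude is unchanged. A pure-Python sketch with plain complex moments on a toy point scatter (a stand-in for the transformed image; pseudo-Zernike moments add radial polynomials on the unit disk):

```python
import cmath

def complex_moment(points, n):
    """Order-n complex moment of a 2-D point set: sum of (x + iy)^n."""
    return sum(complex(x, y) ** n for x, y in points)

def rotate(points, theta):
    rot = cmath.exp(1j * theta)
    return [((z := complex(x, y) * rot).real, z.imag) for x, y in points]

pts = [(1.0, 0.2), (0.4, -0.7), (-0.3, 0.9)]
m = complex_moment(pts, 3)
m_rot = complex_moment(rotate(pts, 0.8), 3)
# Rotation multiplies m by e^{i*3*0.8}; the magnitude is unchanged.
print(abs(abs(m) - abs(m_rot)) < 1e-9)  # True
```

    Taking magnitudes of such moments is what makes the feature vector insensitive to the initial phase of the target's motion.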

    Local Approximations, Real Interpolation and Machine Learning

    We suggest a novel classification algorithm that is based on local approximations and explain its connections with Artificial Neural Networks (ANNs) and Nearest Neighbour classifiers. We illustrate it on the MNIST and EMNIST datasets of images of handwritten digits. We use MNIST to find the parameters of our algorithm and apply it with these parameters to the more challenging EMNIST dataset. We demonstrate that the algorithm misclassifies 0.42% of the EMNIST images and therefore significantly outperforms predictions by humans and by shallow artificial neural networks (ANNs with few hidden layers), both of which have error rates above 1.3%.
    Comment: arXiv admin note: substantial text overlap with arXiv:2204.1314
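    A minimal sketch of the nearest-neighbour end of this spectrum in plain Python (a 1-NN classifier on toy points; the paper's algorithm builds richer local approximations, so this shows only the baseline it connects to):

```python
from math import dist  # Python 3.8+

def one_nn(train, query):
    """Predict the label of the closest training point (1-nearest neighbour)."""
    _, label = min(train, key=lambda item: dist(item[0], query))
    return label

# Toy training set: (feature vector, label) pairs.
train = [((0.0, 0.0), "zero"), ((0.2, 0.1), "zero"),
         ((1.0, 1.0), "one"), ((0.9, 1.2), "one")]
print(one_nn(train, (0.1, 0.1)), one_nn(train, (1.1, 0.95)))  # zero one
```

    Local-approximation methods generalize this idea: instead of copying the nearest label, they fit a simple model over a neighbourhood of the query point.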

    High resolution SAR-image classification

    In this report we propose a novel classification algorithm for high and very high resolution synthetic aperture radar (SAR) amplitude images that combines the Markov random field approach to Bayesian image classification with a finite mixture technique for probability density function estimation. The finite mixture modeling is done by a dictionary-based stochastic expectation-maximization approach to amplitude histogram estimation. The developed semiautomatic algorithm is extended to the important case of multi-polarized SAR by modeling the joint distributions of channels via copulas. The accuracy of the proposed algorithm is validated on the task of wet soil classification using several high resolution SAR images acquired by TerraSAR-X and COSMO-SkyMed.
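    Finite mixture densities of this kind are typically fitted with (stochastic) expectation maximization. A minimal plain-Python EM sketch for a two-component 1-D Gaussian mixture (the paper's dictionary-based stochastic EM over SAR amplitude histograms is considerably more general; this only shows the E/M alternation):

```python
import random
from math import exp, pi, sqrt

def gauss(x, mu, sigma):
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def em_two_gaussians(data, iters=50):
    mu = [min(data), max(data)]          # crude initialization
    sigma, weight = [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of component 0 for each point.
        resp = []
        for x in data:
            p0 = weight[0] * gauss(x, mu[0], sigma[0])
            p1 = weight[1] * gauss(x, mu[1], sigma[1])
            resp.append(p0 / (p0 + p1))
        # M-step: re-estimate weights, means, and standard deviations.
        for k, rk in enumerate((resp, [1 - r for r in resp])):
            nk = sum(rk)
            weight[k] = nk / len(data)
            mu[k] = sum(r * x for r, x in zip(rk, data)) / nk
            sigma[k] = sqrt(sum(r * (x - mu[k]) ** 2
                                for r, x in zip(rk, data)) / nk) or 1e-6
    return mu, sigma, weight

random.seed(0)
data = ([random.gauss(0.0, 0.5) for _ in range(200)]
        + [random.gauss(5.0, 0.8) for _ in range(200)])
mu, sigma, weight = em_two_gaussians(data)
print(sorted(mu))  # means recovered near 0.0 and 5.0
```

    The stochastic variant replaces the soft E-step responsibilities with random hard assignments drawn from them, which helps escape poor local optima.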

    A novel classification algorithm based on incremental semi-supervised support vector machine

    For current computational intelligence techniques, a major challenge is how to learn new concepts in a changing environment. Traditional learning schemes cannot adequately address this problem because they lack a dynamic data selection mechanism. In this paper, inspired by the human learning process, a novel classification algorithm based on an incremental semi-supervised support vector machine (SVM) is proposed. Through the analysis of the prediction confidence of samples and of the data distribution in a changing environment, a "soft-start" approach, a data selection mechanism and a data cleaning mechanism are designed, which together complete the construction of our incremental semi-supervised learning system. Notably, the careful design of the proposed algorithm reduces its computational complexity effectively. In addition, the possible appearance of new labeled samples during the learning process is analyzed in detail. The results show that our algorithm does not rely on a model of the sample distribution, has an extremely low rate of introducing incorrectly semi-labeled samples, and can effectively use the unlabeled samples to enrich the classifier's knowledge and improve its accuracy. Moreover, our method also has outstanding generalization performance and the ability to overcome concept drift in a changing environment.
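    The data selection mechanism can be illustrated with a generic self-training loop: predict the unlabeled pool, then admit only samples whose prediction confidence passes a threshold. This sketch swaps the SVM for a nearest-centroid classifier so it stays self-contained; the threshold, margin-based confidence, and helper names are illustrative assumptions, not the paper's design:

```python
from math import dist  # Python 3.8+

def centroid_predict(centroids, x):
    """Return (label, confidence); confidence = margin between best 2 classes."""
    ranked = sorted(centroids, key=lambda lab: dist(centroids[lab], x))
    best, second = ranked[0], ranked[1]
    margin = dist(centroids[second], x) - dist(centroids[best], x)
    return best, margin

def self_train(labeled, unlabeled, threshold=0.5, rounds=3):
    data = {lab: list(pts) for lab, pts in labeled.items()}
    pool = list(unlabeled)
    for _ in range(rounds):
        centroids = {lab: tuple(sum(c) / len(pts) for c in zip(*pts))
                     for lab, pts in data.items()}
        keep = []
        for x in pool:  # data selection: admit only confident predictions
            lab, conf = centroid_predict(centroids, x)
            (data[lab].append(x) if conf >= threshold else keep.append(x))
        pool = keep
    return data

labeled = {"a": [(0.0, 0.0), (0.3, 0.1)], "b": [(4.0, 4.0), (3.8, 4.2)]}
unlabeled = [(0.2, 0.2), (3.9, 3.9), (2.0, 2.0)]  # the last one is ambiguous
result = self_train(labeled, unlabeled)
print(len(result["a"]), len(result["b"]))  # the ambiguous point is never admitted
```

    Rejecting low-confidence samples is what keeps the rate of incorrectly semi-labeled samples low; the paper's cleaning mechanism additionally revisits past admissions as the distribution drifts.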

    Credal Fusion of Classifications for Noisy and Uncertain Data

    This paper reports on an investigation of classification techniques for noisy and uncertain data. Classification is not an easy task, and discovering knowledge from uncertain data is a significant challenge, for several reasons. Often a good or sufficiently large training database for supervised classification is not available. When the training data contain noise or missing values, classification accuracy is affected dramatically. Extracting groups from such data is therefore difficult: the groups overlap and are not well separated from each other. A further problem is the uncertainty introduced by measuring devices. As a consequence, the resulting classification model is not robust enough to classify new objects. In this work, we present a novel classification algorithm that addresses these problems. We realize our main idea by using belief function theory to combine classification and clustering; this theory handles the imprecision and uncertainty inherent in classification very well. Experimental results show that our approach can significantly improve the quality of classification on generic databases.
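    The fusion step rests on belief function (Dempster-Shafer) theory. A minimal plain-Python sketch of Dempster's rule of combination for two mass functions over a small frame of discernment (the mass values are made-up numbers for illustration):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions (dicts: frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb    # mass assigned to contradictory pairs
    norm = 1.0 - conflict          # renormalize by the non-conflicting mass
    return {k: v / norm for k, v in combined.items()}

A, B = frozenset({"A"}), frozenset({"B"})
AB = A | B                       # the whole frame: total ignorance
m1 = {A: 0.6, AB: 0.4}           # source 1: leans towards class A
m2 = {B: 0.3, AB: 0.7}           # source 2: weak evidence for class B
m = dempster_combine(m1, m2)
print(round(sum(m.values()), 6))  # 1.0: the result is a valid mass function
```

    Unlike a probability distribution, a mass function can assign belief to sets of classes such as AB, which is how the framework represents the imprecision of overlapping, poorly separated groups.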