15 research outputs found
Gearbox fault diagnosis based on VMD-MSE and adaboost classifier
Accurate and efficient fault diagnosis is of great importance for gearboxes. This study proposes a fault diagnosis method based on variational mode decomposition (VMD), multiscale entropy (MSE), and the AdaBoost algorithm. First, VMD is employed to decompose the raw signal in the time-frequency domain. Then, MSE is computed to generate the feature vectors. Finally, the AdaBoost-based classifier is trained, with several weak classifiers combined into a strong classifier to realize the fault diagnosis. The feasibility and accuracy of the method are validated using data from the Prognostics and Health Management Society 2009 data challenge competition
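The MSE feature-extraction step described above can be sketched in pure Python: the signal is coarse-grained at several scales and sample entropy is computed at each scale. The function names, template length m = 2, and tolerance r = 0.2 are illustrative choices, not values taken from the paper.

```python
import math

def coarse_grain(signal, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(signal) // scale
    return [sum(signal[i*scale:(i+1)*scale]) / scale for i in range(n)]

def sample_entropy(signal, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts template matches of
    length m and A counts matches of length m+1 (Chebyshev distance < r)."""
    def count_matches(k):
        templates = [signal[i:i+k] for i in range(len(signal) - k + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) < r:
                    count += 1
        return count
    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return float('inf')
    return -math.log(a / b)

def multiscale_entropy(signal, max_scale=3, m=2, r=0.2):
    """MSE feature vector: sample entropy of each coarse-grained series."""
    return [sample_entropy(coarse_grain(signal, s), m, r)
            for s in range(1, max_scale + 1)]
```

In the paper's pipeline, each VMD mode would be fed through `multiscale_entropy` and the resulting vectors would feed the AdaBoost classifier.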
Machine Learning Decision Tree Models for Differentiation of Posterior Fossa Tumors Using Diffusion Histogram Analysis and Structural MRI Findings.
We applied machine learning algorithms for differentiation of posterior fossa tumors using apparent diffusion coefficient (ADC) histogram analysis and structural MRI findings. A total of 256 patients with intra-axial posterior fossa tumors were identified, of whom 248 were included in machine learning analysis, with at least 6 representative subjects per tumor pathology. The ADC histograms of solid components of tumors, structural MRI findings, and patients' age were applied to construct decision models using Classification and Regression Tree analysis. We also compared different machine learning classification algorithms (i.e., naïve Bayes, random forest, neural networks, support vector machine with linear and polynomial kernel) for dichotomized differentiation of the 5 most common tumors in our cohort: metastasis (n = 65), hemangioblastoma (n = 44), pilocytic astrocytoma (n = 43), ependymoma (n = 27), and medulloblastoma (n = 26). The decision tree model could differentiate seven tumor histopathologies with terminal nodes yielding up to 90% accurate classification rates. In receiver operating characteristics (ROC) analysis, the decision tree model achieved greater area under the curve (AUC) for differentiation of pilocytic astrocytoma (p = 0.020) and atypical teratoid/rhabdoid tumor (ATRT) (p = 0.001) from other types of neoplasms compared to the official clinical report. However, neuroradiologists' interpretations had greater accuracy in differentiating metastases (p = 0.001). Among different machine learning algorithms, random forest models yielded the highest accuracy in dichotomized classification of the 5 most common tumor types; and in multiclass differentiation of all tumor types, random forest yielded an averaged AUC of 0.961 in training datasets, and 0.873 in validation samples. Our study demonstrates the potential application of machine learning algorithms and decision trees for accurate differentiation of brain tumors based on pretreatment MRI.
Using easy-to-apply and understandable imaging metrics, the proposed decision tree model can help radiologists differentiate posterior fossa tumors, especially tumors with similar qualitative imaging characteristics. In particular, our decision tree model provided more accurate differentiation of pilocytic astrocytomas from ATRT than neuroradiologists in clinical reads
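The Classification and Regression Tree analysis mentioned above grows its tree by greedily choosing, at each node, the split that minimizes the weighted Gini impurity of the child nodes. A minimal sketch of one such split on a single feature (e.g. a mean ADC value); the names and toy data are illustrative, not the study's implementation:

```python
def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(values, labels):
    """Find the threshold on one feature that minimizes the
    weighted Gini impurity of the two resulting child nodes."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = (float('inf'), None)
    for i in range(1, n):
        thr = (pairs[i-1][0] + pairs[i][0]) / 2  # midpoint candidate
        left = [y for x, y in pairs if x <= thr]
        right = [y for x, y in pairs if x > thr]
        score = len(left) / n * gini(left) + len(right) / n * gini(right)
        if score < best[0]:
            best = (score, thr)
    return best  # (weighted impurity, threshold)
```

CART repeats this search over all candidate features at every node and recurses until the terminal nodes are pure enough.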
A new feature-based wavelet completed local ternary pattern (FEAT-WCLTP) for texture and medical image classification
Nowadays, texture image descriptors are used in many important real-life applications. The use of texture analysis in texture and medical image classification has attracted considerable attention. Local Binary Patterns (LBP) is one of the simplest yet effective texture descriptors, but it has some limitations that may affect its accuracy. Hence, different variants of LBP were proposed to overcome LBP's drawbacks and enhance its classification accuracy. Completed local ternary pattern (CLTP) is one of the significant LBP variants. However, CLTP suffers from two main limitations: the threshold value must be selected manually, and its high dimensionality degrades descriptor performance and increases computational cost. This research aims to improve the classification accuracy of CLTP and overcome the computational limitation by proposing new descriptors inspired by CLTP. Therefore, this research introduces two contributions. The first is a new descriptor that integrates the redundant discrete wavelet transform (RDWT) with the original CLTP, namely, the wavelet completed local ternary pattern (WCLTP). Extracting CLTP in the wavelet domain helps increase classification accuracy due to the shift-invariance of RDWT. First, the image is decomposed into four sub-bands (LL, LH, HL, HH) using RDWT. Then, CLTP is extracted from the LL wavelet coefficients. The second is a reduction of WCLTP's dimensionality via a new texture descriptor, namely, the feature-based wavelet completed local ternary pattern (Feat-WCLTP). The proposed Feat-WCLTP enhances CLTP's performance and reduces its high dimensionality: the mean and variance of the values of the selected texture pattern are used instead of the normal magnitude texture descriptor of CLTP. The performance of the proposed WCLTP and Feat-WCLTP was evaluated using four texture datasets (OuTex, CUReT, UIUC, and Kylberg) and two medical datasets (2D HeLa and Breast Cancer), then compared with several well-known LBP variants. The proposed WCLTP outperformed the previous descriptors and achieved the highest classification accuracy in all experiments. The results for the texture datasets are 99.35% on OuTex, 96.57% on CUReT, 94.80% on UIUC, and 99.88% on Kylberg. The results for the medical datasets are 84.19% on 2D HeLa and 92.14% on Breast Cancer. The proposed Feat-WCLTP not only overcomes the dimensionality problem but also considerably improves the classification accuracy. The results for Feat-WCLTP on the texture datasets are 99.66% on OuTex, 96.89% on CUReT, 95.23% on UIUC, and 99.92% on Kylberg; on the medical datasets, 84.42% on 2D HeLa and 89.12% on Breast Cancer. Moreover, the proposed Feat-WCLTP reduces the size of the feature vector for texture pattern (1,8) to 160 bins instead of 400 bins in WCLTP. The proposed WCLTP and Feat-WCLTP achieve better classification accuracy and lower dimensionality than the original CLTP
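CLTP and its variants build on the basic LBP idea of thresholding a pixel's neighbours against its centre and packing the results into a code. A minimal sketch of plain 8-neighbour LBP follows (the base operator only, without CLTP's ternary threshold or the wavelet step); function names and the neighbour ordering are illustrative:

```python
def lbp_code(image, r, c):
    """Basic 8-neighbour LBP: threshold each neighbour against the
    centre pixel and pack the results into an 8-bit code."""
    center = image[r][c]
    # neighbours ordered clockwise from the top-left corner
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r+dr][c+dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """256-bin histogram of LBP codes over all interior pixels;
    such histograms are what the classifier actually consumes."""
    hist = [0] * 256
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            hist[lbp_code(image, r, c)] += 1
    return hist
```

CLTP replaces the single `>=` comparison with a ternary test against a threshold band around the centre value, producing upper and lower pattern pairs; the descriptor proposed in the paper then summarizes the selected patterns by mean and variance.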
Structural MRI texture analysis for detecting Alzheimer's disease
Purpose: Alzheimer's disease (AD) has the highest worldwide prevalence of all neurodegenerative disorders, no cure, and low diagnostic accuracy at its early stage, when treatments have some effect and can give patients some years of quality life. This work aims to develop an automatic method to distinguish 3 different stages, namely, control (CN), mild cognitive impairment (MCI), and AD itself, using structural magnetic resonance imaging (sMRI). Methods: A set of co-occurrence matrix and texture statistical measures (contrast, correlation, energy, homogeneity, entropy, variance, and standard deviation) were extracted from a two-level discrete wavelet transform decomposition of sMRI images. The discriminant capacity of the measures was analyzed and the most discriminant ones were selected as features for feeding classical machine learning (cML) algorithms and a convolutional neural network (CNN). Results: The cML algorithms achieved the following classification accuracies: 93.3% for AD vs CN, 87.7% for AD vs MCI, 88.2% for CN vs MCI, and 75.3% for All vs All. The CNN achieved the following classification accuracies: 82.2% for AD vs CN, 75.4% for AD vs MCI, 83.8% for CN vs MCI, and 64% for All vs All. Conclusion: In the evaluated cases, cML provided higher discrimination results than the CNN. For the All vs All comparison, the proposed method surpasses the discrimination accuracy of the state-of-the-art methods that use structural MRI by 4%
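Several of the co-occurrence-matrix measures named above (contrast, energy, homogeneity, entropy) can be sketched as follows; this toy version operates on small integer-valued images with a handful of grey levels and is an illustration, not the authors' implementation:

```python
import math

def glcm(image, dr=0, dc=1, levels=4):
    """Normalized grey-level co-occurrence matrix for offset (dr, dc):
    P[i][j] is the probability of grey level i co-occurring with j."""
    P = [[0.0] * levels for _ in range(levels)]
    total = 0
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[image[r][c]][image[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in P]

def texture_features(P):
    """Contrast, energy, homogeneity, and entropy from a normalized GLCM."""
    idx = range(len(P))
    contrast = sum(P[i][j] * (i - j) ** 2 for i in idx for j in idx)
    energy = sum(v * v for row in P for v in row)
    homogeneity = sum(P[i][j] / (1 + abs(i - j)) for i in idx for j in idx)
    entropy = -sum(v * math.log(v) for row in P for v in row if v > 0)
    return {"contrast": contrast, "energy": energy,
            "homogeneity": homogeneity, "entropy": entropy}
```

In the paper these statistics are computed on wavelet sub-bands of the sMRI volumes rather than on the raw images.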
ADL-BSDF: A Deep Learning Framework for Brain Stroke Detection from MRI Scans towards an Automated Clinical Decision Support System
Deep learning has emerged as an efficient Artificial Intelligence (AI) approach to solving problems in the healthcare industry. Convolutional Neural Network (CNN) models in particular have attracted researchers due to their efficiency in medical image analysis. According to the World Health Organization (WHO), brain stroke, a rapidly developing cerebral malfunction, is the second leading cause of death across the globe. Brain MRI scans, when analysed quantitatively, play a vital role in the diagnosis and treatment of stroke. There are many existing deep learning methods for stroke diagnosis. However, an automatic, reliable, and faster method that not only helps in stroke diagnosis but also demarcates affected regions as part of a Clinical Decision Support System (CDSS) is much desired. Towards this objective, we proposed an Automated Deep Learning based Brain Stroke Detection Framework (ADL-BSDF). It does not rely on the expertise of a healthcare professional for diagnosis, and it reports the extent of damage, enabling physicians to make quick decisions. The framework is realized by two proposed algorithms. The first, CNN-based Deep Learning for Brain Stroke Detection (CNNDL-BSD), focuses on accurate detection of stroke. The second, Deep Autoencoder for Stroke Severity Detection (DA-SSD), focuses on revealing the extent of damage, or severity, of the stroke. The framework is evaluated against state-of-the-art deep learning models such as EfficientNet, ResNet50, and VGG16
Feasibility of atrial fibrillation detection from a novel wearable armband device
BACKGROUND: Atrial fibrillation (AF) is the world's most common heart rhythm disorder, and even several minutes of AF episodes can contribute to risk for complications, including stroke. However, AF often goes undiagnosed because it can be paroxysmal, brief, and asymptomatic. OBJECTIVE: To facilitate better AF monitoring, we studied the feasibility of AF detection using a continuous electrocardiogram (ECG) signal recorded from a novel wearable armband device. METHODS: In our 2-step algorithm, we first calculate the R-R interval variability–based features to capture randomness that can indicate a segment of data possibly containing AF, and subsequently discriminate normal sinus rhythm from the possible AF episodes. Next, we use density Poincaré plot-derived image domain features along with a support vector machine to separate premature atrial/ventricular contraction episodes from any AF episodes. We trained and validated our model using the ECG data obtained from a subset of the MIMIC-III (Medical Information Mart for Intensive Care III) database containing 30 subjects. RESULTS: When we tested our model using the novel wearable armband ECG dataset containing 12 subjects, the proposed method achieved sensitivity, specificity, accuracy, and F1 score of 99.89%, 99.99%, 99.98%, and 0.9989, respectively. Moreover, when compared with several existing methods on the armband data, our proposed method outperformed the others, which shows its efficacy. CONCLUSION: Our study suggests that the novel wearable armband device and our algorithm can be used as a potential tool for continuous AF monitoring with high accuracy
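The first-stage R-R interval variability screen described above can be sketched with two standard heart-rate-variability statistics, RMSSD and pNN50. The function names and decision thresholds below are illustrative placeholders, not the paper's tuned feature set:

```python
import math

def rr_variability(rr_ms):
    """Variability features from R-R intervals (in ms): RMSSD (root mean
    square of successive differences) and pNN50 (fraction of successive
    differences exceeding 50 ms). Both rise sharply during AF."""
    diffs = [rr_ms[i+1] - rr_ms[i] for i in range(len(rr_ms) - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    pnn50 = sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return rmssd, pnn50

def flag_possible_af(rr_ms, rmssd_thr=100.0, pnn50_thr=0.3):
    """Crude first-stage screen: high variability marks a segment worth
    passing to the second-stage (Poincaré plot + SVM) classifier."""
    rmssd, pnn50 = rr_variability(rr_ms)
    return rmssd > rmssd_thr and pnn50 > pnn50_thr
```

Segments flagged here would then be disambiguated from premature atrial/ventricular contractions by the second stage.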
Multiple Sclerosis Identification by 14-Layer Convolutional Neural Network With Batch Normalization, Dropout, and Stochastic Pooling
Aim: Multiple sclerosis is a severe brain and/or spinal cord disease. It may lead to a wide range of symptoms. Hence, early diagnosis and treatment are quite important. Method: This study proposed a 14-layer convolutional neural network, combined with three advanced techniques: batch normalization, dropout, and stochastic pooling. The output of the stochastic pooling was obtained via sampling from a multinomial distribution formed from the activations of each pooling region. In addition, we used a data augmentation method to enhance the training set. In total 10 runs were implemented, with the hold-out randomly set for each run. Results: The results showed that our 14-layer CNN secured a sensitivity of 98.77 ± 0.35%, a specificity of 98.76 ± 0.58%, and an accuracy of 98.77 ± 0.39%. Conclusion: Our results were compared with CNN using maximum pooling and average pooling. The comparison shows stochastic pooling gives better performance than the other two pooling methods. Furthermore, we compared our proposed method with six state-of-the-art approaches, including five traditional artificial intelligence methods and one deep learning method. The comparison shows our method is superior to all six state-of-the-art approaches
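Stochastic pooling as described above samples one activation per pooling region from the multinomial distribution formed by the normalized activations. A minimal sketch, assuming non-negative (ReLU-style) activations; names and the fallback behaviour for all-zero regions are illustrative:

```python
import random

def stochastic_pool(region, rng=None):
    """Sample one activation from the pooling region with probability
    proportional to its value; returns 0 if all activations are zero."""
    rng = rng or random.Random(0)
    acts = [max(a, 0.0) for a in region]  # assume ReLU-style activations
    total = sum(acts)
    if total == 0:
        return 0.0
    probs = [a / total for a in acts]
    # inverse-CDF sampling from the multinomial defined by probs
    u = rng.random()
    cum = 0.0
    for a, p in zip(acts, probs):
        cum += p
        if u <= cum:
            return a
    return acts[-1]
```

Unlike max pooling (always the largest value) or average pooling (a deterministic mean), the sampled output varies between training passes, which acts as a regularizer much like dropout.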
Perspective Chapter: Artificial Intelligence in Multiple Sclerosis
In recent times, the words artificial intelligence, machine learning, and deep learning have been making a lot of buzz in different domains, and especially in the healthcare sector. In disease areas like multiple sclerosis (MS), these intelligent systems have great potential in aiding the detection and prediction of disease progression and disability, identification of disease subtypes, monitoring, treatment, and novel drug-target identification. The different imaging techniques used to date in multiple sclerosis and the various algorithms applied across its domains, such as convolutional neural networks, Support Vector Machine, long short-term memory networks, JAYA, Random Forest, naive Bayes, SuStaIn, DeepDTnet, and DTINet, are explored, along with use cases. Hence it is important for healthcare professionals to have knowledge of artificial intelligence for achieving better healthcare outcomes
ResOT: Resource-Efficient Oblique Trees for Neural Signal Classification
Classifiers that can be implemented on chip with minimal computational and memory resources are essential for edge computing in emerging applications such as medical and IoT devices. This paper introduces a machine learning model based on oblique decision trees to enable resource-efficient classification on a neural implant. By integrating model compression with probabilistic routing and implementing cost-aware learning, our proposed model could significantly reduce the memory and hardware cost compared to state-of-the-art models, while maintaining the classification accuracy. We trained the resource-efficient oblique tree with power-efficient regularization (ResOT-PE) on three neural classification tasks to evaluate the performance, memory, and hardware requirements. On the seizure detection task, we were able to reduce the model size by 3.4X and the feature extraction cost by 14.6X compared to the ensemble of boosted trees, using the intracranial EEG from 10 epilepsy patients. In a second experiment, we tested the ResOT-PE model on tremor detection for Parkinson's disease, using the local field potentials from 12 patients implanted with a deep-brain stimulation (DBS) device. We achieved a classification performance comparable to the state-of-the-art boosted tree ensemble, while reducing the model size and feature extraction cost by 10.6X and 6.8X, respectively. We also tested on a 6-class finger movement detection task using ECoG recordings from 9 subjects, reducing the model size by 17.6X and feature computation cost by 5.1X. The proposed model can enable a low-power and memory-efficient implementation of classifiers for real-time neurological disease detection and motor decoding
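An oblique tree differs from an ordinary axis-aligned tree in that each internal node tests a linear combination of features against a threshold, so a single node can realize any hyperplane boundary. A minimal sketch of one such node and a two-leaf tree; the names are illustrative, and the paper's probabilistic routing, compression, and cost-aware training are not shown:

```python
def oblique_node(weights, bias):
    """One oblique split: route left if w.x + b <= 0, else right.
    Unlike an axis-aligned split (which tests a single feature), the
    boundary can be any hyperplane, so correlated features need fewer
    nodes, which is what saves memory on-chip."""
    def route(x):
        s = sum(w * xi for w, xi in zip(weights, x)) + bias
        return "left" if s <= 0 else "right"
    return route

def classify(x, node, leaves):
    """Tiny two-leaf tree: route once, then read the leaf label."""
    return leaves[node(x)]
```

In hardware, each node costs one dot product; sparsifying `weights` (part of what the regularization in ResOT-PE targets) directly reduces both the multiply count and the set of features that must be extracted.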