
    DETEKSI CACAT BANTALAN GELINDING BERBASIS ALGORITMA DECISION TREES DAN PARAMETER STATISTIK (Rolling Bearing Defect Detection Based on a Decision Tree Algorithm and Statistical Parameters)

    Rolling bearings are a common machine element in rotary machines. Components of a rolling bearing such as the inner race, outer race, rolling elements, and cage are the parts most often damaged. Traditionally, spectrum analysis is used to diagnose bearing defects. However, spectrum analysis is not effective for bearings with early-stage defects, because the vibration signal is dominated by frequency components from other machine elements, so the bearing defect frequencies cannot be observed. This study proposes an alternative method of detecting bearing defects from vibration signals using machine learning with a decision tree algorithm. This method is more effective than spectrum analysis because it relies on feature extraction and pattern recognition of the vibration signal data, and therefore provides classification results directly, without further analysis. Vibration signals were recorded using an accelerometer mounted on a bearing housing on a test rig. Nine time-domain and six frequency-domain statistical parameters were extracted from the vibration signal and used as inputs to the decision tree. The results show that the decision tree algorithm achieves an accuracy of 94.4% in classifying three rolling bearing conditions using six selected frequency-domain statistical parameters as input.
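    The pipeline the abstract describes (statistical feature extraction from vibration segments, then decision tree classification) can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic signals, the impulse spacings, and the particular time-domain features (RMS, peak, crest factor, kurtosis, skewness, standard deviation) are all assumptions standing in for the paper's real accelerometer data and its nine time-domain / six frequency-domain parameters.

    ```python
    import numpy as np
    from scipy.stats import kurtosis, skew
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def time_domain_features(x):
        """A few common time-domain statistical parameters of a vibration segment."""
        rms = np.sqrt(np.mean(x ** 2))
        peak = np.max(np.abs(x))
        return [rms, peak, peak / rms,  # crest factor
                kurtosis(x), skew(x), np.std(x)]

    def make_segment(condition, n=1024):
        """Synthetic stand-in signals: 0 = healthy noise, 1/2 = faults with
        periodic impulses of different spacing and amplitude (hypothetical)."""
        x = rng.normal(0.0, 1.0, n)
        if condition == 1:
            x[::97] += rng.normal(8, 1, len(x[::97]))   # sparse large impulses
        elif condition == 2:
            x[::53] += rng.normal(4, 1, len(x[::53]))   # denser, smaller impulses
        return x

    # 100 segments per condition -> feature matrix and labels
    X = np.array([time_domain_features(make_segment(c))
                  for c in (0, 1, 2) for _ in range(100)])
    y = np.repeat([0, 1, 2], 100)

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
    ```

    Because impulsive faults raise kurtosis and crest factor sharply, even this toy feature set separates the three synthetic conditions well; the paper's frequency-domain parameters would be computed analogously from the signal's spectrum.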

    Impact of evaluation methods on decision tree accuracy

    Decision trees are among the most powerful and commonly used supervised learning algorithms in data mining. It is important that a decision tree performs accurately on unseen data; therefore, evaluation methods are used to measure the predictive performance of a decision tree classifier. However, the measured accuracy of a decision tree also depends on the evaluation method chosen, since the training and testing sets of decision tree models are selected according to the evaluation method. The aim of this thesis was to study how using different evaluation methods affects decision tree accuracies when applied to different decision tree algorithms. To this end, a comprehensive review of decision trees and evaluation methods was carried out. In addition, an experiment was conducted using ten datasets, five decision tree algorithms, and five evaluation methods to study the relationship between evaluation methods and decision tree accuracy. The decision tree inducers were tested with leave-one-out, 5-fold cross-validation, 10-fold cross-validation, holdout with a 50% split, and holdout with a 66% split. According to the results, cross-validation methods were superior to holdout methods overall, and the holdout 50% split performed the poorest on most of the datasets. The possible reasons behind these results are also discussed in the thesis.
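    The five evaluation methods named in the abstract can be sketched with scikit-learn. This is an illustrative comparison only: the Iris dataset and a single CART-style tree are stand-ins, not the thesis's ten datasets or its five tree inducers.

    ```python
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import (
        KFold, LeaveOneOut, cross_val_score, train_test_split)

    X, y = load_iris(return_X_y=True)
    clf = DecisionTreeClassifier(random_state=0)
    results = {}

    # Cross-validation: every sample is used for both training and testing,
    # so the accuracy estimate averages over many splits.
    results["Leave-one-out"] = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    results["5-fold CV"] = cross_val_score(
        clf, X, y, cv=KFold(5, shuffle=True, random_state=0)).mean()
    results["10-fold CV"] = cross_val_score(
        clf, X, y, cv=KFold(10, shuffle=True, random_state=0)).mean()

    # Holdout: a single train/test split, so the estimate depends heavily
    # on which samples happen to land in the test set.
    for name, train_frac in [("Holdout 50", 0.50), ("Holdout 66", 0.66)]:
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=train_frac, random_state=0, stratify=y)
        results[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)

    for name, acc in results.items():
        print(f"{name:14s} {acc:.3f}")
    ```

    The design point the thesis probes is visible in the code: holdout trains on less data and scores on one arbitrary split, while k-fold averages k complementary splits, which is one plausible reason the cross-validation estimates come out more stable.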
