19 research outputs found

    On the influence of reference Mahalanobis distance space for quality classification of complex metal parts using vibrations

    Mahalanobis distance (MD) is a well-known metric in multivariate analysis for separating groups or populations. In the context of the Mahalanobis-Taguchi system (MTS), a set of normal observations is used to obtain their MD values and construct a reference Mahalanobis distance space, for which a suitable classification threshold can then be introduced to classify new observations as normal/abnormal. Aiming to enhance the performance of feature screening and threshold determination in MTS, the authors have recently proposed an integrated Mahalanobis classification system (IMCS) algorithm with robust classification performance. However, the reference MD space considered in either MTS or IMCS is based only on normal samples. In this paper, the influence of a reference MD space based on a set of (i) normal samples, (ii) abnormal samples, and (iii) both normal and abnormal samples is investigated for classification. The potential of using an alternative MD space is evaluated for sorting complex metallic parts, i.e., good/bad structural quality, based on their broadband vibrational spectra. Results are discussed for a sparse and imbalanced experimental case study of complex-shaped metallic turbine blades with various damage types; a rich and balanced numerical case study of dogbone-cylinders is also considered.
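
    As a rough illustration of the MTS step this abstract builds on, the sketch below computes scaled Mahalanobis distances of new observations against a reference space built from normal samples and thresholds them. It is a minimal sketch under assumed toy data; the feature values, the threshold of 3.0 and the function names are illustrative, not taken from the paper.

```python
import numpy as np

def mahalanobis_distances(X_ref, X_new):
    """Scaled Mahalanobis distances of X_new from the reference group X_ref.

    MTS convention: standardize with the reference mean/std, use the
    reference correlation matrix, and divide the squared distance by the
    number of features so the reference group's average MD is close to 1.
    """
    mu = X_ref.mean(axis=0)
    sigma = X_ref.std(axis=0, ddof=1)
    Z_ref = (X_ref - mu) / sigma
    R_inv = np.linalg.pinv(np.corrcoef(Z_ref, rowvar=False))  # robust inverse
    Z = (X_new - mu) / sigma
    return np.einsum('ij,jk,ik->i', Z, R_inv, Z) / X_ref.shape[1]

# Toy usage: a reference space from "normal" parts, then threshold new parts.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(50, 4))   # stand-in for spectral features
new = rng.normal(0.8, 1.3, size=(5, 4))
md = mahalanobis_distances(normal, new)
THRESHOLD = 3.0                               # hypothetical cut-off
print(['abnormal' if d > THRESHOLD else 'normal' for d in md])
```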

    Characterization of bees algorithm into the Mahalanobis-Taguchi system for classification

    Mahalanobis-Taguchi System (MTS) is a pattern recognition tool employing Mahalanobis Distance (MD) and the Taguchi robust engineering philosophy to explore and exploit data in multidimensional systems. To improve the recognition accuracy of the MTS, features that do not provide useful information to the recognition function are removed. MTS utilizes a matrix called an Orthogonal Array (OA) to search for the useful features. However, the deployment of the OA as the feature selection search method is seen as ineffective: its fixed-scheme structure yields a non-heuristic search that leads to suboptimal solutions. The objective of this research is therefore to develop an algorithm utilizing the Bees Algorithm (BA) to replace the OA, acting as an alternative feature selection search strategy that enhances the search mechanism in a more heuristic manner. To understand the mechanism of the Bees Algorithm, its algorithmic characteristics were determined. Unlike other research reported in the literature, the proposed characterization framework follows a Taguchi-sound approach because a Larger-the-Better (LTB) signal-to-noise formulation is used as the algorithm's objective function. The Smallest Position Value (SPV) discretization method is adopted, by which combinations of features are indexed in an enumeration list of all possible feature combinations; the list forms a search landscape for the bee agents in exploring potential solutions. The proposed characterization framework was validated against three different case studies, comparing performance in terms of signal-to-noise ratio gain (SNR gain), classification accuracy and computational speed against the OA. The results showed that the characterization of the BA into the MTS framework improved the performance of the MTS, particularly the SNR gain: it recorded more than 50% improvement on average, and nearly 4% average improvement in classification accuracy, in comparison to the OA. However, the OA was on average about 30 times faster than the BA in terms of computational speed, so future research on improving the computational speed of the BA is suggested. This study concludes that the characterization of the BA into the MTS optimization methodology effectively improved the performance of the MTS, particularly with respect to SNR gain and classification accuracy, when compared to the OA.
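
    To make the objective function concrete, here is a minimal sketch of the larger-the-better (LTB) signal-to-noise ratio the abstract describes as the BA's fitness measure. The Mahalanobis distances fed into it are made-up numbers standing in for the abnormal group's MDs under one candidate feature subset; the function name is ours, not the thesis's.

```python
import numpy as np

def ltb_snr(y):
    """Taguchi larger-the-better signal-to-noise ratio in dB:
    SNR = -10 * log10(mean(1 / y_i^2)); larger responses give a higher SNR."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# A bee agent proposing a feature subset would be scored by the SNR of the
# abnormal samples' Mahalanobis distances under that subset; higher is better.
md_abnormal = [8.2, 11.5, 9.7, 14.1]   # illustrative MDs, not real data
print(f"LTB SNR = {ltb_snr(md_abnormal):.2f} dB")
```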

    Integration of mahalanobis-taguchi system and activity based costing in decision making for remanufacturing

    Classifying components at the end of life (EOL) for remanufacture, repair or disposal is still a major concern for automotive industries. Prior to this study, no specific approach had been reported as a guideline for determining the critical crankpins that justify an economical remanufacturing process. Traditional cost accounting (TCA) has been used widely by remanufacturing industries, but it is not as good a measure of actual manufacturing cost per unit as activity based costing (ABC); the application of the ABC method to estimating remanufacturing cost is, however, rarely reported. These issues have been handled separately, without proper integration into the remanufacturing decision, which frequently results in uneconomical operating cost and a less accurate decision. The aim of this work is to develop a suitable pattern recognition method for classifying crankshafts into three EOL groups and subsequently to evaluate the critical and non-critical crankpins of the used crankshaft using the Mahalanobis-Taguchi System (MTS). A remanufacturability assessment technique was developed in a Microsoft Excel spreadsheet for pattern recognition and critical crankpin evaluation, and this information was finally integrated with ABC in the same spreadsheet to decide whether a crankshaft should be remanufactured, repaired or disposed of. The developed scatter diagram was able to recognize the group pattern of EOL crankshafts, which was then successfully used to determine the critical crankpins required for the remanufacturing process. The proposed method can serve as a useful approach for the remanufacturing industries to systematically evaluate EOL components and decide on further processing. In a case study of six engine models, the results show that three engines can be securely remanufactured at above 40% profit margin, while another two engines are still viable to remanufacture but with a smaller margin. In contrast, only two engines can be securely remanufactured under TCA, owing to its cost overestimation; this inaccuracy significantly affects the overall remanufacturing activities and revenue of the industry. In conclusion, the proposed integration of pattern recognition, parameter evaluation and costing assists the decision-making process to effectively remanufacture EOL automotive components, as confirmed by the head of workshop of Motor Teknologi Industri Sdn. Bhd.
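
    As a sketch of the costing half of the integration, the snippet below computes an ABC unit cost as the sum of activity rate times driver consumption and applies a margin-based remanufacture/repair/dispose rule. All activities, rates, quantities and the price are invented for illustration; only the 40% viability threshold echoes the abstract.

```python
# Minimal activity-based costing (ABC) sketch: the unit cost of remanufacturing
# one crankshaft is the sum of (activity cost rate x driver units consumed).
activities = {
    # activity:          (rate, driver units per crankshaft)  -- all invented
    "inspection":        (12.0, 1.5),    # $/hour, hours
    "crankpin_grinding": (30.0, 4.0),    # $/hour, hours
    "balancing":         (18.0, 1.0),    # $/hour, hours
    "consumables":       (1.0, 95.0),    # $/unit, units
}

unit_cost = sum(rate * qty for rate, qty in activities.values())
price = 350.0                            # hypothetical remanufactured price
margin = (price - unit_cost) / price
decision = ("remanufacture" if margin > 0.40
            else "repair" if margin > 0.0 else "dispose")
print(f"ABC unit cost = {unit_cost:.2f}, margin = {margin:.1%} -> {decision}")
```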

    Hybrid bootstrap-based approach with binary artificial bee colony and particle swarm optimization in Taguchi's T-Method

    Taguchi's T-Method is one of the Mahalanobis-Taguchi System (MTS)-ruled prediction techniques, established specifically for, but not limited to, small multivariate sample data. When evaluating data with a technique such as Taguchi's T-Method, bias issues often appear due to inconsistencies induced by model complexity, variation between parameters that are not thoroughly configured, and generalization aspects. In Taguchi's T-Method, the unit space determination is too reliant on the characteristics of the dependent variables, with no appropriate procedure designed; similarly, the least-squares proportional coefficient is well known not to be robust to outliers, which indirectly affects the accuracy of the SNR weighting that relies on model-fit accuracy. Even a small effect of outliers in the data analysis may influence the overall performance of the predictive model unless further development is incorporated into the current framework. In this research, an improved unit space determination mechanism was explicitly designed by implementing the minimum-based error with the leave-one-out method, further enhanced by embedding strategies that minimize the impact of variance within each parameter estimator using the leave-one-out bootstrap (LOOB) and 0.632 estimate approaches. The complexity aspect of the prediction model was further addressed by removing features that did not provide valuable information to the overall prediction. To accomplish this, a matrix called an Orthogonal Array (OA) was used within the existing Taguchi's T-Method. However, the OA's fixed-scheme matrix, as well as its drawback in coping with high dimensionality, leads to a suboptimal solution. On the other hand, the use of SNR in decibels (dB) as the objective function proved to be a reliable measure. The architecture of a Hybrid Binary Artificial Bee Colony and Particle Swarm Optimization (Hybrid Binary ABC-PSO), comprising the Binary Bitwise ABC (BitABC) and Probability Binary PSO (PBPSO), was developed as a novel search engine that overcomes the limitations of the OA. The SNR (dB) and mean absolute error (MAE) were the main performance measures used in this research, and the generalization aspect was a fundamental addition incorporated to control the effect of overfitting in the analysis. The proposed enhanced parameter estimators with feature selection optimization were tested on 10 case studies and improved predictive accuracy by an average of 46.21%, depending on the case. The average standard deviation of the MAE, which describes the variability impact of the optimized method across all 10 case studies, displayed an improved trend relative to Taguchi's T-Method. Standardization and a robust approach to outliers are recommended for future research. This study proved that the developed architecture of Hybrid Binary ABC-PSO, with bootstrap and minimum-based error using leave-one-out as the proposed enhanced parameter estimators, improves the methodology of Taguchi's T-Method by effectively raising its prediction accuracy.
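
    Since the 0.632 and leave-one-out bootstrap (LOOB) estimators carry much of the abstract's argument, here is a minimal sketch of a 0.632 MAE estimate wrapped around the T-Method's least-squares proportional coefficient. The data, function names and bootstrap count are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def prop_coeff(x, y):
    """Least-squares proportional coefficient beta in y ~ beta * x
    (the per-parameter fit used inside Taguchi's T-Method)."""
    return np.sum(x * y) / np.sum(x * x)

def mae_632(x, y, n_boot=200, seed=0):
    """0.632 bootstrap estimate of MAE for the proportional model:
    err_632 = 0.368 * resubstitution MAE + 0.632 * leave-one-out
    bootstrap (LOOB) MAE, refitting beta on every resample."""
    rng = np.random.default_rng(seed)
    n = len(x)
    resub = np.mean(np.abs(y - prop_coeff(x, y) * x))
    oob_maes = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap resample
        oob = np.setdiff1d(np.arange(n), idx)     # left-out samples
        if oob.size == 0:
            continue
        beta = prop_coeff(x[idx], y[idx])
        oob_maes.append(np.mean(np.abs(y[oob] - beta * x[oob])))
    return 0.368 * resub + 0.632 * np.mean(oob_maes)

# Toy usage with a noisy proportional relationship.
rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 30)
y = 2.0 * x + rng.normal(0, 1, 30)
print(f"0.632 MAE estimate: {mae_632(x, y):.3f}")
```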

    Application of Mahalanobis-Taguchi system in descending case of methadone flexi dispensing (MFlex) program

    Patients under the methadone flexi dispensing (MFlex) program are subjected to descending methadone dosage trends, yet no parameters have been employed to identify patients with a potential rate of recovery. Consequently, the existing system lacks a stable ecosystem for classification and optimization, owing to inaccurate measurement methods and a lack of justification of the significant parameters that influence the accuracy of diagnosis. The objective is to apply the Mahalanobis-Taguchi system (MTS) to the MFlex program, which has never been done in previous studies. The data were collected at the Bandar Pekan clinic with 16 parameters. Two MTS methods are used: the RT-Method for classification and the T-Method for optimization. In the classification of the descending case, the average Mahalanobis distance (MD) of the healthy group is 1.0000 and that of the unhealthy group is 11123.9730. In the optimization of the descending case, 9 parameters have a positive degree of contribution. Six unknown samples were diagnosed using MTS, with different numbers of parameters of positive and negative degree of contribution, to achieve a lower MD; type 6 of the 6 modifications was selected as the best proposed solution. In conclusion, a pharmacist from the Bandar Pekan clinic has confirmed that MTS is able to solve the classification and optimization problem of the MFlex program.
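
    For the degree-of-contribution screening mentioned above, a common MTS-style computation compares the mean SNR of orthogonal-array runs that include a parameter against the runs that exclude it; parameters with a positive gain are kept. The sketch below uses a standard L8 array with invented SNR values, not the Bandar Pekan data.

```python
import numpy as np

# Standard L8(2^7) orthogonal array, coded 1 = parameter used, 0 = not used.
L8 = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 1, 1, 0, 0],
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 1, 0, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
])
snr = np.array([4.1, 3.2, 5.0, 2.8, 4.6, 3.9, 4.4, 3.1])  # invented SNR (dB)

# Degree of contribution: mean SNR with the parameter minus mean SNR without.
gain = np.array([snr[L8[:, j] == 1].mean() - snr[L8[:, j] == 0].mean()
                 for j in range(L8.shape[1])])
keep = np.where(gain > 0)[0]
print("gain (dB):", np.round(gain, 2), "-> keep parameters:", keep)
```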

    Multivariate Analysis in Management, Engineering and the Sciences

    Statistical knowledge has recently become an important requirement and occupies a prominent position in the exercise of various professions. In the real world, processes generate large volumes of data that are naturally multivariate and, as such, require proper treatment; under these conditions it is difficult or practically impossible to use the methods of univariate statistics. The wide application of multivariate techniques, and the need to spread them more fully in academia and business, justify the creation of this book. The objective is to demonstrate interdisciplinary applications to identify patterns, trends, associations and dependencies in the areas of Management, Engineering and the Sciences. The book is addressed to both practicing professionals and researchers in the field.

    Software defect prediction using maximal information coefficient and fast correlation-based filter feature selection

    Software quality ensures that developed applications are failure-free. Some modern systems are intricate, due to the complexity of their information processes. Software fault prediction is an important quality assurance activity, since it is a mechanism that correctly predicts the defect proneness of modules and classifies modules in a way that saves resources, time and developers' effort. In this study, a model that selects relevant features for use in defect prediction was proposed. The literature review revealed that process metrics are better predictors of defects in version systems, being based on historic source code over time. These metrics are extracted from the source-code module and include, for example, the number of additions to and deletions from the source code, the number of distinct committers and the number of modified lines. In this research, defect prediction was conducted using open source software (OSS) of software product line(s) (SPL), hence process metrics were chosen. Data sets used in defect prediction may contain non-significant and redundant attributes that affect the accuracy of machine-learning algorithms. To improve the prediction accuracy of classification models, features that are significant in the defect prediction process are utilised. In machine learning, feature selection techniques are applied to identify the relevant data; feature selection is a pre-processing step that helps to reduce the dimensionality of the data. Feature selection techniques include information-theoretic methods based on the entropy concept. This study examined the efficiency of these feature selection techniques, and it was found that software defect prediction using significant attributes improves prediction accuracy. A novel MICFastCR model was developed, based on the Maximal Information Coefficient (MIC) to select significant attributes and the Fast Correlation Based Filter (FCBF) to eliminate redundant attributes. Machine learning algorithms were then run to predict software defects. The MICFastCR achieved the highest prediction accuracy as reported by various performance measures. School of Computing. Ph. D. (Computer Science)
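
    The FCBF stage of the proposed MICFastCR lends itself to a short sketch: rank features by symmetrical uncertainty (SU) with the class, then drop any feature more redundant with an already-kept feature than it is relevant to the class. SU stands in here for the MIC relevance score, which needs a dedicated library; the function names and the delta threshold are our assumptions.

```python
import numpy as np
from collections import Counter

def entropy(a):
    """Shannon entropy of a discrete sequence."""
    p = np.array(list(Counter(a).values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def symmetrical_uncertainty(x, y):
    """SU(x, y) = 2 * IG(x; y) / (H(x) + H(y)), the redundancy measure at
    the heart of FCBF; 0 means independent, 1 means fully redundant."""
    hx, hy = entropy(x), entropy(y)
    ig = hx + hy - entropy(list(zip(x, y)))   # mutual information
    return 2.0 * ig / (hx + hy) if hx + hy else 0.0

def fcbf(X, y, delta=0.0):
    """FCBF-style selection on discrete features: keep features relevant to
    the class (SU > delta), then drop any feature that is more redundant
    with an already-kept feature than it is relevant to the class."""
    su_class = [symmetrical_uncertainty(X[:, j], y) for j in range(X.shape[1])]
    order = sorted((j for j in range(X.shape[1]) if su_class[j] > delta),
                   key=lambda j: -su_class[j])
    selected = []
    for j in order:
        if all(symmetrical_uncertainty(X[:, j], X[:, k]) < su_class[j]
               for k in selected):
            selected.append(j)
    return selected

# Toy usage on discretized process metrics (column 1 duplicates column 0).
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(100, 4))
X[:, 1] = X[:, 0]                      # redundant attribute
y = (X[:, 0] + X[:, 2] > 2).astype(int)
print("selected features:", fcbf(X, y))
```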

    Development of advanced criteria for blade root design and optimization

    In gas and steam turbine engines, blade root attachments are considered critical components that require special attention in design. The traditional method of root design required highly experienced engineers, yet in most cases the strength of the material was not fully exploited. In the current thesis, different methodologies for automatic design and optimization of the blade root have been evaluated, and some methods for reducing the computational time have been proposed. First, a simplified analytical model of the fir-tree was developed to evaluate the mean stress in different sections of the blade root and disc groove. Then, a more detailed two-dimensional shape of the attachment, suitable for finite element (FE) analysis, was developed for the dovetail and fir-tree; the model was made general enough to include all possible shapes of the attachment. The analytical model was then projected onto the 2D model to compare the results obtained from the analytical and FE methods. This comparison is essential for the later use of the analytical evaluation of the fir-tree as a technique for reducing the optimization search domain. Moreover, the possibility of predicting the contact normal stress of the blade and disc attachment by means of a punch test was evaluated: a punch composed of a flat surface and a rounded edge was simulated as equivalent to a sample dovetail case, and the contact stress profiles of the punch and dovetail were compared across the analytical, 2D and 3D models. A genetic algorithm (GA) was described as the optimizer, and the different rules affecting this algorithm were introduced. To reduce the number of calls to the high-fidelity finite element (FE) method, surrogate functions were evaluated; among them, the Kriging function was selected for use in the current study, and its efficiency was evaluated within a numerical optimization of a single lobe. In this study, the surrogate model is not used solely to find the optimum of the attachment shape, as it may provide low accuracy; instead, to benefit from its fast evaluation while diminishing its low-accuracy drawback, the Kriging function (KRG) was used within the GA as a pre-evaluation of each candidate before performing FE analysis. Moreover, the feasible and non-feasible space in the multi-dimensional, complex search domain of the attachment geometry is explained, and the challenge of a multi-district domain is tackled with a new mutation operation. To rectify the non-continuous domain, an adaptive penalty method based on Latin Hypercube Sampling (LHS) was proposed, which successfully improved the optimization convergence. Furthermore, different topologies of the contact in a dovetail were assessed: four different types of contact were modeled and optimized under the same loading and boundary conditions, the punch test was assessed with different contact shapes, and the state of stress of the dovetail at different rotational speeds with different types of contact was examined. In the results and discussion, an optimization of a dovetail with the analytical approach was performed and the optimum was compared with the one obtained by FE analysis. It was found that the analytical approach has the advantage of fast evaluation and, if the constraints are well defined, gives results comparable to the FE solution. Then, a Kriging function was embedded within the GA optimization and the approach was evaluated in an optimization of a dovetail.
The results revealed that the low computational cost of the surrogate model is an advantage, and its low accuracy is diminished by the collaboration of the FE and surrogate models. Later, the capability of employing the analytical approach in a fir-tree optimization was assessed; as the fir-tree geometry has a more complex working domain than the dovetail, the results should also hold for the dovetail. Different methods were assessed and compared. In the first attempt, the analytical approach was adopted as a filter to screen out the least promising candidates; this method provided a 7% improvement in convergence. In another attempt, the proposed adaptive penalty method was added to the optimization, which successfully found a reasonable optimum with a 47% reduction in computational cost. Later, a combination of the analytical and FE models was joined in a multi-objective, multi-level optimization, which provided a 32% improvement with less error than the previous method. In the last evaluation of this type, the analytical approach was used on its own in a multi-objective optimization, in which the results were selected according to an FE evaluation of the fittest candidates; although this approach provided an 86% reduction in computational time, it depends highly on the case under investigation and provides low accuracy in the final solution. Furthermore, a robust optimum was found for both the dovetail and the fir-tree in a multi-objective optimization; in this trial, the proposed adaptive penalty method was involved in addition to the surrogate model.
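
    The surrogate-assisted loop described in this thesis lends itself to a compact sketch: a Kriging (Gaussian process) model trained on previously evaluated designs pre-screens a GA generation so that only the most promising candidates reach the expensive FE evaluation. Everything below (the stand-in objective, dimensions, kernel and the 10-candidate cut) is an assumption for illustration, using scikit-learn's Gaussian process in place of the thesis's KRG implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_eval(x):
    """Stand-in objective; pretend each call costs hours of FE solver time."""
    return float(np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(0)
X_seen = rng.uniform(0, 1, size=(20, 3))      # designs evaluated so far
y_seen = np.array([expensive_eval(x) for x in X_seen])

# Kriging surrogate fitted to the designs the expensive solver has scored.
kriging = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                   normalize_y=True).fit(X_seen, y_seen)

candidates = rng.uniform(0, 1, size=(200, 3))  # one GA generation's offspring
pred = kriging.predict(candidates)
elite = candidates[np.argsort(pred)[:10]]      # pre-select best predicted 10
y_elite = [expensive_eval(x) for x in elite]   # only these hit the FE solver
print(f"best after screening: {min(y_elite):.4f}")
```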

    Tool wear monitoring in machining of stainless steel

    Monitoring systems for automated machines must be capable of operating on-line and interpreting the working condition of the machining process at any given point in time, because the process is automated and unmanned. This poses the challenge that led to this research study. Generally, optimization of a machining process can be categorized as minimization of tool wear, minimization of operating cost, maximization of process output and optimization of machine parameters. Tool wear is a complex phenomenon capable of reducing surface quality, increasing power consumption and increasing the rejection rate of machined parts. Tool wear has a direct effect on the quality of the surface finish of any given work-piece, on dimensional precision and ultimately on the cost of the parts produced. Tool wear usually occurs in combination with a principal wear mode that depends on cutting conditions, tool insert geometry, work-piece and tool material. There is therefore a need to develop a continuous tool monitoring system that notifies the operator of the state of the tool, to avoid tool failure or other undesirable circumstances. A tool wear monitoring system for macro-milling was studied using a design and analysis of experiments (DOE) approach: regression analysis, analysis of variance (ANOVA), the Box-Behnken design and response surface methodology (RSM) were used to model the tool wear. Further investigations were then carried out on the acquired data using signal processing and a neural network framework to validate the model. The effects of the cutting parameters were evaluated and the optimal cutting conditions determined, and the interaction of the cutting parameters was established to illustrate the intrinsic relationship between cutting parameters, tool wear and material removal rate. It was observed that when working with stainless steel 316, a maximum tool wear value of 0.29 mm was achieved through optimization at a low feed of about 0.06 mm/rev, a speed of 4050 mm/min and a depth of cut of about 2 mm.
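
    As a sketch of the RSM modelling step, the snippet below fits a second-order response surface for tool wear in terms of feed, speed and depth of cut and predicts wear near the cited optimum. The measurements are synthetic placeholders; only the factor names and the (0.06 mm/rev, 4050 mm/min, 2 mm) operating point echo the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Synthetic stand-in data: 27 runs over feed (mm/rev), speed (mm/min) and
# depth of cut (mm), with a made-up wear response. Not the study's data.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(0.05, 0.15, 27),    # feed
    rng.uniform(3000, 5000, 27),    # speed
    rng.uniform(1.0, 3.0, 27),      # depth of cut
])
wear = (0.05 + 0.8 * X[:, 0] + 2e-5 * X[:, 1] + 0.02 * X[:, 2] ** 2
        + rng.normal(0, 0.005, 27))

# Second-order response surface: linear, interaction and squared terms.
poly = PolynomialFeatures(degree=2)
model = LinearRegression().fit(poly.fit_transform(X), wear)

point = [[0.06, 4050.0, 2.0]]       # the operating point cited in the abstract
print(f"predicted wear: {model.predict(poly.transform(point))[0]:.3f} mm")
```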