
    Application of Mahalanobis-Taguchi system in ascending case of methadone flexi dispensing (MFlex) program

    Patients under the methadone flexi dispensing (MFlex) program are subject to methadone dosage trends such as the ascending case, since no parameters have been used to identify patients with a high potential rate of recovery. Consequently, the existing system lacks a stable ecosystem for classification and optimization, owing to inaccurate measurement methods and a lack of justification of the significant parameters that influence diagnostic accuracy. The objective is to apply the Mahalanobis-Taguchi system (MTS) to the MFlex program, which has not been done in previous studies. Data were collected at the Bandar Pekan clinic with 16 parameters. Two MTS methods are used: the RT-Method for classification and the T-Method for optimization. As a result, the RT-Method is able to classify the average Mahalanobis distance (MD) of healthy and unhealthy samples as 1.0000 and 21387.1249 respectively. Moreover, the T-Method is able to evaluate the significant parameters, with 10 parameters showing a positive degree of contribution. Six unknown samples have been diagnosed using MTS with different numbers of positive and negative degrees of contribution to achieve a lower MD. Type 2 of the 6 modifications has been selected as the best proposed solution, as it shows the lowest positive MD value. In conclusion, a pharmacist from the Bandar Pekan clinic has confirmed that MTS is able to solve the problem of classification and optimization in the MFlex program.
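The classification step above rests on the Mahalanobis distance (MD): healthy samples form a "unit space" whose average scaled MD is close to 1, while abnormal samples show much larger values. The following minimal sketch illustrates that idea with synthetic data; the clinic's 16-parameter data set is not reproduced here, and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(50, 4))   # the "unit space" (healthy group)
unknown = np.array([[0.1, -0.2, 0.3, 0.0],     # near the unit space
                    [6.0, 5.5, -4.8, 7.2]])    # far from the unit space

mean = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def mahalanobis_sq(x):
    """Squared MD from the healthy unit space, scaled by the number of parameters."""
    d = x - mean
    return float(d @ cov_inv @ d) / healthy.shape[1]

md = [mahalanobis_sq(x) for x in unknown]
# Healthy samples average an MD of about 1; abnormal ones score far higher.
```

With this scaling, the average MD over the healthy group itself comes out near 1, mirroring the 1.0000 figure reported for the healthy class above.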

    Hybrid bootstrap-based approach with binary artificial bee colony and particle swarm optimization in Taguchi's T-Method

    Taguchi's T-Method is one of the Mahalanobis Taguchi System (MTS)-ruled prediction techniques, established specifically for, but not limited to, small multivariate sample data. When evaluating data with a system such as Taguchi's T-Method, bias issues often appear due to inconsistencies induced by model complexity, variations between parameters that are not thoroughly configured, and generalization aspects. In Taguchi's T-Method, the unit space determination relies too heavily on the characteristics of the dependent variables, with no appropriate procedure designed for it. Similarly, the least-squares proportional coefficient is well known not to be robust to outliers, which indirectly affects the accuracy of the signal-to-noise ratio (SNR) weighting that relies on model-fit accuracy. Even a small effect of outliers in the data analysis may influence the overall performance of the predictive model unless further development is incorporated into the current framework. In this research, an improved unit space determination mechanism was explicitly designed by implementing the minimum-based error with the leave-one-out method, further enhanced by embedding strategies that minimize the impact of variance within each parameter estimator using the leave-one-out bootstrap (LOOB) and 0.632 estimate approaches. The complexity aspect of the prediction model was further addressed by removing features that did not provide valuable information to the overall prediction. To accomplish this, an Orthogonal Array (OA) matrix was used within the existing Taguchi's T-Method. However, OA's fixed-scheme matrix, as well as its drawback in coping with high dimensionality, leads to a sub-optimal solution. On the other hand, the use of SNR in decibels (dB) as the objective function proved to be a reliable measure. The architecture of a Hybrid Binary Artificial Bee Colony and Particle Swarm Optimization (Hybrid Binary ABC-PSO), including the Binary Bitwise ABC (BitABC) and Probability Binary PSO (PBPSO), was developed as a novel search engine that helps overcome the limitations of OA. The SNR (dB) and the mean absolute error (MAE) were the main performance measures used in this research. The generalization aspect was a fundamental addition incorporated to control the effect of overfitting in the analysis. The proposed enhanced parameter estimators with feature selection optimization were tested on 10 case studies and improved predictive accuracy by an average of 46.21%, depending on the case. The average standard deviation of MAE, which describes the variability impact of the optimized method across all 10 case studies, displayed an improved trend relative to Taguchi's T-Method. Standardization and a robust approach to outliers are recommended for future research. This study proved that the developed Hybrid Binary ABC-PSO architecture, with bootstrap and minimum-based error using leave-one-out as the enhanced parameter estimators, improves the prediction accuracy of Taguchi's T-Method.
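The leave-one-out bootstrap (LOOB) and 0.632 estimates mentioned above can be illustrated on a toy proportional model of the form the T-Method fits (y = βx). This is a hedged sketch with invented data and a deliberately simple estimator, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 40)
y = 2.0 * x + rng.normal(0, 0.5, 40)   # true coefficient is 2.0

def fit(xs, ys):
    """Least-squares proportional coefficient beta for y = beta * x."""
    return float(np.sum(xs * ys) / np.sum(xs * xs))

# Apparent (resubstitution) error of the full-sample fit: optimistic.
beta_full = fit(x, y)
err_app = float(np.mean(np.abs(y - beta_full * x)))

# Leave-one-out bootstrap: each point is predicted only by bootstrap
# models whose resample did NOT contain it (out-of-bag): pessimistic.
n, B = len(x), 200
errs = [[] for _ in range(n)]
for _ in range(B):
    idx = rng.integers(0, n, n)                 # bootstrap resample
    beta_b = fit(x[idx], y[idx])
    for i in np.setdiff1d(np.arange(n), idx):   # out-of-bag points
        errs[i].append(abs(y[i] - beta_b * x[i]))
err_loob = float(np.mean([np.mean(e) for e in errs if e]))

# The 0.632 estimate blends the two, weighting by the expected
# fraction of distinct points in a bootstrap resample (~0.632).
err_632 = 0.368 * err_app + 0.632 * err_loob
```

By construction the 0.632 estimate always lies between the apparent and LOOB errors, trading the optimism of one against the pessimism of the other.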

    DEVELOPMENT OF DIAGNOSTIC AND PROGNOSTIC METHODOLOGIES FOR ELECTRONIC SYSTEMS BASED ON MAHALANOBIS DISTANCE

    Diagnostic and prognostic capabilities are one aspect of the many interrelated and complementary functions in the field of Prognostic and Health Management (PHM). These capabilities are sought after by industries in order to provide maximum operational availability of their products, maximum usage life, minimum periodic maintenance inspections, lower inventory cost, accurate tracking of part life, and no false alarms. Several challenges associated with the development and implementation of these capabilities are the consideration of a system's dynamic behavior under various operating environments; complex system architecture where the components that form the overall system have complex interactions with each other with feed-forward and feedback loops of instructions; the unavailability of failure precursors; unseen events; and the absence of unique mathematical techniques that can address fault and failure events in various multivariate systems. The Mahalanobis distance methodology distinguishes multivariable data groups in a multivariate system by a univariate distance measure calculated from the normalized value of performance parameters and their correlation coefficients. The Mahalanobis distance measure does not suffer from the scaling effect--a situation where the variability of one parameter masks the variability of another parameter, which happens when the measurement ranges or scales of two parameters are different. A literature review showed that the Mahalanobis distance has been used for classification purposes. In this thesis, the Mahalanobis distance measure is utilized for fault detection, fault isolation, degradation identification, and prognostics. For fault detection, a probabilistic approach is developed to establish threshold Mahalanobis distance, such that presence of a fault in a product can be identified and the product can be classified as healthy or unhealthy. 
A technique is presented to construct a control chart for Mahalanobis distance for detecting trends and bias in system health or performance. An error function is defined to establish fault-specific threshold Mahalanobis distances. A fault isolation approach is developed to isolate faults by identifying parameters that are associated with that fault. This approach utilizes the design-of-experiment concept for calculating a residual Mahalanobis distance for each parameter (i.e., the contribution of each parameter to a system's health determination). An expected contribution range for each parameter, estimated from the distribution of residual Mahalanobis distance, is used to isolate the parameters that are responsible for a system's anomalous behavior. A methodology to detect degradation in a system's health using a health indicator is developed. The health indicator is defined as the weighted sum of a histogram bin's fractional contribution. The histogram's optimal bin width is determined from the number of data points in a moving window. This moving window approach is utilized for progressive estimation of the health indicator over time. The health indicator is compared with a threshold value defined from the system's healthy data to indicate the system's health or performance degradation. A symbolic time series-based health assessment approach is developed. Prognostic measures are defined for detecting anomalies in a product and predicting a product's time and probability of approaching a faulty condition. These measures are computed from a hidden Markov model developed from the symbolic representation of product dynamics. The symbolic representation of a product's dynamics is obtained by representing a Mahalanobis distance time series in symbolic form. Case studies were performed to demonstrate the capability of the proposed methodology for real-time health monitoring. Notebook computers were exposed to a set of environmental conditions representative of the extremes of their life cycle profiles. The performance parameters were monitored in situ during the experiments, and the resulting data were used as a training dataset. The dataset was also used to identify specific parameter behavior, estimate correlation among parameters, and extract features for defining a healthy baseline. Field-returned computer data and data corresponding to artificially injected faults in computers were used as test data.
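The fault-isolation idea above, attributing a sample's anomalous MD to individual parameters, can be sketched by recomputing the MD with one parameter left out at a time: the parameter whose removal produces the largest drop is implicated. The data and this simple leave-one-out contribution measure are illustrative assumptions, not the thesis's exact residual-MD procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
healthy = rng.normal(0, 1, size=(200, 4))   # healthy baseline, 4 parameters
mean = healthy.mean(axis=0)
cov = np.cov(healthy, rowvar=False)

def md_sq(x, keep):
    """Squared Mahalanobis distance using only the parameter subset `keep`."""
    d = (x - mean)[keep]
    sub_inv = np.linalg.inv(cov[np.ix_(keep, keep)])
    return float(d @ sub_inv @ d)

faulty = np.array([0.1, -0.2, 6.0, 0.3])    # parameter 2 has drifted
full = list(range(4))

# Contribution of parameter j = drop in squared MD when j is removed.
contrib = [md_sq(faulty, full) - md_sq(faulty, [k for k in full if k != j])
           for j in full]
suspect = int(np.argmax(contrib))           # parameter index 2 in this example
```

Because the squared MD can only decrease when a parameter is marginalized out, every contribution is non-negative, and the drifted parameter dominates.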

    Wind Turbine Fault Detection: an Unsupervised vs Semi-Supervised Approach

    The need for renewable energy has been growing in recent years for the reasons we all know, and wind power is no exception. Wind turbines are complex and expensive structures, and the need for maintenance exists. Condition Monitoring Systems that make use of supervised machine learning techniques have recently been studied and the results are quite promising. However, such systems still require the physical presence of professionals, but with the advantage of gaining insight into the operating state of the machine in use, so that maintenance interventions can be decided upon beforehand. Wind turbine failure is not an abrupt process but a gradual one. The main goals of this dissertation are: to compare semi-supervised methods for attacking the problem of automatic recognition of anomalies in wind turbines; to develop an approach combining the Mahalanobis Taguchi System (MTS) with two popular fuzzy partitional clustering algorithms, fuzzy c-means and archetypal analysis, for the purpose of anomaly detection; and finally to develop an experimental protocol for comparatively studying the two types of algorithms. In this work, the algorithms Local Outlier Factor (LOF), Connectivity-based Outlier Factor (COF), Cluster-based Local Outlier Factor (CBLOF), Histogram-based Outlier Score (HBOS), k-nearest-neighbours (k-NN), Subspace Outlier Detection (SOD), Fuzzy c-means (FCM), Archetypal Analysis (AA) and Local Minimum Spanning Tree (LoMST) were explored. The data used consisted of SCADA data sets of turbine sensor data, 8 in total, from a wind farm in the north of Portugal. Each data set comprises between 1070 and 1096 data cases and is characterized by 5 features, for the years 2011, 2012 and 2013. The analysis of the results using 7 different validity measures shows that the CBLOF algorithm achieved the best results in the semi-supervised approach, while LoMST won in the unsupervised scenario.
The extensions of both FCM and AA obtained promising results.
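One of the unsupervised detectors listed above, the k-NN outlier score, assigns each point the distance to its k-th nearest neighbour as its anomaly score; isolated points score high. A minimal sketch on synthetic 2-D data follows; the SCADA features are not reproduced, and both k and the data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
normal = rng.normal(0, 1, size=(100, 2))   # dense "normal" operating cluster
anomaly = np.array([[8.0, 8.0]])           # one isolated, anomalous sample
data = np.vstack([normal, anomaly])

def knn_scores(points, k=5):
    """Anomaly score of each point: distance to its k-th nearest neighbour."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    dists.sort(axis=1)
    return dists[:, k]   # column 0 is the zero self-distance, so column k is the k-th NN

scores = knn_scores(data)
top = int(np.argmax(scores))   # index of the injected anomaly (last row)
```

The same score-and-rank pattern underlies most of the density-based detectors above; LOF and COF refine it by normalizing against the local neighbourhood density.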

    Sensor data-based decision making

    Increasing globalization and growing industrial system complexity have amplified interest in the use of information provided by sensors as a means of improving overall manufacturing system performance and maintainability. However, the utilization of sensors can only be effective if the real-time data can be integrated into the necessary business processes, such as production planning, scheduling, and execution systems. This integration requires the development of intelligent decision-making models that can effectively process the sensor data into information and suggest appropriate actions. To be able to improve the performance of a system, the health of the system also needs to be maintained. In many cases a single sensor type cannot provide sufficient information for complex decision making, including diagnostics and prognostics of a system. Therefore, a combination of sensors should be used in an integrated manner in order to achieve desired performance levels. Sensor-generated data need to be processed into information through the use of appropriate decision-making models in order to improve overall performance. In this dissertation, which is presented as a collection of five journal papers, several reactive and proactive decision-making models that utilize data from single- and multi-sensor environments are developed. The first paper presents a testbed architecture for Auto-ID systems. An adaptive inventory management model which utilizes real-time RFID data is developed in the second paper. In the third paper, a complete hardware and inventory management solution, which involves the integration of RFID sensors into an extremely low-temperature industrial freezer, is presented. The last two papers in the dissertation deal with diagnostic and prognostic decision-making models to assure the healthy operation of a manufacturing system and its components.
In the fourth paper, a Mahalanobis-Taguchi System (MTS) based prognostics tool is developed and used to estimate the remaining useful life of rolling element bearings using data acquired from vibration sensors. In the final paper, an MTS-based prognostics tool is developed for a centrifugal water pump; it fuses information from multiple types of sensors in order to make diagnostic and prognostic decisions for the pump and its components. --Abstract, page iv
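The prognostic idea of tracking MD as a health indicator and extrapolating its trend to a failure threshold, to estimate remaining useful life (RUL), can be sketched as follows. The degradation series, the threshold, and the linear trend model are illustrative assumptions, far simpler than the tools described above:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(50)                                     # operating cycles observed so far
md_series = 1.0 + 0.2 * t + rng.normal(0, 0.3, 50)    # MD drifting upward as the part degrades
threshold = 20.0                                      # assumed MD level at functional failure

# Fit a linear degradation trend and extrapolate to the threshold crossing.
slope, intercept = np.polyfit(t, md_series, 1)
t_fail = (threshold - intercept) / slope              # predicted crossing time
rul = t_fail - t[-1]                                  # cycles remaining from the last observation
```

Real bearing degradation is rarely linear, so a fielded tool would use a better-suited trend model and propagate the fit uncertainty into an RUL distribution rather than a point estimate.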

    CONSTRAINED MULTI-GROUP PROJECT ALLOCATION USING MAHALANOBIS DISTANCE

    Optimal allocation is one of the most active research areas in operations research using binary integer variables. The allocation of multi-constrained projects among the several options available over a given planning horizon is an especially significant problem in the general area of item classification. The main goal of this dissertation is to develop an analytical approach for selecting projects that would be most attractive from an economic point of view to be developed or allocated among several options, such as in-house engineers and private contractors (in transportation projects). A relevant limiting resource, in addition to the availability of funds, is in-house manpower availability. In this thesis, the concept of Mahalanobis distance (MD) is used as the classification criterion. This is a generalization of the Euclidean distance that takes into account the correlation of the characteristics defining the scope of a project. The desirability of allocating a given project to an option is defined in terms of its MD to that particular option. Ideally, each project should be allocated to its closest option. This, however, may not be possible because of the available levels of each relevant resource. The allocation process is formulated mathematically using two Binary Integer Programming (BIP) models. The first formulation maximizes the dollar value of benefits derived by the traveling public from the projects being implemented, subject to budget, total-MD, and in-house manpower constraints. The second formulation minimizes the total sum of MD subject to budget and in-house manpower constraints. The proposed solution methodology for the BIP models is based on the branch-and-bound method.
In particular, one of the contributions of this dissertation is the development of a strategy for branching variables and node selection that is consistent with allocation priorities based on MD, improving branch-and-bound performance and handling large-scale applications. The suggested allocation process includes: (a) multiple allocation groups; (b) multiple constraints; (c) different BIP models. Numerical experiments with different projects and options are considered to illustrate the application of the proposed approach.
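On a tiny instance, the second formulation above (minimize total MD subject to an in-house manpower constraint) can be checked by exhaustive enumeration of the binary assignments instead of branch-and-bound. All MD values, manpower figures, and the capacity below are invented for illustration:

```python
import itertools
import numpy as np

# md[i][j]: Mahalanobis distance of project i to option j
# (option 0 = in-house engineers, option 1 = private contractor).
md = np.array([[1.0, 4.0],
               [3.5, 1.2],
               [2.0, 2.1],
               [5.0, 1.5]])
manpower = np.array([3, 2, 4, 6])   # in-house manpower each project would need
capacity = 7                        # total in-house manpower available

best = None
for assign in itertools.product([0, 1], repeat=4):   # one option per project
    used = sum(m for m, a in zip(manpower, assign) if a == 0)
    if used > capacity:
        continue                                     # violates the manpower constraint
    cost = sum(md[i, a] for i, a in enumerate(assign))
    if best is None or cost < best[0]:
        best = (cost, assign)

total_md, assignment = best   # projects 0 and 2 stay in-house; 1 and 3 are contracted out
```

Enumeration scales as 2^n and is only viable for toy instances; the dissertation's contribution is precisely the MD-guided branching that makes branch-and-bound practical at scale.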

    Multivariate knock detection for development and production applications

    Combustion knock is a limiting factor in the efficiency of spark-ignition internal combustion engines. Therefore, optimization of design and control dictates that an engine must operate as close to the knock limit as possible without allowing knock to occur. This is the challenge presented to knock detection systems. In-cylinder pressure techniques are considered the most reliable method for knock detection; however, installation of pressure transducers in the combustion chamber is both difficult and expensive. This leads to the requirement for a low-cost, non-intrusive alternative. Although current vibration-based methods meet these requirements, their susceptibility to background noise greatly reduces their effectiveness. Thus, the goal of consistently achieving the optimal operating conditions cannot be met. This research involves the use of multivariate analysis of vibration-based knock signals to improve detection system reliability through an enhanced signal-to-noise ratio. The proposed techniques apply a relatively new philosophy developed by Genichi Taguchi for pattern recognition based on the statistical parameter Mahalanobis Distance. Application of these methods results in the development of a new knock detection strategy which shows a significant improvement in determining the presence of knock. The development and validation of this vibration-based system required the use of in-cylinder pressure data for initial classification of knocking and non-knocking operation. This necessitated an independent study to validate the pressure transducer type and mounting location. The results of this study are detailed herein.
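A plausible signal-processing front end for such a system, offered here as an assumption rather than the thesis's actual method, extracts band-limited spectral energies from the vibration signal: knock excites characteristic resonance bands, and the resulting multivariate feature vector is what a Mahalanobis-distance classifier would consume. The sample rate, resonance frequency, bands, and signals below are all synthetic:

```python
import numpy as np

fs = 50_000                          # sample rate in Hz (illustrative)
t = np.arange(0, 0.02, 1 / fs)       # one 20 ms analysis window
rng = np.random.default_rng(5)

def band_energy(signal, lo, hi):
    """Total spectral energy of `signal` between lo and hi Hz."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return float(spec[(freqs >= lo) & (freqs < hi)].sum())

noise = rng.normal(0, 1, t.size)                     # non-knocking background vibration
knock = noise + 5 * np.sin(2 * np.pi * 6_000 * t)    # knock excites a 6 kHz resonance

# Two-band feature vectors: the knock band (5-7 kHz) and a reference band.
features_normal = [band_energy(noise, 5_000, 7_000),
                   band_energy(noise, 10_000, 12_000)]
features_knock = [band_energy(knock, 5_000, 7_000),
                  band_energy(knock, 10_000, 12_000)]
# Knock raises energy in its resonance band while the reference band is unchanged.
```

Comparing such feature vectors against a non-knocking baseline via Mahalanobis distance, rather than thresholding a single band, is what gives the multivariate approach its noise robustness.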

    Assessment of contributing factors to the reduction of diarrhea in rural communities of Para, Brazil

    In developing communities, diarrhea has been reported at elevated levels compared to communities in more developed regions. Diarrheal diseases were linked to over one million deaths worldwide in 2012. While multiple pathways exist for the transmission of diarrheal diseases, water has been the focus of many aid organizations. Point-of-use (POU) water treatment methods are a common tool used by aid organizations in efforts to provide potable water. The CAWST biosand filter is a POU tool that has shown removal effectiveness against pathogenic microorganisms ranging from 90-99%. However, minimal literature was found reporting on the effectiveness of the filter within the larger complex system found in all communities. Therefore, a hypothesis was formulated that the intervention of a CAWST biosand filter is the most significant factor in the reduction of the diarrheal health burden within households in developing regions. Communities located along the Amazon River in Para, Brazil were selected for study. Structural Equation Modeling (SEM) was utilized to represent the complex set of relationships within the communities. The Mahalanobis-Taguchi Strategy (MTS) was also used to confirm variable significance in the SEM model. Results show that while the biosand filter does aid in the reduction of diarrheal occurrences, it is not the most significant factor. Results varied as to which factor influenced diarrheal occurrences the most, but consistently included education, economic status, and sanitation. Further, the MTS analysis identified education as the largest factor influencing household health. Continued work is needed to further understand these factors and their relationships to diarrhea reduction. --Abstract, page iv

    PixColor: Pixel Recursive Colorization

    We propose a novel approach to automatically produce multiple colorized versions of a grayscale image. Our method results from the observation that the task of automated colorization is relatively easy given a low-resolution version of the color image. We first train a conditional PixelCNN to generate a low-resolution color image for a given grayscale image. Then, given the generated low-resolution color image and the original grayscale image as inputs, we train a second CNN to generate a high-resolution colorization of the image. We demonstrate that our approach produces more diverse and plausible colorizations than existing methods, as judged by human raters in a "Visual Turing Test".