
    A quality correlation algorithm for tolerance synthesis in manufacturing operations

    Clause 6.1 of the ISO9001:2015 quality standard requires organisations to take specific actions to determine and address risks and opportunities in order to minimize undesired effects in the process and achieve process improvement. This paper proposes a new quality correlation algorithm to optimise tolerance limits of process variables across multiple processes. The algorithm uses reduced p-dimensional principal component scores to determine optimal tolerance limits and also embeds ISO9001:2015's risk based thinking approach. The corresponding factor and response variable pairs are chosen by analysing the mixed data set formulation proposed by Giannetti et al. (2014) and the co-linearity index algorithm proposed by Ransing et al. (2013). The goal of this tolerance limit optimisation problem is to make several small changes to the process in order to reduce undesired process variation. The optimal and avoid ranges of multiple process parameters are determined by analysing in-process data on categorical as well as continuous variables, with process responses transformed using the risk based thinking approach. The proposed approach is illustrated by analysing in-process chemistry data for a nickel based alloy used to manufacture cast components in an aerospace foundry. It is also demonstrated how the approach embeds risk based thinking into the in-process quality improvement process as required by the ISO9001:2015 standard.
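    A minimal sketch of the general idea, not the published quality correlation algorithm: project scaled in-process data onto reduced principal component scores, identify the score region most strongly associated with the desired response, and read off candidate tolerance windows for each input from the observations inside it. The data, variable names and thresholds below are illustrative assumptions.

```python
# Illustrative sketch only, not the published QCA: use reduced principal
# component scores to locate a "desired response" region and derive candidate
# tolerance windows for each process input from the observations inside it.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                                  # in-process variables (synthetic)
y = X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=200)  # process response (synthetic)

scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))

# keep observations on the favourable side of the score axis most correlated with y
corr = np.array([np.corrcoef(scores[:, k], y)[0, 1] for k in range(scores.shape[1])])
k = int(np.argmax(np.abs(corr)))
signed = scores[:, k] * np.sign(corr[k])
good = signed >= np.quantile(signed, 0.75)

for j in range(X.shape[1]):
    lo, hi = np.quantile(X[good, j], [0.1, 0.9])
    print(f"x{j}: candidate tolerance window [{lo:.2f}, {hi:.2f}]")
```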

    "If only my foundry knew what it knows …": A 7Epsilon perspective on root cause analysis and corrective action plans for ISO9001:2008

    The famous quotes of a former Chairman, President and CEO of Texas Instruments and a Chairman of HP, "if only we knew what we know", are very much applicable to the foundry industry. Despite the many advances made in foundry technologies relating to simulation software, moulding machines, binder formulation and alloy development, poor quality remains a major issue that affects many foundries, not only in terms of lost revenues but also by contributing to negative environmental impacts. On an annual casting production of 95 million tonnes, assuming that on average 5% defective castings are produced at a production cost of 1.2 € per kg for ferrous alloys, the foundry industry is losing 5.7 billion €, producing landfill waste well in excess of two million tonnes and releasing just under two million tonnes of CO2 emissions. Foundries hold a vast body of knowledge that is waiting to be tapped, documented, shared and reused in order to realise the saving potential of 5.7 billion € per year. This ambitious goal can only be achieved by developing effective knowledge management strategies to create, retain and re-use foundry and product specific process knowledge whilst supporting a smart and sustainable growth strategy. This is the focus of 7Epsilon (7ε), an innovative methodology led by Swansea University along with a consortium of European universities and research organisations. At the core of 7ε capabilities is casting process optimisation, which is defined as a methodology of using existing casting process knowledge to discover new process knowledge by studying patterns in data [1]. According to the 7ε terminology, casting process knowledge is actionable information in the form of a list of measurable factors and their optimal ranges to achieve a desired business goal [1, 2]. In this paper a penalty matrix approach is described for discovering main effects and interactions among process factors and responses by analysing data collected during a stable casting process. Through a practical case study it is shown how this technique can be used as an effective tool in the root cause analysis of nonconforming products in the implementation of ISO9001:2008 requirements for continual improvement. In addition, some practical aspects concerning the development of a knowledge management repository to store and retrieve foundry process knowledge are discussed. A template to document and structure foundry and product specific process knowledge is proposed so that knowledge can be stored and retrieved more efficiently by process engineers and managers, with the final aim of improving process operations and reducing defect rates, taking a significant step towards achieving zero defect manufacturing.
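    The headline loss figure follows directly from the assumptions quoted above (95 million tonnes of annual production, 5% average scrap, 1.2 € per kg); a quick check:

```python
# Reproduce the scrap-cost estimate quoted above from its stated assumptions.
annual_production_t = 95e6   # global annual casting production, tonnes
scrap_rate = 0.05            # assumed average share of defective castings
cost_per_kg_eur = 1.2        # production cost for ferrous alloys, EUR/kg

scrap_t = annual_production_t * scrap_rate
loss_eur = scrap_t * 1000 * cost_per_kg_eur
print(f"scrap: {scrap_t / 1e6:.2f} million tonnes, loss: {loss_eur / 1e9:.1f} billion EUR")
# -> scrap: 4.75 million tonnes, loss: 5.7 billion EUR
```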

    Seven Steps to Energy Efficiency for Foundries

    Steve Robinson of the American Foundrymen Society once argued that foundries with a 4% profit margin need to find new sales revenue of US$1 million to generate US$40,000 in operating profits. The case study presented in this paper highlights how an in-process quality improvement exercise resulted in an annual saving of US$144,000 by studying in-process data in a melt room on 25 process inputs. Foundries are an energy intensive industry: energy costs for foundries are around 15% of the cost of castings. In recent years foundries have become energy aware and many have installed energy meters with on-line energy monitoring systems to report energy consumption (kWh) per tonne, charge or furnace with varying sampling frequency. This paper highlights how the 7 Steps of 7Epsilon were implemented and in-process data for a foundry was visualised using penalty matrices to discover energy saving opportunities. With ISO 9001:2015 on the horizon there is an urgent need to change the foundry culture, across the world, towards capturing, storing and reusing in-process data as well as organisational knowledge in order to demonstrate in-process quality improvement. The 7Epsilon approach offers a structured methodology for organisational knowledge management as well as in-process quality improvement.
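    The margin arithmetic behind the opening argument, together with the new-sales figure implied by the quoted saving (the US$3.6 million equivalence is an inference from the 4% margin, not a number reported in the paper):

```python
# Check the 4% margin arithmetic and the sales-equivalent of the quoted saving.
margin = 0.04
new_sales = 1_000_000
print(margin * new_sales)       # 40000.0 USD operating profit, as quoted

annual_saving = 144_000         # annual saving from the melt-room case study
print(annual_saving / margin)   # 3.6 million USD of new sales would be needed to
                                # earn the same profit at a 4% margin (inferred)
```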

    Manufacturing Process Causal Knowledge Discovery using a Modified Random Forest-based Predictive Model

    A Modified Random Forest algorithm (MRF)-based predictive model is proposed for use in manufacturing processes to estimate the effects of several potential interventions, such as (i) altering the operating ranges of selected continuous process parameters within specified tolerance limits, (ii) choosing particular categories of discrete process parameters, or (iii) choosing combinations of both types of process parameters. The model introduces a non-linear approach to defining the most critical process inputs by scoring the contribution made by each process input to the process output prediction power. It uses this contribution to discover optimal operating ranges for the continuous process parameters and/or optimal categories for discrete process parameters. The set of values used for the process inputs was generated from operating ranges identified using a novel Decision Path Search (DPS) algorithm and Bootstrap sampling. The odds ratio is the ratio between the occurrence probabilities of desired and undesired process output values. The effects of potential interventions, or of proposed confirmation trials, are quantified as posterior odds and used to calculate conditional probability distributions. The advantages of this approach are discussed in comparison to fitting these probability distributions to Bayesian Networks (BN). The proposed explainable data-driven predictive model is scalable to a large number of process factors with non-linear dependence on one or more process responses. It allows the discovery of data-driven process improvement opportunities that involve minimal interaction with domain expertise. An iterative Random Forest algorithm is proposed to predict the missing values for the mixed dataset (continuous and categorical process parameters). It is shown that the algorithm is robust even at high proportions of missing values in the dataset. The number of observations available in manufacturing process datasets is generally low, e.g. of a similar order of magnitude to the number of process parameters. Hence, Neural Network (NN)-based deep learning methods are generally not applicable, as these techniques require 50-100 times more observations than input factors (process parameters). The results are verified on a number of benchmark examples with datasets published in the literature. The results demonstrate that the proposed method outperforms the comparison approaches in terms of accuracy and causality, with linearity assumed. Furthermore, the computational cost is far lower and remains feasible for heterogeneous datasets.
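    The role of the odds ratio can be illustrated with a generic random forest sketch (scikit-learn) rather than the proposed MRF/DPS algorithm itself; the synthetic data, the candidate operating range and the model settings below are assumptions for illustration only.

```python
# Illustrative sketch only (not the published MRF/DPS algorithm): use a random
# forest to estimate how restricting one input to a candidate operating range
# shifts the odds of a desired process outcome.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                       # process parameters (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("input contributions:", model.feature_importances_.round(2))

def odds(mask):
    p = model.predict_proba(X[mask])[:, 1].mean()   # estimated P(desired outcome)
    return p / (1 - p)

baseline = np.ones(len(X), dtype=bool)
intervention = (X[:, 0] > 0.0) & (X[:, 0] < 1.5)    # candidate operating range for x0
print("posterior odds ratio:", odds(intervention) / odds(baseline))
```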

    A novel imputation based predictive algorithm for reducing common cause variation from small and mixed datasets with missing values

    Most process control algorithms need a predetermined target value as an input for a process variable so that the deviation can be observed and minimized. In this paper, a novel machine learning algorithm is proposed that is able not only to suggest new target values for both categorical and continuous variables to minimize process output variation, but also to predict the extent to which the variation can be minimized. In foundry processes, an average rejection rate of 3%–5% within batches of castings produced is considered acceptable and is treated as an effect of common cause variation. As a result, the operating range for process input values is often not changed during root cause analysis. The relevant available historical process data is normally limited, with missing values, and it combines both categorical and continuous variables (a mixed dataset). However, technological advancements in manufacturing processes provide opportunities to further refine process inputs in order to minimize undesired variation in process outputs. A new linear regression based algorithm is proposed to achieve lower prediction error in comparison to the commonly used linear factor analysis for mixed data (FAMD) method. This algorithm is further coupled with a novel missing data algorithm to predict the process response values corresponding to a given set of values for process inputs. This enables the novel imputation based predictive algorithm to quantify the effect of a confirmation trial based on the proposed changes in the operating ranges of one or more process inputs. A set of values for optimal process inputs is generated from operating ranges discovered by a recently proposed quality correlation algorithm (QCA) using a Bootstrap sampling method. The odds ratio, which represents the ratio between the probabilities of occurrence of desired and undesired process output values, is used to quantify the effect of a confirmation trial. The limitations of the underlying PCA based linear model are discussed and future research areas are identified.
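    A generic iterative-imputation sketch using scikit-learn's experimental IterativeImputer, shown only to illustrate the idea of regressing columns with missing entries on the remaining columns; it is not the imputation algorithm proposed in the paper, and the dataset size and missingness rate are assumptions.

```python
# Generic iterative-imputation sketch (scikit-learn), not the paper's algorithm:
# regress each column with missing entries on the others and cycle until stable.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 4))               # small dataset, as is typical in foundries
mask = rng.random(X.shape) < 0.15          # knock out ~15% of entries
X_missing = X.copy()
X_missing[mask] = np.nan

X_filled = IterativeImputer(max_iter=20, random_state=0).fit_transform(X_missing)
print("mean absolute imputation error:", np.abs(X_filled[mask] - X[mask]).mean())
```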

    Risk based uncertainty quantification to improve robustness of manufacturing operations

    The cyber-physical systems of Industry 4.0 are expected to generate vast amounts of in-process data and revolutionise the way data, knowledge and wisdom are captured and reused in manufacturing industries. The goal is to increase profits by dramatically reducing the occurrence of unexpected process results and waste. ISO9001:2015 defines risk as the effect of uncertainty. In the 7Epsilon context, risk is defined as the effect of uncertainty on expected results. This paper proposes a novel algorithm to embed risk based thinking in quantifying uncertainty in manufacturing operations during the tolerance synthesis process. The method uses penalty functions to mathematically represent deviation from expected results and solves the tolerance synthesis problem by proposing a quantile regression tree approach. The latter involves non-parametric estimation of conditional quantiles of a response variable from in-process data and allows process engineers to discover and visualise optimal ranges that are associated with quality improvements. In order to quantify uncertainty and predict process robustness, a probabilistic approach based on the likelihood ratio test with bootstrapping is proposed, which uses smoothed estimation of conditional probabilities. The mathematical formulation presented in this paper will allow organisations to extend Six Sigma process improvement principles in the Industry 4.0 context and implement the 7 steps of 7Epsilon in order to satisfy the requirements of clauses 6.1 and 7.1.6 of the ISO9001:2015 and aerospace AS9100:2016 quality standards.
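    A minimal quantile-regression-tree sketch: fit a decision tree on the process inputs and report conditional quantiles of the response within each leaf, so that input regions associated with more favourable response quantiles can be inspected. This illustrates the general technique rather than the paper's formulation; the synthetic data and tree settings are assumptions.

```python
# Minimal quantile-regression-tree sketch (illustrative, not the paper's method):
# fit a decision tree, then report conditional quantiles of the response in each
# leaf so that candidate "optimal" input regions can be inspected.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.uniform(size=(400, 3))                                 # process parameters (synthetic)
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.2, size=400)    # process response (synthetic)

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=30, random_state=0).fit(X, y)
leaves = tree.apply(X)                                         # leaf id for each observation
for leaf in np.unique(leaves):
    q10, q90 = np.quantile(y[leaves == leaf], [0.1, 0.9])
    print(f"leaf {leaf}: conditional 10%-90% response range [{q10:.2f}, {q90:.2f}]")
```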

    Organisational Knowledge Management for Defect Reduction and Sustainable Development in Foundries

    Despite many advances in the field of casting technologies the foundry industry still incurs significant losses due to the cost of scrap and rework, with adverse effects on profitability and the environment. Approaches such as Six Sigma, DoE and FMEA are used by foundries to address quality issues. However, these approaches lack support to manage the heterogeneous knowledge created during process improvement activities. The proposed revision of the ISO9001:2015 quality standard puts emphasis on retaining organisational knowledge and its continual use in process improvement (ISO, 2014). In this paper a novel framework for the creation, storage and reuse of product specific process knowledge is presented. The framework is reviewed taking into consideration theoretical perspectives of organisational knowledge management as well as addressing the challenges concerning its practical implementation. A knowledge repository concept is introduced to demonstrate how organisational knowledge can be effectively stored and reused for achieving continual process improvement and sustainable development.

    A bootstrap method for uncertainty estimation in quality correlation algorithm for risk based tolerance synthesis

    A risk based tolerance synthesis approach is based on the ISO9001:2015 quality standard's risk based thinking. It analyses in-process data to discover correlations among regions of input data scatter and desired or undesired process outputs. Recently, Ransing, Batbooti, Giannetti, and Ransing (2016) proposed a quality correlation algorithm (QCA) for risk based tolerance synthesis. The quality correlation algorithm is based on principal component analysis (PCA) and a co-linearity index concept (Ransing, Giannetti, Ransing, & James, 2013). The uncertainty in QCA results on mixed data sets is quantified and analysed in this paper. The uncertainty is quantified using a bootstrap sampling method with bias-corrected and accelerated confidence intervals. The co-linearity indices use the lengths and cosine angles of loading vectors in a p-dimensional space. The uncertainty for all p loading vectors is shown in a single co-linearity index plot and is used to quantify the uncertainty in predicting optimal tolerance limits. The effects of re-sampling distributions are analysed. The QCA tolerance limits are revised after estimating the uncertainty in the limits via bootstrap sampling. The proposed approach is demonstrated by analysing in-process data from a previously published case study.
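    A sketch of the bootstrap step: resample observations, recompute a cosine-based co-linearity index between the loading vectors of an input and the response, and summarise the spread. Plain percentile intervals are used here for brevity, whereas the paper uses bias-corrected and accelerated intervals, and the index below is a simplified stand-in for the published definition.

```python
# Bootstrap uncertainty sketch for a cosine-based co-linearity index between the
# loading vectors of one process input and the response (simplified illustration).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
X = rng.normal(size=(150, 5))                                  # process inputs (synthetic)
y = X[:, 0] - X[:, 3] + rng.normal(scale=0.3, size=150)        # response (synthetic)
data = StandardScaler().fit_transform(np.column_stack([X, y]))

def colinearity_index(sample, var=0, resp=-1, n_components=3):
    L = PCA(n_components=n_components).fit(sample).components_.T  # p-dimensional loading vectors
    a, b = L[var], L[resp]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))        # cosine of angle

stats = [colinearity_index(data[rng.integers(0, len(data), len(data))])
         for _ in range(500)]                                     # row resampling
lo, hi = np.percentile(stats, [2.5, 97.5])
print(f"co-linearity index 95% bootstrap interval: [{lo:.2f}, {hi:.2f}]")
```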

    Seleção de variáveis aplicada ao controle estatístico multivariado de processos em bateladas [Variable selection applied to multivariate statistical process control of batch processes]

    This dissertation presents propositions for the use of variable selection in the improvement of multivariate statistical process control (MSPC) of batch processes, in order to contribute to the enhancement of industrial processes' quality. There are six objectives: (i) identify MSPC limitations in industrial process monitoring; (ii) understand how methods of variable selection are used to improve the monitoring of high dimensional processes; (iii) discuss methods for the alignment and synchronization of batches with different durations; (iv) define the most adequate alignment and synchronization method for batch data treatment, aiming to improve Phase I of process monitoring; (v) propose variable selection for classification prior to establishing multivariate control charts (MCC) based on principal component analysis (PCA) to monitor a batch process; and (vi) validate the fault detection performance of the proposed MCC in comparison with traditional and PCA-based charts. The performance of the proposed method was evaluated in a case study using real data from an industrial food process. Results showed that performing variable selection prior to establishing the MCC contributed to efficiently reduce the number of variables to be analysed and overcome the limitations found in fault detection when high dimensional datasets are monitored. We conclude that, by enabling control charts widely used in industry to accommodate high dimensional datasets, the proposed method adds innovation to the area of batch process monitoring and contributes to the generation of high quality standard products.
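    A compact sketch of the overall idea: select a small subset of variables with a simple supervised filter, then monitor a PCA-based Hotelling T² statistic with a control limit estimated from Phase I data. The selector, the empirical control limit and the synthetic batch data are assumptions for illustration and do not reproduce the thesis' exact procedure.

```python
# Illustrative sketch (not the thesis' exact procedure): rank variables with a
# simple supervised selector, keep the top few, then monitor a PCA-based
# Hotelling T^2 statistic with an empirical Phase I control limit.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
X_normal = rng.normal(size=(200, 30))                    # Phase I (in-control) batches
X_faulty = rng.normal(size=(40, 30))
X_faulty[:, 3] += 3.0                                    # one shifted variable (fault)

selector = SelectKBest(f_classif, k=5).fit(
    np.vstack([X_normal, X_faulty]),
    np.r_[np.zeros(len(X_normal)), np.ones(len(X_faulty))])
Xn = selector.transform(X_normal)

scaler = StandardScaler().fit(Xn)
pca = PCA(n_components=4).fit(scaler.transform(Xn))

def t2(X):
    s = pca.transform(scaler.transform(selector.transform(X)))
    return np.sum(s**2 / pca.explained_variance_, axis=1)   # Hotelling T^2 in PCA subspace

limit = np.quantile(t2(X_normal), 0.99)                  # empirical 99% control limit
print("fraction of faulty batches flagged:", np.mean(t2(X_faulty) > limit))
```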