42 research outputs found

    Development and application of process capability indices

    Several process capability indices have been proposed to measure the performance of manufacturing processes. A process capability index (PCI) is a unitless number that measures the ability of a process to consistently produce products meeting customer specifications. These indices have helped practitioners understand and improve their production systems, but no single index can fully measure the performance of an observed process; each index has drawbacks that can be complemented by using others. The advantages of commonly used indices in assessing different aspects of process performance are highlighted. Quality cost is also a function of shifts in mean, variance and yield. A hybrid is developed that combines the strengths of individual indices and selects the smallest set of indices giving the practitioner detailed information on shifts in mean or variance, the location of the mean, yield and potential capability. It is validated that, while no single index can fully assess and measure the performance of a univariate normal process, the optimal set of indices selected by the proposed hybrid can simultaneously provide precise information on all of these aspects. In a simulation study, the process variability was increased by 100% and then reduced by 50%, and the optimal set detected both shifts. The asymmetric ratio detected both a 10% decrease and a 20% increase in µ but did not change significantly under a 50% decrease or a 100% increase in σ, showing that it is not sensitive to shifts in σ. The implemented hybrid provides the quality practitioner, or a computer-aided manufacturing system, with a guideline on prioritised tasks for improving process capability and reducing the cost of poor quality.
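The univariate indices such a hybrid draws on can be sketched with their textbook definitions; the function name and sample data below are illustrative, not taken from the study:

```python
import math

def pci(samples, lsl, usl, target=None):
    """Common univariate process capability indices.

    Cp  -- potential capability: specification width over process spread.
    Cpk -- actual capability: penalises an off-centre mean.
    Cpm -- Taguchi index: penalises deviation from the target.
    """
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / (n - 1))
    if target is None:
        target = (usl + lsl) / 2  # assume a centred target if none given
    tau = math.sqrt(sigma ** 2 + (mu - target) ** 2)
    return {
        "Cp": (usl - lsl) / (6 * sigma),
        "Cpk": min(usl - mu, mu - lsl) / (3 * sigma),
        "Cpm": (usl - lsl) / (6 * tau),
    }
```

A shift in the mean leaves Cp unchanged but lowers Cpk and Cpm, which is exactly why no single index suffices and a complementary set is needed.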
The author extended the proposed hybrid to fully measure the performance of a process with multiple quality characteristics that follow a normal distribution and are correlated. For multivariate normal processes with correlated quality characteristics, process capability analysis is not complete without fault diagnostics: the identification and ranking of the quality characteristics responsible for poor multivariate process performance. Quality practitioners want to identify and rank these characteristics in order to prioritise resources for process quality improvement tasks, thereby speeding up improvement and minimising quality costs. To date, none of the commonly used source identification approaches can classify whether the process behaviour is caused by a shift in mean or a change in variance. The author has proposed a source identification algorithm based on mean and variance impact factors to address this shortcoming, and has further developed a novel fault diagnostic hybrid based on the proposed optimal set selection algorithm, principal component analysis, machine learning, and the proposed impact factors. The novelty of this hybrid is that it carries out a full multivariate process capability analysis and provides a robust tool to precisely identify and rank the quality characteristics responsible for shifts in mean, variance and yield. It can guide practitioners in identifying and prioritising the quality characteristics responsible for poor process performance, thereby reducing quality cost by speeding up multivariate process improvement tasks. Simulated scenarios were generated that increase/decrease some components of the mean vector (µ2/µ4) and increase/reduce the variability of some components (σ1 reduced to close to zero, σ6 increased by 100%).
The hybrid ranked X2 and X6 as the variables contributing most to the poor process performance, and X1 and X4 as the major contributors to process yield. Carrying out process capability analysis and fault diagnostics on a high-dimensional multivariate non-normal process, with multiple correlated quality characteristics, in a timely manner is a great challenge. The author has developed a multivariate non-normal fault diagnostic hybrid capable of assessing performance and performing fault diagnostics on multivariate non-normal processes. The proposed hybrid first utilises the Geometric Distance (GD) approach to reduce the dimensionality of the correlated data to a smaller number of independent GD variables, which can be assessed using univariate process capability indices. A Burr XII distribution is then fitted to each independent GD variable, and the fitted distributions are used to estimate both yield and multivariate process capability in a time-efficient way. Finally, a machine learning approach is deployed to carry out fault diagnostics by identifying and ranking the correlated quality characteristics responsible for the poor performance of the least performing GD variable. The results show that the proposed hybrid is robust in estimating both yield and multivariate process capability, carrying out fault diagnostics beyond GD variables, and identifying the original characteristics responsible for poor performance. The novelty of the proposed non-normal fault diagnostic hybrid is that it considers only the quality characteristics related to the least performing GD variable, instead of investigating all the quality characteristics of the multivariate non-normal process. The efficacy of the proposed hybrid is assessed through a real manufacturing example and simulated scenarios.
Variables X1, X2 and X3 were shifted away from the target by 25%, 15% and 35%, respectively, and the hybrid identified X3 as contributing the most to the corresponding geometric distance variable's poor performance.
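The GD reduction step described above can be sketched as follows, assuming a Euclidean distance from the target vector and a one-sided Cpu ratio for ranking GD variables; the Burr XII fitting and machine learning stages are omitted, and all names and data are illustrative:

```python
import math

def geometric_distance(obs, targets):
    """Collapse a correlated observation vector into a single
    geometric-distance (GD) value: the Euclidean distance of the
    observation from its target vector.  GD is one-sided, so only
    an upper specification limit applies to it."""
    return math.sqrt(sum((x - t) ** 2 for x, t in zip(obs, targets)))

def least_performing_gd(groups, targets, usl_gd):
    """For each group of correlated characteristics, compute GD values
    over all observations and a one-sided capability ratio (Cpu);
    return the index of the worst (least capable) GD variable and all
    the ratios."""
    ratios = []
    for group, t, usl in zip(groups, targets, usl_gd):
        gds = [geometric_distance(obs, t) for obs in group]
        n = len(gds)
        mu = sum(gds) / n
        sigma = math.sqrt(sum((g - mu) ** 2 for g in gds) / (n - 1))
        ratios.append((usl - mu) / (3 * sigma))  # one-sided Cpu
    return ratios.index(min(ratios)), ratios
```

Fault diagnostics would then investigate only the original characteristics feeding the worst-ranked GD variable.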

    Supplier Relationship Management Based on a Process Capability Index for Multiple Characteristics

    The main requirement for production in a manufacturing company is quality raw material, so most manufacturers depend on their suppliers. Well-managed supplier relationships yield greater customer satisfaction, reduced costs, and better quality and service from suppliers. Supplier management covers supplier selection, clustering and development. Among supplier selection criteria, quality is one of the most important and has a positive impact on the manufacturer. Much research addresses supplier selection but stops there, with no follow-up on developing the suppliers that were or were not selected. In this research, nine suppliers were divided into two groups based on a process capability index for multiple characteristics, with the aim of proposing a development programme for each group. Group 1 comprises suppliers A, D, I, B and C; group 2 comprises suppliers E, F, H and G. Group 1 outperforms group 2 on all quality characteristics, namely bursting, tear strength, tensile strength and elongation. A development programme was then formulated for each group, proposing a development framework for supplier group 1, supplier group 2 and future supplier management.
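A possible sketch of grouping suppliers by capability over multiple characteristics follows. The minimum-Cpk rule and the 1.33 threshold are illustrative stand-ins for the multiple-characteristic index used in the study, and the data is invented:

```python
import math

def cpk(xs, lsl, usl):
    """Standard Cpk for one quality characteristic."""
    n = len(xs)
    mu = sum(xs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / (n - 1))
    return min(usl - mu, mu - lsl) / (3 * sigma)

def group_suppliers(measurements, specs, threshold=1.33):
    """measurements: {supplier: {characteristic: [values]}}
    specs: {characteristic: (lsl, usl)}
    A supplier joins group 1 when its weakest characteristic
    (minimum Cpk over all characteristics) meets the threshold;
    otherwise it falls into group 2."""
    group1, group2 = [], []
    for supplier, chars in measurements.items():
        worst = min(cpk(values, *specs[c]) for c, values in chars.items())
        (group1 if worst >= threshold else group2).append(supplier)
    return group1, group2
```

Grouping on the weakest characteristic ensures a supplier is only placed in the better group when every characteristic is capable.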

    Fuzzy Sets, Fuzzy Logic and Their Applications

    The present book contains 20 articles selected from the 53 manuscripts submitted to the Special Issue “Fuzzy Sets, Fuzzy Logic and Their Applications” of the MDPI journal Mathematics. The articles, which appear in the book in the order in which they were accepted, were published in Volumes 7 (2019) and 8 (2020) of the journal and cover a wide range of topics connected to the theory and applications of fuzzy systems and their extensions and generalizations. This range includes, among others, management of uncertainty in a fuzzy environment; fuzzy assessment methods of human-machine performance; fuzzy graphs; fuzzy topological and convergence spaces; bipolar fuzzy relations; type-2 fuzzy sets; and intuitionistic, interval-valued, complex, picture, and Pythagorean fuzzy sets, soft sets and algebras. The applications presented are oriented to finance, fuzzy analytic hierarchy, green supply chain industries, smart health practice, and hotel selection. This wide range of topics makes the book interesting for all those working in the wider area of fuzzy sets and systems and of fuzzy logic, and for those with the proper mathematical background who wish to become familiar with recent advances in fuzzy mathematics, which has entered almost all sectors of human life and activity.

    Manufacturing Feature Recognition With 2D Convolutional Neural Networks

    Feature recognition is a critical sub-discipline of CAD/CAM that focuses on the design and implementation of algorithms for the automated identification of manufacturing features. Feature recognition methods have been an active area of academic research for more than two decades. However, many drawbacks still hinder their practical application, such as lack of robustness, inability to learn, a limited domain of features, and computational complexity. The most critical is the difficulty of recognizing interacting features, which arises from the fact that feature interactions change the boundaries that are indispensable for characterizing a feature. This research presents a feature recognition method based on 2D convolutional neural networks (CNNs). First, a novel feature representation scheme based on the heat kernel signature is developed. The Heat Kernel Signature (HKS) is a concise and efficient pointwise shape descriptor that can represent both the topology and geometry of a 3D model. Besides being informative and unambiguous, it is robust to topology and geometry variations and invariant to translation, rotation and scale. To be input into CNNs, CAD models are discretized by tessellation; each model's heat persistence map is then transformed into 2D histograms by percentage similarity clustering and node embedding techniques. A large dataset of CAD models is built by random sampling to train the CNN models and validate the idea. The dataset includes ten different types of isolated features and fifteen pairs of interacting features. The results of recognizing isolated features show that our method performs better than existing ANN-based approaches. Our feature recognition framework offers the advantages of learning and generalization: it is independent of feature selection and can be extended to various features without any need to redesign the algorithm.
The results of recognizing interacting features indicate that the HKS feature representation scheme is effective in handling the boundary loss caused by feature interactions, and the state-of-the-art performance of interacting feature recognition has been improved.
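The HKS computation described above can be sketched from a Laplacian eigendecomposition. On a tessellated CAD model the operator would be the mesh's Laplacian; a tiny graph Laplacian stands in here as an assumption of this sketch:

```python
import numpy as np

def heat_kernel_signature(L, times):
    """Pointwise heat kernel signature from a (graph) Laplacian L:
    HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)**2,
    where (lambda_i, phi_i) are the eigenpairs of L.  Returns an
    array with one row per vertex and one column per diffusion
    time t."""
    lam, phi = np.linalg.eigh(L)
    return np.stack(
        [(np.exp(-lam * t) * phi ** 2).sum(axis=1) for t in times],
        axis=1,
    )

# Path graph on 4 vertices as a toy stand-in for a tessellated model
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
L = np.diag(adj.sum(axis=1)) - adj
H = heat_kernel_signature(L, times=[0.1, 1.0, 10.0])
```

Because HKS depends only on eigenvalues and squared eigenfunctions, symmetric points of the shape (here, the two end vertices of the path) receive identical signatures, illustrating the descriptor's invariance properties.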

    Development of a machine-tooling-process integrated approach for abrasive flow machining (AFM) of difficult-to-machine materials with application to oil and gas exploration components

    This thesis was submitted for the degree of Doctor of Engineering and awarded by Brunel University. Abrasive flow machining (AFM) is a non-traditional manufacturing technology used to expose a substrate to a pressurised multiphase slurry, comprised of superabrasive grit suspended in a viscous, typically polymeric carrier. Extended exposure to the slurry causes material removal, where the quantity removed is subject to complex interactions among over 40 variables. Flow is contained within boundary walls, complex in form, causing physical phenomena to alter the behaviour of the media. Prior to this research, engineers setting factors and levels had two options: embark upon a wasteful, inefficient and poor-capability trial-and-error process, or attempt to relate findings obtained in simple geometry to complex geometry through a series of transformations, providing information that could be applied repeatedly. By condensing process variables into appropriate study groups, it becomes possible to quantify output while manipulating only a handful of variables; those that remain un-manipulated are integral to the factors identified. Through factorial and response surface methodology experiment designs, data is obtained and interrogated before feeding into a simulated replica of a simple system. Correlation with physical phenomena is sought to identify the flow conditions that drive the location and magnitude of material removal. This correlation is then applied to complex geometry with relative success. It is found that prediction of viscosity through computational fluid dynamics can be used to estimate as much as 94% of the edge-rounding effect on final complex geometry. Surface finish prediction is lower (~75%), but shows a significant enough relationship to warrant further investigation.
Original contributions made in this doctoral thesis include: 1) a method of utilising computational fluid dynamics (CFD) to derive a suitable process model for the productive and reproducible control of the AFM process, including identification of the core physical phenomena responsible for driving erosion; 2) a comprehensive understanding of the effects of B4C-loaded polydimethylsiloxane variants used to process Ti6Al4V in the AFM process, including prediction equations containing numerically verified second-order interactions (factors for grit size, grain fraction and modifier concentration); 3) an equivalent understanding of the machine factors providing energy input, studying velocity, temperature and quantity, with verified predictions made from data collected in Ti6Al4V substrate material using response surface methodology; 4) a holistic method for translating process data in control geometry to an arbitrary geometry for industrial gain, extending to a framework for collecting new data and integrating it into current knowledge; and 5) application of the methodology using research-derived CFD to complex geometry, proven by measured process output. As a result of this project, four publications have been made to date: two peer-reviewed journal papers and two peer-reviewed international conference papers. Further publications will be made from June 2014 onwards. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) and the Technology Strategy Board (TSB).
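A second-order prediction equation of the kind named in contribution 2 (main effects, quadratic terms, and two-factor interactions) can be fitted by ordinary least squares. The data below is synthetic and the column ordering is an assumption of this sketch, not the thesis's model:

```python
import numpy as np

def fit_second_order(X, y):
    """Least-squares fit of a full second-order response surface:
    y = b0 + sum_i(bi*xi) + sum_i(bii*xi^2) + sum_{i<j}(bij*xi*xj).
    Columns are ordered: intercept, linear, quadratic, interactions."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Synthetic demo: response with a known interaction term
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))
y = 2 + X[:, 0] - 3 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
beta = fit_second_order(X, y)
```

With noise-free data the fit recovers the generating coefficients exactly, including the second-order interaction; real response-surface studies would add replication and lack-of-fit checks.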

    Decision Support Systems

    Decision support systems (DSS) have evolved over the past four decades from theoretical concepts into real-world computerized applications. DSS architecture contains three key components: a knowledge base, a computerized model, and a user interface. DSS simulate the cognitive decision-making functions of humans using artificial intelligence methodologies (including expert systems, data mining, machine learning, connectionism, logical reasoning, etc.) in order to perform decision support functions. The applications of DSS cover many domains, ranging from aviation monitoring, transportation safety, clinical diagnosis, weather forecasting and business management to internet search strategy. By combining knowledge bases with inference rules, DSS are able to provide suggestions to end users that improve decisions and outcomes. This book is written as a textbook so that it can be used in formal courses examining decision support systems. It may be used by both undergraduate and graduate students from diverse computer-related fields, and will also be of value to established professionals as a text for self-study or reference.
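The combination of a knowledge base with inference rules can be illustrated with a minimal forward-chaining engine. The rule and fact names below are invented for illustration; production DSS engines add certainty factors, conflict resolution and explanation facilities:

```python
def forward_chain(facts, rules):
    """Minimal forward-chaining inference over a knowledge base.
    rules: list of (antecedents, consequent) pairs; a rule fires
    when all of its antecedents are established facts, and firing
    repeats until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in facts and all(a in facts for a in antecedents):
                facts.add(consequent)
                changed = True
    return facts

# Hypothetical clinical-support rules, purely illustrative
rules = [
    ({"fever", "cough"}, "suspect_flu"),
    ({"suspect_flu", "high_risk_patient"}, "recommend_clinic_visit"),
]
derived = forward_chain({"fever", "cough", "high_risk_patient"}, rules)
```

The derived facts are the "suggestions to end users" of the blurb: conclusions chained from observed facts through the rule base.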

    An approach to understand network challenges of wireless sensor network in real-world environments

    The demand for large-scale sensing capabilities and scalable communication networks to monitor and control entities within smart buildings has fuelled the exponential growth of Wireless Sensor Networks (WSNs). WSN is an attractive enabler because of its accurate sensing, low installation cost and flexibility in sensor placement. While WSN offers numerous benefits, it has yet to realise its full potential because of its susceptibility to network challenges in the environment in which it is deployed. In particular, spatial challenges in indoor environments are known to degrade WSN communication reliability and lead to poor estimates of link quality. Existing WSN solutions often generalise all link failures and tackle them as a single entity. However, under the persistent influence of spatial challenges, failing to provide precise solutions may cause further link failures and higher energy consumption in battery-powered devices. It is therefore crucial to identify the causes of spatial-related link failures in order to improve WSN communication reliability. This thesis investigates WSN link failures under the influence of spatial challenges in real-world indoor environments. Novel and effective strategies are developed to evaluate WSN communication reliability. By distinguishing between spatial challenges such as a poorly deployed environment and human movement, solutions are devised to reduce link failures and improve the lifespans of energy-constrained WSN nodes. In this thesis, WSN test beds using proprietary wireless sensor nodes are developed and deployed in both controlled and uncontrolled office environments, providing diverse platforms for the investigation of WSN link quality. In addition, a new data extraction feature called Network Instrumentation (NI) is developed and implemented in the communication stacks of wireless sensor nodes to collect ZigBee PRO parameters under the influence of environmental dynamics.
To understand the relationship between WSN and Wi-Fi device communications, an investigation of frequency spectrum sharing between the IEEE 802.15.4 and IEEE 802.11 b/g/n standards is conducted. It is discovered that transmission failure of WSN nodes under persistent Wi-Fi interference is largely due to channel access failure rather than corrupted packets. The findings conclude that both technologies can coexist as long as there is sufficient frequency spacing between Wi-Fi and WSN communication and adequate operating distance between the WSN nodes, and between the WSN nodes and the Wi-Fi interference source. Adaptive Network-based Fuzzy Inference System (ANFIS) models are developed to predict spatial challenges in an indoor environment, namely “no failure”, “failure due to poorly deployed environment” and “failure due to human movement”. A comparison of models found that the best model represents the properties of signal strength, channel fluctuations, and communication success rates. It is recognised that the interpretability of the ANFIS models is reduced by the “curse of dimensionality”; hence, the Non-Dominated Sorting Genetic Algorithm (NSGA-II) technique is implemented to reduce the complexity of these models. This is followed by a fuzzy-rule sensitivity analysis, in which the impact of fuzzy rules on model accuracy is found to depend on factors such as communication range and whether the environment is controlled or uncontrolled. Long-term WSN routing stability is measured, taking into account the adaptability and robustness of routing paths in real-world environments. Routing stability is found to depend on the implemented routing protocol, the deployed environment and the routing options available. More importantly, the probability of link failure can be as high as 29.9% when a next hop's usage rate falls below 10%, suggesting that a less dominant next hop is subject to more link failures and is short-lived.
Overall, this thesis brings together diverse WSN test beds in real-world indoor environments and a new data extraction platform for link quality parameters from the ZigBee PRO stack, enabling a representative assessment of WSN link quality. This produces realistic perspectives on the interactions between WSN communication reliability and environmental dynamics, particularly spatial challenges. The outcomes of this work include an in-depth system-level understanding of real-world deployed applications and an insightful measure of large-scale WSN communication performance. These findings can be used as building blocks for a reliable and sustainable network architecture built on top of resource-constrained WSNs.
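The observed relationship between a next hop's usage rate and its link failure probability rests on per-hop aggregation of transmission logs, which can be sketched as follows; the record format and field names are assumptions of this sketch:

```python
def next_hop_stats(records):
    """records: list of (next_hop, delivered) transmission logs.
    Returns each next hop's usage rate (share of all transmissions
    routed through it) and failure rate (share of its own
    transmissions that failed) -- the kind of aggregation used to
    relate usage share to link failure probability."""
    total = len(records)
    counts = {}
    for hop, delivered in records:
        used, failed = counts.get(hop, (0, 0))
        counts[hop] = (used + 1, failed + (0 if delivered else 1))
    return {hop: {"usage": used / total, "failure_rate": failed / used}
            for hop, (used, failed) in counts.items()}

# Toy log: a dominant next hop "A" and a rarely used next hop "B"
records = [("A", True)] * 8 + [("A", False), ("B", False)]
stats = next_hop_stats(records)
```

In this toy log the rarely used hop (10% usage) shows a far higher failure rate than the dominant one, mirroring the thesis's finding that less dominant next hops are subject to more link failures.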

    The optimal control of power electronic embedded networks in More Electric Aircraft

    With the advancement of power electronic technologies over recent decades, there has been an overall increase in the use of distributed generation and power electronic embedded networks across a large sphere of applications. One of the most prominent areas for new power electronics embedded systems is power networks onboard military and civilian aircraft. With environmental concerns and increased competition in the civil aviation sector, more aircraft manufacturers are replacing heavier, less efficient and costly pneumatic, hydraulic and mechanical systems with electrical alternatives. In these modern power systems, the increased proliferation of power electronic converters and distributed generation raises important issues regarding the performance, stability and robustness of interfaced switching units. Phenomena such as power electronic sub-system interactions become even more prominent in micro-grid applications and other low voltage distribution systems where interfaced converters are in close proximity to one another. In More Electric Aircraft (MEA), these interfaced power electronic converters are connected to the same non-stiff low power AC grid, which further increases the interactive effects between converter sub-systems. If these effects are not properly taken into account, external disturbances to the system at given operating conditions can result in degraded system performance, failure to meet the operating requirements of the grid, or, in the worst case, instability of the whole grid. Much research aims at decreasing the size and weight of such systems, and much literature proposes optimisation methods that decrease the size of the filters between interfacing converters. While these methods effectively decrease the size of the system, interactions between interfaced converters get worse and are often improperly accounted for.
The work presented in this thesis proposes a novel approach to the decentralisation and optimisation of converter controls on a power electronics embedded power network. In order to account for the interactive dynamics between sub-systems in the environment of reduced passive filter networks, all the system dynamics, including the interactive terms, are modelled globally. An optimal controller design approach based on H2 optimisation is proposed to automatically synthesise the controller gains for each power electronic sub-system. H2 optimisation is a powerful tool that not only allows the formulation, optimisation and development of closed-loop controls for large dynamic systems, but also allows the user to constrain the controller to a given structure. This enables the development of decentralised controllers for every sub-system with intrinsic knowledge of the closed-loop dynamics of every other interconnected sub-system. It is shown through simulation and experimental validation that this novel approach to grid control optimisation not only improves the overall dynamic performance of all sub-systems over traditional design methods, but also intrinsically reduces or even mitigates the interactive effects between converters. In addition, this method of controller design is shown to scale to grids of expanding size, and the phase-locked loops (PLLs) integrated into grid-connected devices, which are widely known to cause further interactive behaviour between grid-interfaced devices, can also be considered in the optimisation procedure. Including them in the optimisation has been validated experimentally to prevent interactions on the grid and improve performance over traditional design methods.
Adaptations to the controller are made to ensure operation in variable frequency environments (as is common in MEA), along with methods of single-converter optimisation when interfacing to an unknown grid. Additionally, some initial research towards an adaptation of the H2 controller to incorporate robustness as well as performance into the optimisation procedure is presented, with the mathematical concepts demonstrated through simulation.
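As a minimal, related sketch of quadratic-cost optimal gain synthesis: LQR state feedback is the full-state-feedback special case of H2 optimal control, computed here via the continuous algebraic Riccati equation. The thesis's structured, decentralised H2 synthesis is not reproduced; the double-integrator plant is an illustrative stand-in, and SciPy's Riccati solver is assumed available:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """State-feedback gain K minimising the quadratic (H2-type) cost
    integral of x'Qx + u'Ru, obtained from the stabilising solution P
    of the continuous algebraic Riccati equation: K = R^-1 B' P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Double integrator as a minimal, illustrative plant model
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = lqr_gain(A, B, np.eye(2), np.eye(1))
```

For this plant the optimal gain is K = [1, sqrt(3)] and the closed-loop matrix A - BK is stable; a structured H2 synthesis would additionally constrain K so that each sub-system's controller uses only its own measurements.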