
    A hierarchical Mamdani-type fuzzy modelling approach with new training data selection and multi-objective optimisation mechanisms: A special application for the prediction of mechanical properties of alloy steels

    In this paper, a systematic data-driven fuzzy modelling methodology is proposed, which allows Mamdani fuzzy models to be constructed with both the accuracy (precision) and the transparency (interpretability) of fuzzy systems taken into account. The new methodology employs a fast hierarchical clustering algorithm to generate an initial fuzzy model efficiently; a training data selection mechanism is developed to identify appropriate and efficient data as learning samples; a high-performance Particle Swarm Optimisation (PSO) based multi-objective optimisation mechanism is developed to further improve the fuzzy model in terms of both structure and parameters; and a new tolerance analysis method is proposed to derive confidence bands for the final elicited models. The proposed modelling approach is evaluated on two benchmark problems and is shown to outperform other modelling approaches. Furthermore, it is successfully applied to complex high-dimensional modelling problems in the manufacture of alloy steels, using ‘real’ industrial data. These problems concern the prediction of the mechanical properties of alloy steels by correlating them with the heat treatment process conditions as well as the weight percentages of the chemical compositions.
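
    As a rough illustration of the Mamdani-type inference this work builds on (not the authors' elicited model), the sketch below evaluates two hypothetical rules with triangular membership functions and centroid defuzzification; the variable names, membership parameters and rule base are placeholders, not those produced by the paper's clustering and PSO procedure.

```python
# Minimal sketch of Mamdani-type fuzzy inference with triangular membership
# functions and centroid defuzzification. Illustrative only: the membership
# parameters, rule base and variable names are hypothetical.
import numpy as np

def tri_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def mamdani_predict(temp_c, carbon_pct):
    """Predict a (hypothetical) normalised strength index from two inputs."""
    y = np.linspace(0.0, 1.0, 201)          # normalised output universe
    # Rule 1: IF temperature is LOW AND carbon is HIGH THEN strength is HIGH
    w1 = min(tri_mf(temp_c, 500, 600, 700), tri_mf(carbon_pct, 0.3, 0.5, 0.7))
    # Rule 2: IF temperature is HIGH THEN strength is LOW
    w2 = tri_mf(temp_c, 650, 800, 950)
    # Mamdani implication (min) and aggregation (max)
    agg = np.maximum(np.minimum(w1, tri_mf(y, 0.5, 0.8, 1.0)),
                     np.minimum(w2, tri_mf(y, 0.0, 0.2, 0.5)))
    # Centroid defuzzification
    return float(np.sum(y * agg) / (np.sum(agg) + 1e-12))

print(mamdani_predict(620.0, 0.45))
```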

    Uncertainty and Interpretability Studies in Soft Computing with an Application to Complex Manufacturing Systems

    In systems modelling and control theory, the benefits of applying neural networks have been extensively studied, particularly in manufacturing processes such as the prediction of mechanical properties of heat-treated steels. However, modern industrial processes usually involve large amounts of data and a range of non-linear effects and interactions that might hinder their model interpretation. For example, in steel manufacturing the understanding of the complex mechanisms that lead to the mechanical properties generated by the heat treatment process is vital. This knowledge is not available via numerical models, so an experienced metallurgist estimates the model parameters to obtain the required properties. This human knowledge and perception can sometimes be imprecise, leading to a kind of cognitive uncertainty such as vagueness and ambiguity when making decisions. In system classification, this may be translated into a system deficiency; for example, small changes in system input attributes may result in a sudden and inappropriate change of class assignment. In order to address this issue, practitioners and researchers have developed systems that are functionally equivalent to fuzzy systems and neural networks. Such systems provide a morphology that mimics the human ability to reason via the qualitative aspects of fuzzy information rather than via its quantitative analysis. Furthermore, these models are able to learn from data sets and to describe the associated interactions and non-linearities in the data. However, like neural networks, a neural fuzzy system may suffer from a loss of interpretability and transparency when making decisions, mainly due to the application of adaptive approaches for its parameter identification. Since the RBF-NN can be treated as a fuzzy inference engine, this thesis presents several methodologies that quantify different types of uncertainty and their influence on the interpretability and transparency of the RBF-NN during its parameter identification. In particular, three kinds of uncertainty source in relation to the RBF-NN are studied, namely entropy, fuzziness and ambiguity. First, a methodology based on Granular Computing (GrC), neutrosophic sets and the RBF-NN is presented. The objective of this methodology is to quantify the hesitation produced during the granular compression at the low level of interpretability of the RBF-NN via the use of neutrosophic sets. This study also aims to enhance the distinguishability, and hence the transparency, of the initial fuzzy partition. The effectiveness of the proposed methodology is tested against a real case study for the prediction of the properties of heat-treated steels. Secondly, a new Interval Type-2 Radial Basis Function Neural Network (IT2-RBF-NN) is introduced as a new modelling framework. The IT2-RBF-NN takes advantage of the functional equivalence between type-1 FLSs and the RBF-NN to construct an Interval Type-2 Fuzzy Logic System (IT2-FLS) that is able to deal with linguistic uncertainty and perceptions in the RBF-NN rule base. This gave rise to different combinations when optimising the IT2-RBF-NN parameters. Finally, a twofold study for uncertainty assessment at the high level of interpretability of the RBF-NN is provided. On the one hand, the first study proposes a new methodology to quantify (a) the fuzziness and (b) the ambiguity at each RU during the formation of the rule base via the use of neutrosophic set theory. The aim of this methodology is to calculate the fuzziness associated with each rule, and then the ambiguity related to each normalised consequence of the fuzzy rules, which results from their overlapping and from one-to-many choices respectively. On the other hand, the second study proposes a new methodology to quantify the entropy and the fuzziness that arise from the redundancy phenomenon during parameter identification. To conclude this work, the experimental results obtained by applying the proposed methodologies to two well-known benchmark data sets and to the prediction of the mechanical properties of heat-treated steels led to the publication of three articles in two peer-reviewed journals and one international conference.
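
    A minimal sketch of the functional equivalence the thesis exploits, under stated assumptions: a normalised-Gaussian RBF network read as a type-1 fuzzy rule base, where each hidden unit acts as a rule antecedent and its normalised activation as the rule firing strength. The centres, widths and consequents below are random placeholders, not parameters identified in the thesis.

```python
# Minimal sketch: an RBF network interpreted as a type-1 fuzzy inference
# engine. Centres, widths and consequent weights are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_rules, n_inputs = 4, 3
centres = rng.uniform(0, 1, size=(n_rules, n_inputs))   # rule prototypes
widths = np.full(n_rules, 0.3)                          # rule spreads
consequents = rng.uniform(-1, 1, size=n_rules)          # rule outputs

def rbf_fuzzy_inference(x):
    """Return the crisp output and the normalised rule firing strengths."""
    dist2 = np.sum((centres - x) ** 2, axis=1)
    firing = np.exp(-dist2 / (2.0 * widths ** 2))        # rule activations
    norm_firing = firing / (np.sum(firing) + 1e-12)      # fuzzy-mean weights
    return float(norm_firing @ consequents), norm_firing

y, strengths = rbf_fuzzy_inference(np.array([0.2, 0.7, 0.5]))
print(y, strengths)
```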

    Predictive modelling of the granulation process using a systems-engineering approach

    © 2016 Elsevier B.V. The granulation process is considered to be a crucial operation in many industrial applications. The modelling of the granulation process is, therefore, an important step towards controlling and optimizing the downstream processes and ensuring optimal product quality. In this research paper, a new integrated network based on Artificial Intelligence (AI) is proposed to model a high shear granulation (HSG) process. Such a network consists of two phases: in the first phase, the inputs and the target outputs are used to train a number of models; the predicted outputs from this phase and the target are then used to train another model in the second phase, which leads to the final predicted output. Because of the complex nature of the granulation process, the error residual is exploited further in order to improve the model performance using a Gaussian mixture model (GMM). The overall proposed network successfully predicts the properties of the granules produced by HSG, and also outperforms other modelling frameworks in terms of modelling performance and generalization capability. In addition, the error modelling using the GMM leads to a significant improvement in prediction.
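
    The sketch below illustrates the general two-phase idea under stated assumptions: base models are trained on the inputs, a second-phase model combines their predictions, and a Gaussian mixture is fitted to the residuals as an error model. The data and the choice of base models are stand-ins, not the paper's HSG measurements or network design.

```python
# Two-phase stacking sketch with GMM-based error modelling on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 4))                 # placeholder process inputs
y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + 0.1 * rng.normal(size=300)

# Phase 1: several base models trained on inputs vs target.
base_models = [RandomForestRegressor(n_estimators=50, random_state=0),
               Ridge(alpha=1.0)]
phase1_preds = np.column_stack([m.fit(X, y).predict(X) for m in base_models])

# Phase 2: a combiner model trained on the phase-1 predictions.
combiner = LinearRegression().fit(phase1_preds, y)
y_hat = combiner.predict(phase1_preds)

# Error modelling: fit a GMM to the residuals and shift the prediction by
# the mixture's expected residual (one simple compensation choice).
residuals = (y - y_hat).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(residuals)
expected_residual = float(gmm.weights_ @ gmm.means_.ravel())
print("compensated prediction:", y_hat[0] + expected_residual)
```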

    Hybrid-modelling of compact tension energy in high strength pipeline steel using a Gaussian Mixture Model based error compensation

    In material science studies, it is often desirable to know in advance the fracture toughness of a material, which is related to the energy released during its compact tension (CT) test, in order to prevent catastrophic failure. In this paper, two frameworks are proposed for automatic model elicitation from experimental data to predict the fracture energy released during the CT test of X100 pipeline steel. The two models, an adaptive rule-based fuzzy modelling approach and a double-loop based neural network model, relate the load, the crack mouth opening displacement (CMOD) and the crack length to the energies released during this test. The relationship between how the fracture propagates and the fracture energy is investigated in greater detail. To improve the performance of the models, a Gaussian Mixture Model (GMM)-based error compensation strategy, which enables one to monitor the error distributions of the predicted results, is integrated in the model validation stage. This helps to isolate the error distribution pattern and to establish its correlations with the predictions from the deterministic models. This is the first time a data-driven approach has been used in this fashion on an application that has conventionally been handled using finite element methods or physical models.
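
    As a hedged illustration of GMM-based error compensation (not the paper's exact scheme), the sketch below fits a joint Gaussian mixture over (prediction, residual) pairs from a stand-in deterministic model and uses GMM regression to estimate the expected residual for a given prediction.

```python
# Joint GMM over (prediction, residual) pairs, used for error compensation.
# The "deterministic model" is a stand-in linear fit on synthetic data, not
# the paper's fuzzy or double-loop neural network models.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LinearRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(400, 3))        # load, CMOD, crack length (scaled)
y = 5 * X[:, 0] + 2 * X[:, 1] ** 2 + rng.normal(0, 0.2, size=400)

det_model = LinearRegression().fit(X, y)    # deterministic predictor
pred = det_model.predict(X)
joint = np.column_stack([pred, y - pred])   # (prediction, residual) pairs
gmm = GaussianMixture(n_components=3, random_state=0).fit(joint)

def expected_residual(p):
    """GMM-regression estimate of the residual given a prediction value p."""
    w, mu, cov = gmm.weights_, gmm.means_, gmm.covariances_
    resp = w * norm.pdf(p, mu[:, 0], np.sqrt(cov[:, 0, 0]))
    resp /= resp.sum()
    cond_means = mu[:, 1] + cov[:, 0, 1] / cov[:, 0, 0] * (p - mu[:, 0])
    return float(resp @ cond_means)

p0 = pred[0]
print("compensated:", p0 + expected_residual(p0))
```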

    The Effect of Metal Inert Gas Welding Parameters on the Weldability of Galvanised Steel

    The Taguchi technique is employed to establish the optimal parameters for each tensile property of the weldments. The tensile properties determined are the ultimate tensile strength, the yield strength and the percentage elongation, whereas the process parameters used are the welding current (A), the welding voltage (B) and the gas flow rate (C). By applying the Taguchi method, the optimal process parameter combination for obtaining a weldment with better yield strength is found to be A3B1C3, whereas A3B3C3 produces a weldment with better elongation. These optimum process parameters show considerably improved signal-to-noise ratios over the current process parameters adopted by the welders.
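
    For readers unfamiliar with the Taguchi analysis referred to above, the sketch below computes larger-the-better signal-to-noise ratios over an L9 orthogonal array and picks the best level of each factor; the responses are hypothetical placeholders, not the paper's measured data.

```python
# Taguchi larger-the-better S/N analysis over an L9 orthogonal array.
# The array assignments and tensile-strength values are illustrative only.
import numpy as np

# L9 orthogonal array: columns are factor levels (1-3) for
# A = welding current, B = welding voltage, C = gas flow rate.
L9 = np.array([[1, 1, 1], [1, 2, 2], [1, 3, 3],
               [2, 1, 2], [2, 2, 3], [2, 3, 1],
               [3, 1, 3], [3, 2, 1], [3, 3, 2]])
# Hypothetical ultimate tensile strength per run (two replicates, MPa).
uts = np.array([[310, 305], [322, 318], [330, 333],
                [335, 331], [340, 338], [344, 347],
                [350, 352], [355, 353], [360, 362]], dtype=float)

# Larger-the-better S/N ratio: -10*log10(mean(1/y^2)) for each run.
sn = -10.0 * np.log10(np.mean(1.0 / uts ** 2, axis=1))

for factor, name in enumerate(["A (current)", "B (voltage)", "C (gas flow)"]):
    means = [sn[L9[:, factor] == level].mean() for level in (1, 2, 3)]
    best = int(np.argmax(means)) + 1
    print(f"{name}: best level = {best}, mean S/N by level = {np.round(means, 2)}")
```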

    RACFIS: A new rapid adaptive complex fuzzy inference system for regression modelling

    The theory of complex fuzzy sets has seen great breakthroughs in recent years. Complex fuzzy theory (CFT) allows a fuzzy set to carry more information with the help of its two-dimensional rule-base, which offers great potential for improving the performance of the related fuzzy system while managing the size of the associated rule-bases. In this paper, a new rapid adaptive complex fuzzy inference system (RACFIS) is proposed by redesigning the optimization policy of the earlier complex neuro-fuzzy system (CNFS) algorithm. Improvements include a new three-parameter Quasi-hyperbolic momentum (QHM) optimization method that replaces the original particle swarm optimization (PSO), and unsupervised learning introduced, for the first time, into the complex neuro-fuzzy model to pre-train the antecedent parameters for a better global optimum as well as faster convergence. Experimental results show that RACFIS performs very well on all data sets, obtaining excellent prediction accuracies with, on average, 10-times fewer epochs (compared with all benchmark models) and a reduction of nearly 20%–30% in the size of the rule-bases (compared with non-complex fuzzy models). RACFIS also possesses a strong generalization capability that outperforms all the benchmark algorithms employed in the simulation experiments. In addition, a mean impact value (MIV) algorithm based on a radial basis function (RBF) network is employed to select the variables with higher relevance in order to mitigate the drawbacks caused by the high dimensionality of the data.
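
    The following sketch shows a generic quasi-hyperbolic momentum (QHM) update of the three-parameter form (learning rate, beta, nu) the abstract refers to, applied to a toy quadratic objective; it is an assumption-laden illustration, not the RACFIS implementation.

```python
# Generic quasi-hyperbolic momentum (QHM) update on a toy quadratic objective.
import numpy as np

def qhm_minimise(grad_fn, theta0, lr=0.05, beta=0.9, nu=0.7, steps=200):
    """QHM: nu interpolates between plain gradient descent and momentum."""
    theta = np.asarray(theta0, dtype=float)
    d = np.zeros_like(theta)                 # momentum buffer
    for _ in range(steps):
        g = grad_fn(theta)
        d = beta * d + (1.0 - beta) * g      # exponential moving average
        theta = theta - lr * ((1.0 - nu) * g + nu * d)
    return theta

# Toy objective: f(theta) = 0.5 * theta^T A theta, with gradient A @ theta.
A = np.array([[3.0, 0.2], [0.2, 1.0]])
print(qhm_minimise(lambda t: A @ t, theta0=[2.0, -1.5]))
```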

    Institute of Safety Research, Annual Report 1994

    The report gives an overview of the scientific work of the Institute of Safety Research in 1994.

    Predicting the Future

    Due to the increased capabilities of microprocessors and the advent of graphics processing units (GPUs) in recent decades, the use of machine learning methodologies has become popular in many fields of science and technology. This fact, together with the availability of large amounts of information, has given machine learning and Big Data an important presence in the field of energy. This Special Issue, entitled “Predicting the Future—Big Data and Machine Learning”, is focused on applications of machine learning methodologies in the field of energy. Topics include, but are not limited to, the following: big data architectures of power supply systems, energy-saving and efficiency models, environmental effects of energy consumption, prediction of occupational health and safety outcomes in the energy industry, price forecasting of raw materials, and energy management of smart buildings.

    Optimization of different welding processes using statistical and numerical approaches – A reference guide

    Welding input parameters play a very significant role in determining the quality of a weld joint. The joint quality can be defined in terms of properties such as weld-bead geometry, mechanical properties and distortion. Generally, all welding processes are used with the aim of obtaining a welded joint with the desired weld-bead parameters and excellent mechanical properties with minimum distortion. Nowadays, design of experiments (DoE), evolutionary algorithms and computational networks are widely applied to develop a mathematical relationship between the welding process input parameters and the output variables of the weld joint, in order to determine the welding input parameters that lead to the desired weld quality. A comprehensive literature review of the application of these methods in the area of welding is presented herein. The review is classified according to the output features of the weld, i.e. the bead geometry and the mechanical properties of the welds.
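
    As an illustration of the statistical side of this input-output mapping, the sketch below fits a second-order response-surface model relating welding current, voltage and travel speed to bead width; the data, variable ranges and coefficients are synthetic placeholders, not results from any of the reviewed studies.

```python
# Second-order response-surface sketch relating welding inputs to bead width,
# fitted on synthetic placeholder data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
X = rng.uniform([150, 20, 200], [250, 30, 400], size=(60, 3))  # I (A), V, speed (mm/min)
bead_width = (0.02 * X[:, 0] + 0.1 * X[:, 1] - 0.005 * X[:, 2]
              + 1e-4 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 60))

# Quadratic response surface: all first- and second-order terms.
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, bead_width)
print("predicted bead width at (200 A, 25 V, 300 mm/min):",
      rsm.predict([[200, 25, 300]])[0])
```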