
    Optimisation of welding parameters to mitigate the effect of residual stress on the fatigue life of nozzle–shell welded joints in cylindrical pressure vessels.

    Doctoral Degree. University of KwaZulu-Natal, Durban. The process of welding steel structures inadvertently introduces residual stress as a result of the thermal cycles to which the material is subjected. These welding-induced residual stresses have been shown to be responsible for a number of catastrophic failures in critical infrastructure such as pressure vessels, ships' hulls, and steel roof structures. The present study examines the relationship between welding input parameters and the resultant residual stress, fatigue properties, weld bead geometry, and mechanical properties of welded carbon steel pressure vessels. The study focuses on circumferential nozzle-to-shell welds, which have not previously been studied to this extent. A hybrid methodology combining experimentation, numerical analysis, and mathematical modelling is employed to map the relationship between welding input parameters and the output weld characteristics, and then to optimise the input parameters to produce a welded joint whose stress and fatigue characteristics enhance the service life of the welded structure. The results of a series of experiments show that mechanical properties such as hardness are significantly affected by the welding process parameters and thereby affect the service life of a welded pressure vessel. The weld geometry is also affected by the input parameters, such that bead width and bead depth vary with the parametric combination of input variables. The fatigue properties of a welded pressure vessel structure are affected by its residual stress state. The fractional factorial design technique shows that the welding current (I) and voltage (V) are statistically significant controlling parameters in the welding process. The neutron diffraction (ND) tests reveal a high concentration of residual stress close to the weld centre-line, which subsides with increasing distance from the centre-line. The resultant hoop residual stress distribution shows that the hoop stresses are highly tensile close to the weld centre-line, decrease in magnitude with increasing distance from the centre-line, fall to zero, and then become compressive further away. The hoop stress distribution on the flange side is similar to that on the pipe side around the circumferential weld, and the peak residual stress values are equal to or higher than the yield strength of the filler material. The weld specimens failed at the weld toe, where the hoop stress was highly tensile in most of the welded specimens. A multiobjective genetic algorithm is successfully used to produce a set of optimal solutions that agree with the values obtained during the experiments. The 3D finite element model produced using MSC Marc software gives results generally comparable to the physical experiments. The results obtained in the present study are in agreement with similar studies reported in the literature.
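    The abstract above mentions a multiobjective genetic algorithm used to trade off weld characteristics against welding current and voltage. The sketch below, which is not the author's model, illustrates the general idea with a simple weighted-sum genetic algorithm; the parameter ranges and the two surrogate objective functions are invented placeholders, not the study's fitted response models.

```python
# Minimal sketch (not the thesis' code): a weighted-sum genetic algorithm searching
# welding current I and voltage V for a trade-off between two hypothetical surrogate
# objectives. The surrogate formulas and bounds below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
BOUNDS = np.array([[160.0, 240.0],   # current I [A]  (assumed range)
                   [18.0, 30.0]])    # voltage V [V]  (assumed range)

def residual_stress(x):              # placeholder surrogate, to minimise
    I, V = x
    return (I - 200.0) ** 2 / 400.0 + (V - 24.0) ** 2 / 9.0

def bead_penalty(x):                 # placeholder surrogate, to minimise
    I, V = x
    return abs(I * V / 1000.0 - 4.8)

def fitness(x, w=0.6):               # weighted sum of the two objectives
    return w * residual_stress(x) + (1.0 - w) * bead_penalty(x)

def evolve(pop_size=40, gens=60, mut=0.1):
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, 2))
    for _ in range(gens):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[:pop_size // 2]]            # truncation selection
        children = parents[rng.integers(len(parents), size=pop_size - len(parents))]
        children = children + rng.normal(0, mut, children.shape) * (BOUNDS[:, 1] - BOUNDS[:, 0])
        pop = np.clip(np.vstack([parents, children]), BOUNDS[:, 0], BOUNDS[:, 1])
    return pop[np.argmin([fitness(p) for p in pop])]

print("best (I, V):", evolve())
```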

    Data Driven Performance Prediction in Steel Making

    This work presents three data-driven models, based on process data, to estimate different indicators of process performance in a steel production process. The generated models allow the process parameters to be optimised to achieve optimal performance and quality levels. A new ensemble-based approach has been developed, combining feature selection methods with four state-of-the-art regression techniques (random forest, gradient boosting, XGBoost, and neural networks). The results show that the proposed approach makes the predictions more stable, reducing the variance in all cases and, in one case, even slightly reducing the bias. Furthermore, of the four machine learning paradigms evaluated, random forest gives the best quantitative results, achieving a coefficient of determination of up to 0.98, depending on the target sub-process. This research is supported by the European Union's Horizon 2020 Research and Innovation Framework Programme [grant agreement No 723661; COCOP; http://www.cocop-spire.eu (accessed on 6 January 2022)]. The authors want to acknowledge the work of the whole COCOP consortium.
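    As a rough illustration of the ensemble idea described above (feature selection feeding several regressors whose predictions are averaged to reduce variance), the following scikit-learn sketch uses synthetic data; it is not the paper's configuration, and XGBoost is omitted to keep the example dependency-free.

```python
# Hedged sketch: synthetic data stands in for the plant's process data, and the
# ensemble below is a generic stand-in, not the paper's exact configuration.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, VotingRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

X, y = make_regression(n_samples=500, n_features=30, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Feature selection followed by an ensemble that averages several regressors,
# which stabilises the prediction compared with any single model.
model = make_pipeline(
    SelectKBest(f_regression, k=15),
    VotingRegressor([
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),
        ("nn", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)),
    ]),
)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", r2_score(y_te, model.predict(X_te)))
```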

    Rails Quality Data Modelling via Machine Learning-Based Paradigms

    A study of environmental characterization of conventional and advanced aluminum alloys for selection and design. Phase 2: The breaking load test method

    A technique is demonstrated for accelerated stress corrosion testing of high strength aluminum alloys. The method offers better precision and shorter exposure times than traditional pass/fail procedures. The approach uses data from tension tests performed on replicate groups of smooth specimens after various lengths of exposure to static stress. The breaking strength measures the degradation in the specimen's load carrying ability due to the environmental attack. Analysis of the breaking load data by extreme value statistics enables the calculation of survival probabilities and a statistically defined threshold stress applicable to the specific test conditions. A fracture mechanics model is given which quantifies the depth of attack in the stress corroded specimen by an effective flaw size calculated from the breaking stress and the material's strength and fracture toughness properties. Comparisons are made with experimental results from three tempers of 7075 alloy plate tested by the breaking load method and by traditional tests of statically loaded smooth tension bars and conventional precracked specimens.
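    The fracture mechanics step described above can be illustrated with the standard linear-elastic relation K = Y·σ·√(πa), solved for an effective flaw depth a from the measured breaking stress. The snippet below is only illustrative; the report's exact model and all numerical values (geometry factor, toughness, breaking stress) are assumptions.

```python
# Illustrative only: this is the generic LEFM relation K = Y * sigma * sqrt(pi * a),
# not the report's model. The geometry factor, K_Ic, and breaking stress are made up.
import math

def effective_flaw_depth(breaking_stress_mpa, k_ic_mpa_sqrt_m, geometry_factor=1.12):
    """Effective flaw depth (m) implied by failure at the measured breaking stress."""
    return (1.0 / math.pi) * (k_ic_mpa_sqrt_m / (geometry_factor * breaking_stress_mpa)) ** 2

# Example: a 7075-type plate with an assumed K_Ic of 25 MPa*sqrt(m) failing at 350 MPa.
depth_mm = effective_flaw_depth(350.0, 25.0) * 1e3
print(f"effective flaw depth: {depth_mm:.2f} mm")
```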

    A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning

    We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to obtain a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments on active user modelling with preferences and on hierarchical reinforcement learning, and a discussion of the pros and cons of Bayesian optimization based on our experiences.
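    The exploration/exploitation loop described above can be sketched in a few lines: fit a Gaussian process to the observations so far, then pick the next point by maximising expected improvement. The code below is a toy stand-in on a one-dimensional objective, not the tutorial's own implementation.

```python
# Compact Bayesian optimization sketch: GP posterior + expected-improvement acquisition.
# The 1-D objective is a toy stand-in for an expensive black-box function.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                         # expensive black box (toy stand-in)
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

grid = np.linspace(-2, 2, 400).reshape(-1, 1)
X = np.array([[-1.5], [0.0], [1.5]])      # initial design
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(15):
    gp.fit(X, y)
    mu, std = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(std, 1e-9)
    ei = (mu - best) * norm.cdf(z) + std * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)].reshape(1, -1)          # exploration vs. exploitation
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x, f(x):", X[np.argmax(y)].item(), y.max())
```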

    Symmetric and Asymmetric Data in Solution Models

    This book is a Printed Edition of the Special Issue covering research on symmetric and asymmetric data that occur in real-life problems. We invited authors to submit their theoretical or experimental research presenting engineering and economic problem solution models that deal with symmetry or asymmetry in different data types. The Special Issue gained interest in the research community and received many submissions. After rigorous scientific evaluation by editors and reviewers, seventeen papers were accepted and published. The authors proposed different solution models, mainly covering uncertain data in multicriteria decision-making (MCDM) problems, as complex tools for balancing the symmetry between goals, risks, and constraints when coping with complicated problems in engineering or management. We therefore invite researchers interested in these topics to read the papers provided in the book.

    Manufacturing Process Causal Knowledge Discovery using a Modified Random Forest-based Predictive Model

    A Modified Random Forest (MRF) algorithm-based predictive model is proposed for use in manufacturing processes to estimate the effects of several potential interventions, such as (i) altering the operating ranges of selected continuous process parameters within specified tolerance limits, (ii) choosing particular categories of discrete process parameters, or (iii) choosing combinations of both types of process parameters. The model introduces a non-linear approach to identifying the most critical process inputs by scoring the contribution made by each process input to the prediction of the process output. It uses this contribution to discover optimal operating ranges for the continuous process parameters and/or optimal categories for the discrete process parameters. The set of values used for the process inputs was generated from operating ranges identified using a novel Decision Path Search (DPS) algorithm and Bootstrap sampling. The odds ratio is the ratio between the occurrence probabilities of desired and undesired process output values. The effect of potential interventions, or of proposed confirmation trials, is quantified as posterior odds and used to calculate conditional probability distributions. The advantages of this approach are discussed in comparison to fitting these probability distributions to Bayesian Networks (BN). The proposed explainable data-driven predictive model is scalable to a large number of process factors with non-linear dependence on one or more process responses. It allows the discovery of data-driven process improvement opportunities with minimal interaction with domain expertise. An iterative Random Forest algorithm is proposed to predict the missing values in mixed datasets (continuous and categorical process parameters); the algorithm is shown to be robust even at high proportions of missing values. The number of observations available in manufacturing process datasets is generally low, e.g. of a similar order of magnitude to the number of process parameters. Hence, Neural Network (NN)-based deep learning methods are generally not applicable, as these techniques require 50-100 times more observations than input factors (process parameters). The results are verified on a number of benchmark examples with datasets published in the literature. They demonstrate that the proposed method outperforms the comparison approaches in terms of accuracy and causality, with linearity assumed. Furthermore, the computational cost is far lower and remains feasible for heterogeneous datasets.
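    A rough sketch of two of the ingredients mentioned above, feature-importance ranking and the posterior odds ratio of a desired output under a candidate intervention, is given below. It uses a plain random forest on synthetic data and is not the thesis' Modified Random Forest or Decision Path Search implementation.

```python
# Hedged sketch, not the thesis' MRF/DPS code: a plain random forest ranks the process
# inputs, and the odds ratio compares the odds of a desired output inside a candidate
# operating range against the odds outside it. Data and the range are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, n_features=8, n_informative=4, random_state=1)
rf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print("most influential process inputs:", ranking[:3])

# Candidate intervention: restrict the top-ranked input to its upper half-range.
top = ranking[0]
inside = X[:, top] > np.median(X[:, top])

def odds(mask):
    p = y[mask].mean()                      # probability of the desired output value
    return p / (1 - p)

print("posterior odds ratio of desired output:", odds(inside) / odds(~inside))
```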

    Markov chain portfolio liquidity optimization model

    The international financial crises of September 2008 and May 2010 showed the importance of liquidity as an attribute to be considered in portfolio decisions. This study proposes an optimization model based on available public data, using Markov chain and Genetic Algorithm concepts, that considers the classic duality of risk versus return while incorporating liquidity costs. The work proposes a multi-criterion non-linear optimization model in which liquidity is modelled by a Markov chain. The non-linear model was tested using Genetic Algorithms on twenty-five Brazilian stocks from 2007 to 2009. The results suggest that the methodology is innovative and useful for building an efficient and realistic financial portfolio, as it considers multiple attributes such as risk, return, and liquidity.
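    The following toy sketch illustrates the kind of genetic-algorithm search over portfolio weights described above, scoring each candidate on return, risk, and a liquidity-cost penalty. The returns, liquidity costs, and weighting coefficients are synthetic placeholders, and the paper's Markov-chain liquidity model is not reproduced.

```python
# Illustrative sketch only: a tiny genetic algorithm over portfolio weights with a
# return/risk/liquidity-cost score. All data and coefficients are made-up placeholders.
import numpy as np

rng = np.random.default_rng(3)
returns = rng.normal(0.0005, 0.02, size=(250, 10))     # synthetic daily returns, 10 stocks
liq_cost = rng.uniform(0.0001, 0.002, size=10)         # synthetic per-asset liquidity cost
mu, cov = returns.mean(axis=0), np.cov(returns.T)

def score(w, lam_risk=3.0, lam_liq=1.0):               # higher is better
    return w @ mu - lam_risk * (w @ cov @ w) - lam_liq * (w @ liq_cost)

def normalise(w):                                      # long-only weights summing to 1
    w = np.clip(w, 0, None)
    return w / w.sum()

pop = np.array([normalise(rng.random(10)) for _ in range(60)])
for _ in range(200):
    scores = np.array([score(w) for w in pop])
    parents = pop[np.argsort(scores)[-30:]]            # keep the fittest half
    children = np.array([normalise(p + rng.normal(0, 0.05, 10))
                         for p in parents[rng.integers(30, size=30)]])
    pop = np.vstack([parents, children])

best = pop[np.argmax([score(w) for w in pop])]
print("optimal weights:", np.round(best, 3))
```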

    Development of a heuristic methodology for designing measurement networks for precise metal accounting

    This thesis investigates the development of a heuristic-based methodology for designing measurement networks, with application to the precise accounting of metal flows in mineral beneficiation operations. The term 'measurement network' is used to refer to the 'system of sampling and weight measurement equipment' from which process measurements are routinely collected. Metal accounting is defined as the estimation of saleable metal in the mine and subsequent process streams over a defined time period. One of the greatest challenges facing metal accounting is the uncertainty caused by the random errors, and sometimes gross errors, present in process measurements. While gross errors can be eliminated through correct measurement practices, random errors are an inherent property of measured data and can only be minimised. Two types of rules for designing measurement networks were considered. The first type, referred to as 'expert heuristics', consists of (i) Code of Practice guidelines from the AMIRA P754 Code, and (ii) prevailing accounting practices from the mineral and metallurgical processing industry, obtained through a questionnaire survey campaign. It was hypothesised that experts in the industry design measurement networks using rules or guidelines that ensure the requisite quality in metal accounting. The second set of rules was derived from the symbolic manipulation of the general steady-state linear data reconciliation solution, as well as from an intensive numerical study, conducted in this work, of the variance reduction of measurements after data reconciliation. These were referred to as 'mathematical heuristics' and are based on the general principle of variance reduction through data reconciliation. It was hypothesised that data reconciliation can be used to target variance reduction for selected measurements by exploiting the characteristics of entire measurement networks as well as of individual measurements.
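    The 'mathematical heuristics' rest on the variance-reducing effect of steady-state linear data reconciliation. The sketch below applies the standard linear reconciliation solution to a hypothetical two-node flowsheet to show how measurement variances shrink after reconciliation; the flowsheet, measurements, and variances are invented, and the thesis' heuristic rules themselves are not reproduced.

```python
# Sketch of the variance-reduction principle, using the standard steady-state linear
# reconciliation solution x_hat = x - V A^T (A V A^T)^-1 A x. The two-node flowsheet,
# measurements, and variances below are hypothetical examples.
import numpy as np

# Mass-balance constraints A @ x = 0 for streams [feed, concentrate, tailings, recycle]:
A = np.array([[1.0, -1.0, -1.0,  0.0],     # node 1: feed = concentrate + tailings
              [0.0,  0.0,  1.0, -1.0]])    # node 2: tailings = recycle
x = np.array([100.0, 42.0, 61.0, 58.0])    # raw (inconsistent) measurements
V = np.diag([4.0, 1.0, 9.0, 2.25])         # measurement variances

K = V @ A.T @ np.linalg.inv(A @ V @ A.T)
x_hat = x - K @ (A @ x)                    # reconciled flows satisfy A @ x_hat = 0
V_hat = V - K @ A @ V                      # reconciled (reduced) covariance

print("reconciled flows:", np.round(x_hat, 2))
print("variance reduction per stream:", np.round(np.diag(V) - np.diag(V_hat), 2))
```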