407 research outputs found

    Study of Coefficient of Friction and Springback Analysis of Brass in Bending at Elevated Temperature Conditions

    In the present work, finite element analysis is carried out to minimize springback in the V-bending process for high-strength brass sheet metal. First, a uniaxial tensile test is conducted to determine the material properties required for finite element analysis. The test parameters considered in the V-bending process are temperature (573 K, 673 K and 773 K), punch speed (1 mm/min, 5 mm/min and 10 mm/min), holding time (30 s, 60 s and 90 s) and sheet orientation with respect to the rolling direction: RD (0°), ND (45°) and TD (90°). The bending-under-tension test is used to determine the coefficient of friction at different temperatures and lubrication conditions, and these values are implemented in finite element simulations of the V-bending process. Taguchi analysis is carried out to determine the springback of the high-strength brass alloy using four control factors (temperature, punch speed, holding time, and orientation). From the analysis of the signal-to-noise (S/N) ratio, temperature (46.93%) is the most significant parameter influencing springback, followed by holding time (26.29%), sheet orientation (24.07%) and punch speed (2.69%). The optimal set for minimum springback of the brass alloy is obtained, and a confirmation test is performed at the optimum conditions (773 K temperature, 1 mm/min punch speed, 90 s holding time, and 90° to the rolling direction of the sheet). With the optimal set of process parameters, springback decreased significantly, by around 68.68%. The springback analysis shows that springback is directly proportional to temperature and holding time and inversely proportional to punch speed, but sheet orientation does not follow any trend.
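
The smaller-the-better Taguchi S/N ratio behind this kind of ranking can be sketched as follows; the springback values below are illustrative, not the study's measurements:

```python
import math

# Smaller-the-better S/N ratio: -10 * log10(mean(y^2)).
def sn_smaller_better(ys):
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

# Illustrative (made-up) springback angles, three replicates
# at each of the three temperature levels.
springback = {
    "573 K": [4.1, 4.3, 4.2],
    "673 K": [3.2, 3.1, 3.3],
    "773 K": [1.5, 1.4, 1.6],
}

sn = {level: sn_smaller_better(ys) for level, ys in springback.items()}
best = max(sn, key=sn.get)  # highest S/N ratio = least springback
print(best)  # -> 773 K, consistent with the reported optimum
```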

    A Data Driven Sequential Learning Framework to Accelerate and Optimize Multi-Objective Manufacturing Decisions

    Manufacturing advanced materials and products with a specific property or combination of properties is often warranted. To achieve this, it is crucial to find the optimum recipe or processing conditions that can generate the ideal combination of these properties. Most of the time, a sufficient number of experiments is needed to generate a Pareto front. However, manufacturing experiments are usually costly, and even conducting a single experiment can be a time-consuming process. It is therefore critical to determine the optimal location for data collection to gain the most comprehensive understanding of the process. Sequential learning is a promising approach to actively learn from the ongoing experiments, iteratively update the underlying optimization routine, and adapt the data collection process on the go. This paper presents a novel data-driven Bayesian optimization framework that utilizes sequential learning to efficiently optimize complex systems with multiple conflicting objectives. Additionally, this paper proposes a novel metric for evaluating multi-objective data-driven optimization approaches. This metric considers both the quality of the Pareto front and the amount of data used to generate it. The proposed framework is particularly beneficial in practical applications where acquiring data can be expensive and resource-intensive. To demonstrate the effectiveness of the proposed algorithm and metric, the algorithm is evaluated on a manufacturing dataset. The results indicate that the proposed algorithm can achieve the actual Pareto front while processing significantly less data. This implies that the proposed data-driven framework can lead to similar manufacturing decisions with reduced costs and time.
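
One plausible form of a metric that weighs Pareto-front quality against the data spent (the abstract does not give the paper's exact formula) is hypervolume per experiment:

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-D Pareto front (both objectives minimized),
    measured against a reference point `ref` that every point improves on."""
    pts = sorted(front)  # ascending f1; f2 descends along a nondominated front
    hv = 0.0
    for i, (f1, f2) in enumerate(pts):
        next_f1 = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        hv += (next_f1 - f1) * (ref[1] - f2)
    return hv

def data_efficiency(front, n_experiments, ref):
    # Quality of the front found, per experiment spent acquiring it.
    return hypervolume_2d(front, ref) / n_experiments

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]  # toy nondominated set
score = data_efficiency(front, n_experiments=3, ref=(4.0, 4.0))  # 6.0 / 3
```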

    Accelerating Manufacturing Decisions using Bayesian Optimization: An Optimization and Prediction Perspective

    Manufacturing is a promising technique for producing complex and custom-made parts with a high degree of precision. It can also provide us with desired materials and products with specified properties. To achieve that, it is crucial to find the optimum process parameters that have a significant impact on the properties and quality of the final product. Unfortunately, optimizing these parameters can be challenging due to the complex and nonlinear nature of the underlying process, which becomes more complicated when there are multiple conflicting objectives. Furthermore, experiments are usually costly and time-consuming, requiring expensive materials, man-hours, and machine hours. Each experiment is therefore valuable, and it is critical to determine the optimal experiment location to gain the most comprehensive understanding of the process. Sequential learning is a promising approach to actively learn from the ongoing experiments, iteratively update the underlying optimization routine, and adapt the data collection process on the go. This thesis presents a multi-objective Bayesian optimization framework to find the optimum processing conditions for a manufacturing setup. It uses an acquisition function to collect data points sequentially and iteratively updates its understanding of the underlying design space using a Gaussian Process-based surrogate model. In manufacturing processes, the focus is often on obtaining a rough understanding of the design space using minimal experimentation, rather than finding the optimal parameters. This falls under the category of approximating the underlying function rather than design optimization. This approach can provide material scientists or manufacturing engineers with a comprehensive view of the entire design space, increasing the likelihood of making discoveries or robust decisions. However, a precise and reliable prediction model is necessary for a good approximation.
To meet this requirement, this thesis proposes an epsilon-greedy sequential prediction framework that is distinct from the optimization framework. The data acquisition strategy is refined to balance exploration and exploitation, with a threshold establishing when to switch between the two. The performance of the proposed optimization and prediction frameworks is evaluated on real-life datasets against the traditional design of experiments. The proposed frameworks generate effective optimization and prediction results using fewer experiments.
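
The epsilon-greedy selection rule described above can be sketched as follows; the candidate set and uncertainty function are placeholders, not the thesis's actual surrogate model:

```python
import random

def epsilon_greedy_pick(candidates, uncertainty, epsilon, rng=random):
    """With probability `epsilon`, explore a uniformly random candidate;
    otherwise exploit the surrogate by sampling where its predicted
    uncertainty is highest. `uncertainty` stands in for something like a
    Gaussian-process posterior standard deviation."""
    if rng.random() < epsilon:
        return rng.choice(candidates)
    return max(candidates, key=uncertainty)

# Toy usage: three candidate process settings with stand-in uncertainties.
candidates = [0, 1, 2]
sigma = {0: 0.1, 1: 0.9, 2: 0.5}
next_point = epsilon_greedy_pick(candidates, sigma.get, epsilon=0.0)  # -> 1
```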

    A statistical study of the effect of manufacturing methods on the mechanical properties of high-density polyethylene/layered double hydroxide composites

    Thesis (PhD (Mechanical Engineering))--University of Pretoria, 2023. Polymer-clay composites have applications in numerous sectors such as packaging, automotive manufacturing and agriculture. A primary focus of polymer composite research is to improve the performance of these composites while also reducing costs. Adding clay to the polymer can enhance the strength and stiffness of the composite. However, adding too much clay can degrade the ductility, hence reducing the usefulness of the material system. In historical exploratory studies of polymer-clay composites, conducted at the University of Pretoria, it was observed that the mechanical properties were not enhanced as expected by the addition of clay. In fact, the variability observed in the mechanical properties was greater than the effect of increasing the clay weight loading. This could possibly be attributed to manufacturing methods. If polymer-clay composite properties are much more sensitive to manufacturing methods than has been recognised, this is concerning, since bulk manufacturing processes will generally be less tightly controlled than laboratory investigations. By gaining more insight and understanding into the effects of manufacturing variation on the final composite properties it is possible to reduce the issues which inevitably arise when scaling a manufacturing process from a laboratory to an industrial setting. This study therefore aims to investigate the effects of different manufacturing methods on the statistical variation of the mechanical properties of polymer-clay composites. The material system studied is high-density polyethylene (HDPE) filled with layered double hydroxide (LDH), a synthetic clay. A multi-site collaborative study between University of Pretoria (UP, South Africa), Tshwane University of Technology (TUT, South Africa) and Leibniz Institute for Polymer Research (IPF, Germany) was designed.
This study is fundamentally interdisciplinary, combining knowledge from polymer materials science, manufacturing, mechanical characterisation and statistics. A statistical design of experiments was developed using the insights gained from a statistical analysis of historical data and from an in-depth systematic literature review on the effects of manufacturing variation on mechanical properties of HDPE-clay composites. The systematic literature review followed Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Statistical design of experiments is a robust method to draw reasonable conclusions about the influence of multiple contributing factors on experimental results. The design considered the influence of manufacturing method (compression and injection moulding) and site (UP, TUT and IPF) in addition to the clay type and clay weight loading material parameters. Due to the limited mould availability at the South African site, a sub-study considering the influence of the injection moulded tensile sample mould type was also included. A statistical analysis of the experimental results indicated that the moulding method, sample mould type and site (i.e., machine variation) had a larger effect on the mechanical properties than the clay type and clay loading. The effect due to moulding method was expected, as it has been documented in the literature. The influence of site and sample mould type are important results from this study. The same processing conditions were used at the different sites for both the compression moulding and injection moulding comparisons. The core manufacturing process should not change even when the equipment used is not exactly the same. However, the experimental results demonstrated significant variability as a result of compression moulding on different equipment.
The influence of the tensile sample mould type was not expected as the mechanical properties are normalised to the sample geometry, and the comparison was between ISO standard moulds. This raises concerns about the validity of applying experimental test results to predict bulk material performance. This thesis has therefore demonstrated the importance of the consideration of manufacturing variability in studies of polymer-clay composites.

    Multivariate Robust Optimization in End Milling of UNS S32205 Duplex Stainless Steel

    Duplex stainless steel belongs to a class of materials with low machinability due to its high work-hardening rate, low thermal conductivity and high ductility. This characteristic represents a significant challenge in the manufacture of components, especially in the end milling process. Optimization is a viable alternative for determining the best process parameters and obtaining higher productivity with sustainability and quality. The presence of noise variables is an additional complicating factor during the machining of this material: they increase variability during the process, but their effect can be mitigated by employing robust modelling methods. This thesis presents the robust multivariate optimization of the end milling of duplex stainless steel UNS S32205. The tests were carried out using a central composite design combining the input variables (cutting speed, feed per tooth, milled width and depth of cut) and the noise variables (tool flank wear, fluid flow and overhang length). The concepts of robust parameter design, response surface methodology, factor analysis, optimization of the multivariate mean square error for robust factors and the normal boundary intersection method were applied. The combination of all these methodologies gave rise to the EQMMFR-NBI method. As a result of the factor analysis, the response variables were grouped into 3 latent variables: the first referring to the roughness parameters Ra, Rq, Rt and Rz (quality indicator); the second to electricity consumption and CO2 emissions (sustainability indicator); and the third to the material removal rate (productivity indicator). Multivariate robust optimization was performed considering the sustainability and productivity indicators, while quality was used as a constraint in the nonlinear optimization problem. By applying the EQMMFR-NBI method, Pareto optimal solutions were obtained and an equispaced frontier was constructed.
Confirmation tests were performed using Taguchi's L9 arrangement. The results showed that the optimal setups found were able to neutralize the influence of the noise variables on the response variables, demonstrating the adequacy of the proposal and the application of the method.
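
The robust mean-square-error idea underlying criteria of this kind (squared bias from a target plus the variance induced by noise) can be sketched with hypothetical setups; these numbers are invented, not the thesis's data:

```python
from statistics import mean, pvariance

def robust_mse(replicates, target):
    # Robust-design mean square error: squared bias from the target
    # plus the variance induced by the noise variables.
    return (mean(replicates) - target) ** 2 + pvariance(replicates)

# Hypothetical setups, each measured under varying noise conditions
# (e.g. tool flank wear, fluid flow, overhang length):
setups = {
    "A": [10.2, 10.1, 9.9],   # near target, low spread
    "B": [10.0, 12.0, 8.0],   # on target on average, high spread
}
best = min(setups, key=lambda s: robust_mse(setups[s], target=10.0))
print(best)  # -> A: the setup least sensitive to noise wins
```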

    Towards a Digital Twin for Individualized Manufacturing of Welded Aerospace Structures

    The aerospace industry is constantly striving towards lower fuel consumption while maintaining a high standard with regard to safety and reliability. These increasing demands require the development of new methods and strategies for efficient and precise manufacturing processes. One way of achieving this goal is fabrication, an approach where components are built by joining multiple small parts into an assembly. This brings many advantages, such as more flexibility in product design; however, it also adds geometrical variation to the manufacturing process, which needs to be managed. Since the parts in the assembly are produced separately from each other before being joined together, issues can occur related to how these parts fit together. If a single part in the assembly deviates slightly from its intended shape, this deviation may propagate in the assembly. It may also stack with deviations in other parts. This can sometimes be difficult to predict and manage using existing manufacturing tools developed within the fields of geometry assurance and robust design. The traditional approach to managing geometrical variation is usually based on making statistical assumptions about the variation that is going to occur in the manufacturing chain. With rising complexity in product design and increasingly tight tolerances, the traditional geometry assurance approach may not be sufficient to guarantee the high geometrical quality required from the final product. Individualized manufacturing has previously been proposed as a way of increasing the precision and reliability of a production process by treating each product individually based on its unique properties. This can be achieved with a digital twin, an emerging technology which works by creating a virtual copy of a physical process. The work presented in this thesis is directed towards realizing a digital twin for fabricated aerospace components.
The first contribution is a framework describing how a digital twin could be implemented in a typical fabrication process within the aerospace industry. Since fabrication makes heavy use of welding to join multiple parts, welding simulation is an important component in this implementation. The digital twin also needs to manage measurement data collected from the parts on the assembly line, and this data should be considered within the welding simulation. The result of this simulation is then used to adapt and adjust the manufacturing process according to the conditions that have been measured and analyzed. An analysis loop is proposed in this thesis for realizing the functionality of the digital twin. A case study is conducted to evaluate the precision of the proposed analysis loop by comparing its predictions to a real welded assembly. The results of the case study show that the predictive precision of the proposed method exceeds that of a traditional nominal prediction. This is an important first step towards the completion and future implementation of a digital twin for welded assemblies.

    Hybrid Taguchi-GRA-CRITIC Optimization Method for Multi-Response Optimization of Micro-EDM Drilling Process Parameters

    In this study, an attempt is made to investigate how operational parameters such as capacitance, voltage, feed rate, and rotational speed affect the material removal rate, tool wear, overcut, and taper angle in micro-EDM drilling of aluminium 6061 using a brass C360 electrode. A novel Taguchi-GRA-CRITIC hybrid optimization methodology is used to obtain the optimal combination of micro-EDM drilling process parameters. The experiment was designed using the Taguchi L18 orthogonal array, and the responses were recorded for each experiment. Grey Relational Analysis (GRA) is applied to aggregate the multiple responses of the planned experiment. The weighting values corresponding to the various responses are determined using CRITIC (criteria importance through intercriteria correlation) analysis. The hybrid methodology determines the best combination of process parameters across the different responses. ANOVA was used to identify the most critical parameters. Finally, confirmation experiments were conducted with the optimal parameters to quantify the improvement in grey relational grade over the initial parameters. The study's findings indicate that, compared to the initial process parameter setting, the grey relational grade (GRG) increased by 92.36% with the optimal parameter setting.
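
A minimal sketch of the GRA-CRITIC pipeline, on made-up responses rather than the study's L18 data: responses are normalized, converted to grey relational coefficients, and aggregated with CRITIC weights (contrast times conflict):

```python
from statistics import pstdev

ZETA = 0.5  # distinguishing coefficient commonly used in GRA

def normalize(col, larger_better):
    lo, hi = min(col), max(col)
    return [(x - lo) / (hi - lo) if larger_better else (hi - x) / (hi - lo)
            for x in col]

def grey_coeff(col):
    # Grey relational coefficient against the ideal (normalized value 1).
    dev = [1 - x for x in col]
    dmin, dmax = min(dev), max(dev)
    return [(dmin + ZETA * dmax) / (d + ZETA * dmax) for d in dev]

def corr(a, b):
    # Pearson correlation, written out to avoid version dependencies.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Made-up responses over four runs (not the study's measurements):
mrr  = [0.8, 1.2, 1.5, 1.0]      # material removal rate, larger is better
wear = [0.05, 0.09, 0.12, 0.06]  # tool wear, smaller is better

cols = [normalize(mrr, True), normalize(wear, False)]
xi = [grey_coeff(c) for c in cols]

# CRITIC weight: contrast (standard deviation) times conflict (1 - r).
info = [pstdev(c) * sum(1 - corr(c, other) for other in cols) for c in cols]
w = [v / sum(info) for v in info]

# Grey relational grade per run; the highest grade marks the best run.
grg = [sum(w[j] * xi[j][i] for j in range(len(cols))) for i in range(len(mrr))]
best_run = grg.index(max(grg))
```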

    Bespoke Nanoparticle Synthesis and Chemical Knowledge Discovery Via Autonomous Experimentations

    The optimization of nanomaterial synthesis over numerous synthetic variables is an extremely laborious task because conventional combinatorial exploration is prohibitively expensive. In this work, we report an autonomous experimentation platform developed for the bespoke design of nanoparticles (NPs) with targeted optical properties. This platform operates in a closed-loop manner between a batch synthesis module for NPs and a UV-Vis spectroscopy module, based on the feedback of the AI optimization modeling. With silver (Ag) NPs as a representative example, we demonstrate that the Bayesian optimizer implemented with the early stopping criterion can efficiently produce Ag NPs precisely possessing the desired absorption spectra within only 200 iterations (when optimizing among five synthetic reagents). Beyond this outstanding material development efficiency, the analysis of synthetic variables further reveals a novel chemistry involving the effects of citrate in Ag NP synthesis. The amount of citrate is key to controlling the competition between spherical and plate-shaped NPs and, as a result, affects the shapes of the absorption spectra as well. Our study highlights the capabilities of the platform both to enhance search efficiency and to provide novel chemical knowledge by analyzing datasets accumulated from the autonomous experimentations.
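
The closed-loop control flow with an early-stopping rule can be sketched as below; random proposals stand in for the Bayesian optimizer, and the spectral-error function is a toy stand-in for the UV-Vis feedback, not the platform's actual chemistry:

```python
import random

def spectral_error(recipe, target_peak=450.0):
    # Toy stand-in for comparing a measured UV-Vis spectrum against the
    # target: distance between the peak a recipe yields and the target peak.
    simulated_peak = 300.0 + 400.0 * recipe["citrate"]  # hypothetical response
    return abs(simulated_peak - target_peak)

def optimize(max_iters=200, tol=2.0, patience=20, seed=0):
    """Random proposals stand in for the Bayesian optimizer; the point here
    is the early-stopping rule: quit once the error is within tolerance or
    no improvement has been seen for `patience` iterations."""
    rng = random.Random(seed)
    best, since_improved = None, 0
    for i in range(max_iters):
        recipe = {"citrate": rng.random()}  # one of five reagents, simplified
        err = spectral_error(recipe)
        if best is None or err < best:
            best, since_improved = err, 0
        else:
            since_improved += 1
        if best <= tol or since_improved >= patience:
            return best, i + 1  # error achieved, iterations used
    return best, max_iters

best_err, iters = optimize()
```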

    Identification of the Most Significant Parameter for Optimizing the Performance of RPL Routing Protocol in IoT Using Taguchi Design of Experiments

    The Internet of Things (IoT) consists of a wide variety of devices with limited power sources. Consequently, energy consumption is considered one of the major challenges in the IoT environment. In this research article, an attempt is made to optimize the existing RPL routing protocol towards a green technology. The work focuses on finding the most significant parameter in RPL using the Taguchi Design of Experiments. It examines the effects of five input factors, Network Size, Mobility Speed, DIO_DOUBLING, DIO_MIN_INTERVAL, and Redundancy Constant, on a single output parameter, power consumption. The findings show that DIO_MIN_INTERVAL is the leading factor with a significant effect on power consumption in RPL. After determining the most significant factor affecting power consumption, measures can be taken to optimize the performance of RPL by applying optimization techniques. The COOJA simulator is used to carry out the simulations required for this research article.
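
The Taguchi response-table logic for ranking factors can be sketched as follows; the power readings are invented, not COOJA results, and only two factors are shown for brevity:

```python
from statistics import mean

# Hypothetical power-consumption readings (mW) from an orthogonal-array-style
# design: each run assigns one level to every factor.
runs = [
    ({"DIO_MIN_INTERVAL": 8,  "Network Size": 10}, 1.40),
    ({"DIO_MIN_INTERVAL": 8,  "Network Size": 30}, 1.45),
    ({"DIO_MIN_INTERVAL": 12, "Network Size": 10}, 1.10),
    ({"DIO_MIN_INTERVAL": 12, "Network Size": 30}, 1.12),
]

def delta(factor):
    # Taguchi response-table "delta": spread between the best and worst
    # level means; a larger delta marks a more significant factor.
    levels = {settings[factor] for settings, _ in runs}
    means = [mean(p for s, p in runs if s[factor] == lvl) for lvl in levels]
    return max(means) - min(means)

factors = ["DIO_MIN_INTERVAL", "Network Size"]
most_significant = max(factors, key=delta)
print(most_significant)  # -> DIO_MIN_INTERVAL dominates in this toy data
```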