9 research outputs found
A probabilistic framework for source localization in anisotropic composite using transfer learning based multi-fidelity physics informed neural network (mfPINN)
The practical application of data-driven frameworks such as deep neural networks to acoustic emission (AE) source localization is impeded by the difficulty of collecting sufficient clean data from the field. The utility of such a framework is governed by the data collected from the site and/or laboratory experiments. Noise, experimental cost, and the time consumed in data collection worsen the scenario further. To address this issue, this work proposes a novel multi-fidelity physics-informed neural network (mfPINN). The proposed framework is best suited to problems like AE source detection, where the governing physics is known only approximately (low-fidelity model) and one has access to only sparse data measured from experiments (high-fidelity data). This work further extends the governing equation of AE source detection to a probabilistic framework to account for the uncertainty in the sensor measurements. The mfPINN fuses data-driven and physics-informed deep learning architectures using transfer learning. Results obtained from a data-driven artificial neural network (ANN) and a physics-informed neural network (PINN) are also presented to illustrate the need for a multi-fidelity framework with transfer learning. In the presence of measurement uncertainties, the proposed method is verified with an experimental procedure involving a carbon-fiber-reinforced polymer (CFRP) composite panel instrumented with a sparse array of piezoelectric transducers. The results show that the proposed probabilistic technique can provide a reliable estimate of the AE source location, with confidence intervals that take measurement uncertainties into account.
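The multi-fidelity idea can be illustrated with a toy stand-in (not the paper's mfPINN: a 1-D problem with a polynomial surrogate in place of a neural network, and invented low/high-fidelity functions): fit a surrogate to the abundant low-fidelity physics, then "transfer" it and learn only a cheap correction from the sparse, noisy high-fidelity measurements.

```python
import numpy as np

# Hypothetical 1-D stand-in for the multi-fidelity transfer idea:
# approximate physics (low fidelity) is abundant, while experimental
# measurements (high fidelity) are sparse and noisy.
rng = np.random.default_rng(0)

def low_fidelity(x):             # approximate governing physics
    return np.sin(2 * np.pi * x)

def high_fidelity(x):            # "true" response, only sparsely measured
    return np.sin(2 * np.pi * x) + 0.3 * x

# Stage 1: fit a surrogate to plentiful low-fidelity evaluations.
x_lo = np.linspace(0, 1, 50)
coef_lo = np.polyfit(x_lo, low_fidelity(x_lo), deg=7)

# Stage 2 ("transfer"): keep the low-fidelity surrogate frozen and learn
# only a cheap additive correction from a few noisy high-fidelity samples.
x_hi = np.array([0.1, 0.4, 0.7, 0.95])
y_hi = high_fidelity(x_hi) + 0.01 * rng.standard_normal(x_hi.size)
residual = y_hi - np.polyval(coef_lo, x_hi)
coef_corr = np.polyfit(x_hi, residual, deg=1)

def mf_predict(x):
    return np.polyval(coef_lo, x) + np.polyval(coef_corr, x)

x_test = np.linspace(0.05, 0.9, 20)
err_lo = np.max(np.abs(low_fidelity(x_test) - high_fidelity(x_test)))
err_mf = np.max(np.abs(mf_predict(x_test) - high_fidelity(x_test)))
assert err_mf < err_lo   # fusing fidelities beats the physics model alone
```

The correction model is deliberately low-capacity (linear), reflecting the premise that only the discrepancy between fidelities, not the full response, must be learned from sparse data.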
A multi-fidelity surrogate-model-assisted evolutionary algorithm for computationally expensive optimization problems
Integrating data-driven surrogate models and simulation models of different accuracies (or fidelities) in a single algorithm to address computationally expensive global optimization problems has recently attracted considerable attention. However, handling discrepancies between simulation models of multiple fidelities in global optimization is a major challenge. To address it, the two major contributions of this paper are: (1) the development of a new multi-fidelity surrogate-model-based optimization framework, which substantially improves the reliability and efficiency of optimization compared to many existing methods, and (2) the development of a data mining method to address the discrepancy between the low- and high-fidelity simulation models. A new efficient global optimization method is then proposed, referred to as multi-fidelity Gaussian process and radial basis function-model-assisted memetic differential evolution. Its advantages are verified on mathematical benchmark problems and a real-world antenna design automation problem.
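The surrogate-assisted flavor of such methods can be sketched in miniature (illustrative only, not the paper's algorithm: the objective, cubic RBF form, and differential-evolution parameters are invented for the example): an RBF model fit to all points evaluated so far prescreens trial vectors, so the expensive objective is called only for candidates the surrogate predicts will improve.

```python
import numpy as np

# Illustrative surrogate-assisted differential evolution: an RBF surrogate
# prescreens DE trial vectors to spare calls to the expensive objective.
rng = np.random.default_rng(1)

def expensive_f(x):                      # stand-in costly objective
    return np.sum((x - 0.3) ** 2)

def rbf_fit(X, y):
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    w = np.linalg.solve(d**3 + 1e-9 * np.eye(len(X)), y)  # cubic RBF
    return X, w

def rbf_eval(model, x):
    X, w = model
    return np.dot(w, np.linalg.norm(X - x, axis=-1) ** 3)

dim, pop_n = 3, 12
pop = rng.random((pop_n, dim))
fit = np.array([expensive_f(p) for p in pop])
archive_X, archive_y = pop.copy(), fit.copy()
init_best = fit.min()

for _ in range(30):
    model = rbf_fit(archive_X, archive_y)
    for i in range(pop_n):
        a, b, c = pop[rng.choice(pop_n, 3, replace=False)]
        trial = np.clip(a + 0.8 * (b - c), 0, 1)
        # prescreening: call the true objective only when the surrogate
        # predicts an improvement over the parent
        if rbf_eval(model, trial) < fit[i]:
            y = expensive_f(trial)
            archive_X = np.vstack([archive_X, trial])
            archive_y = np.append(archive_y, y)
            if y < fit[i]:
                pop[i], fit[i] = trial, y

assert fit.min() <= init_best   # selection never worsens the best design
```

Every true evaluation is archived and recycled into the next surrogate fit, which is the core economy of surrogate-assisted evolutionary search.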
An adaptive multi-fidelity optimization framework based on co-Kriging surrogate models and stochastic sampling with application to coastal aquifer management
Surrogate modelling has been used successfully to alleviate the computational burden that results from high-fidelity numerical models of seawater intrusion in simulation-optimization routines. Nevertheless, little attention has been given to multi-fidelity modelling methods for cases where only a limited number of runs with computationally expensive seawater intrusion models is considered affordable, which imposes a limiting factor on single-fidelity surrogate-based optimization as well. In this work, a new adaptive multi-fidelity optimization framework is proposed, based on co-Kriging surrogate models considering two model fidelities of seawater intrusion. The methodology is tailored to the needs of solving pumping optimization problems with computationally expensive constraint functions and utilizes only small high-fidelity training datasets. Results from both hypothetical and real-world optimization problems demonstrate the efficiency and practicality of the proposed framework, which provides a steep improvement of the objective function and outperforms a comprehensive single-fidelity surrogate-based optimization method. The method can also be used to locate optimal solutions in the region of the global optimum when larger high-fidelity training datasets are available.
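A minimal sketch of the two-fidelity spirit of co-Kriging (illustrative, not the paper's formulation: the scaling factor rho is assumed known, and the seawater-intrusion models are replaced by toy 1-D functions) models the high-fidelity response as rho times the low-fidelity model plus a Gaussian-process discrepancy fit to a handful of expensive runs.

```python
import numpy as np

# Two-fidelity sketch in the spirit of co-Kriging: y_hi ≈ rho*y_lo + delta,
# with delta a Gaussian process trained on only four "expensive" runs.
def k(a, b, ell=0.5):                          # squared-exponential kernel
    return np.exp(-((a[:, None] - b[None, :]) / ell) ** 2)

lo = lambda x: np.sin(8 * x)                   # cheap model
hi = lambda x: 1.1 * np.sin(8 * x) + 0.2 * x   # expensive model

x_hi = np.array([0.0, 0.3, 0.6, 1.0])          # only 4 affordable runs
rho = 1.1                                      # scale factor assumed known
resid = hi(x_hi) - rho * lo(x_hi)

K = k(x_hi, x_hi) + 1e-8 * np.eye(len(x_hi))   # jitter for stability
alpha = np.linalg.solve(K, resid)

def predict(x):
    ks = k(x, x_hi)
    mean = rho * lo(x) + ks @ alpha
    var = 1.0 - np.einsum("ij,ji->i", ks, np.linalg.solve(K, ks.T))
    return mean, var

x = np.linspace(0, 1, 50)
mean, var = predict(x)
assert np.max(np.abs(mean - hi(x))) < 0.05     # accurate despite 4 runs
```

The predictive variance `var` is what an adaptive framework like the paper's would exploit to decide where the next expensive run is most informative.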
Design optimisation of a safety relief valve to meet ASME BPVC section I performance requirements
The understanding of fluid flow behaviour within safety relief valves invariably requires knowledge of strong pressure and velocity gradients with significant levels of turbulence in three-dimensional flow environments. In the case of gas service valves - the focus of this thesis - these flows are supersonic, with multidimensional shock formations resulting in challenging design conditions. This thesis takes advantage of the development and validation of computational fluid dynamics (CFD) techniques in recent years to reliably predict such flows and investigates how these techniques can be used to produce better-performing safety valves. Historically, OEMs have relied on an experiment-based design approach, using feedback from test data to guide the evolution of a valve design. Unfortunately, due to the complexity of these devices, this method can require many iterations. However, it is now possible to combine CFD techniques and optimisation algorithms to search for improved designs with reduced development times. To date, these techniques have had limited exposure within valve design studies.
This thesis investigates the development of a numerically based design procedure that combines validated CFD models with optimisation techniques to seek valve trim geometries that improve opening and closing behaviour. The approach is applied to an ASME Section VIII certified valve and seeks to modify the internal trim to satisfy the improved performance requirements stipulated in Section I of the ASME Boiler and Pressure Vessel Code.
Industrial machine structural components' optimization and redesign
Doctoral thesis in "Líderes para as Indústrias Tecnológicas" (Leaders for Technological Industries).
Laser cutting is a highly flexible process with numerous advantages over competing technologies. These have
ensured the growth of its market, which totalled 4300 million US dollars in 2020. The process is used in many industries, and current trends focus on reduced lead times, increased quality standards, and competitive costs, while ensuring accuracy.
Composite materials (namely fibre-reinforced polymers) present attractive mechanical properties that make them advantageous for several applications, including the subject of this thesis: industrial machine components. The use of these materials leads to machines with higher efficiency, improved dimensional accuracy and surface quality, better energy efficiency, and lower environmental impact.
The main goal of this work is to increase the productivity of a laser cutting machine through the redesign of a critical component (the gantry), which is also key to the overall machine accuracy. Beyond that, this work is intended to lay out a methodology capable of assisting in the redesign of other critical machine components. As the problem deals with two opposing objectives (reducing weight and increasing stiffness) and with many variables, the implementation of an optimization routine is a central aspect of the present work. It is of major importance that the proposed optimization method leads to reliable results, demonstrated here by finite element analysis and by experimental validation using a scale prototype.
The optimization algorithm selected is a metaheuristic inspired by the behaviour of animal swarms. Particle swarm algorithms have been shown to provide good, fast results in similar optimization problems. The optimization focused on the thickness of each laminate and on the orientations present in it.
The optimization routine resulted in the definition of a near-optimal solution for the laminates analysed, allowing a weight reduction of 43% relative to the current solution and an increase of 25% in the maximum allowed acceleration, which translates into higher machine productivity while ensuring the same accuracy.
The comparison between numerical and experimental testing of the prototypes shows good agreement, with occasional divergences, but still validates the finite element model upon which the optimization process is supported.
Portuguese Foundation for Science and Technology - SFRH/BD/51106/2010
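The particle-swarm approach can be sketched in miniature (illustrative only: the ply densities, stiffness coefficients, bounds, and the stiffness target of 50 are invented, whereas the thesis optimizes laminate thicknesses per orientation against finite-element constraints): minimise weight over ply thicknesses, with a penalty when a stand-in stiffness requirement is violated.

```python
import numpy as np

# Toy particle swarm optimization of a 4-ply laminate: minimise weight
# subject to a penalised stiffness target (all property values invented).
rng = np.random.default_rng(2)

rho_ply = np.array([1.6, 1.6, 1.6, 1.6])        # fictitious areal densities
stiff_ply = np.array([40.0, 25.0, 25.0, 40.0])  # fictitious stiffness/mm

def objective(t):                  # weight + penalty if too compliant
    weight = np.sum(rho_ply * t)
    stiffness = np.sum(stiff_ply * t)
    return weight + 100.0 * max(0.0, 50.0 - stiffness)

n, dim, lo_b, hi_b = 20, 4, 0.1, 2.0
pos = rng.uniform(lo_b, hi_b, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(100):
    r1, r2 = rng.random((2, n, dim))
    # standard inertia + cognitive + social velocity update
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo_b, hi_b)
    f = np.array([objective(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

# the best design found should satisfy the stiffness target closely
assert objective(gbest) < 5.0
```

In the thesis the two objectives (weight and stiffness) are in genuine conflict; the penalty formulation above is one simple way to fold the stiffness requirement into a single scalar objective for the swarm.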
Multi-Fidelity Surrogate and Reduced-Order Model-Based Microfluidic Concentration Gradient Generator Design
The microfluidic concentration gradient generator (μCGG) is an important device to generate and maintain concentration gradients (CGs) of biomolecules for understanding and controlling biological processes. However, determining the optimal operating parameters of a μCGG is still a significant challenge, especially for complex CGs in cascaded networks. To tackle this challenge, this study presents multi-fidelity surrogate and reduced-order model-based optimization methodologies for the accurate and computationally efficient design of μCGGs.
The surrogate-based optimization (SBO) method is first proposed for the design optimization of μCGGs based on an efficient physics-based component model (PBCM). Various combinations of regression and correlation functions in Kriging and different adaptive sampling (infill) techniques are examined to establish the design process with refined model structures. In order to combine simulation data from different sources with varying fidelities and computational costs for improved design efficiency and accuracy, a novel multi-fidelity surrogate-based optimization (MFSBO) method is presented. For the first time, a new computation-aware adaptive sampling strategy based on expected improvement reduction (EIR) is proposed to accelerate the convergence of MFSBO. EIR-based infill determines the data source and infill location by hypothetically interrogating the effect of samples and simulation fidelities on the reduction of the expected improvement. It also enables low-fidelity batch infills within a dynamically varying trust region to improve exploration on the fly. Subsequently, a new data sparsification technique based on a reduced design space and data filtering (RDS&DF) is investigated to eliminate redundant data and reduce the modeling time for improved optimization efficiency, hence addressing the long-standing "big data" issues associated with MFSBO. RDS&DF is also combined with the EIR-based infill technique, enabling both parsimony and computational awareness for MFSBO. Finally, a multi-fidelity reduced-order modeling (MFROM) method is developed to enable model reusability and completely replace the CFD simulation when different μCGGs need to be designed.
The key innovation of MFROM is using proper orthogonal decomposition to obtain low-dimensional representations of the high-fidelity CFD data and the low-fidelity PBCM data, with a Kriging model to bridge the fidelity gap between them in the modal subspace, yielding a compact MFROM applicable within the broad trade space. As a result, MFROM is highly compatible with GPU-enabled optimization by utilizing massively parallelized computing threads. The excellent agreement between the designed CGs and the prescribed CGs demonstrates the unprecedented accuracy and efficiency of the proposed multi-fidelity modeling and optimization methodologies.
In conclusion, given their non-intrusive, data-driven nature, both (MF)SBO and MFROM are versatile and can serve as a new paradigm for μCGG design.
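The POD step at the heart of MFROM can be sketched as follows (illustrative: the "snapshots" are fictitious 1-D concentration profiles rather than CFD fields, and the PBCM/Kriging fidelity bridging is not shown): the SVD compresses a parameter sweep of high-fidelity fields into a few modes whose coefficients form the low-dimensional representation.

```python
import numpy as np

# Proper orthogonal decomposition sketch: compress a parameter sweep of
# fictitious concentration profiles into 3 modes via the SVD.
x = np.linspace(0, 1, 200)
params = np.linspace(0.5, 2.0, 25)

# fictitious concentration profiles parameterised by p (stand-in for CFD)
snapshots = np.stack([np.exp(-((x - 0.5) ** 2) * 10 * p) for p in params])

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3                                      # keep 3 POD modes
reduced = U[:, :r] * s[:r]                 # modal coefficients per snapshot
reconstructed = reduced @ Vt[:r]

energy = np.sum(s[:r] ** 2) / np.sum(s ** 2)
assert energy > 0.99                       # 3 modes capture >99% energy
assert np.max(np.abs(reconstructed - snapshots)) < 0.1
```

In the MFROM setting, a surrogate such as Kriging would then map low-fidelity modal coefficients to their high-fidelity counterparts, instead of the direct reconstruction shown here.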
Multi-Fidelity Bayesian Optimization for Efficient Materials Design
Materials design is a process of identifying compositions and structures to achieve
desirable properties. Usually, costly experiments or simulations are required to evaluate
the objective function for a design solution. Therefore, one of the major challenges is how
to reduce the cost associated with sampling and evaluating the objective. Bayesian
optimization is a global optimization method that can increase sampling
efficiency with the guidance of a surrogate of the objective. In this work, a new
acquisition function, called consequential improvement, is proposed for simultaneous
selection of the solution and fidelity level of sampling. With the new acquisition function,
the subsequent iteration is considered for potential selections at low-fidelity levels, because
evaluations at the highest fidelity level are usually required to provide reliable objective
values. To reduce the number of samples required to train the surrogate for molecular
design, a new recursive hierarchical similarity metric is proposed. The new similarity
metric quantifies the differences between molecules at multiple levels of hierarchy
simultaneously based on the connections between multiscale descriptions of the structures.
The new methodologies are demonstrated with simulation-based design of materials and
structures based on fully atomistic and coarse-grained molecular dynamics simulations,
and finite-element analysis. The new similarity metric is demonstrated in the design of
tactile sensors and biodegradable oligomers. The multi-fidelity Bayesian optimization
method is also illustrated with the multiscale design of a piezoelectric transducer by
concurrently optimizing the atomic composition of the aluminum titanium nitride ceramic
and the device's porous microstructure at the micrometer scale.
Ph.D.
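The flavor of fidelity-aware acquisition can be illustrated with standard expected improvement divided by query cost (a common heuristic; the thesis's "consequential improvement" acquisition is different and not reproduced here). The posterior means, standard deviations, and costs below are made up for the example.

```python
import numpy as np
from math import erf

# Cost-aware fidelity selection sketch: pick the fidelity whose expected
# improvement per unit cost is largest (illustrative heuristic only).
def norm_cdf(z):
    return 0.5 * (1 + erf(z / np.sqrt(2)))

def norm_pdf(z):
    return np.exp(-z * z / 2) / np.sqrt(2 * np.pi)

def expected_improvement(mu, sigma, best):
    z = (best - mu) / sigma                 # minimisation convention
    return (best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

best_so_far = 1.0
costs = {"low": 1.0, "high": 10.0}
# hypothetical GP posteriors (mean, std) at one candidate, per fidelity
posteriors = {"low": (0.8, 0.5), "high": (0.8, 0.2)}

scores = {f: expected_improvement(mu, s, best_so_far) / costs[f]
          for f, (mu, s) in posteriors.items()}
choice = max(scores, key=scores.get)
assert choice == "low"   # the cheap query wins on per-cost improvement
```

As the abstract notes, such myopic per-cost rules can over-select cheap fidelities even though reliable objective values ultimately require the highest fidelity, which motivates acquisition functions that look one step ahead.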
Managing computational complexity through using partitioning, approximation and coordination
Problem: Complex systems are composed of many interdependent subsystems, with a level of complexity that exceeds the ability of a single designer. One way to address this problem is to partition the complex design problem into smaller, more manageable design tasks that can be handled by multiple design teams. Partitioning-based design methods are decision-support tools that provide mathematical foundations and computational methods to create such design processes. Managing the interdependency among these subsystems is crucial, and a successful design process must meet the requirements of the whole system, which requires coordinating the solutions of all the partitions.
Approach: Partitioning and coordination are performed to break the system down into subproblems, solve them, and assemble these solutions into the ultimate system design. These two tasks of partitioning and coordination are computationally demanding. Most of the proposed approaches are either computationally very expensive or applicable to only a narrow class of problems. These approaches also use exact methods and ignore uncertainty. To manage the computational complexity and uncertainty, we approximate each subproblem after partitioning the whole system. In engineering design, one way to approximate reality is to use surrogate models (SMs) to replace functions that are computationally expensive to solve. This task is also added to the proposed computational framework. In addition, to automate the whole process, a knowledge-based reusable template for each of these three steps is required. Therefore, in this dissertation, we first partition/decompose the complex system; then we approximate the subproblem of each partition; afterwards, we apply coordination methods to guide the solutions of the partitions toward the ultimate integrated system design.
Validation: The partitioning-approximation-coordination design approach is validated using the validation square approach, which consists of theoretical and empirical validation. Empirical validation of the design architecture is carried out using four industry-driven problems, namely a "hot rod rolling problem", a "dam network design problem", a "crime prediction problem", and a "green supply chain design problem". Specific sub-problems are formulated within these problem domains to address various research questions identified in this dissertation.
Contributions: The contributions of this dissertation are categorized as new knowledge in four research domains:
• Creating an approach to building an ensemble of surrogate models when data is limited: when data is limited, replacing computationally expensive simulations with accurate, low-dimensional, and rapid surrogates is important but non-trivial. Therefore, a cross-validation-based ensemble modeling approach is proposed.
• Using temporal and spatial analysis to manage uncertainties: when the data is time-based (for example, in meteorological data analysis) or geographical (for example, in geographic information system data analysis), time-series analysis and spatial statistics are required instead of feature-based data analysis. Therefore, when the simulations involve time- and space-based data, the surrogate models need to be time- and space-based as well. There is a gap in time- and space-based surrogate modeling, which we address in this dissertation. We created, applied, and evaluated the effectiveness of these models for a dam network planning problem and a crime prediction problem.
• Removing assumptions regarding demand distributions in green supply chain networks: in the existing literature on supply chain network design, there are always assumptions about the distribution of demand. We remove this assumption in the partition-approximate-compose treatment of the green supply chain design problem.
• Creating new knowledge by proposing a coordination approach for a partitioned and approximated network design: a green supply chain under online (pull economy) and in-person (push economy) shopping channels is designed to demonstrate the utility of the proposed approach.
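The cross-validation-based ensemble idea from the first contribution can be sketched as follows (illustrative: polynomial surrogates of different degrees stand in for the dissertation's model library, and leave-one-out error drives the ensemble weights):

```python
import numpy as np

# Cross-validation-weighted surrogate ensemble sketch: score each
# candidate model by leave-one-out error and weight predictions by the
# inverse of that error.
rng = np.random.default_rng(3)
X = np.sort(rng.uniform(0, 1, 30))
y = np.sin(4 * X) + 0.05 * rng.standard_normal(30)

models = [lambda Xt, yt: np.polyfit(Xt, yt, 1),   # linear surrogate
          lambda Xt, yt: np.polyfit(Xt, yt, 3),   # cubic surrogate
          lambda Xt, yt: np.polyfit(Xt, yt, 5)]   # quintic surrogate

loo_err = np.zeros(len(models))
for m, fit in enumerate(models):
    for i in range(len(X)):                       # leave-one-out loop
        mask = np.arange(len(X)) != i
        coef = fit(X[mask], y[mask])
        loo_err[m] += (np.polyval(coef, X[i]) - y[i]) ** 2

w = (1 / loo_err) / np.sum(1 / loo_err)           # CV-based weights
coefs = [fit(X, y) for fit in models]

def ensemble(x):
    return sum(wi * np.polyval(c, x) for wi, c in zip(w, coefs))

# the ensemble should down-weight the poor linear surrogate
assert w[0] < w[1] and w[0] < w[2]
```

Inverse-error weighting is only one of several common schemes; the key point, as in the dissertation, is that the data already in hand suffices to rank and blend the candidate surrogates without extra simulation runs.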