    Reduced-order modelling for high-speed aerial weapon aerodynamics

    In this work a high-fidelity, low-cost surrogate of a computational fluid dynamics (CFD) analysis tool was developed. This computational tool is composed of general, physics-based approximation methods by which three-dimensional high-speed aerodynamic flow-field predictions are made with high efficiency and an accuracy comparable with that of CFD. The tool makes use of reduced-basis methods that are suitable for both linear and non-linear problems, whereby the basis vectors are computed via the proper orthogonal decomposition (POD) of a training dataset or a set of observations. The surrogate model was applied to two flow problems related to high-speed weapon aerodynamics. Comparisons of surrogate model predictions with high-fidelity CFD simulations suggest that POD-based reduced-order modelling together with response surface methods provides a reliable and robust approach for efficient and accurate predictions. In contrast to many modelling efforts reported in the literature, this surrogate model provides access to information about the whole flow field. In an attempt to reduce the up-front cost of generating the training dataset from which the surrogate model is subsequently developed, a variable-fidelity POD-based reduced-order modelling method is proposed in this work for the first time. In this model, the scalar coefficients, which are obtained by projecting the solution vectors onto the basis vectors, are mapped between spaces of low and high fidelity to achieve high-fidelity predictions with complete flow-field information. In general, this technique offers an automatic way of fusing variable-fidelity data through interpolation and extrapolation schemes together with reduced-order modelling (ROM). Furthermore, a study was undertaken to investigate the possibility of modelling the transonic flow over an aerofoil using a kernel POD-based reduced-order modelling method. With this type of ROM it was observed that the weak non-linear features of the transonic flow are accurately modelled using a small number of basis vectors, whereas the strong non-linear features are only modelled accurately by using a large number of basis vectors.
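
    As a rough illustration of the POD-plus-response-surface approach described above (a minimal sketch, not the authors' implementation), the following Python fragment builds a POD basis from a snapshot matrix of training solutions and maps flow/design parameters to the POD coefficients with a radial-basis-function response surface. The file names, array shapes and number of retained modes are assumptions made for the example.

        # Minimal sketch of a POD-based reduced-order model with a response-surface
        # interpolant; snapshot/parameter files and mode count are illustrative.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # snapshots: (n_dof, n_samples) training flow-field solutions
        # params:    (n_samples, n_params) corresponding flow/design parameters
        snapshots = np.load("snapshots.npy")
        params = np.load("params.npy")

        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

        n_modes = 10                            # number of retained POD basis vectors
        basis = U[:, :n_modes]

        # Scalar POD coefficients of each training snapshot
        coeffs = basis.T @ (snapshots - mean)   # (n_modes, n_samples)

        # Response surface mapping parameters -> POD coefficients
        rbf = RBFInterpolator(params, coeffs.T)

        def predict(new_params):
            """Reconstruct an approximate full flow field at unseen parameters."""
            a = rbf(np.atleast_2d(new_params))  # predicted coefficients
            return mean + basis @ a.T           # (n_dof, n_query)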

    Surrogate Modeling of Aerodynamic Simulations for Multiple Operating Conditions Using Machine Learning

    This paper describes a methodology, called the local decomposition method, which aims at building a surrogate model based on steady turbulent aerodynamic fields at multiple operating conditions. The various shapes taken by the aerodynamic fields under the multiple operating conditions pose real challenges, as does the computational cost of the high-fidelity simulations. The developed strategy mitigates these issues by combining traditional surrogate models and machine learning. The central idea is to separate the solutions with a subsonic behavior from the transonic and high-gradient solutions. First, a shock sensor extracts a feature corresponding to the presence of discontinuities, easing the clustering of the simulations by an unsupervised learning algorithm. Second, a supervised learning algorithm divides the parameter space into subdomains associated with different flow regimes. Local reduced-order models are built on each subdomain using proper orthogonal decomposition coupled with a multivariate interpolation tool. Finally, an improved resampling technique taking advantage of the subdomain decomposition minimizes the redundancy of sampling. The methodology is assessed on the turbulent two-dimensional flow around the RAE2822 transonic airfoil. It exhibits a significant improvement in prediction accuracy for the developed strategy compared with the classical method of surrogate modeling.
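
    The pipeline sketched below is a hedged reading of the local decomposition idea: a crude shock sensor provides the clustering feature, an unsupervised algorithm separates the flow regimes, a supervised classifier partitions the parameter space, and a local POD model with multivariate interpolation is fitted per subdomain. The sensor, classifier and interpolator chosen here (a gradient-based indicator, k-means, an SVM and radial basis functions) are stand-ins for illustration and may differ from those used in the paper.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC
        from scipy.interpolate import RBFInterpolator

        def shock_sensor(field, dx=1.0):
            """Crude discontinuity indicator: peak normalized gradient of a 1-D field."""
            grad = np.abs(np.gradient(field, dx))
            return grad.max() / (np.abs(field).max() + 1e-12)

        # snapshots: (n_samples, n_dof) pressure fields; params: (n_samples, n_params)
        snapshots = np.load("snapshots.npy")
        params = np.load("params.npy")

        # 1. Unsupervised clustering on the shock-sensor feature (subsonic vs. transonic)
        features = np.array([[shock_sensor(s)] for s in snapshots])
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)

        # 2. Supervised model dividing the parameter space into subdomains
        classifier = SVC().fit(params, labels)

        # 3. One local POD + interpolation model per subdomain
        local_models = {}
        for k in np.unique(labels):
            X = snapshots[labels == k].T                    # (n_dof, n_k)
            mean = X.mean(axis=1, keepdims=True)
            U, _, _ = np.linalg.svd(X - mean, full_matrices=False)
            basis = U[:, :5]                                # local POD modes
            coeffs = basis.T @ (X - mean)
            interp = RBFInterpolator(params[labels == k], coeffs.T)
            local_models[k] = (mean, basis, interp)

        def predict(p):
            """Pick the flow regime for p, then evaluate its local ROM."""
            k = classifier.predict(np.atleast_2d(p))[0]
            mean, basis, interp = local_models[k]
            return (mean + basis @ interp(np.atleast_2d(p)).T).ravel()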

    Airfoil analysis and design using surrogate models

    A study was performed to compare two different methods for generating surrogate models for the analysis and design of airfoils. Initial research was performed to compare the accuracy of surrogate models for predicting the lift and drag of an airfoil with data collected from high-fidelity simulations using a modern CFD code along with lower-order models using a panel code. This was followed by an evaluation of the Class Shape Transformation (CST) method for parameterizing airfoil geometries as a prelude to the use of surrogate models for airfoil design optimization, and the implementation of software to use CST to modify airfoil shapes as part of the airfoil design process. Optimization routines were coupled with surrogate modeling techniques to study the accuracy and efficiency of the surrogate models in producing optimal airfoil shapes. Finally, the results of the current research are summarized, and suggestions are made for future research.
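
    For reference, the CST representation writes an airfoil surface as the product of a class function and a Bernstein-polynomial shape function. The short sketch below uses the standard class-function exponents N1 = 0.5 and N2 = 1.0 for a round-nose, sharp-trailing-edge airfoil, with purely illustrative coefficient values.

        import numpy as np
        from math import comb

        def cst_surface(x, coeffs, n1=0.5, n2=1.0, dz_te=0.0):
            """Evaluate one airfoil surface y(x) from CST (Bernstein) coefficients."""
            n = len(coeffs) - 1
            class_fn = x**n1 * (1.0 - x)**n2
            shape_fn = sum(a * comb(n, i) * x**i * (1.0 - x)**(n - i)
                           for i, a in enumerate(coeffs))
            return class_fn * shape_fn + x * dz_te   # optional trailing-edge thickness

        x = np.linspace(0.0, 1.0, 101)
        upper = cst_surface(x, [0.17, 0.16, 0.20, 0.17])      # illustrative coefficients
        lower = cst_surface(x, [-0.14, -0.12, -0.10, -0.05])

    In a surrogate-assisted design loop of the kind described above, an optimizer would perturb such coefficients and query the surrogate for the lift and drag of each resulting shape.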

    MMGP: a Mesh Morphing Gaussian Process-based machine learning method for regression of physical problems under non-parameterized geometrical variability

    When learning simulations for modeling physical phenomena in industrial designs, geometrical variabilities are of prime interest. While classical regression techniques prove effective for parameterized geometries, practical scenarios often involve the absence of a shape parametrization during the inference stage, leaving us with only mesh discretizations as available data. Learning simulations from such mesh-based representations poses significant challenges, with recent advances relying heavily on deep graph neural networks to overcome the limitations of conventional machine learning approaches. Despite their promising results, graph neural networks exhibit certain drawbacks, including their dependency on extensive datasets and limitations in providing built-in predictive uncertainties or handling large meshes. In this work, we propose a machine learning method that does not rely on graph neural networks. Complex geometrical shapes and variations with fixed topology are dealt with using well-known mesh morphing onto a common support, combined with classical dimensionality reduction techniques and Gaussian processes. The proposed methodology can easily deal with large meshes without the need for explicit shape parameterization and provides crucial predictive uncertainties, which are essential for informed decision-making. In the considered numerical experiments, the proposed method is competitive with existing graph neural networks in terms of training efficiency and accuracy of the predictions.
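
    The stage after mesh morphing can be sketched roughly as follows: fields transported onto a common support are compressed with a linear dimensionality reduction, and independent Gaussian processes map low-dimensional input descriptors to the reduced coefficients, which is what yields the built-in predictive uncertainties mentioned above. The file names, descriptor choice and number of components are assumptions made for the example, and PCA stands in for whichever reduction the authors use.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        # fields: (n_samples, n_dof) solution fields morphed onto the common mesh
        # inputs: (n_samples, n_features) low-dimensional descriptors per geometry
        fields = np.load("morphed_fields.npy")
        inputs = np.load("descriptors.npy")

        pca = PCA(n_components=8).fit(fields)
        z = pca.transform(fields)                 # reduced coordinates

        kernel = ConstantKernel() * RBF(length_scale=np.ones(inputs.shape[1]))
        gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(inputs, z[:, j])
               for j in range(z.shape[1])]        # one GP per reduced coefficient

        def predict(x_new):
            """Mean reconstructed field and per-coefficient standard deviations."""
            x_new = np.atleast_2d(x_new)
            means, stds = zip(*(gp.predict(x_new, return_std=True) for gp in gps))
            field_mean = pca.inverse_transform(np.column_stack(means))
            return field_mean, np.column_stack(stds)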

    Empowering Materials Processing and Performance from Data and AI

    Third millennium engineering addresses new challenges in materials science and engineering. In particular, advances in materials engineering combined with advances in data acquisition, processing and mining, as well as artificial intelligence, allow for new ways of thinking in designing new materials and products. Additionally, this gives rise to new paradigms in bridging raw material data and processing to the induced properties and performance. This topical issue is a compilation of contributions on novel ideas and concepts addressing several key challenges using data and artificial intelligence, such as:
    - proposing new techniques for data generation and data mining;
    - proposing new techniques for visualizing, classifying, modeling, extracting knowledge, explaining and certifying data and data-driven models;
    - processing data to create data-driven models from scratch when other models are absent, too complex or too poor for making valuable predictions;
    - processing data to enhance existing physics-based models to improve the quality of the prediction capabilities and, at the same time, to enable data to be smarter; and
    - processing data to create data-driven enrichment of existing models when physics-based models exhibit limits, within a hybrid paradigm.