
    Design Optimization Utilizing Dynamic Substructuring and Artificial Intelligence Techniques

    In mechanical and structural systems, resonance may cause large strains and stresses which can lead to the failure of the system. Since it is often not possible to change the frequency content of the external load excitation, the phenomenon can only be avoided by updating the design of the structure. In this paper, a design optimization strategy based on the integration of the Component Mode Synthesis (CMS) method with numerical optimization techniques is presented. For reasons of numerical efficiency, a Finite Element (FE) model is represented by a surrogate model which is a function of the design parameters. The surrogate model is obtained in four steps: First, the reduced FE models of the components are derived using the CMS method. Then the components are assembled to obtain the entire structural response. Afterwards, the dynamic behavior is determined for a number of design parameter settings. Finally, the surrogate model representing the dynamic behavior is obtained. In this research, the surrogate model is a backpropagation neural network, which is then optimized using genetic algorithms and the Sequential Quadratic Programming method. The application of the introduced techniques is demonstrated on a simple test problem.
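
    As a rough illustration of the last two steps (not the paper's implementation), the sketch below trains a small backpropagation neural network on precomputed (design parameters, dynamic response) samples and then minimizes the surrogate with a global search followed by SQP refinement. The training data are synthetic, differential evolution stands in for the genetic algorithm, and all names and bounds are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(0)

# Stand-in for step 3: dynamic responses sampled at a number of design settings
# (in the paper these would come from the reduced, assembled CMS/FE model).
X_train = rng.uniform(0.5, 2.0, size=(200, 3))                      # e.g. three thicknesses
y_train = np.sin(X_train).sum(axis=1) + 0.1 * rng.normal(size=200)  # dummy response

# Step 4: backpropagation neural network as the surrogate model.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X_train, y_train)

def objective(x):
    """Surrogate prediction of the dynamic response for design vector x."""
    return float(surrogate.predict(x.reshape(1, -1))[0])

bounds = [(0.5, 2.0)] * 3

# Global search (differential evolution stands in for the genetic algorithm) ...
coarse = differential_evolution(objective, bounds, seed=0)
# ... followed by gradient-based refinement with SQP (SLSQP in SciPy).
refined = minimize(objective, coarse.x, method="SLSQP", bounds=bounds)
print("optimized design parameters:", refined.x)
```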

    Engineering design applications of surrogate-assisted optimization techniques

    The construction of models aimed at learning the behaviour of a system whose responses to inputs are expensive to measure is a branch of statistical science that has been around for a very long time. Geostatistics has pioneered a drive over the last half century towards a better understanding of the accuracy of such ‘surrogate’ models of the expensive function. Of particular interest to us here are some of the even more recent advances related to exploiting such formulations in an optimization context. While the classic goal of the modelling process has been to achieve a uniform prediction accuracy across the domain, an economical optimization process may aim to bias the distribution of the learning budget towards promising basins of attraction. This can only happen, of course, at the expense of the global exploration of the space, and thus finding the best balance may be viewed as an optimization problem in itself. We examine here a selection of state-of-the-art solutions to this type of balancing exercise through the prism of several simple, illustrative problems, followed by two ‘real world’ applications: the design of a regional airliner wing and the multi-objective search for a low environmental impact house.
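
    The exploration/exploitation balance described above is commonly handled with an infill criterion such as expected improvement on a Kriging (Gaussian-process) surrogate. The sketch below (not code from the paper) runs such a loop on a cheap one-dimensional stand-in for the expensive function; the toy function, budget, and search interval are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_function(x):
    # Cheap stand-in for an expensive simulation or experiment.
    return np.sin(3.0 * x) + 0.5 * x ** 2

X = np.array([[0.2], [1.0], [2.5]])            # initial design points
y = expensive_function(X).ravel()

for _ in range(10):                            # small, fixed learning budget
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = np.linspace(0.0, 3.0, 500).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y.min() - mu) / sigma
    # Expected improvement: large where the predicted mean is promising
    # (exploitation) or where the surrogate is uncertain (exploration).
    ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_function(x_next))

print("best point found:", X[np.argmin(y)].item(), "value:", y.min())
```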

    Analysis-of-marginal-Tail-Means (ATM): a robust method for discrete black-box optimization

    We present a new method, called Analysis-of-marginal-Tail-Means (ATM), for effective robust optimization of discrete black-box problems. ATM has important applications to many real-world engineering problems (e.g., manufacturing optimization, product design, molecular engineering), where the objective to optimize is black-box and expensive, and the design space is inherently discrete. One weakness of existing methods is that they are not robust: these methods perform well under certain assumptions, but yield poor results when such assumptions (which are difficult to verify in black-box problems) are violated. ATM addresses this via the use of marginal tail means for optimization, which combines both rank-based and model-based methods. The trade-off between rank- and model-based optimization is tuned by first identifying important main effects and interactions, then finding a good compromise which best exploits additive structure. By adaptively tuning this trade-off from data, ATM provides improved robust optimization over existing methods, particularly in problems with (i) a large number of factors, (ii) unordered factors, or (iii) experimental noise. We demonstrate the effectiveness of ATM in simulations and in two real-world engineering problems: the first on robust parameter design of a circular piston, and the second on product family design of a thermistor network.
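
    A heavily simplified sketch of the marginal-tail-means idea follows (main effects only; the actual ATM method also models interactions and adaptively tunes the rank/model trade-off from data). The synthetic design table, factor effects, and tail fraction below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_factors, n_levels, n_runs = 4, 3, 200

# Synthetic experiment: each row is one evaluated discrete design,
# each column a factor taking one of three unordered levels.
designs = rng.integers(0, n_levels, size=(n_runs, n_factors))
responses = (designs * np.array([1.0, -0.5, 2.0, 0.3])).sum(axis=1) \
            + rng.normal(0.0, 1.0, n_runs)

gamma = 0.25                      # fraction of the lower tail to average (assumed)
best_levels = []
for j in range(n_factors):
    tail_means = []
    for level in range(n_levels):
        r = np.sort(responses[designs[:, j] == level])
        k = max(1, int(np.ceil(gamma * len(r))))
        tail_means.append(r[:k].mean())      # marginal tail mean of factor j at this level
    best_levels.append(int(np.argmin(tail_means)))   # minimization problem

print("recommended discrete setting (one level per factor):", best_levels)
```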

    Flux Weakening Strategy Optimization for Five-Phase PM Machine with Concentrated Windings

    The paper applies the Efficient Global Optimization (EGO) method to improve the efficiency, in the flux-weakening region, of a given five-phase Permanent Magnet (PM) machine. An optimal control for the four independent currents is thus defined. Moreover, a proposed modification of the machine geometry is added to the optimization process of the global drive. The method makes it possible to take the eddy-current losses in the magnets and iron into account within the control strategy. Indeed, magnet losses are a critical point for protecting the machine from demagnetization in the flux-weakening region, but these losses, which depend strongly on the magnetic state of the machine, must be calculated by the Finite Element Method (FEM) to be accurate. The FEM has the drawback of being time-consuming, which is why direct optimization using FEM is impractical. The EGO method, by using FEM sparingly, makes it possible to find a feasible solution to this hard optimization problem of control and design of a multi-phase drive.
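
    The sketch below illustrates, in schematic form only, how an EGO-style loop can spend a small FEM budget: a Gaussian-process surrogate of the losses over the four independent currents is refined by expected improvement, while the expensive FEM evaluation is mocked by a placeholder function. The bounds, budget, and loss model are assumptions, not the paper's setup.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(3)

def fem_losses(currents):
    # Hypothetical placeholder for an expensive FEM run returning total
    # magnet + iron losses at a given operating point of the four currents.
    return float(np.sum((currents - np.array([0.2, -0.4, 0.1, 0.3])) ** 2))

bounds = np.array([[-1.0, 1.0]] * 4)                      # per-unit current bounds (assumed)
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(8, 4))  # initial FEM evaluations
y = np.array([fem_losses(x) for x in X])

for _ in range(12):                                       # only 12 further FEM calls
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 4))
    mu, sigma = gp.predict(cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y.min() - mu) / sigma
    ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, fem_losses(x_next))

print("best currents after 20 FEM calls:", X[np.argmin(y)], "losses:", y.min())
```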

    Solving the G-problems in less than 500 iterations: Improved efficient constrained optimization by surrogate modeling and adaptive parameter control

    Constrained optimization of high-dimensional numerical problems plays an important role in many scientific and industrial applications. Function evaluations in many industrial applications are severely limited and no analytical information about the objective function and constraint functions is available. For such expensive black-box optimization tasks, the constrained optimization algorithm COBRA was proposed, making use of RBF surrogate modeling for both the objective and the constraint functions. COBRA has shown remarkable success in reliably solving complex benchmark problems in less than 500 function evaluations. Unfortunately, COBRA requires careful adjustment of its parameters in order to do so. In this work we present a new self-adjusting algorithm, SACOBRA, which is based on COBRA and capable of achieving high-quality results with very few function evaluations and no parameter tuning. It is shown with the help of performance profiles on a set of benchmark problems (G-problems, MOPTA08) that SACOBRA consistently outperforms any COBRA algorithm with a fixed parameter setting. We analyze the importance of the several new elements in SACOBRA and find that each of them plays a role in boosting the overall optimization performance. We discuss the reasons behind this and in this way gain a better understanding of high-quality RBF surrogate modeling.
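
    A rough sketch of a single COBRA-style surrogate step is given below (not the SACOBRA implementation, and with a toy objective and constraint standing in for the expensive black boxes): RBF surrogates are fitted to the points evaluated so far, and the surrogate objective is then minimized subject to the surrogate constraint to propose the next infill point.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def objective(x):                      # expensive black-box objective (toy stand-in)
    return (x[..., 0] - 1.0) ** 2 + (x[..., 1] + 0.5) ** 2

def constraint(x):                     # expensive black-box constraint, g(x) <= 0
    return x[..., 0] + x[..., 1] - 0.5

rng = np.random.default_rng(2)
X = rng.uniform(-2.0, 2.0, size=(30, 2))             # points evaluated so far
f_surr = RBFInterpolator(X, objective(X), kernel="cubic")
g_surr = RBFInterpolator(X, constraint(X), kernel="cubic")

# Optimize the cheap surrogate objective subject to the surrogate constraint
# (SLSQP expects inequality constraints of the form fun(x) >= 0).
res = minimize(lambda x: float(f_surr(x[None, :])[0]), x0=np.zeros(2),
               method="SLSQP", bounds=[(-2.0, 2.0), (-2.0, 2.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda x: -float(g_surr(x[None, :])[0])}])
print("next infill point suggested by the surrogates:", res.x)
```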