    Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial-and-error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at those data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model before searching for an optimal design. The database approximation models in each of these cases, however, become computationally expensive as dimensionality increases. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables grows.
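
    As a rough illustration of the surrogate workflow described above, the sketch below fits a scikit-learn Gaussian-process surrogate to a handful of evaluations of a made-up one-dimensional objective (standing in for an expensive CFD run), searches the cheap surrogate with a local optimizer, and re-evaluates the candidate with the "high-fidelity" function. The objective, sample count, and optimizer choice are illustrative assumptions, not the paper's setup.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from scipy.optimize import minimize

    # Hypothetical "high-fidelity" objective standing in for a CFD evaluation.
    def expensive_simulation(x):
        return np.sin(3 * x[0]) + 0.5 * x[0] ** 2

    # 1. Build a small database of expensive evaluations.
    X = np.linspace(-2, 2, 8).reshape(-1, 1)
    y = np.array([expensive_simulation(x) for x in X])

    # 2. Fit a surrogate (a mathematical approximation of the database).
    surrogate = GaussianProcessRegressor().fit(X, y)

    # 3. Search the cheap surrogate instead of the simulation.
    result = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
                      x0=np.array([0.0]), bounds=[(-2, 2)])

    # 4. Re-evaluate the candidate with the high-fidelity model; in a full
    #    design loop this point would be added to the database and the
    #    surrogate refit before the next iteration.
    print(result.x, expensive_simulation(result.x))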

    Automating Vehicles by Deep Reinforcement Learning using Task Separation with Hill Climbing

    Within the context of autonomous driving, a model-based reinforcement learning algorithm is proposed for the design of neural-network-parameterized controllers. Classical model-based control methods, which include sampling- and lattice-based algorithms and model predictive control, suffer from the trade-off between model complexity and the computational burden of solving expensive optimization or search problems online at every short sampling time. To circumvent this trade-off, a two-step procedure is motivated: first, a controller is learned during offline training based on an arbitrarily complicated mathematical system model; then, the trained controller is evaluated online as a fast feedforward mapping. The contribution of this paper is the proposition of a simple gradient-free and model-based algorithm for deep reinforcement learning using task separation with hill climbing (TSHC). In particular, (i) simultaneous training on separate deterministic tasks with the purpose of encoding many motion primitives in a neural network, and (ii) the employment of maximally sparse rewards in combination with virtual velocity constraints (VVCs) in setpoint proximity are advocated.
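
    A minimal gradient-free hill-climbing loop in the spirit of TSHC might look like the sketch below. The rollout function, parameter dimension, and reward are placeholders (the paper trains neural-network controllers on deterministic driving tasks with maximally sparse rewards and virtual velocity constraints), so this only illustrates the accept-if-better search over controller parameters, summed across separate tasks.

    import numpy as np

    rng = np.random.default_rng(0)

    def rollout(params, task):
        """Placeholder: run the controller defined by `params` on one
        deterministic task and return a scalar (sparse) return."""
        # Stand-in reward: prefer parameters close to a task-specific target.
        return -np.sum((params - task) ** 2)

    # Separate deterministic tasks (here: dummy target vectors).
    tasks = [rng.normal(size=16) for _ in range(4)]

    # Gradient-free hill climbing on the flattened controller parameters.
    params = np.zeros(16)
    best_score = sum(rollout(params, t) for t in tasks)
    for _ in range(200):
        candidate = params + 0.1 * rng.normal(size=params.shape)
        score = sum(rollout(candidate, t) for t in tasks)  # sum over all tasks
        if score > best_score:                             # keep only improvements
            params, best_score = candidate, score

    print(best_score)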

    Topology Optimization for Artificial Neural Networks

    This thesis examines the feasibility of implementing two simple optimization methods, namely the Weights Power method (Hagiwara, 1994) and the Tabu Search method (Gupta & Raza, 2020), within an existing framework. The study centers around the generation of artificial neural networks using these methods, assessing their performance in terms of both accuracy and the capacity to reduce components within the Artificial Neural Network's (ANN) topology. The evaluation is conducted on three classification datasets: Air Quality (Shahane, 2021), Diabetes (Soni, 2021), and MNIST (Deng, 2012). The main performance metric used is accuracy, which measures the network's predictive capability for the classification datasets. The evaluation also considers the reduction of network components achieved by the methods as an indicator of topology optimization. Python, along with the Scikit-learn framework, is employed to implement the two methods, while the evaluation is conducted in the cloud-based environment of Kaggle Notebooks. The evaluation results are collected and analyzed using the Pandas data analysis framework, with Microsoft Excel used for further analysis and data inspection. The Weights Power method demonstrates superior performance on the Air Quality and MNIST datasets, whereas the Tabu Search method performs better on the Diabetes dataset. However, the Weights Power method encounters issues with local minima, leading to one of its stop conditions being triggered. On the other hand, the Tabu Search method faces challenges with the MNIST dataset due to its predetermined limits and restricted scope of changes it can apply to the neural network. The Weights Power method seems to have reached its optimal performance level within the current implementation and evaluation criteria, implying limited potential for future research avenues. In contrast, to enhance the dynamic nature of the Tabu Search method, further investigation is recommended. This could entail modifying the method's capability to adapt its stop conditions during runtime and incorporating a mechanism to scale the magnitude of changes made during the optimization process. By enabling the method to prioritize larger changes earlier in the process and gradually introducing smaller changes towards the conclusion, its effectiveness could be enhanced.
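
    As a rough sketch of magnitude-based topology reduction in the spirit of the Weights Power method, the snippet below trains a scikit-learn MLP and zeroes out the hidden units with the smallest summed squared weights. The ranking criterion, pruning fraction, and the digits dataset (a small stand-in for MNIST) are assumptions for illustration, not the thesis's actual implementation, stop conditions, or datasets.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    net.fit(X_tr, y_tr)

    # Rank hidden units by a "weights power"-style score: the summed squared
    # weights entering and leaving each unit (assumed criterion).
    w_in, w_out = net.coefs_[0], net.coefs_[1]   # (n_features, 64), (64, n_classes)
    power = (w_in ** 2).sum(axis=0) + (w_out ** 2).sum(axis=1)

    # Prune (zero out) the weakest 25% of hidden units and re-score the network.
    cut = np.argsort(power)[: len(power) // 4]
    net.coefs_[0][:, cut] = 0.0
    net.coefs_[1][cut, :] = 0.0

    print("accuracy after pruning:", net.score(X_te, y_te))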

    A new approach for transport network design and optimization

    The solution of the transportation network optimization problem currently requires, in most cases, very intricate and powerful computer resources, so that it is not feasible to use classical algorithms. One promising way is to use stochastic search techniques. In this context, Genetic Algorithms (GAs) seem to be - among all the available methodologies - one of the most efficient methods able to approach transport network design and optimization. In particular, this paper focuses on the possibility of modelling and optimizing Public Bus Networks by means of GAs. In the proposed algorithm, the specific class of Cumulative GAs (CGAs) will be used for solving the first level of the network optimization problem, while a classical assignment model, or alternatively a neural network approach, will be adopted for the Fitness Function (FF) evaluation. CGAs will then be utilized to generate new populations of networks, which will be evaluated by means of a suitable software package. For each new solution some indicators will be calculated. A unique FF will finally be evaluated by means of a multicriteria method. Although the research is still at a preliminary stage, the first results from numerical cases show very good perspectives for this new approach. A test on real cases will also follow.
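
    The overall GA loop can be illustrated with the toy sketch below, which encodes a bus network as a binary vector of candidate links and uses a made-up fitness in place of the assignment-model (or neural network) evaluation and the multicriteria FF. It is a plain GA rather than the Cumulative GA variant used in the paper, and all sizes and rates are illustrative.

    import random

    random.seed(0)

    N_LINKS, POP, GENS = 20, 30, 50
    DEMAND = [random.random() for _ in range(N_LINKS)]   # stand-in travel demand per link

    def fitness(network):
        """Placeholder for the assignment-model / multicriteria FF evaluation:
        reward demand covered, penalize the operating cost of active links."""
        covered = sum(d for d, g in zip(DEMAND, network) if g)
        cost = 0.3 * sum(network)
        return covered - cost

    def crossover(a, b):
        cut = random.randrange(1, N_LINKS)   # single-point crossover
        return a[:cut] + b[cut:]

    def mutate(net, rate=0.05):
        return [1 - g if random.random() < rate else g for g in net]

    # Each individual encodes a bus network as a binary vector of candidate links.
    population = [[random.randint(0, 1) for _ in range(N_LINKS)] for _ in range(POP)]

    for _ in range(GENS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP // 2]                 # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children

    print(max(fitness(n) for n in population))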

    Learning to learn with an evolutionary strategy applied to variational quantum algorithms

    Variational Quantum Algorithms (VQAs) employ parameterized quantum circuits, represented by a unitary U, optimized using classical methods to minimize a cost function. While VQAs have found broad applications, certain challenges persist. Notably, a significant computational burden arises during parameter optimization. The prevailing "parameter shift rule" mandates a double evaluation of the cost function for each parameter. In this article, we introduce a novel optimization approach named "Learning to Learn with an Evolutionary Strategy" (LLES). LLES unifies "Learning to Learn" and "Evolutionary Strategy" methods. "Learning to Learn" treats optimization as a learning problem, utilizing recurrent neural networks to iteratively propose VQA parameters. Conversely, "Evolutionary Strategy" employs gradient searches to estimate function gradients. Our optimization method is applied to two distinct tasks: determining the ground state of an Ising Hamiltonian and training a quantum neural network. Results underscore the efficacy of this novel approach. Additionally, we identify a key hyperparameter that significantly influences gradient estimation using the "Evolutionary Strategy" method.
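
    The evolutionary-strategy gradient estimate at the core of such an approach can be sketched as below, with a classical cost function standing in for the quantum-circuit evaluation. The perturbation scale and population size are illustrative values, and the estimator shown is the standard random-perturbation form, not necessarily the authors' exact implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def cost(theta):
        """Classical stand-in for a VQA cost (e.g. an Ising-Hamiltonian expectation)."""
        return np.sum(np.sin(theta) + 0.1 * theta ** 2)

    def es_gradient(theta, sigma=0.1, population=32):
        """Evolutionary-strategy estimate of grad cost(theta): average of random
        perturbations weighted by the cost each perturbed point produces."""
        eps = rng.normal(size=(population, theta.size))
        costs = np.array([cost(theta + sigma * e) for e in eps])
        return (costs[:, None] * eps).mean(axis=0) / sigma

    theta = rng.uniform(-np.pi, np.pi, size=8)
    for _ in range(200):
        theta -= 0.05 * es_gradient(theta)   # plain gradient descent on the ES estimate

    print(cost(theta))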