
    Study of hybrid strategies for multi-objective optimization using gradient based methods and evolutionary algorithms

    Most optimization problems encountered in engineering have conflicting objectives. Genetic algorithms (GAs) and gradient-based methods are widely used to solve such problems. GAs are relatively easy to implement because they require only the values of the objectives and constraints (zeroth-order information). However, GAs lack a standard termination condition and therefore may not converge to the exact solutions. Gradient-based methods, on the other hand, rely on first- and higher-order information of the objectives and constraints. They converge faster to exact solutions of single-objective optimization problems, but are inefficient for multi-objective optimization problems (MOOPs) and unable to solve those with non-convex objective spaces. The work in this dissertation focuses on developing a hybrid strategy for solving MOOPs based on feasible sequential quadratic programming (FSQP) and the nondominated sorting genetic algorithm II (NSGA-II). The hybrid algorithms developed in this dissertation are tested on benchmark problems and evaluated in terms of solution distribution, solution accuracy, and execution time. Based on these performance factors, the best hybrid strategy is identified and found to be generally efficient, with good solution distributions in most of the cases studied. The best hybrid algorithm is applied to the design of a crushing tube and is shown to produce relatively well-distributed solutions with good efficiency compared to solutions obtained by NSGA-II and FSQP alone.
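
    A minimal sketch of the hybrid idea described above, assuming a two-phase scheme: a rough global (evolutionary-style) search for nondominated points followed by gradient-based polishing of each point through a weighted-sum scalarization. SciPy's SLSQP serves as a stand-in for FSQP, the bi-objective test function is hypothetical, and the dissertation's actual hand-off schedule and crushing-tube model are not reproduced here.

        import numpy as np
        from scipy.optimize import minimize

        # Illustrative bi-objective problem (hypothetical, not the crushing-tube design).
        def f1(x): return float(np.sum(x ** 2))
        def f2(x): return float(np.sum((x - 1.0) ** 2))

        def scalarized(x, w):
            # Weighted-sum scalarization handed to the gradient-based refiner.
            return w * f1(x) + (1.0 - w) * f2(x)

        def crude_global_search(n_var=3, pop=200, seed=0):
            # Stand-in for the evolutionary phase: sample and keep nondominated points.
            rng = np.random.default_rng(seed)
            X = rng.uniform(-2.0, 2.0, size=(pop, n_var))
            F = np.array([[f1(x), f2(x)] for x in X])
            keep = [i for i in range(pop)
                    if not any((F[j] <= F[i]).all() and (F[j] < F[i]).any()
                               for j in range(pop) if j != i)]
            return X[keep]

        def refine(front, weights=np.linspace(0.0, 1.0, 11)):
            # Gradient-based phase: polish each rough Pareto point with SLSQP
            # (used here in place of FSQP, which SciPy does not provide).
            polished = []
            for x0, w in zip(front, np.resize(weights, len(front))):
                res = minimize(scalarized, x0, args=(w,), method="SLSQP",
                               bounds=[(-2.0, 2.0)] * len(x0))
                polished.append(res.x)
            return np.array(polished)

        if __name__ == "__main__":
            rough = crude_global_search()
            pareto = refine(rough)
            print("refined points:", len(pareto))

    In a full implementation the first phase would be NSGA-II itself, with the polished points fed back into its population rather than collected at the end.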

    Hybrid nature-inspired computation methods for optimization

    The focus of this work is the exploration of hybrid Nature-Inspired Computation (NIC) methods with applications in optimization. In the dissertation, we first study various NIC algorithms, including the Clonal Selection Algorithm (CSA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Simulated Annealing (SA), Harmony Search (HS), Differential Evolution (DE), and Mind Evolution Computing (MEC), and propose several new fusions of these techniques, such as CSA-DE, HS-DE, and CSA-SA. Their working principles, structures, and algorithms are analyzed and discussed in detail. We next investigate the performance of our hybrid NIC methods on nonlinear, multi-modal, and dynamic optimization problems, e.g., nonlinear function optimization, optimal LC passive power filter design, and the optimization of neural networks and fuzzy classification systems. Hybridizing these NIC methods can overcome the shortcomings of the standalone algorithms while retaining their advantages. Computer simulations demonstrate that the proposed hybrid NIC approaches yield superior optimization performance over the individual NIC methods as well as conventional methodologies with regard to search efficiency, convergence speed, and the quantity and quality of the optimal solutions achieved.
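
    As an illustration of how such fusions can be structured, the sketch below combines clonal selection with a differential-evolution mutation operator (a CSA-DE-style loop). The operators, parameters, and test function are illustrative assumptions, not the dissertation's exact algorithms.

        import numpy as np

        def sphere(x):
            # Illustrative fitness (lower is better); stands in for the benchmark functions.
            return float(np.sum(x ** 2))

        def csa_de(f, dim=10, pop=30, n_select=10, clones_per=5,
                   F=0.5, gens=200, bounds=(-5.0, 5.0), seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            P = rng.uniform(lo, hi, size=(pop, dim))
            for _ in range(gens):
                fit = np.array([f(x) for x in P])
                elite = P[np.argsort(fit)[:n_select]]        # clonal selection
                offspring = []
                for x in elite:
                    for _ in range(clones_per):
                        # DE/rand/1-style mutation replaces CSA hypermutation.
                        a, b = P[rng.choice(pop, 2, replace=False)]
                        offspring.append(np.clip(x + F * (a - b), lo, hi))
                cand = np.vstack([P, np.array(offspring)])
                cand_fit = np.array([f(x) for x in cand])
                P = cand[np.argsort(cand_fit)[:pop]]          # survivor selection
            return P[0], f(P[0])

        if __name__ == "__main__":
            best, best_f = csa_de(sphere)
            print("best fitness:", best_f)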

    Radial Basis Function Neural Networks: A Review

    Radial Basis Function neural networks (RBFNNs) are an attractive alternative to other neural network models. One reason is that they form a unifying link between function approximation, regularization, noisy interpolation, classification, and density estimation. Training RBF neural networks is also faster than training multi-layer perceptron networks. RBFNN learning is usually split into an unsupervised part, in which the centers and widths of the Gaussian basis functions are set, and a linear supervised part for weight computation. This paper reviews various learning methods for determining the centers, widths, and synaptic weights of RBFNNs. In addition, we point to some applications of RBFNNs in various fields. Finally, we name software that can be used for implementing RBFNNs.
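
    A minimal sketch of the two-stage training scheme described above: k-means places the Gaussian centers (unsupervised), a common heuristic sets a shared width, and the output weights are obtained by linear least squares (supervised). The width rule and the scikit-learn usage are illustrative assumptions, not prescriptions from the review.

        import numpy as np
        from sklearn.cluster import KMeans

        class RBFN:
            def __init__(self, n_centers=20):
                self.n_centers = n_centers

            def _design(self, X):
                # Gaussian basis functions evaluated at every input.
                d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
                return np.exp(-d2 / (2.0 * self.width ** 2))

            def fit(self, X, y):
                # Unsupervised stage: place centers with k-means.
                km = KMeans(n_clusters=self.n_centers, n_init=10, random_state=0).fit(X)
                self.centers = km.cluster_centers_
                # One common width heuristic: max center distance / sqrt(2M).
                dmax = np.max(np.linalg.norm(
                    self.centers[:, None] - self.centers[None, :], axis=-1))
                self.width = dmax / np.sqrt(2.0 * self.n_centers)
                # Supervised stage: solve the linear output weights by least squares.
                Phi = self._design(X)
                self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
                return self

            def predict(self, X):
                return self._design(X) @ self.w

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            X = rng.uniform(-3, 3, size=(200, 1))
            y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
            model = RBFN(n_centers=15).fit(X, y)
            print("train MSE:", float(np.mean((model.predict(X) - y) ** 2)))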

    Lower-energy conformers search of TPP-1 polypeptide via hybrid particle swarm optimization and genetic algorithm

    Low-energy conformation search on biological macromolecules remains a challenge in biochemical experiments and theoretical studies. Efficient approaches to minimizing the energy of peptide structures are critically needed by researchers studying peptide-protein interactions or designing peptide drugs. In this study, we aim to develop a heuristic-based algorithm to efficiently minimize the energy of TPP-1, a promising PD-L1-inhibiting polypeptide, and to build its low-energy conformer pool to advance its subsequent structure optimization and molecular docking studies. We find that, using backbone dihedral angles as the decision variables, both particle swarm optimization (PSO) and the genetic algorithm (GA) can outperform other existing heuristic approaches in optimizing the structure of Met-enkephalin, a benchmark pentapeptide for evaluating the efficiency of conformation optimizers. Using the established algorithm pipeline, hybridizing PSO and GA minimized the TPP-1 structure efficiently, and a low-energy pool was built at an acceptable computational cost (a couple of days on a single laptop). Remarkably, the efficiency of the hybrid PSO-GA is hundreds of times higher than that of conventional molecular dynamics simulations running under the force field. The stereochemical quality of the minimized structures was validated using Ramachandran plots. In summary, the hybrid PSO-GA minimizes the TPP-1 structure efficiently and yields a low-energy conformer pool within a reasonably short time. Our approach can be extended to broader biochemical research to speed up peptide conformation determination and hence facilitate peptide-involved drug development.
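
    A minimal sketch of a PSO loop with a periodic GA-style crossover and mutation step acting on a vector of backbone dihedral angles, in the spirit of the hybrid described above. The energy function here is a placeholder; a real conformer search would call a force-field evaluator, and the paper's exact operators and schedules are not reproduced.

        import numpy as np

        def energy(phi_psi):
            # Placeholder for the peptide potential energy as a function of the
            # backbone dihedral angles; a real run would call a force-field evaluator.
            return float(np.sum(1.0 - np.cos(phi_psi)) + 0.1 * np.sum(np.sin(3 * phi_psi)))

        def pso_ga(f, n_angles=20, swarm=40, iters=300, w=0.7, c1=1.5, c2=1.5,
                   ga_every=10, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(-np.pi, np.pi, (swarm, n_angles))   # dihedral angles
            V = np.zeros_like(X)
            pbest, pbest_f = X.copy(), np.array([f(x) for x in X])
            g = pbest[np.argmin(pbest_f)].copy()
            for t in range(iters):
                r1, r2 = rng.random(X.shape), rng.random(X.shape)
                V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
                X = np.clip(X + V, -np.pi, np.pi)
                if t % ga_every == 0:
                    # GA step: crossover the two best particles, mutate, replace the worst.
                    order = np.argsort([f(x) for x in X])
                    mask = rng.random(n_angles) < 0.5
                    child = np.where(mask, X[order[0]], X[order[1]])
                    X[order[-1]] = child + rng.normal(0, 0.1, n_angles)
                fx = np.array([f(x) for x in X])
                better = fx < pbest_f
                pbest[better], pbest_f[better] = X[better], fx[better]
                g = pbest[np.argmin(pbest_f)].copy()
            return g, float(pbest_f.min())

        if __name__ == "__main__":
            best_angles, best_e = pso_ga(energy)
            print("lowest placeholder energy:", best_e)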

    Computational aspects of electromagnetic NDE phenomena

    The development of theoretical models that characterize various physical phenomena is crucial in all engineering disciplines. In nondestructive evaluation (NDE), theoretical models are used extensively to understand the physics of material/energy interaction, optimize experimental design parameters, and solve the inverse problem of defect characterization. This dissertation describes methods for developing computational models for electromagnetic NDE applications and addresses two broad classes of issues: (i) problem formulation and (ii) implementation on computers. The two main approaches for formulating physical problems in NDE are differential and integral equations. The relative advantages and disadvantages of the two approaches are illustrated, and models are developed to simulate electromagnetic scattering from objects or inhomogeneities embedded in multilayered media, which is applicable to many NDE problems. The low-storage advantage of the differential approach and the finite-solution-domain feature of the integral approach are exploited, and hybrid and other efficient modeling techniques are presented to minimize the storage requirements of both approaches. The second issue is the computational resources required for implementation. Implementations on conventional sequential computers, parallel-architecture machines, and more recent neural computers are presented. An example requiring massively parallel computing is given, in which a probability of detection (POD) model is built for eddy current testing of 3D objects. The POD model, based on the finite element formulation, is implemented on an NCUBE parallel computer, and the linear system of equations is solved using direct and iterative methods. The implementations are designed to minimize interprocessor communication and optimize the number of simultaneous model runs to obtain the maximum effective speedup. Another form of parallel computing is the more recent neurocomputer, which depends on building an artificial neural network composed of numerous simple neurons. Two classes of neural networks are used to solve electromagnetic NDE inverse problems. The first solves the governing integral equation directly using a Hopfield-type neural network, for which the design of the network structure and parameters is presented. The second develops a mathematical transform between the input and output spaces of the problem using a multilayer perceptron network, which is augmented to build an incremental learning network motivated by the dynamic and modular features of the human brain.
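
    As a toy illustration of the first neural approach mentioned above, the sketch below recasts a discretized linear inverse problem A x = b as minimization of the quadratic energy E(x) = 0.5 * ||A x - b||^2 and descends it with simple Euler steps, the energy-descent view commonly associated with Hopfield-type networks. The kernel, measurements, and step size are assumptions; the dissertation's actual network design and NDE kernels are not reproduced.

        import numpy as np

        def hopfield_style_solve(A, b, steps=5000, dt=1e-3):
            # Minimize E(x) = 0.5 * ||A x - b||^2 by Euler-integrating the
            # gradient flow dx/dt = -A^T (A x - b), i.e. the energy-descent
            # dynamics of a Hopfield-type network for the discretized equation.
            x = np.zeros(A.shape[1])
            for _ in range(steps):
                x -= dt * A.T @ (A @ x - b)
            return x

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            A = rng.normal(size=(40, 20))    # stand-in for the discretized kernel
            x_true = rng.normal(size=20)     # stand-in for the unknown defect profile
            b = A @ x_true                   # simulated measurements
            x_est = hopfield_style_solve(A, b)
            print("relative error:",
                  float(np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true)))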

    Automatic differentiation in machine learning: a survey

    Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
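
    A didactic sketch of forward-mode AD using dual numbers, the mechanism the survey contrasts with symbolic and numerical differentiation. This toy supports only the operations it overloads and is not any particular AD library's API.

        import math

        class Dual:
            # Dual number a + b*eps with eps^2 = 0: the derivative rides along with the value.
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der

            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.der + o.der)
            __radd__ = __add__

            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
            __rmul__ = __mul__

        def sin(x):
            # Elementary function with its derivative rule attached.
            return Dual(math.sin(x.val), math.cos(x.val) * x.der) if isinstance(x, Dual) else math.sin(x)

        def f(x):
            # Any program built from the overloaded operations is differentiated exactly.
            return x * x + 3.0 * sin(x)

        if __name__ == "__main__":
            x = Dual(2.0, 1.0)                       # seed dx/dx = 1
            y = f(x)
            print("f(2) =", y.val, " f'(2) =", y.der)  # f'(x) = 2x + 3 cos(x)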

    Application of an improved Levenberg-Marquardt back propagation neural network to gear fault level identification

    Chip faults are among the most frequently occurring damage modes in gears, and identifying different chip levels, especially incipient chips, is a challenging task in gear fault analysis. To classify the different gear chip levels automatically and accurately, this paper develops a fast and accurate method. Features specially designed for gear damage detection are extracted with a revised time synchronous averaging algorithm to characterize the gear conditions. A back propagation neural network trained with a modified Levenberg-Marquardt algorithm, in which the damping factor and a dynamic momentum term are optimized simultaneously, is then used to identify the gear chip levels. The Fisher iris dataset, a public machine learning benchmark, is used to validate the performance of the improved neural network. Gear chip fault experiments were conducted and vibration signals were captured under different loads and motor speeds, and the proposed method was then applied to identify the gear chip levels. The classification results on both the public data and the gear chip fault experiment data demonstrate that the improved neural network achieves better accuracy and speed than neural networks trained with El-Alfy's and Norgaard's Levenberg-Marquardt algorithms. The proposed method is therefore well suited to on-line condition monitoring and fault diagnosis.
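
    A minimal sketch of the core Levenberg-Marquardt update with a simple adaptive damping factor, w <- w - (J^T J + lambda*I)^(-1) J^T r, applied to a toy curve-fitting problem standing in for network training. The paper's dynamic-momentum modification and gear-feature pipeline are not reproduced.

        import numpy as np

        def lm_fit(residual_fn, jacobian_fn, w0, max_iter=100, lam=1e-2):
            # Levenberg-Marquardt: solve (J^T J + lam*I) step = J^T r each iteration,
            # adapting the damping factor lam after every trial step.
            w = np.asarray(w0, dtype=float)
            for _ in range(max_iter):
                r = residual_fn(w)
                J = jacobian_fn(w)
                step = np.linalg.solve(J.T @ J + lam * np.eye(len(w)), J.T @ r)
                w_new = w - step
                if np.sum(residual_fn(w_new) ** 2) < np.sum(r ** 2):
                    w, lam = w_new, lam * 0.5   # accept step, trust the model more
                else:
                    lam *= 2.0                  # reject step, damp harder
            return w

        if __name__ == "__main__":
            # Toy curve fit y = a*exp(b*t), standing in for the network training problem.
            rng = np.random.default_rng(0)
            t = np.linspace(0, 1, 50)
            y = 2.0 * np.exp(1.5 * t) + 0.01 * rng.normal(size=50)
            res = lambda w: w[0] * np.exp(w[1] * t) - y
            jac = lambda w: np.column_stack([np.exp(w[1] * t), w[0] * t * np.exp(w[1] * t)])
            print("fitted (a, b):", lm_fit(res, jac, [1.0, 1.0]))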