
    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, optimization of feedforward neural networks (FNNs) has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is viewed from various perspectives: optimization of weights, network architecture, activation nodes, learning parameters, learning environment, and so on. Researchers adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, owing to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article summarizes a broad spectrum of FNN optimization methodologies, covering both conventional and metaheuristic approaches. It also connects the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolutionary NNs, complex-valued NNs, deep learning, extreme learning machines, and quantum NNs. Additionally, it poses interesting research challenges for future work to cope with the present information-processing era.
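
    As a concrete illustration of the metaheuristic viewpoint, the sketch below trains a tiny FNN by mutation and selection alone, with no gradients. It is a minimal (1+λ) evolution strategy on a toy regression task; the network size, mutation scale, and data are illustrative choices, not taken from the review.

```python
# Minimal sketch (not from the review): optimizing the weights of a tiny
# feedforward network with a (1+lambda) evolution strategy instead of
# backpropagation. Network size, mutation scale, and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 1))
y = np.sin(3 * X)                                       # toy regression target

N_HIDDEN = 8
N_PARAMS = 1 * N_HIDDEN + N_HIDDEN + N_HIDDEN * 1 + 1   # weights + biases

def forward(theta, x):
    """Unpack a flat parameter vector into a 1-8-1 tanh network."""
    w1 = theta[:N_HIDDEN].reshape(1, N_HIDDEN)
    b1 = theta[N_HIDDEN:2 * N_HIDDEN]
    w2 = theta[2 * N_HIDDEN:3 * N_HIDDEN].reshape(N_HIDDEN, 1)
    b2 = theta[-1]
    return np.tanh(x @ w1 + b1) @ w2 + b2

def fitness(theta):
    return np.mean((forward(theta, X) - y) ** 2)        # MSE to minimize

theta = rng.normal(0, 0.5, N_PARAMS)
best = fitness(theta)
for gen in range(2000):
    # Generate lambda = 16 mutated offspring; keep the best if it improves.
    offspring = theta + rng.normal(0, 0.1, (16, N_PARAMS))
    scores = np.array([fitness(c) for c in offspring])
    if scores.min() < best:
        best, theta = scores.min(), offspring[scores.argmin()]
print(f"final MSE: {best:.4f}")
```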

    A representer theorem for deep kernel learning

    In this paper, we provide a finite-sample and an infinite-sample representer theorem for the concatenation of (linear combinations of) kernel functions of reproducing kernel Hilbert spaces. These results serve as a mathematical foundation for the analysis of machine learning algorithms based on compositions of functions. As a direct consequence, in the finite-sample case the corresponding infinite-dimensional minimization problems can be recast as (nonlinear) finite-dimensional minimization problems, which can be tackled with nonlinear optimization algorithms. Moreover, we show how concatenated machine learning problems can be reformulated as neural networks and how our representer theorem applies to a broad class of state-of-the-art deep learning methods.
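
    For orientation, the classical single-layer representer theorem is sketched below in illustrative notation (the loss, regularizer, and RKHS symbols are not the paper's own); the paper's contribution extends this reduction to concatenations of such functions across layers.

```latex
% Sketch of the classical single-layer representer theorem (illustrative
% notation; the paper extends the reduction to concatenated layers).
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
Let $H_k$ be the RKHS of a kernel $k$, $\ell$ a loss, and $\Theta$ a
nondecreasing regularizer. Any minimizer
\[
  f^{\ast} \in \operatorname*{arg\,min}_{f \in H_k}
    \sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr)
    + \Theta\bigl(\lVert f \rVert_{H_k}\bigr)
\]
admits the finite representation
\[
  f^{\ast}(\cdot) = \sum_{i=1}^{n} \alpha_i\, k(\cdot, x_i),
  \qquad \alpha_1, \dots, \alpha_n \in \mathbb{R}.
\]
In the deep setting, an analogous layer-wise representation reduces the
infinite-dimensional problem over compositions of RKHS functions to a
finite-dimensional (nonlinear) optimization over the coefficients.
\end{document}
```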

    Physics-informed neural network methods based on Miura transformations and discovery of new localized wave solutions

    We put forth two physics-informed neural network (PINN) schemes based on Miura transformations; the novelty of this research is the incorporation of Miura-transformation constraints into neural networks to solve nonlinear PDEs. The most noteworthy advantage of our method is that we can exploit the initial-boundary data of a solution of one nonlinear equation to obtain, with the aid of PINNs, a data-driven solution of another evolution equation; throughout this process, the Miura transformation serves as an indispensable bridge between the solutions of the two equations. The method is tailored to the inverse process of the Miura transformation and can overcome the difficulty of recovering solutions that are available only in implicit form. The two schemes are applied in extensive computational experiments that effectively reproduce the dynamic behaviors of solutions of the well-known KdV and mKdV equations. Significantly, new data-driven solutions are successfully simulated; one of the most important results is the discovery of a new localized wave solution, a kink-bell-type solution of the defocusing mKdV equation, which to our knowledge has not been previously observed or reported. This opens the possibility of finding new types of numerical solutions by fully exploiting the many-to-one relationship between solutions before and after a Miura transformation. Performance comparisons in different cases, as well as an analysis of the advantages and disadvantages of the two schemes, are also presented. In light of the performance of the two schemes and the no-free-lunch theorem, each has its own merits, and the more appropriate one should be chosen according to the specific case.
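
    The sketch below illustrates, under assumed sign conventions, how a Miura-transformation constraint can enter a PINN loss: a network v(x, t) is penalized both for violating the mKdV equation and for its Miura image v² + v_x deviating from observed KdV data. Loss weights, network size, and the data tensors are placeholders; this is a schematic reading of the approach, not the authors' code.

```python
# Schematic sketch (not the authors' code) of a Miura-constrained PINN loss.
# Conventions assumed here: mKdV  v_t - 6 v^2 v_x + v_xxx = 0, and the Miura
# map u = v^2 + v_x, under which a v solving mKdV yields a u solving KdV.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def grad(f, z):
    """Derivative of f with respect to z, keeping the graph for higher orders."""
    return torch.autograd.grad(f, z, torch.ones_like(f), create_graph=True)[0]

def miura_pinn_loss(x, t, u_data):
    # x, t: (N, 1) leaf tensors; u_data: observed KdV solution at (x, t).
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    v = net(torch.cat([x, t], dim=1))
    v_x = grad(v, x)
    v_xxx = grad(grad(v_x, x), x)
    v_t = grad(v, t)
    mkdv_residual = v_t - 6 * v**2 * v_x + v_xxx    # v should solve mKdV
    miura_residual = (v**2 + v_x) - u_data          # Miura image should match u
    return (mkdv_residual**2).mean() + (miura_residual**2).mean()
```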

    Adaptive machine learning-based surrogate modeling to accelerate PDE-constrained optimization in enhanced oil recovery

    In this contribution, we develop an efficient surrogate modeling framework for simulation-based optimization of enhanced oil recovery, with a particular focus on polymer flooding. The computational approach is based on an adaptive training procedure for a neural network that directly approximates an input-output map of the underlying PDE-constrained optimization problem. The training process focuses on constructing a surrogate model that is accurate along the optimization path of an outer iterative optimization loop. True evaluations of the objective function are used to certify the final results. Numerical experiments evaluate the accuracy and efficiency of the approach on a heterogeneous five-spot benchmark problem.
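
    A minimal sketch of such an adaptive loop is given below, with a cheap quadratic standing in for the expensive PDE-constrained simulator; every name (true_objective, the surrogate architecture, the loop lengths) is an illustrative assumption. The surrogate is retrained on all points gathered along the optimization path, and each surrogate-optimal candidate is certified by a true objective evaluation.

```python
# Minimal sketch (assumptions throughout) of surrogate-based optimization
# with adaptive retraining along the optimization path.
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

def true_objective(x):
    """Placeholder for the expensive PDE-constrained simulation."""
    return float(np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(1)
dim = 2
X = rng.uniform(-1.0, 1.0, (10, dim))           # initial design
y = np.array([true_objective(x) for x in X])
x_best, f_best = X[np.argmin(y)], y.min()

for it in range(15):
    # Retrain the surrogate on every point gathered so far.
    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                             random_state=0).fit(X, y)
    # Optimize the cheap surrogate starting from the incumbent.
    res = minimize(lambda z: float(surrogate.predict(z.reshape(1, -1))[0]),
                   x_best, method="Nelder-Mead")
    # Certify the candidate with a true evaluation; enrich the training set.
    f_new = true_objective(res.x)
    X, y = np.vstack([X, res.x]), np.append(y, f_new)
    if f_new < f_best:
        x_best, f_best = res.x, f_new

print("best point:", x_best, "objective:", f_best)
```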

    RAR-PINN algorithm for the data-driven vector-soliton solutions and parameter discovery of coupled nonlinear equations

    This work provides an effective deep learning framework for predicting the vector-soliton solutions of coupled nonlinear equations and their interactions. The method proposed here is a physics-informed neural network (PINN) combined with residual-based adaptive refinement (the RAR-PINN algorithm). Unlike the traditional PINN algorithm, which samples collocation points randomly, the RAR-PINN algorithm uses an adaptive point-fetching approach to improve training efficiency for solutions with steep gradients. A series of comparison experiments between the RAR-PINN and traditional PINN algorithms is carried out on a coupled generalized nonlinear Schrödinger (CGNLS) equation as an example. The results indicate that the RAR-PINN algorithm has a faster convergence rate and better approximation ability, especially in modeling the shape-changing vector-soliton interactions in coupled systems. Finally, the RAR-PINN method is applied to data-driven discovery of the CGNLS equation, showing that the dispersion and nonlinearity coefficients can be approximated accurately.
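
    The sketch below shows one plausible form of the residual-based adaptive refinement step: periodically rank a large pool of candidate collocation points by the magnitude of the PDE residual and add the worst offenders to the training set, so that training concentrates near steep gradients. The pool size, k, and the pde_residual interface are assumptions, not the paper's implementation.

```python
# Sketch (illustrative values throughout) of a residual-based adaptive
# refinement (RAR) step. `pde_residual` is assumed to return the pointwise
# residual and to handle autograd internally, as in the PINN sketch above.
import torch

def rar_step(collocation_pts, pde_residual, n_pool=10_000, k=50,
             lo=-1.0, hi=1.0):
    dim = collocation_pts.shape[1]
    # Sample a large pool of random candidate points in the domain.
    pool = lo + (hi - lo) * torch.rand(n_pool, dim)
    # Rank candidates by residual magnitude; detach since only values matter.
    r = pde_residual(pool).abs().detach().reshape(-1)
    top_k = torch.topk(r, k).indices
    # Append the k worst points: training then concentrates where the
    # network violates the PDE the most, e.g. near steep soliton fronts.
    return torch.cat([collocation_pts, pool[top_k]], dim=0)
```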