7,611 research outputs found

    Symplectic Model Reduction of Hamiltonian Systems

    In this paper, a symplectic model reduction technique, proper symplectic decomposition (PSD) with symplectic Galerkin projection, is proposed to reduce the computational cost of simulating large-scale Hamiltonian systems while preserving the symplectic structure. As an analogue of the classical proper orthogonal decomposition (POD)-Galerkin approach, PSD is designed to build a symplectic subspace that fits empirical data, while the symplectic Galerkin projection constructs a reduced Hamiltonian system on that subspace. For practical use, we introduce three algorithms for PSD, based on the cotangent lift, complex singular value decomposition, and nonlinear programming. The proposed technique is proven to preserve system energy and stability. Moreover, PSD can be combined with the discrete empirical interpolation method to reduce the computational cost of nonlinear Hamiltonian systems. Owing to these properties, the proposed technique is better suited than the classical POD-Galerkin approach for model reduction of Hamiltonian systems, especially when long-time integration is required. The stability, accuracy, and efficiency of the proposed technique are illustrated through numerical simulations of linear and nonlinear wave equations.
    Comment: 25 pages, 13 figures
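    The cotangent-lift algorithm mentioned above admits a compact sketch: stack the position and momentum snapshots side by side, take an ordinary SVD, and reuse the leading modes in a block-diagonal basis A = blkdiag(Phi, Phi), whose symplectic inverse is A^+ = J_{2k}^T A^T J_{2n}. The Python sketch below follows this standard construction under assumed snapshot shapes and function names; it is illustrative, not the authors' implementation.

```python
import numpy as np

def cotangent_lift_basis(Q, P, k):
    """Cotangent-lift PSD: build a 2n x 2k symplectic basis from snapshot data.

    Q, P : (n, m) arrays of position and momentum snapshots (assumed layout).
    k    : number of retained modes; the reduced phase space has dimension 2k.
    """
    n = Q.shape[0]
    # Stack position and momentum snapshots and take an ordinary SVD.
    Phi, _, _ = np.linalg.svd(np.hstack([Q, P]), full_matrices=False)
    Phi = Phi[:, :k]                      # leading k modes, orthonormal columns
    # Block-diagonal symplectic basis A = blkdiag(Phi, Phi).
    A = np.zeros((2 * n, 2 * k))
    A[:n, :k] = Phi
    A[n:, k:] = Phi
    return A

def symplectic_inverse(A):
    """Symplectic inverse A^+ = J_{2k}^T A^T J_{2n} of a symplectic basis A."""
    n, k = A.shape[0] // 2, A.shape[1] // 2
    J = lambda m: np.block([[np.zeros((m, m)), np.eye(m)],
                            [-np.eye(m), np.zeros((m, m))]])
    return J(k).T @ A.T @ J(n)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 100, 20, 5                  # toy sizes for illustration
    Q, P = rng.standard_normal((n, m)), rng.standard_normal((n, m))
    A = cotangent_lift_basis(Q, P, k)
    Aplus = symplectic_inverse(A)
    print(np.allclose(Aplus @ A, np.eye(2 * k)))   # True: A^+ A = I_{2k}
```

    For a full state z = [q; p], the reduced coordinates are z_r = A^+ z and the state is recovered as z ≈ A z_r; the check at the end confirms A^+ A = I, which is what makes the Galerkin projection symplectic.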

    Evolutionary framework with reinforcement learning-based mutation adaptation

    Although several multi-operator and multi-method approaches for solving optimization problems have been proposed, their performance is not consistent across a wide range of problems. Moreover, ensuring an appropriate selection of algorithms and operators can be inefficient, since their designs are largely arrived at through trial and error. This research proposes an improved optimization framework that draws on the benefits of multiple algorithms, namely a multi-operator differential evolution algorithm and a covariance matrix adaptation evolution strategy. In the former, reinforcement learning is used to automatically choose the best differential evolution operator. To assess the performance of the proposed framework, three benchmark sets of bound-constrained optimization problems (73 problems) with 10, 30, and 50 dimensions are solved. The proposed algorithm is further tested on 100-dimensional problems taken from the CEC2014 and CEC2017 benchmark sets, as well as on a real-world application data set. Several experiments are designed to analyze the effects of different components of the proposed framework, and the best variant is compared with a number of state-of-the-art algorithms. The experimental results show that the proposed algorithm outperforms all the others considered.
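    The abstract does not spell out how the reinforcement learning component assigns credit to operators, but the general idea can be illustrated with a toy epsilon-greedy bandit that picks one of three classic DE mutation operators each generation and is rewarded by the fraction of offspring that improve on their parents. The operator set, reward signal, and learning rate below are illustrative assumptions, not the paper's design.

```python
import numpy as np

def sphere(x):
    """Toy objective for demonstration."""
    return float(np.sum(x ** 2))

def de_rl_operator_selection(f, dim=10, pop=40, gens=200,
                             eps=0.1, alpha=0.3, F=0.5, CR=0.9, seed=0):
    """Toy multi-operator DE: an epsilon-greedy value estimate selects the
    mutation operator per generation (hypothetical sketch, not the paper's
    exact credit-assignment scheme)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (pop, dim))
    fit = np.array([f(x) for x in X])
    ops = ["rand/1", "best/1", "current-to-best/1"]
    q = np.zeros(len(ops))                       # value estimate per operator

    for _ in range(gens):
        # Epsilon-greedy choice of mutation operator for this generation.
        op = rng.integers(len(ops)) if rng.random() < eps else int(np.argmax(q))
        best = X[np.argmin(fit)]
        improved = 0
        for i in range(pop):
            r1, r2, r3 = rng.choice(pop, 3, replace=False)
            if ops[op] == "rand/1":
                v = X[r1] + F * (X[r2] - X[r3])
            elif ops[op] == "best/1":
                v = best + F * (X[r1] - X[r2])
            else:  # current-to-best/1
                v = X[i] + F * (best - X[i]) + F * (X[r1] - X[r2])
            # Binomial crossover with one guaranteed mutated component.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            u = np.where(cross, v, X[i])
            fu = f(u)
            if fu < fit[i]:                      # greedy parent-offspring selection
                X[i], fit[i] = u, fu
                improved += 1
        # Reward = fraction of improving offspring; bandit-style value update.
        q[op] += alpha * (improved / pop - q[op])
    return X[np.argmin(fit)], float(fit.min())

if __name__ == "__main__":
    x_best, f_best = de_rl_operator_selection(sphere)
    print(f_best)
```

    In the full framework described in the abstract, the multi-operator DE population would additionally be coordinated with CMA-ES; that coupling is omitted here for brevity.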