Neural Networks: Training and Application to Nonlinear System Identification and Control
This dissertation investigates training neural networks for system identification and classification. The research makes two main contributions, as follows.

1. Reducing the number of hidden-layer nodes using a feedforward component. This research reduces the number of hidden-layer nodes and the training time of neural networks, making them better suited to online identification and control applications, by adding a parallel feedforward component. Implementing the feedforward component with a wavelet neural network or an echo state network provides good models for nonlinear systems. The wavelet neural network with a feedforward component, together with a model predictive controller, can reliably identify and control a seismically isolated structure during an earthquake; the network model provides the predictions for model predictive control. Simulations of a 5-story seismically isolated structure with conventional lead-rubber bearings showed significant reductions in all response amplitudes for both near-field (pulse) and far-field ground motions, including reduced deformations along with corresponding reductions in acceleration response. The controller effectively regulated the apparent stiffness at the isolation level. The approach is also applied to the online identification and control of an unmanned vehicle. Lyapunov theory is used to prove the stability of the wavelet neural network and the model predictive controller.

2. Training neural networks using trajectory-based optimization approaches. Training a neural network is a nonlinear, non-convex optimization problem for determining the network's weights. Traditional training algorithms can be inefficient and can get trapped in local minima. Two global optimization approaches are adapted to train neural networks and avoid the local-minima problem. Lyapunov theory is used to prove the stability of the proposed methodology and its convergence in the presence of measurement errors.
The first approach transforms the constraint-satisfaction problem into an unconstrained optimization. The constraints define a quotient gradient system (QGS) whose stable equilibrium points are local minima of the unconstrained optimization. The QGS is integrated to find local minima, and the local minimum with the best generalization performance is chosen as the optimal solution. The second approach uses the QGS together with a projected gradient system (PGS). The PGS is a nonlinear dynamical system, defined from the optimization problem, that searches the components of the feasible region for solutions. Lyapunov theory is used to prove the stability of the PGS and QGS and their convergence in the presence of measurement noise.
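The QGS idea can be illustrated on a toy problem. The sketch below is not the dissertation's exact formulation: it poses training a one-neuron model as driving residuals h(w) to zero, integrates the gradient system w' = -J(w)^T h(w) (the negative gradient of 0.5*||h(w)||^2) with forward Euler from several starting points, and keeps the equilibrium with the smallest residual. The model, data, and step sizes are all hypothetical.

```python
import numpy as np

def h(w):
    # hypothetical residuals of a tiny model y = tanh(w0*x + w1) on 3 samples
    x = np.array([-1.0, 0.0, 1.0])
    y = np.array([-0.5, 0.1, 0.7])
    return np.tanh(w[0] * x + w[1]) - y

def qgs_step(w, dt=0.05):
    # forward-Euler step of the quotient gradient system w' = -J^T h,
    # i.e. gradient descent on the constraint violation 0.5*||h(w)||^2
    eps = 1e-6
    J = np.column_stack([(h(w + eps * e) - h(w)) / eps for e in np.eye(2)])
    return w - dt * J.T @ h(w)

def integrate_qgs(w0, steps=2000):
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        w = qgs_step(w)
    return w

# integrate from several initial points; each trajectory settles at a stable
# equilibrium (a local minimum of the violation); keep the best one
starts = [(-2.0, 2.0), (0.5, -0.5), (3.0, 0.0)]
candidates = [integrate_qgs(w0) for w0 in starts]
best = min(candidates, key=lambda w: np.linalg.norm(h(w)))
```

In the dissertation's setting the candidate equilibria would instead be ranked by generalization performance on held-out data rather than by residual norm.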
Neural Metamaterial Networks for Nonlinear Material Design
Nonlinear metamaterials with tailored mechanical properties have applications
in engineering, medicine, robotics, and beyond. While modeling their
macromechanical behavior is challenging in itself, finding structure parameters
that lead to ideal approximation of high-level performance goals is a
challenging task. In this work, we propose Neural Metamaterial Networks (NMN)
-- smooth neural representations that encode the nonlinear mechanics of entire
metamaterial families. Given structure parameters as input, NMN return
continuously differentiable strain energy density functions, thus guaranteeing
conservative forces by construction. Though trained on simulation data, NMN do
not inherit the discontinuities resulting from topological changes in finite
element meshes. They instead provide a smooth map from parameter to performance
space that is fully differentiable and thus well-suited for gradient-based
optimization. On this basis, we formulate inverse material design as a
nonlinear programming problem that leverages neural networks for both objective
functions and constraints. We use this approach to automatically design
materials with desired strain-stress curves, prescribed directional stiffness
and Poisson ratio profiles. We furthermore conduct ablation studies on network
nonlinearities and show the advantages of our approach compared to native-scale
optimization.
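The key structural property, forces that are conservative by construction, follows from predicting a scalar energy and differentiating it. The sketch below uses a hypothetical one-hidden-layer network in place of an NMN: it maps (structure parameters, strain) to a scalar energy density, computes the stress as the analytic gradient with respect to strain, and checks that this gradient matches a finite-difference derivative of the energy. All sizes and inputs are made up for illustration.

```python
import numpy as np

# hypothetical energy network: inputs = 2 structure params + 2 strain comps
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
w2 = rng.normal(size=8)

def psi(p, e):
    # scalar strain energy density psi(p, e)
    z = np.tanh(W1 @ np.concatenate([p, e]) + b1)
    return float(w2 @ z)

def stress(p, e):
    # analytic gradient d psi / d e (chain rule through tanh); because the
    # stress derives from a scalar potential, the force field is conservative
    x = np.concatenate([p, e])
    z = np.tanh(W1 @ x + b1)
    g = W1.T @ (w2 * (1.0 - z**2))   # gradient w.r.t. all 4 inputs
    return g[2:]                     # keep only the strain components

p, e = np.array([0.3, -0.1]), np.array([0.05, 0.02])
# consistency check: analytic stress vs. central finite differences of psi
eps = 1e-6
fd = np.array([(psi(p, e + eps*d) - psi(p, e - eps*d)) / (2*eps)
               for d in np.eye(2)])
```

In the paper's setting the same differentiability is what makes the parameter-to-performance map usable inside gradient-based inverse design.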
On the smoothness of nonlinear system identification
We shed new light on the \textit{smoothness} of optimization problems arising
in prediction error parameter estimation of linear and nonlinear systems. We
show that for regions of the parameter space where the model is not
contractive, the Lipschitz constant and $\beta$-smoothness of the objective
function might blow up exponentially with the simulation length, making it hard
to numerically find minima within those regions or, even, to escape from them.
In addition to providing theoretical understanding of this problem, this paper
also proposes the use of multiple shooting as a viable solution. The proposed
method minimizes the error between a prediction model and the observed values.
Rather than running the prediction model over the entire dataset, multiple
shooting splits the data into smaller subsets and runs the prediction model
over each subset, making the simulation length a design parameter and making it
possible to solve problems that would be infeasible using a standard approach.
The equivalence to the original problem is obtained by including constraints in
the optimization. The new method is illustrated by estimating the parameters of
nonlinear systems with chaotic or unstable behavior, as well as neural
networks. We also present a comparative analysis of the proposed method with
multi-step-ahead prediction error minimization.
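The multiple-shooting construction can be sketched on a toy problem (a logistic map, not the paper's benchmarks). The objective below simulates each short segment from its own free initial state and penalizes mismatch between a segment's final simulated state and the next segment's initial state; these penalty terms stand in for the equality constraints that restore equivalence to the original problem. The model, noise level, and penalty weight are illustrative assumptions.

```python
import numpy as np

# toy system: x[k+1] = a * x[k] * (1 - x[k]), chaotic at a = 3.8
rng = np.random.default_rng(1)
a_true = 3.8
x = np.empty(200); x[0] = 0.4
for k in range(199):
    x[k+1] = a_true * x[k] * (1 - x[k])
y = x + 0.01 * rng.normal(size=200)        # noisy observations

def step(a, xk):
    return a * xk * (1 - xk)

def multiple_shooting_cost(theta, seg_len=5, rho=100.0):
    # theta = [a, s_0, s_1, ...]: the parameter plus one free initial
    # state per segment; simulation never runs longer than seg_len steps
    a, s = theta[0], theta[1:]
    cost = 0.0
    for i, s0 in enumerate(s):
        xk = s0
        for k in range(i*seg_len, (i+1)*seg_len):
            cost += (xk - y[k])**2         # prediction-error term
            xk = step(a, xk)
        if i + 1 < len(s):
            cost += rho * (xk - s[i+1])**2 # continuity constraint as penalty
    return cost

# crude 1-D scan over a (segment states fixed to measurements) just to show
# the short-horizon objective is well behaved around the true parameter
s_init = y[::5][:40]
costs = {a: multiple_shooting_cost(np.concatenate([[a], s_init]))
         for a in np.linspace(3.5, 4.0, 51)}
a_hat = min(costs, key=costs.get)
```

With single shooting on the same chaotic system, a 200-step simulation would amplify initial-state and parameter perturbations exponentially, which is exactly the blow-up of the Lipschitz constant discussed above; capping the horizon at `seg_len` keeps the objective tractable.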
Neural networks for small scale ORC optimization
This study concerns a thermodynamic and technical optimization of a small scale Organic Rankine Cycle system for waste heat
recovery applications. An Artificial Neural Network (ANN) has been used to develop a thermodynamic model to be used for
the maximization of the production of power while keeping the size of the heat exchangers and hence the cost of the plant at its
minimum. R1234yf has been selected as the working fluid. The results show that the use of ANN is promising in solving complex
nonlinear optimization problems that arise in the field of thermodynamics.
Design optimization applied in structural dynamics
This paper introduces design optimization strategies, especially for structures that have dynamic constraints. Design optimization involves first modeling and then optimizing the problem. Using the Finite Element (FE) model of a structure directly in an optimization process requires a long computation time. Therefore, Backpropagation Neural Networks (NNs) are introduced as a so-called surrogate model for the FE model. The optimization techniques covered in this study are the Genetic Algorithm (GA) and Sequential Quadratic Programming (SQP) methods. As an application of the introduced techniques, a multi-segment cantilever beam problem under constraints on its first and second natural frequencies is selected and solved using four different approaches.
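One of the combinations described, a GA searching over an NN surrogate under a frequency constraint, can be sketched minimally. Everything numeric below is hypothetical: the "surrogate" is a hand-written linear stand-in for a trained network, the 50 Hz requirement and segment-height bounds are invented, and the constraint is handled by a quadratic penalty.

```python
import random

random.seed(3)

def surrogate_freq(h1, h2):
    # stand-in for a trained NN surrogate of the FE model: predicted first
    # natural frequency (Hz) vs. the two segment heights (hypothetical)
    return 30.0 * h1 + 45.0 * h2

def mass(h1, h2):
    return h1 + h2                 # objective: proportional to beam mass

def fitness(ind):
    h1, h2 = ind
    # quadratic penalty when the predicted frequency drops below 50 Hz
    penalty = max(0.0, 50.0 - surrogate_freq(h1, h2)) ** 2
    return mass(h1, h2) + 10.0 * penalty     # minimize

pop = [[random.uniform(0.1, 2.0), random.uniform(0.1, 2.0)] for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness)
    parents = pop[:10]                       # truncation selection
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        w = random.random()                  # blend crossover
        child = [w*a[i] + (1-w)*b[i] for i in range(2)]
        # Gaussian mutation, clipped to the design bounds
        child = [min(2.0, max(0.1, g + random.gauss(0, 0.05))) for g in child]
        children.append(child)
    pop = parents + children

best = min(pop, key=fitness)
```

The payoff of the surrogate is that each `fitness` call here is microseconds, whereas evaluating the FE model inside the same GA loop would dominate the total computation time.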