
    Optimal Dynamic Neurocontrol of a Gate-Controlled Series Capacitor in a Multi-Machine Power System

    This paper presents the design of an optimal dynamic neurocontroller for a new type of FACTS device, the gate-controlled series capacitor (GCSC), incorporated in a multi-machine power system. The optimal neurocontroller is developed using the heuristic dynamic programming (HDP) approach. In addition, a dynamic identifier/model and controller structure based on recurrent neural networks trained with backpropagation through time (BPTT) is employed. Simulation results show the effectiveness of the dynamic neurocontroller, and its performance is compared with that of a conventional PI controller under small and large disturbances.
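    At the heart of HDP is a critic trained toward the Bellman target J(x_t) = U(x_t) + γ·J(x_{t+1}). The sketch below shows that update for a small two-layer critic on a stand-in linear plant; the network sizes, discount factor, utility, and plant are illustrative assumptions, not the paper's GCSC model.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(8, 2))   # hidden layer of a small critic MLP
W2 = rng.normal(scale=0.1, size=(1, 8))   # output layer
gamma, lr = 0.95, 1e-2                    # discount factor, learning rate

def critic(x):
    """Estimate the cost-to-go J(x)."""
    h = np.tanh(W1 @ x)
    return (W2 @ h).item(), h

def hdp_update(x, utility, x_next):
    """One HDP step: move J(x) toward the Bellman target U(x) + gamma * J(x')."""
    global W1, W2
    J, h = critic(x)
    J_next, _ = critic(x_next)
    err = J - (utility + gamma * J_next)  # TD error; target treated as constant
    W2 -= lr * err * h[None, :]
    W1 -= lr * err * (W2[0] * (1 - h**2))[:, None] @ x[None, :]

# Toy rollout on a stand-in stable linear plant, in place of the GCSC system.
x = np.array([1.0, 0.0])
for _ in range(200):
    x_next = np.array([[0.99, 0.05], [-0.05, 0.99]]) @ x
    hdp_update(x, utility=x @ x, x_next=x_next)
    x = x_next
```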

    Hypersonic Vehicle Trajectory Optimization and Control

    Two classes of neural networks have been developed for the study of hypersonic vehicle trajectory optimization and control. The first is an 'adaptive critic'. The main distinguishing features of this approach are that: (1) it needs no external training; (2) it allows variability of initial conditions; and (3) it can serve as a feedback control. It is used to solve a 'free final time' two-point boundary value problem that maximizes the mass at rocket burn-out while satisfying pre-specified burn-out conditions on velocity, flight-path angle, and altitude. The second neural network is a recurrent network. An interesting feature of this formulation is that when its inputs are the coefficients of the dynamics and control matrices, the network outputs are the Kalman sequences (for a quadratic cost function); the same network is also used to identify the coefficients of the dynamics and control matrices, so it can be used to control a system whose parameters are uncertain. Numerical results are presented that illustrate the potential of these methods.
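    For reference, the 'Kalman sequence' of a linear-quadratic problem is the time-varying feedback gain sequence produced by the backward Riccati recursion; this is the quantity the recurrent network is said to output. A sketch with toy double-integrator matrices (the vehicle's actual matrices are not reproduced here):

```python
import numpy as np

def kalman_gain_sequence(A, B, Q, R, N):
    """Backward Riccati recursion for x' = Ax + Bu with cost sum x'Qx + u'Ru.

    Returns the gains K_0 .. K_{N-1}, where u_t = -K_t @ x_t.
    """
    P = Q.copy()                 # terminal cost-to-go matrix
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]           # reorder so gains[0] applies at t = 0

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator dynamics
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])
K_seq = kalman_gain_sequence(A, B, Q, R, N=50)
print(K_seq[0])                  # gain applied at the first time step
```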

    Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics

    Three recent AI breakthroughs in the arts and sciences serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in traditional integration methods. Here, methods (1) and (2) rely on the Long Short-Term Memory (LSTM) architecture, while method (3) relies on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with an attention mechanism to address discontinuous solutions. Both LSTM and attention architectures, together with modern and generalized classic optimizers that include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth to support more advanced work such as shallow networks with infinite width. The review does not address only experts: readers are assumed to be familiar with computational mechanics, but not with DL, whose concepts and applications are built up from the basics, with the aim of bringing first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example. (275 pages, 158 figures. Appeared online 2023-03-01 in CMES - Computer Modeling in Engineering & Sciences.)
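    As a concrete instance of the pure-ML branch, a minimal PINN fits a network so that the PDE residual and boundary conditions vanish at sampled points. The sketch below is our own PyTorch toy, not code from the review: it solves u''(x) = -π² sin(πx) with u(0) = u(1) = 0, whose exact solution is u(x) = sin(πx).

```python
import torch

# Small fully connected network representing the PDE solution u(x).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(64, 1, requires_grad=True)      # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + torch.pi**2 * torch.sin(torch.pi * x)  # u'' + pi^2 sin(pi x)
    bc = net(torch.tensor([[0.0], [1.0]]))         # boundary values should be 0
    loss = (residual**2).mean() + (bc**2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[0.5]])))                  # should approach sin(pi/2) = 1
```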

    Solving Multiple Objective Programming Problems Using Feed-Forward Artificial Neural Networks: The Interactive FFANN Procedure

    In this paper, we propose a new interactive procedure for solving multiple objective programming problems. Based on feed-forward artificial neural networks (FFANNs), the method is called the Interactive FFANN Procedure. In the procedure, the decision maker articulates preference information over representative samples from the nondominated set, either by assigning preference "values" to the sample solutions or by making pairwise comparisons in a fashion similar to that of the Analytic Hierarchy Process. With this information, an FFANN is trained to represent the decision maker's preference structure. Then, using the FFANN, an optimization problem is solved to search for improved solutions. An example is given to illustrate the Interactive FFANN Procedure, and the procedure is compared computationally with the Tchebycheff Method (Steuer and Choo 1983). The computational results show that the Interactive FFANN Procedure produces good solutions and is robust with regard to the neural network architecture.
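    The procedure's two moving parts, fitting an FFANN to the decision maker's scores and then optimizing over the trained network, can be caricatured in a few lines. Everything below (the synthetic preference scores, the network size, and the crude sampling search used in place of the paper's optimization step) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.random((8, 2))                 # criterion vectors of 8 sample solutions
v = 1 - np.abs(Z - 0.7).sum(axis=1)    # stand-in for the DM's preference "values"

W1, b1 = rng.normal(scale=0.5, size=(6, 2)), np.zeros(6)
W2, b2 = rng.normal(scale=0.5, size=(1, 6)), np.zeros(1)

def ffann(z):
    """Tiny feed-forward net mapping criterion vectors to preference scores."""
    h = np.tanh(z @ W1.T + b1)
    return h @ W2.T + b2, h

for _ in range(3000):                  # fit the net to the DM's scores
    pred, h = ffann(Z)
    err = pred[:, 0] - v
    gW2, gb2 = err @ h / len(Z), err.mean()
    gh = np.outer(err, W2[0]) * (1 - h**2)
    gW1, gb1 = gh.T @ Z / len(Z), gh.mean(axis=0)
    W2 -= 0.1 * gW2[None, :]; b2 -= 0.1 * gb2
    W1 -= 0.1 * gW1;          b1 -= 0.1 * gb1

# Use the trained net as a surrogate preference function: score fresh
# candidates and keep the best one.
cand = rng.random((500, 2))
scores, _ = ffann(cand)
print(cand[scores[:, 0].argmax()])
```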

    Towards learning sentence representation with self-supervision

    In recent years there has been growing interest in deep learning for natural language processing. Several important milestones have been reached over the past decade on problems such as question answering, text summarization, and sentiment analysis, and pre-training language models in a self-supervised manner is an important part of these achievements. This thesis explores a set of self-supervised methods for learning sentence representations from large amounts of unlabeled data; it also introduces a new memory-augmented model for learning tree-structured representations. We evaluate and analyze these representations on a range of tasks. In chapter 1, we introduce the basics of feed-forward neural networks and recurrent neural networks. The chapter continues with a discussion of the backpropagation algorithm for training feed-forward neural networks, and the backpropagation-through-time algorithm for training recurrent neural networks. We also discuss three different approaches to learning representations, namely supervised learning, unsupervised learning, and a relatively new approach called self-supervised learning. In chapter 2, we cover the fundamentals of deep natural language processing: word representations, sentence representations, and language modelling. We focus on the evaluation and current state of the literature for these concepts, and close the chapter by discussing large-scale pre-training and transfer learning in language. In chapter 3, we investigate a set of self-supervised tasks that take advantage of noise contrastive estimation in order to learn sentence representations from unlabeled data. We train our model on a large corpus and evaluate the learned sentence representations on a set of downstream natural language tasks from the SentEval framework; our model trained on the proposed tasks outperforms unsupervised methods on a subset of tasks from SentEval. In chapter 4, we introduce a memory-augmented model called Ordered Memory, with several improvements over traditional stack-augmented recurrent neural networks. We introduce a new stick-breaking attention mechanism, inspired by Ordered Neurons [Shen et al., 2019], to write to and erase from the memory, and a new Gated Recursive Cell to compose low-level representations into higher-level ones. We show that this model performs well on the logical inference task and the ListOps task, with strong generalization properties on both. Finally, we evaluate our model on the SST (Stanford Sentiment Treebank) tasks (binary and fine-grained) and report results comparable with the state of the art on these tasks.
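    The noise-contrastive training signal of chapter 3 can be sketched generically: score each true (context, target) sentence pair against in-batch noise samples. The encoder outputs below are random placeholders, and the loss is a common NCE/InfoNCE-style variant rather than the thesis's exact objective:

```python
import torch
import torch.nn.functional as F

def nce_loss(ctx_vecs, tgt_vecs):
    """ctx_vecs[i] should match tgt_vecs[i]; the other rows act as noise samples."""
    logits = ctx_vecs @ tgt_vecs.T            # pairwise similarity scores
    labels = torch.arange(len(ctx_vecs))      # positive pair sits on the diagonal
    return F.cross_entropy(logits, labels)

# Usage with random stand-ins for encoded sentence representations:
ctx = F.normalize(torch.randn(32, 128), dim=1)
tgt = F.normalize(torch.randn(32, 128), dim=1)
print(nce_loss(ctx, tgt))
```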

    Self-Modeling Neural Systems

    Goal-directedness is a fundamental property of all living things, but it is perhaps most easily identified in the movement patterns of animals. Ethologists have divided the basic forms of animal behavior into three categories: reproductive, defensive, and ingestive, all of which depend on the complex orchestration of motor control. In this dissertation, we use the framework of optimal control theory to model goal-directed behavior and repurpose it in new ways. We demonstrate a method for creating a hierarchical control network in which higher levels of the control hierarchy deal with tasks of increasing abstractness. In a two-level system, the lower level handles short-time-scale, low-dimensional motor control, and the higher level is charged with longer-time-scale, higher-dimensional planning. Central to our approach to joining the levels is the higher level's construction of a forward model of the lower level's behavior; we thus extend the ideas of optimal control theory from controlling a "plant" to controlling a controller. We apply this method to the example problem of guiding a semi-truck in reverse around a field of obstacles: the lower-level controller drives the truck, and the higher level detects obstacles and plans routes around them. In other work, we consider whether a neural system that obeys certain biological constraints can solve optimal control problems. We exhibit a simple method to train a different kind of internal model, a neural network model of the plant's Jacobian, and we integrate this internal model into a forward-in-time computation that produces an optimal feedback controller. We apply the method to two well-known model problems in optimal control, the torque-limited pendulum and cart-pole swing-up problems.
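    A hedged sketch of the internal-model idea: fit a network to imitate a torque-limited pendulum, then train a feedback controller by differentiating through the learned model, letting autodiff supply the Jacobian that the dissertation models explicitly. The dynamics constants, horizons, and network sizes are toy choices:

```python
import torch

def plant(x, u):
    """Toy torque-limited pendulum step on (angle, velocity); dt and limits are illustrative."""
    th, om = x[..., 0], x[..., 1]
    u = u.clamp(-2.0, 2.0)                   # torque limit
    om = om + 0.05 * (u[..., 0] - 9.8 * torch.sin(th))
    return torch.stack([th + 0.05 * om, om], dim=-1)

# 1) Fit the internal model to imitate the plant on random probes.
model = torch.nn.Sequential(torch.nn.Linear(3, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))
m_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(3000):
    x, u = torch.randn(64, 2), torch.randn(64, 1)
    loss = ((model(torch.cat([x, u], -1)) - plant(x, u))**2).mean()
    m_opt.zero_grad(); loss.backward(); m_opt.step()

# 2) Train a feedback controller forward in time through the learned model.
ctrl = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
c_opt = torch.optim.Adam(ctrl.parameters(), lr=1e-3)
for _ in range(2000):
    x = torch.randn(32, 2)                   # batch of start states
    cost = 0.0
    for _ in range(20):                      # unrolled horizon
        u = ctrl(x)
        x = model(torch.cat([x, u], -1))     # gradients flow through the model
        cost = cost + (x**2).sum(-1).mean() + 0.01 * (u**2).sum(-1).mean()
    c_opt.zero_grad(); cost.backward(); c_opt.step()
```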

    Proceedings of the Second Joint Technology Workshop on Neural Networks and Fuzzy Logic, volume 2

    Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by NASA and the University of Texas, Houston. Topics addressed include adaptive systems, learning algorithms, network architectures, vision, robotics, neurobiological connections, speech recognition and synthesis, fuzzy set theory and applications, control and dynamics processing, space applications, fuzzy logic and neural network computers, approximate reasoning, and multiobjective decision making.