1,442 research outputs found

    34th Midwest Symposium on Circuits and Systems-Final Program

    Get PDF
    Organized by the Naval Postgraduate School, Monterey, California. Cosponsored by the IEEE Circuits and Systems Society. Symposium Organizing Committee: General Chairman: Sherif Michael; Technical Program: Roberto Cristi; Publications: Michael Soderstrand; Special Sessions: Charles W. Therrien; Publicity: Jeffrey Burl; Finance: Ralph Hippenstiel; and Local Arrangements: Barbara Cristi

    Neural networks in control engineering

    Get PDF
    The purpose of this thesis is to investigate the viability of integrating neural networks into control structures. These networks are an attempt to create artificially intelligent systems with the ability to learn and remember. They mathematically model the biological structure of the brain and consist of a large number of simple interconnected processing units emulating brain cells. Due to the highly parallel and consequently computationally expensive nature of these networks, intensive research in this field has only become feasible with the availability of powerful personal computers in recent years. Consequently, attempts at exploiting the attractive learning and nonlinear optimization characteristics of neural networks have been made in most fields of science and engineering, including process control. The control structures suggested in the literature for the inclusion of neural networks in control applications can be divided into four major classes. The first class includes approaches in which the network forms part of an adaptive mechanism which modulates the structure or parameters of the controller. In the second class the network forms part of the control loop and replaces the conventional control block, thus leading to a pure neural network control law. The third class consists of topologies in which neural networks are used to produce models of the system which are then utilized in the control structure, whilst the fourth category includes suggestions which are specific to the problem or system structure and not suitable for a generic neural-network-based approach to control problems. Although several of these approaches show promising results, only model-based structures are evaluated in this thesis. This is because many of the topologies in the other classes require system estimation to produce the desired network output during training, whereas the training data for network models is obtained directly by sampling the system input(s) and output(s). Furthermore, many suggested structures lack the mathematical motivation to consider them for a general structure, whilst the neural network model topologies form natural extensions of their linear model-based origins. Since it is impractical and often impossible to collect sufficient training data prior to implementing the neural-network-based control structure, the network models have to be suited to on-line training during operation. This limits the choice of network topologies for models to those that can be trained on a sample-by-sample basis (pattern learning) and are furthermore capable of learning even when the variation in the training data is relatively slow, as is the case for most controlled dynamic systems. A study of feedforward topologies (one of the main classes of networks) shows that the multilayer perceptron network with its backpropagation training is well suited to model nonlinear mappings but fails to learn and generalize when subjected to slowly varying training data. This is due to the global input interpretation of this structure, in which any input affects all hidden nodes, so that no effective partitioning of the input space can be achieved. This problem is overcome in a less flexible feedforward structure, known as the regular Gaussian network. In this network, the response of each hidden node is limited to a sphere around its center, and these centers are fixed in a uniform distribution over the entire input space.
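    As a rough illustration of the regular Gaussian network just described (the centre spacing, width, learning rate and test function below are illustrative choices, not values from the thesis), each hidden node holds a fixed centre on a uniform grid and responds only near that centre, so sample-by-sample training adjusts the mapping locally:

```python
import numpy as np

class RegularGaussianNetwork:
    """Gaussian (RBF) network with fixed, uniformly spaced centres.

    Each hidden node responds only to inputs close to its centre, so a
    training sample updates the mapping locally rather than globally.
    """

    def __init__(self, n_centres=10, input_range=(0.0, 1.0), width=0.1, lr=0.1):
        self.centres = np.linspace(*input_range, n_centres)  # fixed uniform grid
        self.width = width                                    # radius of influence
        self.weights = np.zeros(n_centres)                    # trainable output weights
        self.lr = lr

    def _activations(self, x):
        # Gaussian response, effectively zero far from each centre
        return np.exp(-((x - self.centres) ** 2) / (2.0 * self.width ** 2))

    def predict(self, x):
        return self._activations(x) @ self.weights

    def train_sample(self, x, target):
        # Pattern (sample-by-sample) learning: one gradient step on the squared error
        phi = self._activations(x)
        self.weights += self.lr * (target - phi @ self.weights) * phi


# Example: learn a nonlinear map on-line from slowly varying samples
net = RegularGaussianNetwork()
for _ in range(5):
    for x in np.linspace(0.0, 1.0, 200):
        net.train_sample(x, np.sin(2 * np.pi * x))
print(net.predict(0.25))  # approaches sin(pi/2) = 1 as training proceeds
```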
    Each input to such a network is therefore interpreted locally and only affects nodes with centers in close proximity. A deficiency common to all feedforward networks, when considered as models for dynamic systems, is their inability to conserve previous outputs and states for future predictions. Since this absence of dynamic capability requires the user to identify the order of the system prior to training and is therefore not entirely self-learning, more advanced network topologies are investigated. The most versatile of these structures, known as a fully recurrent network, re-uses the previous state of each of its nodes for subsequent outputs. However, despite its superior modelling capability, the tests performed using the Williams and Zipser training algorithm show that such structures often fail to converge and require excessive computing power and time when increased in size. Despite its rigid structure and lack of dynamic capability, the regular Gaussian network produces the most reliable and robust models and was therefore selected for the evaluations in this study. To overcome the network initialization problem, found when using a pure neural network model, a combination structure in which the network operates in parallel with a mathematical model is suggested. This approach allows the controller to be implemented without any prior network training and initially relies purely on the mathematical model, much like conventional approaches. The network portion is then trained during on-line operation in order to improve the model. Once trained, the enhanced model can be used to improve the system response, since model exactness plays an important role in the control action achievable with model-based structures. The applicability of control structures based on neural network models is evaluated by comparing the performance of two network approaches to that of a linear structure, using a simulation of a nonlinear tank system. The first network controller is developed from the internal model control (IMC) structure, which includes a forward and an inverse model of the system to be controlled. Both models can be replaced by a combination of mathematical and neural topologies, the network portion of which is trained on-line to compensate for the discrepancies between the linear model and the nonlinear system. Since the network has no dynamic capacity, former system outputs are used as inputs to the forward and inverse models. Due to this direct feedback, the trained structure can be tuned to perform within limits not achievable using a conventional linear system. As mentioned previously, the IMC structure uses both forward and inverse models. Since the control law requires that these models be exact inverses, an iterative inversion algorithm has to be used to improve the values produced by the inverse combination model. Due to deadtimes and right-half-plane zeroes, many systems are furthermore not directly invertible. Whilst such unstable elements can be removed from mathematical models, the inverse network is trained directly from the forward model and cannot be compensated. These problems could be overcome by a control structure for which only a forward model is required. The neural predictive controller (NPC) presents such a topology. Based on the optimal control philosophy, this structure uses a model to predict several future outputs.
    The errors between these and the desired output are then collected to form the cost function, which may also include other factors such as the magnitude of the change in input. The input value that optimally fulfils all the objectives used to formulate the cost function can then be found by locating its minimum. Since the model in this structure includes a neural network, the optimization cannot be formulated in a closed mathematical form and has to be performed using a numerical method. For the NPC topology, as for the neural network IMC structure, former system outputs are fed back to the model, and again the trained network approach produces results not achievable with a linear model. Due to the single-network approach, the NPC topology furthermore overcomes the limitations described for the neural network IMC structure and can be extended to include multivariable systems. This study shows that the nonlinear modelling capability of neural networks can be exploited to produce learning control structures with improved responses for nonlinear systems. Many of the difficulties described are due to the computational burden of these networks and associated algorithms. These are likely to become less significant due to the rapid development in computer technology and advances in neural network hardware. Although neural-network-based control structures are unlikely to replace the well-understood linear topologies, which are adequate for the majority of applications, they might present a practical alternative where (due to nonlinearity or modelling errors) the conventional controller cannot achieve the required control action
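    The combination-model idea above can be sketched as follows, with a placeholder first-order nonlinear plant standing in for the thesis' tank system and a small Gaussian network trained on-line on the residual between the plant output and the linear model prediction:

```python
import numpy as np

def plant(y, u):
    # Placeholder nonlinear plant (not the thesis' tank system)
    return 0.9 * y + 0.1 * u - 0.05 * y ** 2

def linear_model(y, u):
    # Fixed mathematical model, deliberately missing the nonlinear term
    return 0.9 * y + 0.1 * u

centres = np.linspace(-1.5, 1.5, 15)      # fixed Gaussian-network centres
weights = np.zeros_like(centres)          # start with no correction at all

def phi(y):
    return np.exp(-((y - centres) ** 2) / (2 * 0.3 ** 2))

y, lr = 0.0, 0.2
for k in range(2000):
    u = np.sin(0.01 * k)                             # excitation input
    y_comb = linear_model(y, u) + phi(y) @ weights   # combination-model prediction
    y_meas = plant(y, u)                             # "measured" plant output
    weights += lr * (y_meas - y_comb) * phi(y)       # on-line training on the residual
    y = y_meas

print(abs(y_comb - y))  # one-step prediction error of the combination model shrinks
```

    The NPC philosophy can likewise be sketched with a generic one-step model in place of the thesis' combination model (horizon, weights and bounds below are arbitrary): the model is iterated to predict several future outputs, the cost collects the squared tracking errors plus a penalty on the input change, and a numerical optimizer locates its minimum:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def model_step(y, u):
    # One-step model; in the thesis this would be the trained combination model
    return 0.9 * y + 0.1 * u - 0.05 * y ** 2

def npc_cost(u, y0, u_prev, setpoint, horizon=5, lam=0.1):
    """Collect squared predicted tracking errors plus an input-change penalty."""
    y, cost = y0, 0.0
    for _ in range(horizon):
        y = model_step(y, u)              # constant input over the horizon, for simplicity
        cost += (setpoint - y) ** 2
    return cost + lam * (u - u_prev) ** 2

def npc_control(y0, u_prev, setpoint):
    # No closed form is available, so the minimum is located numerically
    res = minimize_scalar(npc_cost, bounds=(-5.0, 5.0), method="bounded",
                          args=(y0, u_prev, setpoint))
    return res.x

y, u = 0.0, 0.0
for _ in range(30):
    u = npc_control(y, u, setpoint=1.0)
    y = model_step(y, u)                  # here the model doubles as the "plant"
print(y)  # settles near the setpoint of 1.0
```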

    Two-Dimensional Positioning with Machine Learning in Virtual and Real Environments

    Get PDF
    In this paper, a ball-on-plate control system driven only by a neural network agent is presented. Apart from reinforcement learning, no other control solution or support was applied. The implemented device, driven by two servo motors, learned by itself through thousands of iterations how to keep the ball in the center of the resistive sensor. We compared the real-world performance of agents trained in both a real-world and a virtual environment. We also examined the efficacy of a virtually pre-trained agent fine-tuned in the real environment. The obtained results were evaluated and compared to determine which approach provides a good basis for a control task implemented purely with a neural network
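    As a hedged, one-axis toy version of this setup (the physics, discretisation and tabular Q-learning below are illustrative stand-ins for the paper's actual agent and hardware), the agent is rewarded for keeping the ball near the centre and learns a tilt policy purely from interaction:

```python
import numpy as np

def step(x, v, tilt, dt=0.05, g=9.81):
    # Toy one-axis ball-on-plate physics: the plate tilt sets the ball's acceleration
    v = v + g * np.sin(tilt) * dt
    x = np.clip(x + v * dt, -1.0, 1.0)
    return x, v

TILTS = np.radians([-5.0, 0.0, 5.0])      # three discrete servo actions
N_BINS = 11

def discretise(x, v):
    xi = np.digitize(x, np.linspace(-1.0, 1.0, N_BINS - 1))
    vi = np.digitize(v, np.linspace(-1.0, 1.0, N_BINS - 1))
    return xi, vi

Q = np.zeros((N_BINS, N_BINS, len(TILTS)))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(1000):
    x, v = rng.uniform(-0.5, 0.5), 0.0
    s = discretise(x, v)
    for _ in range(200):
        a = rng.integers(len(TILTS)) if rng.random() < eps else int(np.argmax(Q[s]))
        x, v = step(x, v, TILTS[a])
        s2 = discretise(x, v)
        reward = -abs(x)                   # the closer to the centre, the better
        Q[s + (a,)] += alpha * (reward + gamma * Q[s2].max() - Q[s + (a,)])
        s = s2
```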

    Reward-modulated Hebbian plasticity as leverage for partially embodied control in compliant robotics

    Get PDF
    In embodied computation (or morphological computation), part of the complexity of motor control is offloaded to the body dynamics. We demonstrate that a simple Hebbian-like learning rule can be used to train systems with (partial) embodiment, and can be extended beyond the scope of traditional neural networks. To this end, we apply the learning rule to optimize the connection weights of recurrent neural networks with different topologies and for various tasks. We then apply this learning rule to a simulated compliant tensegrity robot by optimizing static feedback controllers that directly exploit the dynamics of the robot body. This leads to partially embodied controllers, i.e., hybrid controllers that naturally integrate the computations performed by the robot body into a neural network architecture. Our results demonstrate the universal applicability of reward-modulated Hebbian learning and the robustness of systems trained with this rule. This study strengthens our belief that compliant robots can, and perhaps should, be seen as computational units rather than dumb hardware that needs a complex controller. This link between compliant robotics and neural networks is also the main reason for our search for simple universal learning rules for both neural networks and robotics
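    A minimal sketch of a reward-modulated Hebbian update (node-perturbation flavour) on a toy static-feedback task; the task, dimensions and constants are illustrative and not taken from the paper. The Hebbian term correlates presynaptic activity with the exploratory part of the postsynaptic activity, gated by the reward relative to a running baseline:

```python
import numpy as np

rng = np.random.default_rng(0)

W_true = rng.normal(size=(2, 4))     # unknown "good" static feedback mapping (toy target)
W = np.zeros((2, 4))                 # weights being learned
eta, sigma = 0.05, 0.1               # learning rate and exploration noise level
baseline = 0.0                       # running reward baseline

for trial in range(10000):
    x = rng.normal(size=4)                     # presynaptic activity (sensor input)
    noise = sigma * rng.normal(size=2)         # exploratory perturbation of the output
    y = W @ x + noise                          # postsynaptic activity (motor command)
    reward = -np.sum((y - W_true @ x) ** 2)    # higher reward for better commands
    # Hebbian term (outer product of exploration and input) gated by the
    # reward relative to its running baseline
    W += eta * (reward - baseline) * np.outer(noise, x)
    baseline += 0.05 * (reward - baseline)

print(np.abs(W - W_true).max())      # mismatch shrinks as learning proceeds
```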

    Nonlinear suboptimal and adaptive pectoral fin control of autonomous underwater vehicle

    Full text link
    Autonomous underwater vehicles (AUVs) are used for numerous applications in the deep sea, such as hydrographic surveying, seabed mining, and oceanographic mapping. Presently, a significant amount of effort is being made to develop biorobotic AUVs (BAUVs) with biologically inspired control surfaces. However, the dynamics of AUVs and BAUVs are highly nonlinear and the hydrodynamic coefficients are not precisely known. As such, the development of nonlinear and adaptive control systems is of considerable importance. We consider the suboptimal dive-plane control of AUVs using the state-dependent Riccati equation (SDRE) technique. This method provides an effective means of designing nonlinear control systems for minimum- as well as nonminimum-phase AUV models. Moreover, hard control constraints are included in the design process. We also attempt to design adaptive control systems for BAUVs using biologically inspired pectoral-like fins. The fins are assumed to oscillate harmonically with a combined linear (sway) and angular (yaw) motion. The bias (mean) angle of the angular motion of the fin is used as a control input. Using a discrete-time state variable representation of the BAUV, adaptive sampled-data control systems for trajectory control are derived using state feedback as well as output feedback. We develop direct as well as indirect adaptive control systems for BAUVs. The advantage of the indirect adaptive law lies in its applicability to minimum- as well as nonminimum-phase systems. Simulation results are presented to evaluate the performance of each control system
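    A minimal sketch of the SDRE technique on a generic second-order system (the state-dependent coefficient matrices below are placeholders, not the AUV dive-plane model): the dynamics are factored as x_dot = A(x)x + Bu, the Riccati equation is solved at the current state, and the resulting LQR-like gain is applied:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder nonlinear system in state-dependent coefficient (SDC) form:
# x_dot = A(x) x + B u
def A_of_x(x):
    return np.array([[0.0, 1.0],
                     [-1.0 - 0.5 * x[0] ** 2, -0.2]])

B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])      # state weighting
R = np.array([[1.0]])         # control weighting

def sdre_control(x):
    """Solve the Riccati equation at the current state and return u = -K(x) x."""
    P = solve_continuous_are(A_of_x(x), B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    return -K @ x

# Simple Euler simulation of the closed loop
x, dt = np.array([1.0, 0.0]), 0.01
for _ in range(2000):
    u = sdre_control(x)
    x = x + dt * (A_of_x(x) @ x + B @ u)
print(x)   # driven toward the origin by the state-dependent feedback
```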

    Low Power Memory/Memristor Devices and Systems

    Get PDF
    This reprint focusses on achieving low-power computation using memristive devices. It was designed as a convenient reference point: it contains a mix of techniques, starting from the fundamental manufacturing of memristive devices all the way to applications such as physically unclonable functions, and it also covers perspectives on, e.g., in-memory computing, which is inextricably linked with emerging memory devices such as memristors. Finally, the reprint contains a few articles showing how other communities (from conventional CMOS design to photonics) are pursuing low-power computation on their own fronts, as a comparison with the memristor literature. We hope that readers will enjoy discovering the articles within
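    As a toy illustration of the in-memory-computing link mentioned above (idealised, ignoring device non-idealities such as wire resistance, sneak paths and conductance drift), a memristive crossbar computes a matrix-vector product in place: voltages are applied to the rows, the stored conductances act as the weights, and the column currents sum the products:

```python
import numpy as np

# Idealised memristive crossbar: conductances G (siemens) store the matrix,
# input voltages drive the rows, and Ohm's and Kirchhoff's laws deliver the
# column currents as a matrix-vector product computed "in memory".
G = np.array([[1e-4, 5e-5, 2e-5],
              [3e-5, 8e-5, 1e-4],
              [6e-5, 2e-5, 4e-5]])      # stored weights as conductances
v = np.array([0.2, 0.1, 0.3])           # input vector as row voltages (volts)

i_columns = G.T @ v                     # column currents (amperes): I = G^T V
print(i_columns)
```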

    Novel Observer-Based Suboptimal Digital Tracker for a Class of Time-Delay Singular Systems

    Get PDF
    This paper presents a novel suboptimal digital tracker for a class of time-delay singular systems. First, some existing techniques are utilized to obtain an equivalent regular time-delay system, which has a direct transmission term from input to output. The equivalent regular time-delay system is important as it enables optimal control theory to be conveniently combined with the digital redesign approach. The linear quadratic performance index, specified in the continuous-time domain, can be discretized into an equivalent decoupled discrete-time performance index using the newly developed extended delay-free model. Additionally, although the extended delay-free model is large, its advantage is the elimination of all delay terms, which are absorbed into a new extended state vector, simplifying the proposed approach. As a result, the proposed approach can be applied to a class of time-delay singular systems. An illustrative example demonstrates the effectiveness of the proposed design methodology
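    The extended delay-free idea can be sketched on a plain two-state plant with a two-step input delay (a stand-in for the paper's singular time-delay system; all matrices below are illustrative): the pending inputs are stacked into the state so the delayed input becomes an ordinary state variable, after which a standard discrete LQ design applies:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative discrete-time plant with a two-step input delay:
#   x[k+1] = A x[k] + B u[k-2]
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])

# Extended delay-free model with state z[k] = [x[k], u[k-2], u[k-1]]:
# the delayed inputs form a shift register inside the state, so
# z[k+1] = Ae z[k] + Be u[k] contains no explicit delay terms.
Z2 = np.zeros((2, 1))
Ae = np.block([
    [A,                B,                Z2              ],
    [np.zeros((1, 2)), np.zeros((1, 1)), np.ones((1, 1)) ],
    [np.zeros((1, 2)), np.zeros((1, 1)), np.zeros((1, 1))],
])
Be = np.array([[0.0], [0.0], [0.0], [1.0]])

# Discrete LQ design on the extended model
Q = np.diag([10.0, 1.0, 0.0, 0.0])   # penalise the plant states only
R = np.array([[1.0]])
P = solve_discrete_are(Ae, Be, Q, R)
K = np.linalg.solve(R + Be.T @ P @ Be, Be.T @ P @ Ae)

# Closed-loop check: regulate the extended state to the origin
z = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    u = -K @ z
    z = Ae @ z + Be @ u
print(z[:2])   # plant states driven toward zero despite the input delay
```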

    Learning-Based Controller Design with Application to a Chiller Process

    Get PDF
    In this thesis, we present and study a few approaches for constructing controllers for uncertain systems, using a combination of classical control theory and modern machine learning methods. The thesis can be divided into two subtopics. The first, which is the focus of the first two papers, is dual control. The second, which is the focus of the third and last paper, is multiple-input multiple-output (MIMO) control of a chiller process. In dual control, the goal is to construct controllers for uncertain systems that in expectation minimize some cost over a certain time horizon. To achieve this, the controller must take into account the dual goals of accumulating more information about the process, by applying some probing input, and using the available information for controlling the system. This is referred to as the exploration-exploitation trade-off. Although optimal dual controllers in theory can be computed by solving a functional equation, this is usually intractable in practice, with only some simple special cases as exceptions. Therefore, it is interesting to examine methods for approximating optimal dual control. In the first paper, we take the approach of approximating the value function, which is the solution of the functional equation that can be used to deduce the optimal control, by using artificial neural networks. In the second paper, neural networks are used to represent and estimate hyperstates, which contain information about the conditional probability distributions of the system uncertainties. The optimal dual controller is a function of the hyperstate, and hence it should be useful to have a representation of this quantity when constructing an approximately optimal dual controller. The hyperstate transition model is used in combination with a reinforcement learning algorithm for constructing a dual controller from stochastic simulations of a system model that includes models of the system uncertainties. In the third paper, we suggest a simple reinforcement learning method that can be used to construct a decoupling matrix that allows MIMO control of a chiller process. Compared to the commonly used single-input single-output (SISO) structures, these controllers can decrease the variations in some system signals. This makes it possible to run the system at operating points closer to some constraints, which in turn can enable more energy-efficient operation
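    As a hedged sketch of the decoupling-matrix idea in the third paper (the 2x2 steady-state gain matrix is hypothetical and a plain random search stands in for the reinforcement learning method actually used), the goal is a matrix D such that the compensated plant G0 @ D is approximately diagonal, so independent SISO loops can be used:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2x2 steady-state gain matrix of a coupled MIMO process
G0 = np.array([[1.0, 0.6],
               [0.4, 1.2]])

def coupling_cost(D):
    """Penalise deviation of the compensated plant G0 @ D from the identity."""
    return np.sum((G0 @ D - np.eye(2)) ** 2)

# Simple random search "learning" of the decoupling matrix, standing in for the
# reinforcement learning method used in the thesis.
D = np.eye(2)
best = coupling_cost(D)
for _ in range(5000):
    candidate = D + 0.05 * rng.normal(size=(2, 2))
    cost = coupling_cost(candidate)
    if cost < best:
        D, best = candidate, cost

print(np.round(G0 @ D, 2))   # approximately the identity: the loops are decoupled
```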