34 research outputs found

    Ship steering control using feedforward neural networks

    One significant problem in the design of ship steering control systems is that the dynamics of the vessel change with operating conditions such as the forward speed of the vessel, the depth of the water and the loading conditions. Approaches considered in the past to overcome these difficulties include the use of self-adaptive control systems, which adjust the control characteristics on a continuous basis to suit the current operating conditions. Artificial neural networks have received considerable attention in recent years and have been considered for a variety of applications where the characteristics of the controlled system change significantly with operating conditions or with time. Such networks have a configuration which remains fixed once the training phase is complete. The resulting controlled systems thus have more predictable characteristics than those found in many forms of traditional self-adaptive control systems. In particular, stability bounds can be investigated through simulation studies, as with any other form of controller having fixed characteristics. Feedforward neural networks have enjoyed many successful applications in the field of systems and control. These networks include two major categories: multilayer perceptrons and radial basis function networks. In this thesis, we explore the applicability of both of these artificial neural network architectures for the automatic steering of ships in a course-changing mode of operation. The approach adopted involves training a single artificial neural network to represent a series of conventional controllers for different operating conditions. The resulting network thus captures, in a nonlinear fashion, the essential characteristics of all of the conventional controllers. Most of the artificial neural network controllers developed in this thesis are trained with data generated through simulation studies.
However, experience is also gained in developing a neural network controller on the basis of real data gathered from an actual scale model of a supply ship. Another important aspect of this work is the applicability of local model networks for modelling the dynamics of a ship. Local model networks can be regarded as a generalized form of radial basis function networks and have already proved their worth in a number of applications involving the modelling of systems whose dynamic characteristics can vary significantly with the operating conditions. The work presented in this thesis indicates that these networks are highly suitable for modelling the dynamics of a ship.
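The central idea of the abstract above, one fixed network standing in for a whole family of conventional controllers, can be sketched as follows. The gain schedule, the radial basis function layout and all numerical values here are illustrative assumptions, not the thesis's actual controllers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gain-scheduled PD autopilot family (illustrative gains only):
# rudder = Kp(U) * heading_error + Kd(U) * yaw_rate, gains varying with speed U.
def conventional_rudder(e, r, U):
    Kp = 2.0 / U          # proportional gain falls with speed
    Kd = 0.5 * U          # derivative gain grows with speed
    return Kp * e + Kd * r

# Training data spanning several operating conditions
e = rng.uniform(-0.5, 0.5, 2000)   # heading error [rad]
r = rng.uniform(-0.1, 0.1, 2000)   # yaw rate [rad/s]
U = rng.uniform(4.0, 12.0, 2000)   # forward speed [m/s]
X = np.column_stack([e, r, U])
y = conventional_rudder(e, r, U)

# Normalise inputs to the unit cube so a single RBF width suits all axes
lo, hi = X.min(0), X.max(0)
Xn = (X - lo) / (hi - lo)

# Radial basis function network: fixed random Gaussian centres,
# linear output weights fitted by least squares
centres = rng.random((120, 3))
width = 0.3

def features(Xq):
    d2 = ((Xq[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

w, *_ = np.linalg.lstsq(features(Xn), y, rcond=None)

def network_rudder(e, r, U):
    # The single trained network approximates the whole controller family
    Xq = (np.column_stack([e, r, U]) - lo) / (hi - lo)
    return features(Xq) @ w
```

Once the least-squares fit is done the network's configuration is frozen, which is exactly the property the abstract highlights: the resulting controller has fixed, analysable characteristics rather than continuously self-adapting ones.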

    Dynamic non-linear system modelling using wavelet-based soft computing techniques

    The enormous number of complex systems creates a need for high-level, cost-efficient modelling structures for operators and system designers. Model-based approaches offer a promising way to integrate a priori knowledge into the procedure. Soft computing based models in particular can successfully be applied to highly nonlinear problems. A further reason for dealing with so-called soft computational model-based techniques is that in real-world cases, often only partial, uncertain and/or inaccurate data is available. Wavelet-based soft computing techniques are considered one of the latest trends in system identification and modelling. This thesis provides a comprehensive synopsis of the main wavelet-based approaches to modelling non-linear dynamical systems in real-world problems, in conjunction with possible twists and novelties aiming for more accurate and less complex modelling structures. Initially, an on-line structure and parameter design is considered in an adaptive Neuro-Fuzzy (NF) scheme. The problem of redundant membership functions, and consequently fuzzy rules, is circumvented by applying an adaptive structure. The growth of a special type of fungus (Monascus ruber van Tieghem) is examined against several other approaches for further justification of the proposed methodology. Extending this line of research, two Morlet Wavelet Neural Network (WNN) structures are introduced. Increasing accuracy and decreasing computational cost are the primary targets of the proposed novelties. The tools utilised for these challenges are the replacement of the synaptic weights with Linear Combination Weights (LCW) and a Hybrid Learning Algorithm (HLA) comprising Gradient Descent (GD) and Recursive Least Squares (RLS). The two models differ in structure while sharing the same HLA scheme.
The second approach contains an additional multiplication layer, and its hidden layer contains several sub-WNNs for each input dimension. The practical superiority of these extensions is demonstrated by simulation and experimental results on a real non-linear dynamic system, Listeria monocytogenes survival curves in Ultra-High Temperature (UHT) whole milk, and consolidated by comprehensive comparison with other suggested schemes. At the next stage, an extended clustering-based fuzzy version of the proposed WNN schemes is presented as the ultimate structure in this thesis. The proposed Fuzzy Wavelet Neural Network (FWNN) benefits from the clustering capability of Gaussian Mixture Models (GMMs), updated by a modified Expectation-Maximization (EM) algorithm. One of the main aims of this thesis is to illustrate how the GMM-EM scheme can be used not only for extracting useful knowledge from data by building accurate regressions, but also for the identification of complex systems. The FWNN structure is based on fuzzy rules with wavelet functions in the consequent parts of the rules. In order to improve the function approximation accuracy and generalization capability of the FWNN system, an efficient hybrid learning approach is used to adjust the dilation, translation, weight, and membership parameters. An Extended Kalman Filter (EKF) is employed for wavelet parameter adjustment, together with Weighted Least Squares (WLS) dedicated to fine-tuning the Linear Combination Weights. The results of a real-world application to Short Time Load Forecasting (STLF) further reinforce the plausibility of the above techniques.
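A minimal sketch of the two ingredients named in the abstract above, a Morlet wavelet layer and Linear Combination Weights, is given below. The dilations and translations are held fixed and a closed-form least-squares fit stands in for the RLS half of the hybrid learning algorithm; the target function and all numbers are illustrative assumptions, not the thesis's data:

```python
import numpy as np

def morlet(z):
    # Morlet mother wavelet, a common choice in wavelet neural networks
    return np.cos(1.75 * z) * np.exp(-z**2 / 2.0)

# Wavelons: preset dilations a_j and translations b_j (the thesis adapts
# these with the gradient-descent half of the hybrid scheme)
b = np.linspace(-3.0, 3.0, 10)   # translations
a = np.full(10, 1.0)             # dilations

x = np.linspace(-3.0, 3.0, 200)
# Toy target, chosen near the wavelet's centre frequency for a clean fit
target = np.sin(1.75 * x)

# Linear Combination Weights (LCW): each wavelon's output weight is an
# affine function of the input, w_j(x) = c0_j + c1_j * x, so the network
# output is linear in the c's and can be fitted in closed form (standing
# in for the recursive-least-squares half of the hybrid learning algorithm)
Psi = morlet((x[:, None] - b[None, :]) / a[None, :])   # (200, 10) wavelon outputs
Phi = np.hstack([Psi, Psi * x[:, None]])               # [c0 | c1] feature matrix
c, *_ = np.linalg.lstsq(Phi, target, rcond=None)
approx = Phi @ c
```

Because the LCW make each output weight input-dependent, the network gains extra flexibility without adding wavelons, which is the accuracy-versus-complexity trade the abstract emphasises.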

    Optimal test signal design and estimation for dynamic powertrain calibration and control

    With the dramatic development of the automotive industry and the global economy, the motor vehicle has become an indispensable part of daily life. Because of intense competition, vehicle manufacturers are investing a large amount of money and time in research on improving vehicle performance, reducing fuel consumption and meeting legislative requirements for environmental protection. Engine calibration is a fundamental process in determining vehicle performance under diverse working conditions. Control maps are developed in the calibration process, which must be conducted across the entire operating region before being implemented in the engine control unit to regulate engine parameters at the different operating points. The traditional calibration method is based on steady-state (pseudo-static) experiments on the engine. The primary challenge for this process is the testing and optimisation time, each of which increases exponentially with additional calibration parameters and control objectives. This thesis presents a basic dynamic black-box model-based calibration method for multivariable control, and the method is applied experimentally to a gasoline turbocharged direct injection (GTDI) 2.0L virtual engine. First, the engine is characterized by dynamic models. A constrained numerical optimization of fuel consumption is conducted on the models, and the optimal data thus obtained is validated on the virtual system to ensure the accuracy of the models. A dynamic optimization is presented in which the entire data sequence is divided into segments that are then optimized separately in order to enhance the computational efficiency. A dynamic map is identified using the inverse optimal behaviour. The map is shown to be capable of providing minimized fuel consumption while generally meeting the demands on engine torque and air-fuel ratio. The control performance of this feedforward map is further improved by the addition of a closed-loop controller.
An open-loop compensator for torque control and a Smith predictor for air-fuel ratio control are designed and shown to solve the issues of practical implementation on production engines. A basic pseudo-static engine-based calibration is generated for comparative purposes, and the resulting static map is implemented in order to compare its fuel consumption, torque and air-fuel ratio control with that of the proposed dynamic calibration method. Methods of optimal test signal design and parameter estimation for polynomial models are studied in particular detail in this thesis, since polynomial models are frequently used in the process of dynamic calibration and control. Because of their ease of implementation, input designs with different objective functions and optimization algorithms are discussed. Novel design criteria which lead to improved parameter estimation and output prediction are presented and verified using identified models of a 1.6L Zetec engine developed from test data obtained in the Liverpool University Powertrain Laboratory. Practical amplitude and rate constraints in engine experiments are considered in the optimization, and the optimal inputs are further validated to be effective in black-box modelling of the virtual engine. An additional experiment of input design for a MIMO model is presented based on a weighted optimization method. Besides the prediction-error-based estimation method, a simulation-error-based estimation method is proposed. This novel method is based on an unconstrained numerical optimization, and any output fitness criterion can be used as the objective function. Its effectiveness is also evaluated in black-box engine modelling, and parameter estimates giving a better output fitness of the simulation model are obtained.
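The distinction the abstract draws between prediction-error and simulation-error estimation can be illustrated on a toy first-order polynomial model. The model structure, noise level and the crude grid search (standing in for the unconstrained numerical optimiser) are all illustrative assumptions, not the thesis's engine models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical first-order ARX-type channel: y[k] = a*y[k-1] + b*u[k-1]
a_true, b_true = 0.8, 0.5
N = 300
u = rng.uniform(-1.0, 1.0, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]
y_meas = y + 0.02 * rng.standard_normal(N)   # measurement noise

# Prediction-error estimate: one-step-ahead least squares, feeding the
# measured (noisy) output back into the regressor
Phi = np.column_stack([y_meas[:-1], u[:-1]])
theta_pe, *_ = np.linalg.lstsq(Phi, y_meas[1:], rcond=None)

def sim(theta):
    # Simulate the model from the input alone (no measured outputs fed back)
    a, b = theta
    ys = np.zeros(N)
    for k in range(1, N):
        ys[k] = a * ys[k - 1] + b * u[k - 1]
    return ys

# Simulation-error estimate: minimise the output error of the free-run
# simulation; a small grid search around the prediction-error estimate
# stands in for the unconstrained numerical optimiser
best, theta_se = np.inf, tuple(theta_pe)
for a in np.linspace(theta_pe[0] - 0.1, theta_pe[0] + 0.1, 41):
    for b in np.linspace(theta_pe[1] - 0.1, theta_pe[1] + 0.1, 41):
        cost = np.sum((y_meas - sim((a, b))) ** 2)
        if cost < best:
            best, theta_se = cost, (a, b)
```

The simulation-error criterion directly scores the quantity a calibration map is ultimately judged on, the free-run output fit, which is why the abstract reports a better output fitness for the simulation model.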

    The Conflict Between Equilibrium and Disequilibrium Theories: The Case of the U.S. Labor Market

    A fundamental controversy in labor economics is whether unemployment is better viewed as an equilibrium or disequilibrium phenomenon. The authors contend that answers to policy problems related to unemployment will depend on which of the two characterizations of the labor market is accepted. They note the effects of inflation, taxes, and unionization on unemployment and describe those factors' effects on the equilibrium/disequilibrium question by presenting both equilibrium and disequilibrium models of the U.S. labor market.

    Advanced Underground Space Technology

    The recent development of underground space technology makes underground space a feasible and promising solution to climate change, energy shortages, the growing population, and the demands on urban space. Advances in materials science, information technology, and computer science, incorporated into traditional geotechnical engineering, have been extensively applied to sustainable and resilient underground space applications. The aim of this Special Issue, entitled “Advanced Underground Space Technology”, is to gather original fundamental and applied research related to the design, construction, and maintenance of underground space.

    Advances in Robotics, Automation and Control

    The book presents an excellent overview of recent developments in the different areas of Robotics, Automation and Control. Through its 24 chapters, the book presents topics related to control and robot design; it also introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. Through this book, we also find navigation and vision algorithms, and automatic handwriting comprehension and speech recognition systems, that will be included in the next generation of productive systems developed by man.

    Validation and Sensitivity Analysis of an Optimization Platform for Efficient Energy-plus Buildings

    Undergraduate final project (TCC) - Universidade Federal de Santa Catarina, Centro Tecnológico, Engenharia de Controle e Automação. Performance optimization has become a standard approach for the design and control of energy systems. The same research activity takes place in the building sector in France, where this final-year project was carried out. There, the greatest energy-consuming segment is the tertiary residential sector, which makes it natural to focus the effort on new solutions allowing economic and environmental gains. The design of buildings is usually based on a sequential approach, which first addresses demand reduction, then energy efficiency, and finally the use of renewable energy. When analysing all components together, as in a holistic building design, it is possible to obtain a global optimum that differs in many ways from the one obtained by segmenting the phases. To do that, it is necessary to optimize the design considering the building as a whole, taking into consideration its envelope and insulation, its heat-exchange systems, its occupancy and the environmental impact, all over the building's lifetime. There are significant difficulties in the holistic approach, for instance the calculation time due to the complexity of the dynamic simulation of the building, the combinatorial explosion linked to the decision variables and optimization parameters, uncertainties, etc. Techniques such as surrogate models and parallel computing help to resolve some of these issues. Since many aspects are to be considered, the optimization is multi-objective and nonlinear; however, for practical applications, the computation has to be done in reasonable time. Moreover, there is a need for interchangeability between different tools (for modelling, optimization and decision making, for instance). The INTENSE project presents a methodology and the development of a programming environment for such holistic optimization of buildings. This internship is placed at the end of the project, when validation of the methodology and of the developed tool has to be provided, together with a sensitivity analysis.
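The surrogate-model idea mentioned above, replacing an expensive building simulation with a cheap approximation before optimising, can be sketched in a few lines. The "building simulation", its closed form and all numbers below are illustrative assumptions, not the INTENSE project's models:

```python
import numpy as np

# Hypothetical "expensive" building simulation (illustrative closed form):
# annual cost as heating loss plus an insulation-material term, as a
# function of insulation thickness t [m]
def expensive_sim(t):
    return 100.0 * np.exp(-t) + 30.0 * t

# Surrogate: fit a cheap polynomial response surface to a handful of
# expensive evaluations, then optimise the surrogate instead
t_samples = np.linspace(0.05, 2.0, 8)
surrogate = np.poly1d(np.polyfit(t_samples, expensive_sim(t_samples), deg=4))

# Optimising the surrogate is now essentially free: evaluate it on a
# fine grid and take the minimiser
t_grid = np.linspace(0.05, 2.0, 2001)
t_opt = t_grid[np.argmin(surrogate(t_grid))]
```

The design budget is spent on the eight expensive evaluations only; every candidate examined afterwards costs a polynomial evaluation, which is what makes holistic, multi-objective searches tractable in reasonable time.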

    Fuzzy Controllers

    To meet the requirements in the field, the present book treats different fuzzy control architectures, both in terms of theoretical design and in terms of comparative validation studies in various applications, numerically simulated or experimentally developed. Through its subject matter and its inter- and multidisciplinary content, this book is addressed mainly to researchers, doctoral students and students interested in developing new applications of intelligent control, but also to those who want to become familiar with control concepts based on fuzzy techniques. The bibliographic resources used to perform the work include books and articles of present interest in the field, published in prestigious journals and by prestigious publishing houses, as well as websites dedicated to various applications of fuzzy control. Its structure and the presented studies place the book in the category of those that make a direct connection between theoretical developments and practical applications, thereby constituting real support for specialists in the artificial intelligence, modelling and control fields.

    Uncertainty and sensitivity analysis for long-running computer codes : a critical review

    Thesis (S.M.) -- Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2010. "February 2010." Cataloged from PDF version of thesis. Includes bibliographical references (p. 137-146). This thesis presents a critical review of existing methods for performing probabilistic uncertainty and sensitivity analysis for complex, computationally expensive simulation models. Uncertainty analysis (UA) methods reviewed include standard Monte Carlo simulation, Latin Hypercube sampling, importance sampling, line sampling, and subset simulation. Sensitivity analysis (SA) methods include scatter plots, Monte Carlo filtering, regression analysis, variance-based methods (Sobol' sensitivity indices and Sobol' Monte Carlo algorithms), and Fourier amplitude sensitivity tests. In addition, this thesis reviews several existing metamodeling techniques that are intended to provide quick-running approximations to the computer models being studied. Because stochastic simulation-based UA and SA rely on a large number (e.g., several thousand) of simulations, metamodels are recognized as a necessary compromise when UA and SA must be performed with long-running (i.e., several hours or days per simulation) computational models. This thesis discusses the use of polynomial Response Surfaces (RS), Artificial Neural Networks (ANN), and Kriging/Gaussian Processes (GP) for metamodeling. Moreover, two methods are discussed for estimating the uncertainty introduced by the metamodel. The first of these methods is based on a bootstrap sampling procedure and can be utilized for any metamodeling technique. The second method is specific to GP models and is based on a Bayesian interpretation of the underlying stochastic process. Finally, to demonstrate the use of these methods, the results of two case studies involving the reliability assessment of passive nuclear safety systems are presented.
The general conclusions of this work are that polynomial RSs are frequently incapable of adequately representing the complex input/output behavior exhibited by many mechanistic models. In addition, the goodness-of-fit of the RS should not be misinterpreted as a measure of the predictive capability of the metamodel, since RSs are necessarily biased predictors for deterministic computer models. Furthermore, the extent of this bias is not measured by standard goodness-of-fit metrics (e.g., the coefficient of determination, R²), so these methods tend to provide overly optimistic indications of the quality of the metamodel. The bootstrap procedure does provide an indication of the extent of this bias, with the bootstrap confidence intervals for the RS estimates generally being significantly wider than those of the alternative metamodeling methods. It has been found that the added flexibility afforded by ANNs and GPs can make these methods superior for approximating complex models. In addition, GPs are exact interpolators, which is an important feature when the underlying computer model is deterministic (i.e., when there is no justification for including a random error component in the metamodel). On the other hand, when the number of observations from the computer model is sufficiently large, all three methods appear to perform comparably, indicating that in such cases RSs can still provide useful approximations. By Dustin R. Langewisch, S.M.
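Of the UA methods the thesis reviews, Latin Hypercube sampling is the simplest to show concretely: each one-dimensional marginal is split into n equal-probability strata and exactly one sample is drawn per stratum. A minimal sketch (not the thesis's implementation):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    # One sample per equal-probability stratum in every dimension: each
    # column is an independent random permutation of the n strata, with a
    # uniform jitter placing the point inside its stratum.
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T  # (n, d)
    return (strata + rng.random((n, d))) / n

rng = np.random.default_rng(42)
X = latin_hypercube(10, 3, rng)   # 10 points in the unit cube, 3 inputs
```

Compared with plain Monte Carlo, the stratification guarantees even coverage of every input's range, which is why LHS is attractive when each model evaluation costs hours: no budget is wasted on clustered samples.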