
    Learning and Control of Dynamical Systems

    Despite the remarkable success of machine learning in various domains in recent years, our understanding of its fundamental limitations remains incomplete. This knowledge gap poses a grand challenge when deploying machine learning methods in critical decision-making tasks, where incorrect decisions can have catastrophic consequences. To effectively utilize these learning-based methods in such contexts, it is crucial to explicitly characterize their performance. Over the years, significant research efforts have been dedicated to learning and control of dynamical systems where the underlying dynamics are unknown or only partially known a priori, and must be inferred from collected data. However, many of these classical results have focused on asymptotic guarantees, providing limited insight into the amount of data required to achieve desired control performance while satisfying operational constraints such as safety and stability, especially in the presence of statistical noise. In this thesis, we study the statistical complexity of learning and control of unknown dynamical systems. By utilizing recent advances in statistical learning theory, high-dimensional statistics, and control-theoretic tools, we aim to establish a fundamental understanding of the number of samples required to achieve desired (i) accuracy in learning the unknown dynamics, (ii) performance in the control of the underlying system, and (iii) satisfaction of operational constraints such as safety and stability. We provide finite-sample guarantees for these objectives and propose efficient learning and control algorithms that achieve the desired performance at these statistical limits in various dynamical systems. Our investigation spans a broad range of dynamical systems, from fully observable linear dynamical systems to partially observable linear dynamical systems and, ultimately, nonlinear systems.
We deploy our learning and control algorithms in various adaptive control tasks in real-world control systems and demonstrate their strong empirical performance along with their learning, robustness, and stability guarantees. In particular, we implement one of our proposed methods, Fourier Adaptive Learning and Control (FALCON), on an experimental aerodynamic testbed under extreme turbulent flow dynamics in a wind tunnel. The results show that FALCON achieves state-of-the-art stabilization performance and consistently outperforms conventional and other learning-based methods by at least 37%, despite using 8 times less data. The superior performance of FALCON arises from its physically and theoretically accurate modeling of the underlying nonlinear turbulent dynamics, which yields rigorous finite-sample learning and performance guarantees. These findings underscore the importance of characterizing the statistical complexity of learning and control of unknown dynamical systems.
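The finite-sample flavor of these guarantees can be illustrated with a toy example (not taken from the thesis): ordinary least-squares identification of a scalar linear system from a single trajectory, where the estimation error shrinks as the number of samples grows. All parameter and noise values below are illustrative choices.

```python
import random

# Minimal sketch, assuming a scalar system x_{t+1} = a*x_t + w_t with
# Gaussian process noise. The OLS estimate of `a` from one trajectory
# concentrates around the true parameter as T grows.
def identify_scalar_system(a_true=0.8, noise_std=0.1, T=2000, seed=0):
    rng = random.Random(seed)
    x = 1.0
    num, den = 0.0, 0.0
    for _ in range(T):
        x_next = a_true * x + rng.gauss(0.0, noise_std)
        num += x * x_next   # accumulate sum of x_t * x_{t+1}
        den += x * x        # accumulate sum of x_t^2
        x = x_next
    return num / den        # OLS estimate of a

a_hat = identify_scalar_system()
```

Finite-sample results of the kind studied in the thesis bound how quickly such an estimate concentrates around the true parameter as T grows.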

    Runway Safety Improvements Through a Data Driven Approach for Risk Flight Prediction and Simulation

    Runway overrun is one of the most frequently occurring flight accident types threatening the safety of aviation. Sensors have been improved with recent technological advancements and allow data collection during flights. The recorded data helps to better identify the characteristics of runway overruns. The improved technological capabilities and the growing air traffic led to increased momentum for reducing flight risk using artificial intelligence. Discussions on incorporating artificial intelligence to enhance flight safety are timely and critical. Using artificial intelligence, we may be able to develop the tools we need to better identify runway overrun risk and increase awareness of runway overruns. This work seeks to increase attitude, skill, and knowledge (ASK) of runway overrun risks by predicting the flight states near touchdown and simulating the flight exposed to runway overrun precursors. To achieve this, the methodology develops a prediction model and a simulation model. During the flight training process, the prediction model is used in flight to identify potential risks and the simulation model is used post-flight to review the flight behavior. The prediction model identifies potential risks by predicting flight parameters that best characterize the landing performance during the final approach phase. The predicted flight parameters are used to alert the pilots to any runway overrun precursors that may pose a threat. The predictions and alerts are made when thresholds of various flight parameters are exceeded. The flight simulation model simulates the final approach trajectory with an emphasis on capturing the effect wind has on the aircraft. The focus is on wind since it is a relatively significant factor during the final approach, when the aircraft is typically stabilized.
The flight simulation is used to quickly assess the differences between flight patterns that have triggered overrun precursors and normal flights with no abnormalities. The differences are crucial in learning how to mitigate adverse flight conditions. Both models are created with neural networks. The main challenges of developing a neural network model are the uniqueness of each model design space and the size of that design space. A model design space is unique to each problem and cannot accommodate multiple problems. A model design space can also be significantly large depending on the depth of the model. Therefore, a hyperparameter optimization algorithm is investigated and used to design the data and model structures to best characterize the aircraft behavior during the final approach. A series of experiments is performed to observe how the model accuracy changes with different data pre-processing methods for the prediction model and different neural network models for the simulation model. The data pre-processing methods include indexing the data by different frequencies and different window sizes, and data clustering. The neural network models include simple Recurrent Neural Networks, Gated Recurrent Units, Long Short-Term Memory, and Neural Network Autoregressive with Exogenous Input. Another series of experiments is performed to evaluate the robustness of these models to adverse wind and flare, since different wind conditions and flares represent controls that the models need to map to the predicted flight states. The most robust models are then used to identify significant features for the prediction model and the feasible control space for the simulation model. The outcomes of the most robust models are also mapped to the required landing distance metric so that the results of the prediction and simulation are easily read.
Then, the methodology is demonstrated with a sample flight exposed to an overrun precursor (high approach speed) to show how the models can potentially increase attitude, skill, and knowledge of runway overrun risk. The main contribution of this work is evaluating the accuracy and robustness of prediction and simulation models trained using Flight Operational Quality Assurance (FOQA) data. Unlike many studies that focused on optimizing the model structures to create the two models, this work optimized both the data and model structures to ensure that the data well capture the dynamics of the aircraft they represent. To achieve this, this work introduced a hybrid genetic algorithm that combines the benefits of conventional and quantum-inspired genetic algorithms to quickly converge to an optimal configuration while exploring the design space. With the optimized model, this work identified the data features, from the final approach, with a higher contribution to predicting airspeed, vertical speed, and pitch angle near touchdown. The top contributing features are altitude, angle of attack, core rpm, and airspeed. For both the prediction and the simulation models, this study examines the impact of various data preprocessing methods on the accuracy of the two models. The results may help future studies identify the right data preprocessing methods for their work. Another contribution of this work is evaluating how flight control and wind affect both the prediction and the simulation models. This is achieved by mapping the model accuracy at various levels of control surface deflection, wind speed, and wind direction change. The results showed fairly consistent prediction and simulation accuracy at different levels of control surface deflection and wind conditions, demonstrating that neural network-based models are effective in creating robust prediction and simulation models of aircraft during the final approach.
The results also showed that data frequency has a significant impact on the prediction and simulation accuracy, so it is important to have sufficient data to train the models under the conditions in which they will be used. The final contribution of this work is demonstrating how the prediction and the simulation models can be used to increase awareness of runway overrun.
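The threshold-based alerting step described above can be sketched as follows: predicted flight parameters near touchdown are compared against overrun-precursor limits. The parameter names and threshold values here are invented for illustration and are not from the FOQA study.

```python
# Hypothetical overrun-precursor thresholds (illustrative values only).
THRESHOLDS = {
    "airspeed_kt": 145.0,            # alert if predicted approach speed exceeds this
    "vertical_speed_fpm": -1000.0,   # alert if sink rate is steeper (more negative)
    "pitch_deg": 7.5,                # alert if pitch attitude exceeds this
}

def overrun_alerts(predicted):
    """Return the parameters whose predicted values breach a threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = predicted[name]
        if name == "vertical_speed_fpm":
            if value < limit:        # sink-rate threshold is a lower bound
                alerts.append(name)
        elif value > limit:
            alerts.append(name)
    return alerts

# Example: a high predicted approach speed triggers a single alert.
alerts = overrun_alerts({"airspeed_kt": 152.0,
                         "vertical_speed_fpm": -700.0,
                         "pitch_deg": 4.0})
```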

    Deep Learning-Based Wave Digital Modeling of Rate-Dependent Hysteretic Nonlinearities for Virtual Analog Applications

    Electromagnetic components greatly contribute to the peculiar timbre of analog audio gear. Indeed, distortion effects due to the nonlinear behavior of magnetic materials are known to play an important role in enriching the harmonic content of an audio signal. However, despite the abundant research that has been devoted to the characterization of nonlinearities in the context of virtual analog modeling over the years, the discrete-time simulation of circuits exhibiting rate-dependent hysteretic phenomena remains an open challenge. In this article, we present a novel data-driven approach for the wave digital modeling of rate-dependent hysteresis using recurrent neural networks (RNNs). Thanks to the modularity of wave digital filters, we are able to locally characterize the wave scattering relations of a hysteretic reluctance by encapsulating an RNN-based model into a single one-port wave digital block. Hence, we successfully apply the proposed methodology to the emulation of the output stage of a vacuum-tube guitar amplifier featuring a nonlinear transformer.
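The one-port wave digital abstraction mentioned above can be sketched as follows. A one-port maps the incident wave a = v + R*i to a reflected wave b = v - R*i; here a toy stateful nonlinearity stands in for the article's trained RNN, purely to show where such a learned model plugs into the block (it is not the article's network).

```python
import math

class NeuralOnePort:
    """Toy one-port wave digital block: incident wave in, reflected wave out."""

    def __init__(self, port_resistance=1.0, smoothing=0.5):
        self.R = port_resistance
        self.alpha = smoothing
        self.state = 0.0  # stands in for the RNN hidden state

    def reflect(self, a):
        # Update the internal state from the incident wave...
        self.state = self.alpha * self.state + (1 - self.alpha) * a
        # ...and emit a saturating reflected wave. The output depends on
        # the state history, not just the instantaneous input, which is
        # what makes the behavior rate-dependent.
        return math.tanh(self.state)

port = NeuralOnePort()
b = [port.reflect(a) for a in (0.0, 1.0, 1.0, 1.0)]
```

Because the block exposes only the wave scattering relation a -> b, it can be connected to the rest of a wave digital filter structure like any other one-port element.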

    From model-driven to data-driven: a review of hysteresis modeling in structural and mechanical systems

    Hysteresis is a natural phenomenon that widely exists in structural and mechanical systems. The characteristics of structural hysteretic behaviors are complicated. Therefore, numerous methods have been developed to describe hysteresis. In this paper, a review of the available hysteretic modeling methods is carried out. Such methods are divided into: a) model-driven and b) data-driven methods. The model-driven method uses parameter identification to determine parameters. Three types of parametric models are introduced, including polynomial models, differential-based models, and operator-based models. Four algorithms, namely the least mean square error algorithm, the Kalman filter algorithm, metaheuristic algorithms, and Bayesian estimation, are presented to realize parameter identification. The data-driven method utilizes universal mathematical models to describe hysteretic behavior. Regression models, artificial neural networks, least-squares support vector machines, and deep learning are introduced in turn as the classical data-driven methods. Hybrid model-data-driven methods are also discussed to make up for the shortcomings of the two methods. Based on a multi-dimensional evaluation, the existing problems and open challenges of different hysteresis modeling methods are discussed. Some possible research directions for hysteresis description are given in the final section.
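As a concrete instance of the differential-based models surveyed above, the following is a minimal explicit Euler integration of the classic Bouc-Wen hysteresis model, dz = dx * (A - (beta*sign(dx*z) + gamma) * |z|^n); the parameter values are arbitrary demonstration choices, not from the review.

```python
def bouc_wen_response(displacements, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Return the hysteretic variable z driven by a displacement history."""
    z = 0.0
    zs = []
    x_prev = displacements[0]
    for x in displacements:
        dx = x - x_prev
        # sign(dx * z), written out to avoid importing math.copysign
        sign = 1.0 if dx * z > 0 else (-1.0 if dx * z < 0 else 0.0)
        z += dx * (A - (beta * sign + gamma) * abs(z) ** n)
        zs.append(z)
        x_prev = x
    return zs

# Drive the model with one cycle of a triangular displacement wave;
# loading and unloading trace different z paths, i.e. a hysteresis loop.
up = [i / 100.0 for i in range(101)]  # 0 -> 1
cycle = (up + up[::-1][1:]
         + [-v for v in up[1:]] + [-v for v in up[::-1][1:]])
z_hist = bouc_wen_response(cycle)
```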

    An AI-based solution for wireless channel interference prediction and wireless remote control

    Most control systems rely on wired connectivity between controllers and plants due to their need for fast and reliable real-time control. Yet the demand for mobility, scalability, and low operational and maintenance costs calls for wireless networked control system designs. Naturally, over-the-air communication is susceptible to interference and fading; therefore, enabling low latency and high reliability is crucial for wireless control scenarios. In this view, the work of this thesis aims to enhance the reliability of the wireless communication and to optimize the energy consumption while maintaining low latency and the stability of the controller-plant system. To achieve this goal, two core abstractions have been used: a neural wireless channel interference predictor and a neural predictive controller. This neural predictor design is motivated by the capability of machine learning to assimilate the underlying patterns and dynamics of systems from observed data. The system model is composed of a controller-plant scheme in which the controller transmits control signals wirelessly. The neural wireless predictor and the neural controller predict wireless channel interference and plant states, respectively. This information is used to optimize energy consumption and prevent communication outages while controlling the plant. This thesis presents the development of the neural wireless predictor, the neural controller, and a neural plant. The interaction and functionality of these elements are demonstrated using a Simulink simulation. The simulation results illustrate the effectiveness of neural networks in both the control and wireless domains. The proposed solution yields about a 17% reduction in energy consumption compared to state-of-the-art designs by minimizing the impact of interference in the control links while ensuring plant stability.
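A highly simplified sketch of how an interference prediction can drive an energy-saving transmit decision is shown below. The power levels, the reliability target, and the linear signal-to-interference model are all invented for illustration; this is not the thesis's predictor or controller.

```python
POWER_LEVELS_MW = [1.0, 2.0, 5.0, 10.0]  # hypothetical transmit powers
SIR_TARGET = 4.0                          # required signal/interference ratio

def choose_power(predicted_interference_mw):
    """Pick the lowest power that keeps the SIR above the reliability target."""
    for p in POWER_LEVELS_MW:             # scan from lowest to highest power
        if p / predicted_interference_mw >= SIR_TARGET:
            return p
    return POWER_LEVELS_MW[-1]            # fall back to maximum power

# Low predicted interference -> low power; high interference -> high power.
low = choose_power(0.2)
high = choose_power(2.0)
```

The energy saving comes from not transmitting at maximum power when the predictor indicates a quiet channel, while the target ratio guards against outages that would destabilize the plant.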

    Event-based decision support algorithm for real-time flood forecasting in urban drainage systems using machine learning modelling

    Urban flooding is a major problem for cities around the world, with significant socio-economic consequences. Conventional real-time flood forecasting models rely on continuous time-series data and often have limited accuracy, especially for lead times longer than 2 hrs. This study proposes a novel event-based decision support algorithm for real-time flood forecasting using event-based data identification, event-based dataset generation, and a real-time decision tree flowchart built on machine learning models. The results of applying the framework to a real-world case study demonstrate higher accuracy in forecasting water level rise, especially for longer lead times (e.g., 2–3 hrs), compared to traditional models. The proposed framework reduces the root mean square error by 50%, increases the accuracy of flood forecasting by 50%, and improves the normalised Nash–Sutcliffe error by 20%. The proposed event-based dataset framework can significantly enhance the accuracy of flood forecasting, reducing both false alarms and missed floods and improving emergency response systems.
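The improvements above are reported in terms of the root mean square error and a Nash–Sutcliffe measure; their standard definitions can be computed as follows (the sample water-level data are made up for illustration).

```python
import math

def rmse(observed, predicted):
    """Root mean square error between two equal-length series."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))

def nash_sutcliffe(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 is a perfect forecast, 0 matches the mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [0.2, 0.5, 1.1, 1.8, 1.4]   # observed water levels (m), illustrative
pred = [0.3, 0.4, 1.0, 1.7, 1.5]  # forecast water levels (m), illustrative
err = rmse(obs, pred)
nse = nash_sutcliffe(obs, pred)
```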

    Essay on international economics

    This thesis should appeal to several audiences. The literature reviews and empirical examinations will aid economists and academic researchers in navigating the literature and will be valuable for their work. Practitioners and forecasters at central banks and commercial companies are likewise interested in learning which predictors, models, and approaches accurately estimate currency rates. Policymakers, for whom the success of policy choices depends heavily on accurate projections, should also be interested in our review of the current state of the research. Lastly, the regular coverage of exchange rate predictions in the media suggests that this study might be applicable outside academic and policy circles. This thesis studies two aspects of international economics: international finance and international trade, and it is organised as follows: Part I provides an in-depth description of the background research that formed the basis for this thesis. Part II consists of three empirically-based original chapters that are independent of one another and each make a unique contribution to the international economics literature. In the Appendix, more technical theories, such as machine learning and decomposition analysis, are described in greater detail.