
    Nonlinear Dynamic System Identification in the Spectral Domain Using Particle-Bernstein Polynomials

    System identification (SI) is the discipline of inferring mathematical models of unknown dynamic systems from their input/output observations, with or without prior knowledge of some of the system parameters. Many valid algorithms are available in the literature, including Volterra series expansion, Hammerstein–Wiener models, and the nonlinear auto-regressive moving average model with exogenous inputs (NARMAX) and its derivatives (NARX, NARMA). Different nonlinear estimators can be used with those algorithms, such as polynomials, neural networks or wavelet networks. This paper uses a different approach, named particle-Bernstein polynomials, as an estimator for SI. Moreover, unlike the aforementioned algorithms, this approach does not operate in the time domain but rather on the spectral components of the signals, obtained through the discrete Karhunen–Loève transform (DKLT). Some experiments are performed to validate this approach using a publicly available dataset of ground vibration tests recorded from a real F-16 aircraft. The experiments show better results when compared with some of the traditional algorithms, especially for large, heterogeneous datasets such as the one used. In particular, the absolute error obtained with the proposed method is 63% smaller with respect to NARX and from 42% to 62% smaller with respect to various artificial-neural-network-based approaches.
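The spectral step of the approach can be illustrated with a minimal DKLT sketch: the transform basis is the set of eigenvectors of the sample covariance of windowed signal segments, and the "spectral components" are the projections onto that basis. The test signal, window length, and component count below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sin(0.1 * np.arange(2000)) + 0.1 * rng.standard_normal(2000)

# Split the signal into windows (rows = observations of a random vector).
L = 50
X = x[: len(x) // L * L].reshape(-1, L)

# DKLT basis: eigenvectors of the sample covariance matrix,
# sorted by decreasing eigenvalue (energy).
C = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
basis = eigvecs[:, order]

# Spectral components: projection of each window onto the basis.
S = X @ basis

# Most of the energy concentrates in the first few components, so a
# low-dimensional estimator can work on them instead of raw time samples.
energy_ratio = float(eigvals[order][:5].sum() / eigvals.sum())
print(round(energy_ratio, 3))
```

An estimator (in the paper, particle-Bernstein polynomials; here any regressor) would then be trained on the leading columns of `S` rather than on the raw time series.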

    An acquisition system of in-house parameters from wireless sensors for the identification of an environmental model

    This paper presents a system for the acquisition of in-house parameters, such as temperature, pressure, humidity and so on, that can be used for the intelligent control of a building. The main objective of this work is to determine an environmental model of an in-house room using machine learning techniques. The system is based on a low data-rate network of sensing and control nodes to acquire the data, realized with a new protocol, called ToLHnet, that is able to employ both wired and wireless communication on different media. Several standard machine learning techniques, namely linear regression, the classification and regression tree algorithm, and support vector machines, have been used for the regression of the input-output thermal model. Additionally, a recently proposed technique named particle-Bernstein polynomials has been successfully applied. Experimental results show that this technique outperforms the previous techniques in both accuracy and computation time.
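The simplest of the baselines above, linear regression of the input-output thermal model, can be sketched as an ordinary least-squares fit. The feature choice (outdoor temperature, heater power, previous indoor sample) and the synthetic data are illustrative assumptions, not the ToLHnet dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
t_out = 10 + 5 * rng.standard_normal(n)   # outdoor temperature [deg C]
power = rng.uniform(0, 2, n)              # heater power [kW]
t_prev = 20 + rng.standard_normal(n)      # previous indoor sample [deg C]

# Synthetic ground truth: linear thermal response plus sensor noise.
t_in = 0.1 * t_out + 1.5 * power + 0.8 * t_prev + 0.05 * rng.standard_normal(n)

# Ordinary least squares: theta = argmin ||A @ theta - t_in||^2
A = np.column_stack([t_out, power, t_prev, np.ones(n)])
theta, *_ = np.linalg.lstsq(A, t_in, rcond=None)

rmse = float(np.sqrt(np.mean((t_in - A @ theta) ** 2)))
print([round(float(c), 2) for c in theta], round(rmse, 3))
```

The nonlinear regressors in the paper (CART, SVM, particle-Bernstein polynomials) replace the linear map `A @ theta` while keeping the same feature/target split.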

    From model-driven to data-driven: a review of hysteresis modeling in structural and mechanical systems

    Hysteresis is a natural phenomenon that widely exists in structural and mechanical systems. The characteristics of structural hysteretic behaviors are complicated; therefore, numerous methods have been developed to describe hysteresis. In this paper, a review of the available hysteretic modeling methods is carried out. Such methods are divided into a) model-driven and b) data-driven methods. The model-driven method uses parameter identification to determine model parameters. Three types of parametric models are introduced: polynomial models, differential-based models, and operator-based models. Four algorithms, namely the least mean square error algorithm, the Kalman filter algorithm, metaheuristic algorithms, and Bayesian estimation, are presented to realize parameter identification. The data-driven method utilizes universal mathematical models to describe hysteretic behavior. Regression models, artificial neural networks, least squares support vector machines, and deep learning are introduced in turn as the classical data-driven methods. Hybrid model-data-driven methods are also discussed to make up for the shortcomings of the two approaches. Based on a multi-dimensional evaluation, the existing problems and open challenges of the different hysteresis modeling methods are discussed. Some possible research directions for hysteresis description are given in the final section.
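A concrete instance of the differential-based class reviewed above is the classical Bouc-Wen model, whose internal variable z evolves according to a first-order nonlinear ODE driven by the displacement rate. The sketch below integrates it with explicit Euler; the parameter values are illustrative, not taken from the review.

```python
import numpy as np

def bouc_wen(x, dt, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Integrate dz/dt = A*dx - beta*|dx|*|z|^(n-1)*z - gamma*dx*|z|^n
    (explicit Euler), where dx is the displacement rate."""
    z = np.zeros_like(x)
    for k in range(1, len(x)):
        dx = (x[k] - x[k - 1]) / dt
        dz = (A * dx
              - beta * abs(dx) * abs(z[k - 1]) ** (n - 1) * z[k - 1]
              - gamma * dx * abs(z[k - 1]) ** n)
        z[k] = z[k - 1] + dz * dt
    return z

# Cyclic displacement input -> hysteretic internal variable z.
t = np.linspace(0, 10, 2001)
x = np.sin(t)
z = bouc_wen(x, t[1] - t[0])
print(round(float(z.max()), 3))
```

With beta + gamma = A, the internal variable saturates at |z| <= 1, which is what produces the characteristic bounded hysteresis loop; parameter identification (the least-mean-square, Kalman, metaheuristic, or Bayesian algorithms above) amounts to fitting A, beta, gamma, and n to measured loops.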

    A Novel Data-Driven Modeling and Control Design Method for Autonomous Vehicles

    This paper presents a novel modeling method for the control design of autonomous vehicle systems. The goal of the method is to provide a control-oriented model in a predefined Linear Parameter Varying (LPV) structure. The scheduling variables of the LPV model are selected through machine-learning-based methods using a big dataset. Moreover, the LPV model parameters are computed through an optimization algorithm, with which an accurate fit on the dataset is achieved. The proposed method is illustrated on the nonlinear modeling of the lateral vehicle dynamics. The resulting LPV-based vehicle model is used for the control design of the path-following functionality of autonomous vehicles. The effectiveness of the modeling and control design methods is illustrated through comprehensive simulation examples based on high-fidelity simulation software.
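The parameter-fitting step described above reduces to linear least squares when the LPV dependence is affine in the scheduling variable. As a hedged sketch, consider a scalar discrete-time system y[k+1] = a(rho)*y[k] + b(rho)*u[k] with a(rho) = a0 + a1*rho and b(rho) = b0 + b1*rho; the paper fits a full lateral-dynamics model, and this toy system is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000
u = rng.standard_normal(N)          # input (e.g., steering)
rho = rng.uniform(0.0, 1.0, N)      # scheduling variable (e.g., speed)

# Synthetic "true" affine dependence: a(rho)=0.9-0.3*rho, b(rho)=0.5+0.2*rho.
y = np.zeros(N + 1)
for k in range(N):
    y[k + 1] = ((0.9 - 0.3 * rho[k]) * y[k]
                + (0.5 + 0.2 * rho[k]) * u[k]
                + 0.01 * rng.standard_normal())

# The affine coefficients [a0, a1, b0, b1] enter linearly, so least
# squares on the regressor matrix recovers them directly.
Phi = np.column_stack([y[:N], rho * y[:N], u, rho * u])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print([round(float(c), 2) for c in theta])
```

The fitted coefficients define A(rho) and B(rho) of the LPV model, which standard LPV control synthesis can then consume.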

    Design and implementation of machine learning techniques for modeling and managing battery energy storage systems

    The fast technological evolution and industrialization that have characterized humankind since the fifties have caused a progressive, exponential increase in CO2 emissions and in the Earth's temperature. Therefore, the research community and political authorities have recognized the need for a deep technological revolution in both transportation and energy distribution systems to hinder climate change. Thus, pure and hybrid electric powertrains, smart grids, and microgrids are key technologies for achieving the expected goals. Nevertheless, the development of the above-mentioned technologies requires very effective and high-performing Battery Energy Storage Systems (BESSs), and even more effective Battery Management Systems (BMSs). Considering the above background, this Ph.D. thesis has focused on the development of an innovative and advanced BMS that involves the use of machine learning techniques for improving the BESS effectiveness and efficiency. Great attention has been paid to the State of Charge (SoC) estimation problem, aiming at investigating solutions for achieving more accurate and reliable estimations. To this aim, the main contribution has concerned the development of accurate and flexible models of electrochemical cells. Three main modeling requirements have been pursued for ensuring accurate SoC estimations: insight into the cell physics, nonlinear approximation capability, and flexible system identification procedures. Thus, the research activity has aimed at fulfilling these requirements by developing and investigating three different modeling approaches, namely black, white, and gray box techniques. Extreme Learning Machines, Radial Basis Function Neural Networks, and Wavelet Neural Networks were considered among the black box models, but none of them was able to achieve satisfactory SoC estimation performance.
The white box Equivalent Circuit Models (ECMs) have achieved better results, proving the benefit that insight into the cell physics provides to the SoC estimation task. Nevertheless, it has become clear that the linearity of ECMs reduces their effectiveness in the SoC task. Thus, the gray box Neural Networks Ensemble (NNE) and the white box Equivalent Neural Networks Circuit (ENNC) models have been developed, aiming at exploiting neural network theory in order to achieve accurate models while ensuring very flexible system identification procedures together with nonlinear approximation capabilities. The performances of NNE and ENNC have been compelling. In particular, the white box ENNC has reached the most effective performance, achieving accurate SoC estimations together with a simple architecture and a flexible system identification procedure. The outcome of this thesis enables an interesting scenario in which a suitable cloud framework provides remote assistance to several BMSs in order to adapt the managing algorithms to the aging of BESSs, even considering different and distinct applications.
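The white-box baseline discussed above can be sketched as a first-order RC equivalent circuit model with Coulomb-counting SoC. All parameter values (cell capacity, R0, R1, C1, and the toy linear OCV curve) are illustrative assumptions, not the thesis's identified parameters.

```python
import numpy as np

def simulate_ecm(current, dt, soc0=0.8, capacity_ah=2.5,
                 r0=0.05, r1=0.02, c1=2000.0):
    """Return (soc, terminal_voltage) for a discharge current profile [A],
    using Coulomb counting and a single RC branch."""
    soc = np.empty(len(current))
    v = np.empty(len(current))
    s, v_rc = soc0, 0.0
    tau = r1 * c1
    for k, i in enumerate(current):
        s -= i * dt / (capacity_ah * 3600.0)          # Coulomb counting
        # Exact discretization of the RC branch voltage for constant i over dt.
        v_rc = v_rc * np.exp(-dt / tau) + r1 * (1 - np.exp(-dt / tau)) * i
        ocv = 3.0 + 1.2 * s                           # toy linear OCV(SoC)
        soc[k] = s
        v[k] = ocv - r0 * i - v_rc                    # terminal voltage
    return soc, v

# One hour of 1 A discharge at 1 s resolution.
soc, v = simulate_ecm(np.ones(3600), dt=1.0)
print(round(float(soc[-1]), 3), round(float(v[-1]), 3))
```

The linearity the thesis criticizes is visible here: the voltage response to current is linear once OCV(SoC) is fixed, which is what the nonlinear NNE and ENNC models are meant to overcome.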

    GREAT3 results I: systematic errors in shear estimation and the impact of real galaxy morphology

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically-varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially-varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity. Comment: 32 pages + 15 pages of technical appendices; 28 figures; submitted to MNRAS; latest version has minor updates in the presentation of 4 figures, with no changes in content or conclusions.
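The closing statement, that additive shear biases depend linearly on PSF ellipticity (c = c0 + alpha * e_psf), is easy to illustrate with a linear fit. The per-field measurements below are synthetic stand-ins, not GREAT3 submission data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic per-field PSF ellipticities and additive shear biases with a
# small PSF-leakage coefficient (alpha = 0.02) plus measurement noise.
e_psf = rng.uniform(-0.05, 0.05, 200)
c = 1e-4 + 0.02 * e_psf + 1e-4 * rng.standard_normal(200)

# A straight-line fit recovers the leakage coefficient and the offset c0.
alpha, c0 = np.polyfit(e_psf, c, 1)
print(round(float(alpha), 3))
```

In a shear-calibration analysis, a significantly nonzero alpha flags imperfect PSF correction, while c0 captures PSF-independent additive bias.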

    Parameterizing and Aggregating Activation Functions in Deep Neural Networks

    The nonlinear activation functions applied by each neuron in a neural network are essential for making neural networks powerful representational models. If these are omitted, even deep neural networks reduce to simple linear regression, because a linear combination of linear combinations is still a linear combination. In much of the existing literature on neural networks, just one or two activation functions are selected for the entire network, even though the use of heterogeneous activation functions has been shown to produce superior results in some cases. Even less often employed are activation functions that can adapt their nonlinearities as network parameters along with standard weights and biases. This dissertation presents a collection of papers that advance the state of heterogeneous and parameterized activation functions. Contributions of this dissertation include three novel parametric activation functions and applications of each, a study evaluating the utility of the parameters in parametric activation functions, an aggregated activation approach to modeling time-series data as an alternative to recurrent neural networks, and an improvement upon existing work that aggregates neuron inputs using a product instead of a sum.
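A standard example of the parametric-activation idea described above is PReLU, whose negative-side slope alpha is trained by gradient descent along with weights and biases. This is a well-known activation used here for illustration, not one of the dissertation's three novel functions.

```python
import numpy as np

def prelu(x, alpha):
    """Parametric ReLU: identity for x >= 0, slope alpha for x < 0."""
    return np.where(x >= 0, x, alpha * x)

def prelu_grad_alpha(x):
    """d prelu / d alpha: nonzero only where the input is negative."""
    return np.where(x >= 0, 0.0, x)

# Train alpha alone by gradient descent on a squared error against a
# target nonlinearity whose true slope is alpha* = 0.25.
x = np.linspace(-2, 2, 401)
target = prelu(x, 0.25)
alpha = 0.0
for _ in range(200):
    err = prelu(x, alpha) - target
    grad = np.mean(2 * err * prelu_grad_alpha(x))
    alpha -= 0.5 * grad
print(round(float(alpha), 3))
```

In a full network the same gradient flows through backpropagation, so each neuron (or layer) can learn its own nonlinearity, which is the mechanism the dissertation's parametric activations build on.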