523 research outputs found

    On the smoothness of nonlinear system identification

    Full text link
    We shed new light on the smoothness of optimization problems arising in prediction error parameter estimation of linear and nonlinear systems. We show that for regions of the parameter space where the model is not contractive, the Lipschitz constant and β-smoothness of the objective function might blow up exponentially with the simulation length, making it hard to numerically find minima within those regions or, even, to escape from them. In addition to providing theoretical understanding of this problem, this paper also proposes the use of multiple shooting as a viable solution. The proposed method minimizes the error between a prediction model and the observed values. Rather than running the prediction model over the entire dataset, multiple shooting splits the data into smaller subsets and runs the prediction model over each subset, making the simulation length a design parameter and making it possible to solve problems that would be infeasible using a standard approach. Equivalence to the original problem is obtained by including constraints in the optimization. The new method is illustrated by estimating the parameters of nonlinear systems with chaotic or unstable behavior, as well as neural networks. We also present a comparative analysis of the proposed method with multi-step-ahead prediction error minimization.
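    The single- versus multiple-shooting contrast described in this abstract can be sketched numerically. The snippet below is an illustrative construction, not the authors' code: it estimates the parameter of a mildly unstable scalar system x[t+1] = a·x[t], and for simplicity fixes the segment initial states to the measured data rather than treating them as decision variables tied by equality constraints, as the full multiple-shooting formulation does.

```python
import numpy as np

# Illustrative sketch (not the authors' code): single vs multiple shooting
# on a mildly unstable scalar system x[t+1] = a * x[t]. Segment initial
# states are fixed to the data for simplicity, instead of being free
# decision variables with continuity constraints as in the full method.

rng = np.random.default_rng(0)
a_true = 1.05                          # non-contractive: simulation errors compound
x = np.empty(200)
x[0] = 1.0
for t in range(199):
    x[t + 1] = a_true * x[t] + 0.01 * rng.standard_normal()

def single_shooting_loss(a):
    # free-run simulation over the whole record: deviations grow with length
    sim, err = x[0], 0.0
    for t in range(199):
        sim = a * sim
        err += (sim - x[t + 1]) ** 2
    return err

def multiple_shooting_loss(a, segment=10):
    # restart the simulation from the measured state every `segment` steps,
    # bounding how far errors can grow within each subset
    err = 0.0
    for s in range(0, 199, segment):
        sim = x[s]
        for t in range(s, min(s + segment, 199)):
            sim = a * sim
            err += (sim - x[t + 1]) ** 2
    return err

grid = np.linspace(0.9, 1.2, 601)
a_ss = grid[np.argmin([single_shooting_loss(a) for a in grid])]
a_ms = grid[np.argmin([multiple_shooting_loss(a) for a in grid])]
```

    Splitting the record makes the effective simulation length a tunable design parameter, which is the mechanism the paper exploits to keep the objective well behaved for non-contractive models.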

    A novel chaotic time series prediction method and its application to carrier vibration interference attitude prediction of stabilized platform

    Get PDF
    To address the problems of previous chaotic time series prediction methods, this paper proposes a novel prediction method that applies a modified GM(1,1) model with optimized parameters to study the evolution of phase-point L1 norms in reconstructed phase space. Phase space reconstruction theory, via the C-C method, is used to reconstruct the unobserved phase space of the chaotic time series, from which an L1-norm series of the phase points is obtained. The modified GM(1,1) model, improved by optimizing the background value and the initial condition, is used to forecast the evolution of the phase-point L1 norm. Measured data from a stabilized-platform experiment and three classical chaotic time series are used to evaluate the performance of the proposed model, with three accuracy evaluation criteria employed to test the prediction method. The empirical results on the stabilized platform are encouraging and indicate that the newly proposed method performs well in predicting chaotic time series.
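    The GM(1,1) model at the core of the method above can be sketched in its standard, unmodified form; the paper's background-value and initial-condition optimizations are omitted, and the data here are illustrative.

```python
import numpy as np

# Textbook GM(1,1) grey forecasting sketch (the paper's modified variant,
# with optimised background value and initial condition, is not reproduced).

def gm11_forecast(x0, steps=1):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated generating operation (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])            # background values
    # least-squares fit of the grey model x0[k] = -a * z[k] + b
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    return np.diff(x1_hat)[-steps:]         # inverse AGO restores the series

# a near-exponential series is recovered closely, as GM(1,1) assumes
series = 2.0 * 1.1 ** np.arange(8)
pred = gm11_forecast(series, steps=1)       # forecast of the next value
```

    GM(1,1) presumes a quasi-exponential trend, which is why the paper applies it to the slowly varying L1-norm series of phase points rather than to the raw chaotic signal.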

    Linear Regression and Unsupervised Learning For Tracking and Embodied Robot Control.

    Get PDF
    Computer vision problems, such as tracking and robot navigation, tend to be solved using models of the objects of interest to the problem. These models are often either hard-coded, or learned in a supervised manner. In either case, an engineer is required to identify the visual information that is important to the task, which is both time-consuming and problematic. Issues with these engineered systems relate to the ungrounded nature of the knowledge imparted by the engineer, where the systems have no meaning attached to the representations. This leads to systems that are brittle and prone to failure when expected to act in environments not envisaged by the engineer. The work presented in this thesis removes the need for hard-coded or engineered models of either visual information representations or behaviour. This is achieved by developing novel approaches for learning from example, in both input (percept) and output (action) spaces. This approach leads to the development of novel feature tracking algorithms, and methods for robot control. Applying this approach to feature tracking, unsupervised learning is employed, in real time, to build appearance models of the target that represent the input space structure, and this structure is exploited to partition banks of computationally efficient, linear-regression-based target displacement estimators. This thesis presents the first application of regression-based methods to the problem of simultaneously modelling and tracking a target object. The computationally efficient Linear Predictor (LP) tracker is investigated, along with methods for combining and weighting flocks of LPs. The tracking algorithms developed operate with accuracy comparable to other state-of-the-art online approaches and with a significant gain in computational efficiency. This is achieved as a result of two specific contributions.
    First, novel online approaches for the unsupervised learning of modes of target appearance that identify aspects of the target are introduced. Second, a general tracking framework is developed within which the identified aspects of the target are adaptively associated to subsets of a bank of LP trackers. This results in the partitioning of LPs and the online creation of aspect-specific LP flocks that facilitate tracking through significant appearance changes. Applying the approach to the percept-action domain, unsupervised learning is employed to discover the structure of the action space, and this structure is used in the formation of meaningful perceptual categories, and to facilitate the use of localised input-output (percept-action) mappings. This approach provides a realisation of an embodied and embedded agent that organises its perceptual space, and hence its cognitive process, based on interactions with its environment. Central to the proposed approach is the technique of clustering an input-output exemplar set, based on output similarity, and using the resultant input exemplar groupings to characterise a perceptual category. All input exemplars that are coupled to a certain class of outputs form a category - the category of a given affordance, action or function. In this sense the formed perceptual categories have meaning and are grounded in the embodiment of the agent. The approach is shown to identify the relative importance of perceptual features and is able to solve percept-action tasks, defined only by demonstration, in previously unseen situations. Within this percept-action learning framework, two alternative approaches are developed. The first approach employs hierarchical output space clustering of point-to-point mappings, to achieve search efficiency and input and output space generalisation, as well as a mechanism for identifying the important variance and invariance in the input space.
    The exemplar hierarchy provides, in a single structure, a mechanism for classifying previously unseen inputs and generating appropriate outputs. The second approach to a percept-action learning framework integrates the regression mappings used in the feature tracking domain with the action space clustering and imitation learning techniques developed in the percept-action domain. These components are utilised within a novel percept-action data mining methodology that is able to discover the visual entities that are important to a specific problem, and to map from these entities onto the action space. Applied to the robot control task, this approach allows for real-time generation of continuous action signals, without the use of any supervision or definition of representations or rules of behaviour.
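    The Linear Predictor idea behind the tracker above can be sketched in one dimension: learn a linear map from sparse intensity differences to the displacement that caused them, then estimate an unseen shift with a single matrix-vector product. The 1-D "image", sample layout, and all sizes below are illustrative, not the thesis implementation.

```python
import numpy as np

# Hypothetical 1-D Linear Predictor (LP) displacement estimator sketch:
# regress displacement on intensity differences at sparse support pixels.

xs = np.linspace(0, 4 * np.pi, 400)
signal = np.sin(xs) + 0.3 * np.sin(np.linspace(0, 23, 400))  # synthetic "image"

ref = 200                               # template centre
support = np.arange(-20, 21, 4)         # 11 sparse sample offsets around it
template = signal[ref + support]

# training: apply known displacements, record the intensity differences
disps = np.arange(-8, 9)
D = np.array([signal[ref + d + support] - template for d in disps])
H = np.linalg.lstsq(D, disps.astype(float), rcond=None)[0]   # LP weights

# tracking: one dot product per frame estimates the displacement
true_d = 5
est = float((signal[ref + true_d + support] - template) @ H)
```

    The per-frame cost is a single inner product over a handful of pixels, which is the source of the computational efficiency the abstract claims for LP flocks.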

    Algorithmic Modelling of Financial Conditions for Macro Predictive Purposes: Pilot Application to USA Data

    Get PDF
    Aggregate financial conditions indices (FCIs) are constructed to fulfil two aims: (i) The FCIs should resemble non-model-based composite indices in that their composition is adequately invariant for concatenation during regular updates; (ii) the concatenated FCIs should outperform financial variables conventionally used as leading indicators in macro models. Both aims are shown to be attainable once an algorithmic modelling route is adopted to combine leading indicator modelling with the principles of partial least-squares (PLS) modelling, supervised dimensionality reduction, and backward dynamic selection. Pilot results using US data confirm the traditional wisdom that financial imbalances are more likely to induce macro impacts than routine market volatilities. They also shed light on why the popular route of principal-component based factor analysis is ill-suited for the two aims
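    The contrast the abstract draws between supervised (PLS-style) and unsupervised (principal-component) dimensionality reduction can be illustrated on synthetic data. The snippet below is a minimal one-component sketch under assumed toy data, not the paper's algorithmic modelling route: the PLS weight vector is the covariance of the indicators with the target, so the extracted factor is target-relevant by construction, while the first principal component ignores the target.

```python
import numpy as np

# One-component PLS factor vs first principal component on synthetic data.
# PLS weights each indicator by its covariance with the target (supervised
# dimensionality reduction); PCA uses variance alone and ignores the target.

rng = np.random.default_rng(2)
n, p = 200, 10
X = rng.standard_normal((n, p))                              # "financial indicators"
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.standard_normal(n)   # "macro target"

Xc, yc = X - X.mean(axis=0), y - y.mean()
w = Xc.T @ yc                       # PLS weight vector: covariance with the target
w /= np.linalg.norm(w)
factor = Xc @ w                     # supervised factor (the "FCI" analogue)
corr_pls = np.corrcoef(factor, yc)[0, 1]

# unsupervised benchmark: first principal component of the same panel
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
corr_pc = abs(np.corrcoef(Xc @ Vt[0], yc)[0, 1])
```

    With the target loading on only two of ten indicators, the PLS factor tracks it closely while the leading principal component need not, which is the intuition behind the paper's claim that principal-component factor analysis is ill-suited for leading-indicator purposes.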

    SIMULATION-BASED DECISION MODEL TO CONTROL DYNAMIC MANUFACTURING REQUIREMENTS: APPLICATION OF GREY FORECASTING - DQFD

    Get PDF
    Manufacturing systems have to adapt to changing requirements of their internal and external customers. In fact, new requirements may appear unexpectedly and may change multiple times. Change is a straightforward reality of production, and the engineer has to deal with a dynamic work environment. In this perspective, this paper proposes a decision model to fit actual and future process needs. The proposed model is based on dynamic quality function deployment (DQFD), the grey forecasting model GM(1,1) and the technique for order preference by similarity to ideal solution (TOPSIS). A cascading QFD-based model is used to show the applicability of the proposed methodology. The simulation results illustrate the effect of changes in manufacturing needs on strategic, operational and technical improvements.
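    The TOPSIS step of the methodology is a compact, well-defined algorithm and can be sketched directly; the decision matrix, weights, and criteria below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Minimal TOPSIS sketch on an invented decision matrix: rank three
# improvement alternatives on two benefit criteria and one cost criterion.

def topsis(M, weights, benefit):
    M = M / np.linalg.norm(M, axis=0)           # vector-normalise each criterion
    V = M * weights                             # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)    # distance to anti-ideal
    return d_neg / (d_pos + d_neg)              # relative closeness in [0, 1]

M = np.array([[7., 9., 9.],   # alternative A: c1, c2 (benefit), c3 (cost)
              [8., 7., 8.],   # alternative B
              [9., 6., 8.]])  # alternative C
w = np.array([0.4, 0.4, 0.2])
scores = topsis(M, w, benefit=np.array([True, True, False]))
best = int(np.argmax(scores))   # index of the preferred alternative
```

    In the paper's pipeline, the weights would come from the DQFD cascade and the forecast requirement values from GM(1,1), with TOPSIS performing the final ranking of improvement options.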

    A novel servo control method based on feedforward control – Fuzzy-grey predictive controller for stabilized and tracking platform system

    Get PDF
    Through analysis of the time-delay characteristics of the position tracking loop and of the attitude disturbances acting on stabilized and tracking platform systems, a compound control method based on adaptive fuzzy-grey prediction control (CAGPC) is proposed to improve the disturbance suppression performance and response of such systems. First, a feedforward controller targeting external disturbances is introduced to improve the disturbance suppression performance of the platform servo system. Second, to overcome the disadvantages of the conventional fixed prediction step size and of the prediction-error forecast model in fuzzy-grey prediction, an adaptive adjustment module is proposed that tunes the prediction step and the comprehensive error weight simultaneously: according to the actual control error and the prediction error, the fuzzy-grey prediction step and the prediction-error weights are regulated to improve the control precision and the adaptability of the system prediction model. Finally, numerical simulation results and experiments on a stabilized and tracking platform verify that the compound control method improves the servo system response and the ability to suppress external disturbances, and that CAGPC performs better in the stabilized and tracking platform system.
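    The first step of the compound scheme, disturbance feedforward on top of feedback, can be illustrated on a toy plant. The snippet below is not the paper's CAGPC: it uses a plain PI loop and an exactly measurable disturbance, simply to show why adding a feedforward term improves suppression.

```python
import numpy as np

# Toy compound-control sketch: first-order plant dx/dt = a*x + b*(u + d)
# with a measurable sinusoidal disturbance d, regulated to zero by PI
# feedback alone vs PI plus disturbance feedforward. All gains and the
# plant are illustrative, not the paper's CAGPC design.

dt, N = 0.01, 2000
a, b = -2.0, 1.0
Kp, Ki = 8.0, 20.0

def run(feedforward):
    x, integ = 0.0, 0.0
    errs = []
    for k in range(N):
        d = np.sin(2 * np.pi * 0.5 * k * dt)   # external disturbance
        e = 0.0 - x                            # regulation error
        integ += e * dt
        u = Kp * e + Ki * integ                # PI feedback
        if feedforward:
            u -= d                             # cancel the measured disturbance
        x += dt * (a * x + b * (u + d))        # Euler step of the plant
        errs.append(abs(x))
    return float(np.mean(errs))

err_fb = run(False)   # feedback only: residual oscillation remains
err_ff = run(True)    # feedback + feedforward: disturbance cancelled
```

    With an exact disturbance measurement the feedforward term cancels d entirely; the paper's contribution is the realistic case, where the grey predictor supplies a forecast of the disturbance and the adaptive module manages its prediction error.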

    Improving the rainfall nowcast for fine temporal and spatial scales suitable for urban hydrology

    Get PDF
    Accurate Quantitative Precipitation Forecasts (QPF) at high spatial and temporal resolution are crucial for urban flood prediction. Typically, Lagrangian persistence based on radar data is used to nowcast rainfall intensities with up to 3 hours lead time, but it is nevertheless not able to deliver reliable QPFs past 20 min lead time (also known as the predictability limit). Especially for extreme events causing pluvial floods, accurate QPFs cannot be achieved past 5 min lead time. Furthermore, when compared to gauge recordings, the QPFs are not useful at all. There is an essential need to provide better QPFs by improving the rainfall field supplied to the nowcast and by employing non-linear processes for the extrapolation of rainfall into the future. This study focuses on these two main problems, and investigates different geostatistical and data-driven methods for the improvement of QPFs at fine scales. The study was conducted within the Hannover radar range, where observations from 2000 to 2018 were available. The skill of the nowcast models was assessed at the point (1 km² and 5 min) and storm scales, based on continuous criteria comparing both radar and gauge observations. A total of 100 gauge measurements inside the study area were also employed for the assessment. From the period 2000-2012, 93 events of different properties were distinguished and used as a basis for the method development and assessment. Two state-of-the-art nowcast models (HyRaTrac and Lucas-Kanade) were chosen as references and used as benchmarks for improvement. To improve the rainfall field, real-time merging of radar and gauge data was investigated. Among the different merging techniques (mean field bias, quantile bias correction and kriging interpolation), conditional merging (CM) yielded the best rainfall field.
    When fed to the reference nowcast models, it led to improvements of up to 1 hour in the predictability limit and in the agreement between radar-based QPFs and gauge data. To improve the QPF accuracy even further, data-driven techniques were developed to learn non-linear behaviour from past observed rainfall. First, a nearest-neighbour approach (k-NN) was developed and employed instead of Lagrangian persistence in the HyRaTrac nowcast model. The k-NN method accounts for the non-linearity of storm evolution by consulting the k most similar past storms. A deterministic nowcast issued by averaging the behaviours of the 3 most similar storms yielded the best results, extending the predictability limit at the storm scale to 2-3 hours. Second, an ensemble nowcast accounting for the 10 closest neighbours was generated in order to estimate the uncertainty of the QPF. Third, a deep convolutional neural network (CNN) was trained on past merged data in order to learn the non-linearity of the rainfall process. The network, based on the last 15 min of observed radar images, proved to successfully capture decay and death processes, and partly birth processes, and extended the rainfall predictability limit at the point scale to 3 hours. Lastly, the methods were tested on 17 convective extreme events, extracted from the period 2013-2018, to compare them for an urban flood nowcast application. The CNN based on merged data outperformed both reference methods as well as the k-NN-based nowcast, with the predictability limit reaching 30-40 min. The k-NN approach, although better than Lagrangian persistence, suffered greatly from the shortcomings of the storm tracking algorithm under fast-moving and extreme storms. To conclude, even though clear improvements were achieved, there is a clear limit to the data-driven methods that cannot be overcome unless they are coupled with convection initialization from Numerical Weather Prediction (NWP) models.
    Nevertheless, complex relationships learned from past observed data, together with a better rainfall field as input, were proven to be useful in increasing the QPF accuracy and predictability limits for urban hydrology applications.
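    The nearest-neighbour (analogue) forecasting idea behind the k-NN nowcast can be sketched on a toy series: predict the next value by averaging what followed the k most similar past windows. The periodic "rainfall intensity" series, window length, and k below are illustrative, not the thesis configuration.

```python
import numpy as np

# Analogue (k-NN) forecasting sketch: find the k past windows most similar
# to the current one and average their successors as the prediction.

def knn_forecast(series, window=6, k=3):
    hist = np.asarray(series, dtype=float)
    query = hist[-window:]                       # the current pattern
    # candidate windows strictly in the past, each with a known successor
    cands = np.array([hist[i:i + window] for i in range(len(hist) - window)])
    succ = hist[window:]
    dist = np.linalg.norm(cands - query, axis=1)
    nearest = np.argsort(dist)[:k]               # indices of the k analogues
    return float(succ[nearest].mean())

# periodic toy "rainfall intensity": exact analogues of the query exist
t = np.arange(120)
series = 5 + 4 * np.sin(2 * np.pi * t / 24)
pred = knn_forecast(series[:-1])                 # predict the held-out last value
```

    The method inherits the limits noted in the abstract: it can only reproduce behaviour present in the archive of past storms, and it depends on the tracking step that defines what a "storm" window is.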

    Risk Management using Model Predictive Control

    Get PDF
    Forward planning and risk management are crucial for the success of any system or business dealing with the uncertainties of the real world. Previous approaches have largely assumed that the future will be similar to the past, or used simple forecasting techniques based on ad-hoc models. Improving solutions requires better projection of future events, and necessitates robust forward planning techniques that consider forecasting inaccuracies. This work advocates risk management through optimal control theory, and proposes several techniques to combine it with time-series forecasting. Focusing on applications in foreign exchange (FX) and battery energy storage systems (BESS), the contributions of this thesis are three-fold. First, a short-term risk management system for FX dealers is formulated as a stochastic model predictive control (SMPC) problem in which the optimal risk-cost profiles are obtained through dynamic control of the dealers’ positions on the spot market. Second, grammatical evolution (GE) is used to automate non-linear time-series model selection, validation, and forecasting. Third, a novel measure for evaluating forecasting models, as a part of the predictive model in finite horizon optimal control applications, is proposed. Using both synthetic and historical data, the proposed techniques were validated and benchmarked. It was shown that the stochastic FX risk management system exhibits better risk management on a risk-cost Pareto frontier compared to rule-based hedging strategies, with up to 44.7% lower cost for the same level of risk. Similarly, for a real-world BESS application, it was demonstrated that the GE optimised forecasting models outperformed other prediction models by at least 9%, improving the overall peak shaving capacity of the system to 57.6%
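    The stochastic-optimization idea at the heart of the SMPC formulation can be sketched in one period: choose how much of an open position to hedge by minimizing transaction cost plus a risk penalty over sampled scenarios. All numbers and the one-period mean-variance cost model below are illustrative, not the thesis formulation, which optimizes dealer positions over a horizon under constraints.

```python
import numpy as np

# Scenario-based sketch of stochastic hedging: pick the hedge fraction h of
# an open FX position that minimises transaction cost plus a variance risk
# penalty over Monte Carlo rate-return scenarios. Illustrative numbers only.

rng = np.random.default_rng(3)
position = 1.0e6                                  # open position (foreign currency units)
spread = 1e-4                                     # transaction cost per unit hedged
scenarios = rng.normal(0.0, 5e-4, size=1000)      # sampled one-period rate returns

def cost(h, risk_aversion=1e-3):
    residual = position * (1 - h) * scenarios     # P&L scenarios of the unhedged part
    trans = spread * position * h                 # cost of hedging a fraction h
    return trans + risk_aversion * residual.var() # mean-variance objective

grid = np.linspace(0.0, 1.0, 101)
h_star = grid[np.argmin([cost(h) for h in grid])] # optimal hedge ratio on the grid
```

    The trade-off produces an interior optimum: hedging everything wastes spread, hedging nothing leaves all the rate risk, and the risk-cost Pareto frontier the thesis reports is traced out by sweeping the risk-aversion parameter.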

    Digital image compression

    Get PDF