
    Learning as a Nonlinear Line of Attraction for Pattern Association, Classification and Recognition

    This dissertation presents the development of a mathematical model for learning a nonlinear line of attraction, in contrast to the conventional recurrent neural network model in which memory is stored as attractive fixed points at discrete locations in state space. A nonlinear line of attraction encapsulates attractive fixed points scattered in state space as a single attractive nonlinear line, describing patterns with similar characteristics as a family of patterns. Guaranteeing the convergence of the recurrent network's dynamics is usually imperative for associative learning and recall. We propose to alter this picture: if the brain remembers by converging to the states representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image. The conception of nonlinear line attractor dynamics that operate between stable and unstable states is the second contribution of this dissertation research. These criteria can be used to circumvent the plasticity-stability dilemma by using the unstable state as an indicator to create a new line for an unfamiliar pattern. This novel learning strategy uses the stability (convergence) and instability (divergence) criteria of the designed dynamics to induce self-organizing behavior, and the self-organizing behavior of the nonlinear line attractor model can manifest complex dynamics in an unsupervised manner. The third contribution of this dissertation is the introduction of the concept of a manifold of color perception. The fourth contribution is the development of a nonlinear dimensionality reduction technique that embeds a set of related observations into a low-dimensional space using the memory matrices learned by the nonlinear line attractor network. Development of a system for affective state computation is also presented. This system can extract the user's mental state in real time using a low-cost computer, and it has been successfully interfaced with an advanced learning environment for human-computer interaction.
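The convergent half of the recall dynamics described in this abstract can be illustrated with a toy sketch. Everything below is an illustrative assumption rather than the dissertation's actual model: the attractor is simplified to a straight line fitted to stored patterns, and the update rule, rate, and tolerance are invented for the demo; the real model uses a nonlinear line and additionally diverges for unfamiliar inputs.

```python
import numpy as np

def fit_line(patterns):
    """Least-squares line through a family of stored patterns (a toy
    stand-in for the learned nonlinear line of attraction)."""
    X = np.asarray(patterns, dtype=float)
    mean = X.mean(axis=0)
    # The first principal direction spans the attractive line.
    _, _, vt = np.linalg.svd(X - mean)
    return mean, vt[0]

def recall(x, mean, direction, rate=0.5, steps=50, tol=1e-6):
    """Iterate toy recall dynamics: each step moves the state toward its
    projection on the line. Returns the final state and its distance
    to the line."""
    x = np.asarray(x, dtype=float)
    for _ in range(steps):
        proj = mean + np.dot(x - mean, direction) * direction
        x = x + rate * (proj - x)            # contraction toward the line
        if np.linalg.norm(proj - x) < tol:
            break
    proj = mean + np.dot(x - mean, direction) * direction
    return x, np.linalg.norm(x - proj)

# A small family of patterns lying roughly on a line in 2-D state space.
patterns = [[0.0, 0.1], [1.0, 1.1], [2.0, 1.9], [3.0, 3.05]]
mean, direction = fit_line(patterns)
state, dist = recall([1.5, 0.0], mean, direction)
print(dist < 1e-3)  # the state has converged onto the attractive line
```

With these linear dynamics every input converges; reproducing the divergence used to flag unfamiliar patterns would require the unstable component of the designed dynamics, which is beyond this sketch.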

    The wavelet-NARMAX representation : a hybrid model structure combining polynomial models with multiresolution wavelet decompositions

    A new hybrid model structure combining polynomial models with multiresolution wavelet decompositions is introduced for nonlinear system identification. Polynomial models play an important role in approximation theory and have been extensively used in linear and nonlinear system identification. Wavelet decompositions, in which the basis functions have the property of localization in both time and frequency, outperform many other approximation schemes and offer a flexible solution for approximating arbitrary functions. Although wavelet representations can approximate even severe nonlinearities in a given signal very well, their advantage can be lost when wavelets are used to capture linear or low-order nonlinear behaviour in a signal. In order to exploit the global property of polynomials and the local property of wavelet representations simultaneously, in this study polynomial models and wavelet decompositions are combined in a parallel structure to represent nonlinear input-output systems. As a special form of the NARMAX model, this hybrid model structure is referred to as the WAvelet-NARMAX model, or simply WANARMAX. Generally, such a WANARMAX representation of an input-output system might involve a large number of basis functions and therefore a great number of model terms. Experience reveals that only a small number of these model terms are significant to the system output. A new fast orthogonal least squares algorithm, called the matching pursuit orthogonal least squares (MPOLS) algorithm, is also introduced in this study to determine which terms should be included in the final model.
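The term-selection idea can be sketched generically. This is not the thesis's MPOLS algorithm, only a minimal greedy forward selection in the spirit of matching pursuit combined with least-squares refitting; the dictionary here is a small polynomial basis, orthonormalised so that the demo's recovery is unambiguous.

```python
import numpy as np

def greedy_term_selection(D, y, n_terms):
    """Greedy forward selection over candidate regressors (columns of D):
    pick the column most correlated with the current residual, refit all
    selected terms by least squares, and repeat."""
    selected, residual = [], y.copy()
    for _ in range(n_terms):
        scores = np.abs(D.T @ residual)
        scores[selected] = -np.inf               # skip already-chosen terms
        selected.append(int(np.argmax(scores)))
        coeffs, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
        residual = y - D[:, selected] @ coeffs   # residual after refitting
    return selected, coeffs

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
# Candidate dictionary: polynomial terms 1, x, x^2, x^3, orthonormalised
# (via QR) so that only two columns carry the signal below.
D, _ = np.linalg.qr(np.column_stack([np.ones_like(x), x, x**2, x**3]))
y = 2.0 * D[:, 1] + 0.5 * D[:, 3]                # only two significant terms
terms, _ = greedy_term_selection(D, y, n_terms=2)
print(sorted(terms))                              # -> [1, 3]
```

The real MPOLS algorithm operates on a much larger mixed polynomial/wavelet dictionary and uses an orthogonalised search, but the select-refit loop above captures why only a few significant terms survive.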

    Machine Learning for Fluid Mechanics

    The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments, and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that can be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications.
    Comment: To appear in the Annual Reviews of Fluid Mechanics, 202

    Forecasting Models for Integration of Large-Scale Renewable Energy Generation to Electric Power Systems

    Amid growing concerns about climate change and the depletion of non-renewable energy sources, variable renewable energy sources (VRESs) are considered a feasible substitute for conventional, environment-polluting fossil fuel-based power plants. Furthermore, the transition towards clean power systems requires additional transmission capacity. Dynamic thermal line rating (DTLR) is being considered as a potential solution to enhance current transmission line capacity and omit or postpone transmission system expansion planning; however, DTLR is highly dependent on weather variations. With the increasing accommodation of VRESs and the application of DTLR, the resulting fluctuations and variations impose severe and unprecedented challenges on power system operation. Therefore, short-term forecasting of large-scale VRESs and DTLR plays a crucial role in electric power system operation problems. To this end, this thesis focuses on developing forecasting models for two large-scale VRES types (i.e., wind and tidal) and for DTLR. Deterministic prediction can be employed for a variety of power system operation problems solved by deterministic optimization. The outcomes of deterministic prediction can also be employed for conditional probabilistic prediction, which can be used to model uncertainty in power system operation problems with robust optimization, chance-constrained optimization, etc. Given the importance of deterministic prediction, deterministic prediction models are developed first. Prevalently, time-frequency decomposition approaches are adopted to decompose the wind power time series (TS) into several less non-stationary and less non-linear components, which can be predicted more precisely. However, in addition to non-stationarity and nonlinearity, wind power TS exhibit chaotic characteristics, which reduce the predictability of the wind power TS.
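The decompose-then-predict workflow referred to above can be sketched generically. This is a simplified illustration, not the thesis's MSSSA/LSSVM model: a causal moving average stands in for the multi-scale decomposition, and a least-squares autoregressive model stands in for the LSSVM predictor of each component.

```python
import numpy as np

def decompose(ts, window=12):
    """Split a series into a smooth trend and a residual component.
    A causal moving average stands in for the thesis's MSSSA."""
    kernel = np.ones(window) / window
    trend = np.convolve(ts, kernel, mode="valid")  # mean of last `window` points
    return trend, ts[window - 1:] - trend          # residual on aligned samples

def ar_forecast(component, order=4):
    """One-step-ahead forecast of one component with a least-squares
    autoregressive model (a stand-in for the LSSVM predictor)."""
    X = np.column_stack([component[i:len(component) - order + i]
                         for i in range(order)])
    y = component[order:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return component[-order:] @ w

rng = np.random.default_rng(1)
t = np.arange(400)
# Synthetic "wind power" series: a slow oscillation plus noise.
ts = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(400)
trend, resid = decompose(ts)
# Forecast each component separately, then recombine.
forecast = ar_forecast(trend) + ar_forecast(resid)
print(float(forecast))
```

The point of the structure is that each component is less non-stationary than the raw series, so its predictor has an easier task; the component forecasts are then summed to reconstruct the series forecast.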
    In this regard, a wind power generation prediction model that accounts for the chaosity of the wind power generation TS is developed. The model consists of a novel TS decomposition approach, named multi-scale singular spectrum analysis (MSSSA), and least squares support vector machines (LSSVMs). Furthermore, a deterministic tidal TS prediction model is developed, employing a variant of empirical mode decomposition (EMD) that alleviates the issues associated with EMD. To further improve prediction accuracy, the impact of the different components of the wind power TS, with different frequencies (scales), on the spatiotemporal modeling of the wind farm is assessed. Consequently, a multiscale spatiotemporal wind power prediction model is developed using information theory-based feature selection, wavelet decomposition, and LSSVM. Power system operation problems with robust optimization and interval optimization require prediction intervals (PIs) to model the uncertainty of renewables. Advanced PI models are mainly based on non-differentiable and non-convex cost functions, which make the use of heuristic optimization inevitable for tuning the large number of unknown parameters of the prediction models. However, heuristic optimization suffers from several issues (e.g., being trapped in local optima and irreproducibility). To this end, a new wind power PI (WPPI) model based on a bi-level optimization structure is put forward, in which the main unknown parameters of the prediction model are globally tuned by optimizing a convex and differentiable cost function. To address the non-differentiability and non-convexity of the PI formulation, an asymmetrically adaptive quantile regression (AAQR), which benefits from a linear formulation, is proposed for tidal uncertainty modeling.
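For reference, QR-based interval models of the kind discussed here are trained by minimising the pinball (quantile) loss. The definition below is the standard one, not anything specific to this thesis; the toy numbers only show its asymmetry.

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Average pinball (quantile) loss at quantile level tau: the cost
    minimised by quantile regression when estimating the tau-quantile."""
    e = y - q_pred
    return float(np.mean(np.maximum(tau * e, (tau - 1.0) * e)))

y = np.array([1.0, 2.0, 3.0])
# For a low tau, over-prediction is penalised more than under-prediction,
# which pushes the fitted curve down toward the low quantile.
print(pinball_loss(y, y + 1.0, tau=0.1))  # -> 0.9
print(pinball_loss(y, y - 1.0, tau=0.1))  # -> 0.1
```

A PI is then formed from a pair of fitted quantiles (e.g., tau = 0.05 and 0.95 for a symmetric choice); the AAQR idea is to let that pair be asymmetric around the median.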
    In prevalent QR-based PI models, for a specified reliability level, the probabilities of the quantiles are selected symmetrically with respect to the median probability. However, it is found that asymmetrical and adaptive selection of quantiles with respect to the median can provide more efficient PIs. To make the formulation of AAQR linear, an extreme learning machine (ELM) is adopted as the prediction engine. Prevalently, the parameters of the activation functions in ELM are selected randomly, while different sets of random values can result in dissimilar prediction accuracy. To this end, a heuristic optimization is devised to tune the parameters of the activation functions. Also, to enhance the accuracy of probabilistic DTLR, the inclusion of latent variables in DTLR prediction is assessed. It is observed that the convective cooling rate can provide informative features for DTLR prediction. Furthermore, to address the high-dimensional feature space in DTLR, a DTLR prediction model based on deep learning and latent variables is put forward. The numerical results of this thesis are based on realistic data. The simulations confirm the superiority of the proposed models over traditional benchmark models as well as state-of-the-art models.
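The ELM structure mentioned above, with randomly drawn activation-function parameters and analytically solved output weights, can be sketched as follows. This is a generic least-squares ELM, not the thesis's model: the AAQR variant replaces the least-squares step with a linear pinball-loss programme to obtain quantiles, and the hidden-layer sizes, seeds, and test function below are arbitrary choices for the demo.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Extreme learning machine sketch: random hidden-layer parameters,
    closed-form least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(300, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(300)
W, b, beta = elm_fit(X, y)
rmse = float(np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
print(rmse)
```

Because only the output weights are trained, the fit is a single linear solve; the dependence of accuracy on the random draw of `W` and `b` (try different `seed` values) is exactly the issue the thesis's heuristic tuning of activation-function parameters targets.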