Neural networks for non-linear adaptive filtering
Neural networks are shown to be a class of non-linear adaptive filters, which can undergo continual training with a possibly infinite number of time-ordered examples; this is an altogether different framework from the usual, non-adaptive training of neural networks. A family of new gradient-based algorithms, founded on techniques for evaluating the gradient of a cost function, is proposed.
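The idea of continual, gradient-based adaptation can be sketched as follows. This is a minimal illustration, not the authors' specific algorithms: the network size, step size and toy signal are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer network used as a non-linear filter: y_hat = w2 . tanh(W1 @ x + b1) + b2
n_in, n_hidden = 4, 8
W1 = 0.1 * rng.standard_normal((n_hidden, n_in)); b1 = np.zeros(n_hidden)
w2 = 0.1 * rng.standard_normal(n_hidden);          b2 = 0.0
lr = 0.01  # adaptation gain (step size), illustrative value

def adapt(x, y):
    """One adaptive step: gradient of the instantaneous squared error."""
    global W1, b1, w2, b2
    h = np.tanh(W1 @ x + b1)
    y_hat = w2 @ h + b2
    e = y_hat - y                      # instantaneous error
    g_h = e * w2 * (1.0 - h**2)        # backpropagate through the hidden layer
    W1 -= lr * np.outer(g_h, x); b1 -= lr * g_h
    w2 -= lr * e * h;            b2 -= lr * e
    return e

# Continual training on a (possibly infinite) stream of time-ordered examples
u = rng.standard_normal(5000)
for t in range(n_in, len(u)):
    x = u[t - n_in:t]                          # sliding window of past inputs
    y = np.sin(u[t - 1]) + 0.5 * u[t - 2]**2   # toy non-linear "process" output
    adapt(x, y)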
Non-Linear Recursive Identification And Control By Neural Networks: A General Framework
INTRODUCTION The development of engineering applications of neural networks makes it necessary to clarify the similarities and differences between the concepts and methods developed for neural networks and those used in more classical fields such as filtering and control. In previous papers [Nerrand et al. 1993], [Marcos et al. 1993], the relationships between non-linear adaptive filters and neural networks have been investigated, and a general framework has been introduced, which encompasses the recursive training of neural networks and the adaptation of non-linear filters. Out of this approach, three new families of training algorithms for feedback networks emerged; algorithms used routinely in adaptive filtering and in the training of neural networks were shown to be specific cases of this general approach. The adaptive identification of non-linear processes is a natural field of application of these algorithms. The first part of the paper will be devoted to …
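As a hedged sketch of the recursive-identification setting described above (the regressor orders, model structure and simulated process are illustrative assumptions, not the paper's experiments), a NARX-type neural predictor can be adapted at every sampling instant from the latest input/output measurements:

import numpy as np

rng = np.random.default_rng(1)

# Regressor orders (assumed for illustration): 2 past outputs, 2 past inputs
ny, nu = 2, 2
n_in, n_h = ny + nu, 6
W1 = 0.1 * rng.standard_normal((n_h, n_in)); b1 = np.zeros(n_h)
w2 = 0.1 * rng.standard_normal(n_h);         b2 = 0.0
lr = 0.02

def identify_step(phi, y):
    """One recursive-identification step: predict, then adapt on the new measurement."""
    global W1, b1, w2, b2
    h = np.tanh(W1 @ phi + b1)
    y_hat = w2 @ h + b2
    e = y_hat - y
    g = e * w2 * (1 - h**2)
    W1 -= lr * np.outer(g, phi); b1 -= lr * g
    w2 -= lr * e * h;            b2 -= lr * e
    return y_hat, e

# Simulated non-linear process (illustrative): y(t) = 0.6 y(t-1) + tanh(u(t-1)) + noise
u = rng.standard_normal(3000)
y = np.zeros_like(u)
for t in range(1, len(u)):
    y[t] = 0.6 * y[t - 1] + np.tanh(u[t - 1]) + 0.05 * rng.standard_normal()

for t in range(max(ny, nu), len(u)):
    phi = np.concatenate([y[t - ny:t], u[t - nu:t]])  # measured past outputs and inputs
    identify_step(phi, y[t])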
Training Recurrent Neural Networks: Why and How? An Illustration in Dynamical Process Modeling.
The paper first summarizes a general approach to the training of recurrent neural networks by gradient-based algorithms, which leads to the introduction of four families of training algorithms. Because of the variety of possibilities thus available to the "neural network designer", the choice of the appropriate algorithm to solve a given problem becomes critical. We show that, in the case of process modeling, this choice depends on how noise interferes with the process to be modeled; this is evidenced by three examples of modeling of dynamical processes, where the detrimental effect of inappropriate training algorithms on the prediction error made by the network is clearly demonstrated. 1 INTRODUCTION During the past few years, there has been a growing interest in the training of recurrent neural networks, either for associative memory tasks, or for tasks related to grammatical inference, time series prediction, process modeling and process control. A general framework for the training …
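The dependence on noise can be made concrete with a small sketch. The model f and the terminology "teacher-forced" versus "free-run" are placeholders chosen for the example (the abstract does not name the four algorithm families): feeding measured past outputs to the predictor suits noise that enters the process equation, whereas feeding back the model's own past predictions suits additive noise on the measured output.

import numpy as np

def f(y1, y2, u1, theta):
    """Placeholder one-step model y(t) = f(y(t-1), y(t-2), u(t-1)); theta would be trained."""
    a, b, c = theta
    return a * y1 + b * y2 + c * np.tanh(u1)

def predict_teacher_forced(y_meas, u, theta):
    # Past *measured* outputs are fed to the model (suited to equation/state noise).
    y_hat = np.zeros_like(y_meas)
    for t in range(2, len(u)):
        y_hat[t] = f(y_meas[t - 1], y_meas[t - 2], u[t - 1], theta)
    return y_hat

def predict_free_run(y_meas, u, theta):
    # The model's *own* past predictions are fed back (suited to additive output noise).
    y_hat = np.zeros_like(y_meas)
    y_hat[:2] = y_meas[:2]
    for t in range(2, len(u)):
        y_hat[t] = f(y_hat[t - 1], y_hat[t - 2], u[t - 1], theta)
    return y_hat

# Training minimises the squared error between y_meas and the chosen predictor's output;
# the gradient computation, and hence the training algorithm, differs between the two cases.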
Adaptive Training Of Feedback Neural Networks For Non-Linear Filtering
The paper proposes a general framework which encompasses the training of neural networks and the adaptation of filters. It is shown that neural networks can be considered as general non-linear filters which can be trained adaptively, i.e. which can undergo continual training. A unified view of gradient-based training algorithms for feedback networks is proposed, which gives rise to new algorithms. The use of some of these algorithms is illustrated by examples of non-linear adaptive filtering and process identification. INTRODUCTION In recent papers [1, 2], a general framework, which encompasses algorithms used for the training of neural networks and algorithms used for the adaptation of filters, has been proposed. Specifically, it was shown that neural networks can be used adaptively, i.e. can undergo continual training with a possibly infinite number of time-ordered examples - in contradistinction to the traditional training of neural networks with a finite number of examples presented …
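For a feedback (recurrent) network used as an adaptive filter, the gradient at each adaptation step can be evaluated over a short window by propagating the output sensitivities forward through the feedback loop. The sketch below illustrates this bookkeeping with a single feedback output and a truncation depth of 3; the filter structure, process and constants are assumptions made for the example, not necessarily one of the paper's specific algorithm families.

import numpy as np

rng = np.random.default_rng(2)
w_y, w_u = 0.1, 0.1          # parameters of a minimal feedback filter (assumed structure)
lr, depth = 0.01, 3          # adaptation gain and truncation depth (illustrative)

def adapt_window(u_win, y_win):
    """One adaptation step: run the feedback loop over `depth` samples and
    propagate the output sensitivities forward (truncated gradient)."""
    global w_y, w_u
    s, ds_dwy, ds_dwu = 0.0, 0.0, 0.0             # state and its sensitivities at window start
    g_wy = g_wu = 0.0
    for u_t, y_t in zip(u_win, y_win):
        ts = np.tanh(s)
        ds_dwy = ts + w_y * (1 - ts**2) * ds_dwy   # d y_hat / d w_y through the feedback
        ds_dwu = u_t + w_y * (1 - ts**2) * ds_dwu
        s = w_y * ts + w_u * u_t                   # feedback filter output y_hat(t)
        e = s - y_t
        g_wy += e * ds_dwy
        g_wu += e * ds_dwu
    w_y -= lr * g_wy
    w_u -= lr * g_wu

# Continual adaptation on a stream: slide a window of `depth` samples along the data
u = rng.standard_normal(2000)
y = np.zeros_like(u)
for t in range(1, len(u)):
    y[t] = 0.5 * np.tanh(y[t - 1]) + 0.8 * u[t]    # toy feedback "process"
for t in range(depth, len(u)):
    adapt_window(u[t - depth:t], y[t - depth:t])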
Deep Multilayer Perceptron for Knowledge Extraction: Understanding the Gardon de Mialet Flash Floods Modeling
From: ITISE 2019 - International Conference on Time Series and Forecasting, Granada, Spain, 25-27 September 2019. Flash floods frequently hit Southern France and cause heavy damage and fatalities. To enhance the safety of people and goods, the official flood forecasting services in France need accurate information and efficient models to optimize their decisions and policy in crisis management. Forecasting these floods is a serious challenge, as the heavy rainfalls that cause them are very heterogeneous in time and space. Such phenomena are typically non-linear and more complex than classical flood events. This analysis has led to considering complementary alternatives to enhance the management of such situations. For decades, artificial neural networks have proven very efficient at modeling non-linear phenomena, particularly rainfall-discharge relations in various types of basins. They are applied in this study with two main goals: first, modeling flash floods on the Gardon de Mialet basin (Southern France); second, extracting internal information from the model by using the KnoX knowledge extraction method, to provide new ways to improve models. The first analysis shows that the kind of non-linear predictor strongly influences the representation of information, e.g., the main influential variable (rainfall) is more important in the recurrent and static models than in the feed-forward one. For understanding "long-term" flash-flood genesis, recurrent and static models thus appear to be better candidates, despite their lower performance. Besides, the distribution of the weights linking the exogenous variables to the first layer of neurons is consistent with physical considerations about the spatial distribution of rainfall and the response time of the hydrological system.
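The weight-based analysis mentioned at the end can be illustrated with a sketch in the spirit of the KnoX method. The actual KnoX procedure is not detailed in this abstract; the computation below is a common weight-aggregation heuristic, and the layer sizes and input labels are assumptions: the relative contribution of each exogenous input is estimated from the magnitudes of the weights linking it, through the hidden layers, to the output.

import numpy as np

def input_contributions(weight_matrices):
    """Estimate the relative contribution of each input variable by propagating
    absolute weight magnitudes from the inputs to the output (heuristic sketch)."""
    # weight_matrices[k] has shape (n_units_k, n_units_{k-1}); biases are ignored here.
    prod = np.abs(weight_matrices[0])
    for W in weight_matrices[1:]:
        prod = np.abs(W) @ prod          # chain the |weights| layer by layer
    contrib = prod.sum(axis=0)           # one score per input variable
    return contrib / contrib.sum()       # normalise to fractions of the total

# Illustrative multilayer perceptron: 5 exogenous inputs (e.g. rain gauges), two hidden layers
rng = np.random.default_rng(3)
W1 = rng.standard_normal((8, 5))
W2 = rng.standard_normal((4, 8))
W3 = rng.standard_normal((1, 4))
print(input_contributions([W1, W2, W3]))  # relative weight of each input on the output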