
    Decision tree learning for intelligent mobile robot navigation

    The replication of human intelligence, learning and reasoning by means of computer algorithms is termed Artificial Intelligence (AI), and the interaction of such algorithms with the physical world can be achieved using robotics. The work described in this thesis investigates the application of concept learning (an approach which takes its inspiration from biological motivations, and from survival instincts in particular) to robot control and path planning. The methodology of concept learning has been applied using learning decision trees (DTs), which induce domain knowledge from a finite set of training vectors; these in turn systematically describe a physical entity and are used to train a robot to learn new concepts and to adapt its behaviour. To achieve behaviour learning, this work introduces the novel approach of hierarchical learning and knowledge decomposition within the frame of the reactive robot architecture. Following the analogy with survival instincts, the robot is first taught how to survive in very simple and homogeneous environments, namely a world without any disturbances or any kind of "hostility". Once this simple behaviour, named a primitive, has been established, the robot is trained to acquire new knowledge to cope with increasingly complex environments by adding further worlds to its existing knowledge. The repertoire of robot behaviours, in the form of symbolic knowledge, is retained in a hierarchy of clustered decision trees accommodating a number of primitives. To classify robot perceptions, control rules are synthesised using symbolic knowledge derived from searching the hierarchy of DTs. A second novel concept is introduced, namely that of multi-dimensional fuzzy associative memories (MDFAMs). These are clustered fuzzy decision trees (FDTs) which are trained locally and accommodate specific perceptual knowledge. Fuzzy logic is incorporated to deal with the inherent noise in sensory data and to merge conflicting behaviours of the DTs. In this thesis, the feasibility of the developed techniques is illustrated in robot applications, and their benefits and drawbacks are discussed.
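
    As a rough illustration of the decision-tree side of this idea (not the thesis's hierarchy of clustered DTs, nor its fuzzy extension), the sketch below trains a single tree on invented range-sensor vectors labelled with hypothetical motion primitives; the sensor layout, labels and values are made up for the example, and scikit-learn is assumed to be available.

```python
# Minimal sketch: a single decision tree mapping hypothetical range-sensor
# vectors to motion primitives. Sensor layout and labels are illustrative only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each training vector: [left, front-left, front, front-right, right] distances (m).
X = [
    [2.0, 2.0, 2.0, 2.0, 2.0],   # open space
    [2.0, 1.5, 0.3, 1.5, 2.0],   # obstacle ahead
    [0.3, 0.5, 2.0, 2.0, 2.0],   # wall on the left
    [2.0, 2.0, 2.0, 0.5, 0.3],   # wall on the right
]
y = ["go_forward", "turn", "follow_wall_left", "follow_wall_right"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The induced symbolic rules can be inspected and turned into control rules.
print(export_text(tree, feature_names=["left", "fleft", "front", "fright", "right"]))
print(tree.predict([[2.0, 1.2, 0.4, 1.0, 2.0]]))  # a new perception vector
```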

    Data Mining in Smart Grids

    Effective smart grid operation requires rapid decisions in a data-rich, but information-limited, environment. In this context, grid sensor data-streaming cannot provide the system operators with the necessary information to act on within the time frames needed to minimize the impact of disturbances. Even if there are fast models that can convert the data into information, the smart grid operator must deal with the challenge of not having a full understanding of the context of the information, and, therefore, the information content cannot be used with a high degree of confidence. To address this issue, data mining has been recognized as the most promising enabling technology for improving decision-making processes, providing the right information at the right moment to the right decision-maker. This Special Issue is focused on emerging methodologies for data mining in smart grids. In this area, it addresses many relevant topics, ranging from methods for uncertainty management to advanced dispatching. This Special Issue not only focuses on methodological breakthroughs and roadmaps for implementing the methodology, but also presents the much-needed sharing of best practices. Topics include, but are not limited to, the following: fuzziness in smart grid computing; emerging techniques for renewable energy forecasting; robust and proactive solutions for optimal smart grid operation; fuzzy-based smart grid monitoring and control frameworks; granular computing for uncertainty management in smart grids; and self-organizing and decentralized paradigms for information processing.

    Models for time series prediction based on neural networks. Case study: GLP sales prediction from ANCAP.

    A time series is a sequence of real values that can be considered as observations of a certain system. In this work, we are interested in time series coming from dynamical systems. Such systems can sometimes be described by a set of equations that model the underlying mechanism from which the samples come. However, in several real systems those equations are unknown, and the only information available is a set of temporal measurements that constitute a time series. On the other hand, for practical reasons a prediction is usually required, i.e. knowing the (approximate) value of the series at a future instant t. The goal of this thesis is to solve one such real-world prediction problem: given historical data on liquefied bottled propane gas sales, predict future gas sales as accurately as possible. This time series prediction problem is addressed by means of neural networks, using both (dynamic) reconstruction and prediction. The problem of dynamically reconstructing the original system consists in building a model that captures certain of its characteristics, so that there is a correspondence between the long-term behaviour of the model and that of the system. The network design process is basically guided by three ingredients. The dimensionality of the problem is explored by our first ingredient, the Takens-Mañé theorem, by means of which the optimal dimension of the (neural) network input can be investigated. Our second ingredient is a strong theorem: neural networks with a single hidden layer are universal approximators. As the third ingredient, we approached the search for the optimal size of the hidden layer by means of genetic algorithms, used to suggest the number of hidden neurons that maximizes a target fitness function (related to prediction errors). These algorithms are also used, in some cases, to find the most influential network inputs. The determination of the hidden layer size is a central (and hard) problem in the determination of the network topology. This thesis includes a state of the art of neural network design for time series prediction, covering related topics such as dynamical systems, universal approximators, gradient-descent searches and their variations, as well as meta-heuristics. The survey of the related literature is intended to be extensive, for both printed and electronic material, in order to give a landscape of the main aspects of the state of the art in time series prediction using neural networks. The material found was sometimes extremely redundant (as in the case of the back-propagation algorithm and its improvements) and scarce in other areas (memory structures, or estimation of the signal subspace dimension in the stochastic case). The surveyed literature includes classical research works ([27], [50], [52]) as well as more recent ones ([79], [16] or [82]); this survey is intended to be another contribution of this thesis. Special attention is given to the available software tools for neural network design and time series processing. After a review of the available software packages, the most promising computational tools for both approaches are discussed. As a result, a whole framework based on mature software tools was set up and used. In order to work with such dynamical systems, software intended specifically for the analysis and processing of time series was employed, and chaotic series thus became part of our focus.
Since not all randomness is attributable to chaos, in order to characterize the dynamical system generating the time series, an exploration of chaotic-stochastic systems is required, as well as network models to predict a time series associated with one of them. Here we aim to show how domain knowledge, something extensively treated in the bibliography, can be refined in certain ways (such as through the Lyapunov spectrum of a series or the embedding dimension). In order to model the dynamical system generating the time series we used the state-space model, so time series prediction was translated into the prediction of the next system state. This state-space model, together with the method of delays (delayed coordinates), has practical importance for the development of this work, specifically for the design of the input layer in some networks (multi-layer perceptrons, MLPs) and of other parameters (taps in the TLFNs). Additionally, the rest of the network components were determined in many cases through a procedure traditionally used with neural networks: genetic algorithms. The criteria for model (network) selection are discussed, and a trade-off between performance and network complexity is further explored, inspired by Rissanen's minimum description length and its estimation given by the chosen software. Regarding the employed network models, the topologies suggested in the literature as adequate for prediction (TLFNs and recurrent networks) are used together with MLPs (a classic of artificial neural networks) and network committees. The effectiveness of each method is confirmed for the proposed prediction problem. Network committees, where the predictions are a naive convex combination of the predictions from individual networks, are also extensively used. The need for criteria to compare the behaviours of the model and of the real system in the long run, for a dynamic stochastic system, is presented and two alternatives are discussed. The obtained results prove the existence of a solution to the problem of learning the Input → Output dependence. We also conjecture that the system is dynamic-stochastic but not chaotic, because we only have one realization of the random process corresponding to the sales. For a non-chaotic system, the mean of the sales predictions should improve as the available data increase, although the probability of a prediction with a large error is always non-null due to the randomness present. This solution is found in a constructive and exhaustive way. The exhaustiveness can be deduced from the following five statements: (i) the design of a neural network requires knowing the input and output dimensions, the number of hidden layers and the number of neurons in each of them; (ii) the use of the Takens-Mañé theorem allows the dimension of the input data to be derived; (iii) by theorems such as Kolmogorov's and Cybenko's, the use of multi-layer perceptrons with only one hidden layer is justified, so several such models were tested; (iv) the number of neurons in the hidden layer is often determined heuristically using genetic algorithms; (v) one output neuron gives the desired prediction. As we said, two tasks are carried out: the development of a time series prediction model and the analysis of a feasible model for the dynamic reconstruction of the system. With the best predictive model, an ensemble of two networks, an acceptable average error was obtained when the week to be predicted is not adjacent to the training set (7.04% for week 46/2011).
We believe that these results are acceptable given the quantity of information available, and that they represent an additional validation that neural networks are useful for predicting time series coming from dynamical systems, whether stochastic or not. Finally, the results confirmed several already known facts: that adding noise to the inputs and outputs of the training values can improve the results; that recurrent networks trained with the back-propagation algorithm do not suffer from vanishing gradients over short periods; and that the use of committees, which can be seen as a very basic form of distributed artificial intelligence, significantly improves the predictions.
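
    A minimal sketch of the main ingredients discussed above, on synthetic data rather than the ANCAP sales series: delay-coordinate inputs, single-hidden-layer MLPs, and a naive committee formed as an equal-weight convex combination of their predictions. The embedding dimension, the hidden-layer sizes and the use of scikit-learn are assumptions made purely for illustration.

```python
# Illustrative sketch only (synthetic data, not the thesis's sales series):
# delay-coordinate inputs, single-hidden-layer MLPs, and a naive committee.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 600)) + 0.1 * rng.standard_normal(600)

def delay_embed(x, dim, tau=1):
    """Build windows of `dim` past values (delayed coordinates) and use the
    next sample as the one-step-ahead target."""
    rows = [x[i : i + dim * tau : tau] for i in range(len(x) - dim * tau)]
    return np.array(rows), x[dim * tau :]

# The embedding dimension would come from Takens-Mañé-style analysis;
# here it is simply fixed for illustration.
X, y = delay_embed(series, dim=5)
X_train, y_train, X_test, y_test = X[:500], y[:500], X[500:], y[500:]

# Hidden-layer sizes stand in for candidates a genetic algorithm might propose.
committee = [
    MLPRegressor(hidden_layer_sizes=(h,), max_iter=3000, random_state=h).fit(X_train, y_train)
    for h in (4, 8)
]

# Committee prediction: a naive convex combination (equal weights) of members.
pred = np.mean([m.predict(X_test) for m in committee], axis=0)
print("test RMSE:", float(np.sqrt(np.mean((pred - y_test) ** 2))))
```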

    Novel strategies for process control based on hybrid semi-parametric mathematical systems

    Doctoral thesis. Chemical Engineering. Universidade do Porto. Faculdade de Engenharia. 201

    Fuzzy Operator Trees for Modeling Utility Functions

    In this thesis, we propose a method for modeling utility (rating) functions based on a novel concept called the Fuzzy Operator Tree (FOT, for short). As the notion suggests, this method makes use of techniques from fuzzy set theory and implements a fuzzy rating function, that is, a utility function that maps to the unit interval, where 0 corresponds to the lowest and 1 to the highest evaluation. Even though the original motivation comes from quality control, FOTs are completely general and widely applicable. Our approach allows a human expert to specify a model in the form of an FOT in a quite convenient and intuitive way. To this end, the expert simply has to split evaluation criteria into sub-criteria in a recursive manner, and to determine in which way these sub-criteria ought to be combined: conjunctively, disjunctively, or by means of an averaging operator. The result of this process is the qualitative structure of the model. A second step, then, is to parameterize the model. To support, or even free, the expert from this step, we develop a method for calibrating the model on the basis of exemplary ratings, that is, in a purely data-driven way. This method, which makes use of optimization techniques from the field of evolutionary algorithms, constitutes the second major contribution of the thesis. The third contribution of the thesis is a method for evaluating an FOT in a cost-efficient way. Roughly speaking, an FOT can be seen as an aggregation function that combines the evaluations of a number of basic criteria into an overall rating of an object. Essentially, the cost of computing this rating is hence given by the sum of the evaluation costs of the basic criteria. In practice, however, the precise utility degree is often not needed. Instead, it is enough to know whether it lies above or below an important threshold value. In such cases, the evaluation process, understood as a sequential evaluation of basic criteria, can be stopped as soon as this question can be answered in a unique way. Of course, the (expected) number of evaluated basic criteria and, therefore, the (expected) evaluation cost will then strongly depend on the order of the evaluations, and this is what is optimized by the methods we have developed.
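
    A minimal sketch of the FOT idea as described above, with invented criterion names: leaves are basic criteria scored in [0, 1], and inner nodes combine their children conjunctively (here the minimum), disjunctively (here the maximum), or by averaging. The data-driven calibration and the cost-aware evaluation order are not shown.

```python
# Minimal sketch of an FOT-style aggregation tree; names and structure are
# hypothetical. Calibration and cost-aware early stopping are omitted.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Leaf:
    name: str                      # basic criterion, evaluated externally

@dataclass
class Node:
    op: str                        # "and" | "or" | "avg"
    children: List[Union["Node", Leaf]] = field(default_factory=list)

def evaluate(node, scores):
    """scores maps criterion names to utilities in [0, 1]."""
    if isinstance(node, Leaf):
        return scores[node.name]
    vals = [evaluate(c, scores) for c in node.children]
    if node.op == "and":
        return min(vals)           # conjunction as a simple t-norm (minimum)
    if node.op == "or":
        return max(vals)           # disjunction as a simple t-conorm (maximum)
    return sum(vals) / len(vals)   # averaging operator

# Hypothetical quality model: surface finish AND (durability OR avg(colour, shape)).
tree = Node("and", [
    Leaf("surface_finish"),
    Node("or", [Leaf("durability"), Node("avg", [Leaf("colour"), Leaf("shape")])]),
])
print(evaluate(tree, {"surface_finish": 0.9, "durability": 0.4, "colour": 0.8, "shape": 0.6}))
```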

    Machine Learning in Tribology

    Tribology has been and continues to be one of the most relevant fields, being present in almost all aspects of our lives. The understanding of tribology provides us with solutions for future technical challenges. At the root of all advances made so far are multitudes of precise experiments and an increasing number of advanced computer simulations across different scales and multiple physical disciplines. Based upon this sound and data-rich foundation, advanced data handling, analysis and learning methods can be developed and employed to expand existing knowledge. Therefore, modern machine learning (ML) or artificial intelligence (AI) methods provide opportunities to explore the complex processes in tribological systems and to classify or quantify their behavior in an efficient or even real-time way. Thus, their potential also goes beyond purely academic aspects into actual industrial applications. To help pave the way, this article collection aimed to present the latest research on ML or AI approaches for solving tribology-related issues, generating true added value beyond just buzzwords. In this sense, this Special Issue can support researchers in identifying initial selections and best-practice solutions for ML in tribology.

    Design of neuro-fuzzy models by evolutionary and gradient-based algorithms

    All systems found in nature exhibit, to varying degrees, nonlinear behavior. To emulate this behavior, classical system identification techniques typically use linear models, for mathematical simplicity. Models inspired by biological principles (artificial neural networks) and linguistically motivated models (fuzzy systems), owing to their universal approximation property, are becoming alternatives to classical mathematical models. In system identification, the design of this type of model is an iterative process requiring, among other steps, the identification of the model structure as well as the estimation of the model parameters. This thesis addresses the applicability of gradient-based algorithms for the parameter estimation phase, and the use of evolutionary algorithms for model structure selection, in the design of neuro-fuzzy systems, i.e., models that offer the transparency property found in fuzzy systems but use, for their design, algorithms introduced in the context of neural networks. A new methodology, based on the minimization of the integral of the error and exploiting the parameter separability property typically found in neuro-fuzzy systems, is proposed for parameter estimation. A recent evolutionary technique (bacterial algorithms), based on the natural phenomenon of microbial evolution, is combined with genetic programming, and the resulting algorithm, bacterial programming, is advocated for structure determination. Different versions of this evolutionary technique are combined with gradient-based algorithms, solving problems found in fuzzy and neuro-fuzzy design, namely the incorporation of a-priori knowledge, the initialization of gradient algorithms, and model complexity reduction. The models used in this work are B-Spline and Radial Basis Function networks, and fuzzy systems of the Mamdani and Takagi-Sugeno types. The work begins by exploring, for the design of B-Spline networks, the incorporation of existing a-priori knowledge about a process. To this end, a new approach is applied in which the parameter estimation technique is modified in order to enforce equality constraints on the function and its derivatives.
It is further shown that model structure determination strategies, based on evolutionary computation or on deterministic heuristics, can easily be adapted to this type of constrained model. A new evolutionary technique is proposed, resulting from the combination of recently introduced algorithms (bacterial algorithms, based on the natural phenomenon of microbial evolution) and genetic programming. In this new approach, called bacterial programming, the genetic operators are replaced by bacterial operators. In this way, while bacterial mutation works on a single individual and tries to optimize the bacterium that encodes it, gene transfer is applied to the whole population of bacteria, avoiding local-minimum solutions. This heuristic was applied to the design of B-Spline networks, and its performance is illustrated and compared with existing alternatives. For the determination of a model's parameters, local derivative-based optimization techniques are normally used. Since the model in question is nonlinear, the performance of such techniques is influenced by the starting points. To solve this problem, a new method is proposed in which the aforementioned evolutionary algorithm is used to determine more appropriate starting points for the derivative-based algorithm, thereby increasing the chance of finding a global minimum. The complexity of neuro-fuzzy (and fuzzy) models grows exponentially with the dimension of the problem. To mitigate this problem, a new input-space partitioning approach is proposed, which is an extension of the input decomposition strategies normally used for this type of model. Simulations show that, using this approach, generalization capability can be maintained with models of lower complexity. B-Spline models are functionally equivalent to fuzzy models, provided certain conditions are satisfied. For the cases in which this does not hold (generic Mamdani fuzzy models), the techniques previously employed for B-Spline networks were adapted. On the one hand, the Levenberg-Marquardt algorithm is adapted so that it can be applied to the partitioning of the input space of a fuzzy system. On the other hand, the bacterial evolutionary algorithms are adapted to fuzzy systems and combined with the Levenberg-Marquardt algorithm, exploring the fusion of the characteristics of each methodology. This hybridization of the two algorithms, called the bacterial memetic algorithm, has been shown, on several test problems, to give better results than known alternatives. The parameters of the neural models used and of the fuzzy models described above (provided certain criteria are satisfied) can be separated, according to their influence on the output, into linear and nonlinear parameters. Exploiting the consequences of this property in the parameter estimation algorithms, this thesis also proposes a new parameter estimation methodology, based on the minimization of the integral of the error, as an alternative to the usually employed minimization of the sum of squared errors. Besides enabling (in certain cases) a fully analytical design, this technique achieves better generalization results, since it uses a performance surface more similar to the one that would be obtained if the data-generating function were used.
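
    A small sketch of the parameter separability idea exploited above, under assumed choices (a Gaussian basis and synthetic data, not the thesis's models): once the nonlinear parameters (centres and widths) are fixed, the output weights enter the model linearly and can be obtained by a least-squares solve, so an evolutionary or gradient search only has to explore the nonlinear part.

```python
# Sketch of parameter separability (assumed Gaussian basis, synthetic data):
# with the nonlinear parameters fixed, the output weights are linear in the
# parameters and can be estimated in closed form.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)
y = np.sin(3 * x) + 0.05 * rng.standard_normal(x.size)   # toy target

def design_matrix(x, centres, width):
    """Gaussian basis activations; columns are the model's regressors."""
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

def fit_linear_weights(x, y, centres, width):
    """Least-squares solution for the linear (output) parameters."""
    Phi = design_matrix(x, centres, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

# A structure/parameter search (GA, bacterial programming, ...) would propose
# candidate nonlinear parameters; each candidate is scored after the cheap
# linear solve below.
centres = np.linspace(-1, 1, 7)
width = 0.3
w = fit_linear_weights(x, y, centres, width)
residual = y - design_matrix(x, centres, width) @ w
print("RMSE with fixed nonlinear parameters:", float(np.sqrt(np.mean(residual ** 2))))
```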