2,904 research outputs found

    Discrete and fuzzy dynamical genetic programming in the XCSF learning classifier system

    A number of representation schemes have been presented for use within learning classifier systems, ranging from binary encodings to neural networks. This paper presents results from an investigation into using discrete and fuzzy dynamical system representations within the XCSF learning classifier system. In particular, asynchronous random Boolean networks are used to represent the traditional condition-action production system rules in the discrete case, and asynchronous fuzzy logic networks in the continuous-valued case. It is shown to be possible to use self-adaptive, open-ended evolution to design an ensemble of such dynamical systems within XCSF to solve a number of well-known test problems.
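    The discrete representation above can be illustrated with a minimal asynchronous random Boolean network: each node reads K randomly wired inputs through a random Boolean lookup table, and updates fire one node at a time. The network size, wiring, and update scheme here are illustrative assumptions, not the paper's exact configuration.

    ```python
    import random

    # Minimal sketch of an asynchronous random Boolean network (RBN):
    # N nodes, each wired to K random inputs with a random Boolean function.
    N, K = 8, 2
    random.seed(0)
    inputs = [[random.randrange(N) for _ in range(K)] for _ in range(N)]
    # Each node's Boolean function is a lookup table over its 2**K input patterns.
    tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
    state = [random.randint(0, 1) for _ in range(N)]

    def async_step(state):
        """Update one randomly chosen node (asynchronous dynamics)."""
        i = random.randrange(N)
        idx = sum(state[j] << b for b, j in enumerate(inputs[i]))
        state[i] = tables[i][idx]
        return state

    for _ in range(100):
        state = async_step(state)
    ```

    In the paper's setting, evolution would act on the wiring and lookup tables so that the network's dynamics implement a classifier rule; here they are simply randomized.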

    Multistep predictions for adaptive sampling in mobile robotic sensor networks using proximal ADMM

    This paper presents a novel approach, using multi-step predictions, to the adaptive sampling problem for efficient monitoring of environmental spatial phenomena in a mobile sensor network. We employ a Gaussian process to represent the spatial field of interest, which is then used to predict the field at unmeasured locations. The adaptive sampling problem aims to drive the mobile sensors to optimally navigate the environment while the sensors adaptively take measurements of the spatial phenomena at each sampling step. To this end, an optimal sampling criterion based on conditional entropy is proposed, which minimizes the prediction uncertainty of the Gaussian process model. By predicting the measurements the mobile sensors potentially take in a finite horizon of multiple future sampling steps and exploiting the chain rule of the conditional entropy, a multi-step-ahead adaptive sampling optimization problem is formulated. Its objective is to find the optimal sampling paths for the mobile sensors in multiple sampling steps ahead. Robot-robot and robot-obstacle collision avoidance is formulated as mixed-integer constraints. Compared with the single-step-ahead approach typically adopted in the literature, our approach provides better navigation, deployment, and data collection with more informative sensor readings. However, the resulting mixed-integer nonlinear program is highly complex and intractable. We propose to employ the proximal alternating direction method of multipliers to efficiently solve this problem. More importantly, the solution obtained by the proposed algorithm is theoretically guaranteed to converge to a stationary value. The effectiveness of our proposed approach was extensively validated by simulation using a real-world dataset, which showed highly promising results. © 2013 IEEE
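    The entropy-based sampling criterion can be sketched in a greedy one-step form: fit a Gaussian process to the measurements taken so far, compute the posterior variance at candidate locations, and pick the location whose Gaussian differential entropy H = 0.5 log(2πeσ²) is largest (i.e. the most uncertain). The RBF kernel, its hyperparameters, and the candidate grid are illustrative assumptions; the paper's multi-step formulation additionally uses the chain rule of conditional entropy and mixed-integer collision constraints.

    ```python
    import numpy as np

    def rbf(A, B, ls=0.5):
        """Squared-exponential (RBF) kernel between two point sets."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls ** 2)

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(5, 2))            # locations already measured
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(5)
    Xc = rng.uniform(0, 1, size=(50, 2))          # candidate sampling locations

    K = rbf(X, X) + 1e-6 * np.eye(len(X))         # jitter for numerical stability
    Kinv = np.linalg.inv(K)
    Kc = rbf(Xc, X)
    # GP posterior variance at each candidate (prior variance is 1 for this kernel).
    var = 1.0 - np.einsum('ij,jk,ik->i', Kc, Kinv, Kc)
    # Differential entropy of a Gaussian: H = 0.5 * log(2*pi*e*sigma^2).
    entropy = 0.5 * np.log(2 * np.pi * np.e * var)
    next_loc = Xc[np.argmax(entropy)]             # most uncertain location
    ```

    Because entropy is monotone in the variance, maximizing either quantity selects the same candidate; the multi-step version optimizes over whole sampling paths instead of a single next point.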

    Time series prediction and forecasting using Deep Learning Architectures

    Nature brings time series data every day and everywhere, for example, weather data, physiological and biomedical signals, and financial and business recordings. Predicting future observations from a collected sequence of historical observations is called time series forecasting. Forecasts are essential, given that they guide decisions in many areas of scientific, industrial and economic activity, such as meteorology, telecommunication, finance, sales and stock exchange rates. A massive amount of research has been carried out over many years to develop models that improve time series forecasting accuracy. The major aim of time series modelling is to scrupulously examine the past observations of a time series and to develop an appropriate model that elucidates the inherent behaviour and patterns in the series. The behaviour and patterns of different time series may follow different conventions and in fact require specific countermeasures for modelling. Consequently, training neural networks to predict a set of time series from an unknown domain remains particularly challenging. Time series forecasting remains an arduous problem despite substantial improvements in machine learning approaches. This is usually due to factors such as different time series exhibiting different behaviour. In real-world time series data, the discriminative patterns residing in the series are often distorted by random noise and affected by high-frequency perturbations. The major aim of this thesis is to contribute to the study and expansion of time series prediction and multi-step-ahead forecasting methods based on deep learning algorithms. Time series forecasting using deep learning models is still in its infancy compared to other research areas in time series forecasting. A variety of time series data has been considered in this research. 
    We explored several deep learning architectures on the sequential data, such as Deep Belief Networks (DBNs), Stacked AutoEncoders (SAEs), Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Moreover, we also proposed two new methods based on multi-step-ahead forecasting for time series data. A comparison with state-of-the-art methods is also presented. The research work conducted in this thesis makes theoretical, methodological and empirical contributions to time series prediction and multi-step-ahead forecasting using deep learning architectures.
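    The multi-step-ahead setting above can be sketched with the common recursive strategy: a model trained for one-step prediction is applied repeatedly, feeding each prediction back in as input. The linear autoregressive "model" below is a stand-in for the deep architectures (DBN/SAE/RNN/CNN) discussed in the thesis; the order and data are illustrative assumptions.

    ```python
    import numpy as np

    def fit_ar(series, p=3):
        """Least-squares fit of an order-p autoregressive one-step model."""
        X = np.array([series[i:i + p] for i in range(len(series) - p)])
        y = series[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    def forecast(series, coef, steps):
        """Recursive multi-step forecast: feed each prediction back as input."""
        hist = list(series[-len(coef):])
        out = []
        for _ in range(steps):
            nxt = float(np.dot(coef, hist))
            out.append(nxt)
            hist = hist[1:] + [nxt]
        return out

    t = np.arange(60)
    series = np.sin(0.3 * t)                 # toy deterministic series
    coef = fit_ar(series)
    preds = forecast(series, coef, steps=5)  # 5-step-ahead forecast
    ```

    The alternative to this recursive strategy is direct multi-step forecasting, where a separate model is trained per horizon; recursive forecasting reuses one model but lets prediction errors compound over the horizon.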