5 research outputs found

    Genetic programming-based regression for temporal data

    Various machine learning techniques exist to perform regression on temporal data in which concept drift occurs. However, there are numerous nonstationary environments where these techniques may fail to either track or detect the changes. This study develops a genetic programming-based predictive model for temporal data with a numerical target that tracks changes in a dataset due to concept drift. When an environmental change is evident, the proposed algorithm reacts to the change by clustering the data and then inducing nonlinear models that describe the generated clusters. The nonlinear models become terminal nodes of genetic programming model trees. Experiments were carried out on seven nonstationary datasets, and the obtained results suggest that the proposed model yields high adaptation rates and accuracy under several types of concept drift. Future work will consider strengthening the adaptation to concept drift and a fast implementation of genetic programming on GPUs to provide fast learning for high-speed temporal data.
    http://link.springer.com/journal/10710
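
    The following is a minimal, hedged sketch (not the authors' implementation) of the react-to-drift loop described above: drift detection is reduced to a simple error-threshold test, the clustering to k-means, the per-cluster nonlinear models to cubic polynomials fitted with numpy, and the genetic programming model tree that combines them to nearest-centroid dispatch. The class name, thresholds, and toy data are illustrative assumptions.

```python
# Minimal sketch of the react-to-drift loop, NOT the authors' implementation:
# error-threshold drift test, k-means clustering, cubic polynomials per cluster,
# and nearest-centroid dispatch in place of the GP model tree.
import numpy as np
from sklearn.cluster import KMeans

class PiecewiseDriftRegressor:
    def __init__(self, n_clusters=3, degree=3, drift_threshold=0.1):
        self.n_clusters = n_clusters
        self.degree = degree
        self.drift_threshold = drift_threshold
        self.kmeans = None
        self.models = {}   # cluster id -> polynomial coefficients ("terminal" models)

    def _fit_clusters(self, X, y):
        # cluster the current data, then induce one nonlinear model per cluster;
        # these per-cluster models play the role of GP terminal nodes
        self.kmeans = KMeans(n_clusters=self.n_clusters, n_init=10).fit(X)
        for c in range(self.n_clusters):
            mask = self.kmeans.labels_ == c
            self.models[c] = np.polyfit(X[mask, 0], y[mask], self.degree)

    def fit(self, X, y):
        self._fit_clusters(X, y)
        return self

    def predict(self, X):
        labels = self.kmeans.predict(X)
        return np.array([np.polyval(self.models[c], x[0]) for c, x in zip(labels, X)])

    def update(self, X_new, y_new):
        """React to an environmental change: re-cluster and re-induce the models."""
        err = float(np.mean((self.predict(X_new) - y_new) ** 2))
        if err > self.drift_threshold:     # crude error-based drift signal
            self._fit_clusters(X_new, y_new)
        return err

# toy stream whose generating process changes halfway (a sudden concept drift)
rng = np.random.default_rng(0)
X1 = rng.uniform(0, 1, (200, 1)); y1 = np.sin(3 * X1[:, 0])
X2 = rng.uniform(0, 1, (200, 1)); y2 = 2 * X2[:, 0] ** 2 - 1
model = PiecewiseDriftRegressor().fit(X1, y1)
print("error on drifted batch:", model.update(X2, y2))
```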

    Genetic Programming Approach for Nonstationary Data Analytics

    Nonstationary data in which concept drift occurs are usually made up of different underlying data-generating processes. Therefore, if the existence of different segments in the dataset is not taken into consideration, the induced predictive model is distorted by previously existing patterns. Thus, the challenge posed to a regressor is to select an appropriate segment, depicting the current underlying data-generating process, to be used for model induction. The proposed genetic programming approach for nonstationary data analytics (GPANDA) provides a piecewise nonlinear regression model for nonstationary data. GPANDA consists of three components: a dynamic differential evolution-based clustering algorithm that splits the parameter space into subspaces resembling the different data-generating processes present in the dataset; a dynamic particle swarm optimization-based model induction technique that induces nonlinear models describing each generated cluster; and a dynamic genetic programming algorithm that evolves model trees defining the boundaries of the nonlinear models, which are expressed as terminal nodes. If an environmental change is detected in a nonstationary dataset, the dynamic differential evolution-based clustering algorithm clusters the data. For the clusters that change, the dynamic particle swarm optimization-based model induction approach adapts the nonlinear models or induces new ones to create an updated genetic programming terminal set, and then the genetic programming evolves a piecewise predictive model to fit the dataset. To evaluate the effectiveness of GPANDA, experimental evaluations were conducted on both artificial and real-world datasets. Two stock market datasets, GDP, and CPI were selected to benchmark the performance of the proposed model against leading studies. GPANDA outperformed genetic programming algorithms designed for dynamic environments and was competitive with state-of-the-art techniques.
    Thesis (PhD), University of Pretoria, 2020.
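
    Below is a structural sketch of the three GPANDA components under heavy simplifying assumptions: scipy's differential_evolution stands in for the dynamic DE-based clustering, an ordinary polynomial fit stands in for the dynamic PSO-based model induction, and the dynamic GP that evolves the model tree is reduced to nearest-centre dispatch. All function names and constants are illustrative, not the thesis implementation.

```python
# Structural sketch of the three GPANDA components, heavily simplified:
# DE places cluster centres, polyfit stands in for PSO model induction,
# nearest-centre dispatch stands in for the GP-evolved model tree.
import numpy as np
from scipy.optimize import differential_evolution

def de_cluster(x, k=2):
    """Component 1: place k cluster centres by minimising within-cluster distance."""
    def wcss(centres):
        d = np.abs(x[:, None] - centres[None, :])
        return np.sum(d.min(axis=1) ** 2)
    result = differential_evolution(wcss, bounds=[(x.min(), x.max())] * k, seed=1)
    return np.sort(result.x)

def induce_models(x, y, centres, degree=2):
    """Component 2: one nonlinear (polynomial) model per cluster (terminal nodes)."""
    assign = np.abs(x[:, None] - centres[None, :]).argmin(axis=1)
    return [np.polyfit(x[assign == i], y[assign == i], degree)
            for i in range(len(centres))]

def piecewise_predict(x, centres, models):
    """Component 3 (placeholder): dispatch each input to its nearest centre's model;
    in GPANDA a GP-evolved model tree defines these boundaries instead."""
    idx = np.abs(x[:, None] - centres[None, :]).argmin(axis=1)
    return np.array([np.polyval(models[i], v) for i, v in zip(idx, x)])

# toy data drawn from two different generating processes
rng = np.random.default_rng(0)
x = rng.uniform(0, 2, 300)
y = np.where(x < 1, x ** 2, 3 - x) + rng.normal(0, 0.05, 300)

centres = de_cluster(x, k=2)
models = induce_models(x, y, centres)
pred = piecewise_predict(x, centres, models)
print("centres:", centres, " piecewise MSE:", np.mean((pred - y) ** 2))
```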

    A comparative study of nonlinear regression and autoregressive techniques in hybrid with particle swarm optimization for time-series forecasting

    Usually, real-world time-series forecasting problems are dynamic. If such time series are characterized by mere concept shifts, a passive approach to learning becomes ideal, continuously adapting the model parameters whenever new data patterns arrive in order to cope with uncertainty in the presence of change. This work hybridizes a quantum-inspired particle swarm optimization designed for dynamic environments, to cope with concept shifts, with either a least-squares approximation technique or a nonlinear autoregressive model to forecast time series. This work also evaluates the proposed models experimentally and performs a comparative study of their performance. The obtained results show that the nonlinear autoregressive-based model outperformed the least-squares approximation-based model, the separate models implemented in the hybridization, and several state-of-the-art models on the given datasets.
    https://www.elsevier.com/locate/eswa
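
    A hedged sketch of the hybrid idea follows: a plain global-best PSO (rather than the quantum-inspired, dynamic variant of the paper) tunes the coefficients of a nonlinear autoregressive model with lagged and squared-lag terms by minimizing one-step-ahead error. The lag order, swarm parameters, and synthetic series are illustrative assumptions.

```python
# Sketch of the PSO + nonlinear autoregressive hybrid under simplifying assumptions:
# a plain global-best PSO (not the paper's quantum-inspired dynamic PSO) fits the
# parameters of y_t ~ c + sum a_k*y_{t-k} + sum b_k*y_{t-k}^2.
import numpy as np

def lag_matrix(series, p):
    """Rows of [y_{t-1}, ..., y_{t-p}] paired with the target y_t."""
    n = len(series)
    X = np.column_stack([series[p - 1 - k: n - 1 - k] for k in range(p)])
    return X, series[p:]

def nar_predict(theta, X, p):
    """Nonlinear AR: intercept + linear lag terms + squared lag terms."""
    return theta[0] + X @ theta[1:p + 1] + (X ** 2) @ theta[p + 1:]

def pso_fit(X, y, p, n_particles=30, iters=100, seed=0):
    """Global-best PSO over the 2p+1 model parameters, minimising in-sample MSE."""
    rng = np.random.default_rng(seed)
    dim = 2 * p + 1
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    mse = lambda th: np.mean((nar_predict(th, X, p) - y) ** 2)
    pbest, pbest_f = pos.copy(), np.array([mse(th) for th in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse(th) for th in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# toy usage: a noisy oscillating series with a level shift (a mild concept shift)
rng = np.random.default_rng(1)
t = np.arange(400)
series = np.sin(0.1 * t) + np.where(t > 200, 0.5, 0.0) + rng.normal(0, 0.05, 400)
p = 3
X, y = lag_matrix(series, p)
theta = pso_fit(X, y, p)
print("in-sample MSE:", np.mean((nar_predict(theta, X, p) - y) ** 2))
print("next-step forecast:", nar_predict(theta, series[-1:-p - 1:-1][None, :], p)[0])
```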

    Particle swarm optimization-based empirical mode decomposition predictive technique for nonstationary data

    Real-world nonstationary data are usually characterized by high nonlinearity and complex patterns due to the effects of different exogenous factors, which make prediction a very challenging task. An ensemble strategically combines multiple techniques and tends to be more robust and precise than a single intelligent algorithmic model. In this work, a dynamic particle swarm optimization-based empirical mode decomposition ensemble is proposed for nonstationary data prediction. The proposed ensemble implements an environmental change detection technique to capture occurring concept drift and the intrinsic nonlinearity in time series, hence improving prediction accuracy. The proposed ensemble technique was experimentally evaluated on electric time series datasets. The obtained results show that the proposed technique improves prediction accuracy and outperformed several state-of-the-art techniques in several cases. Future work could consider a detailed empirical analysis of the proposed technique, such as the effect of the cost of prediction errors and the technique's search capability.
    DATA AVAILABILITY STATEMENT: The datasets analyzed during the current study are publicly available in the Australian Energy Market Operator repository, http://www.aemo.com.au/, and the Australian Bureau of Meteorology repository, http://www.bom.gov.au/.
    https://link.springer.com/journal/11227
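
    The following sketch illustrates the decompose-then-predict idea under simplifying assumptions: the PyEMD package is assumed for the empirical mode decomposition, each intrinsic mode function is forecast by an ordinary least-squares autoregressive model rather than the paper's dynamic PSO-driven predictors, and the environmental change detector is omitted.

```python
# Decompose-then-predict sketch: PyEMD (assumed dependency) performs the empirical
# mode decomposition; a least-squares AR(p) per IMF stands in for the paper's
# dynamic PSO-driven component predictors; no drift detector is included.
import numpy as np
from PyEMD import EMD

def ar_one_step(component, p=4):
    """Least-squares AR(p) one-step-ahead forecast for a single component."""
    n = len(component)
    X = np.column_stack([component[p - 1 - k: n - 1 - k] for k in range(p)])
    X = np.column_stack([np.ones(n - p), X])          # add an intercept column
    coef, *_ = np.linalg.lstsq(X, component[p:], rcond=None)
    last_lags = np.concatenate(([1.0], component[-1:-p - 1:-1]))
    return last_lags @ coef

def emd_ensemble_forecast(series, p=4):
    """Decompose, forecast each IMF separately, and sum the component forecasts."""
    imfs = EMD().emd(series)          # rows: IMFs (residue appended as last row)
    return sum(ar_one_step(imf, p) for imf in imfs)

# toy usage on a synthetic load-like series: trend + daily cycle + noise
rng = np.random.default_rng(0)
t = np.arange(600)
series = 0.01 * t + np.sin(2 * np.pi * t / 48) + rng.normal(0, 0.1, 600)
print("one-step ensemble forecast:", emd_ensemble_forecast(series))
```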

    A multi-population particle swarm optimization-based time series predictive technique

    In several businesses, forecasting is needed to predict expenses, future revenue, and profit margin. As such, accurate forecasting is pivotal to the success of those businesses. Due to the effects of different exogenous factors, such as economic fluctuations and weather conditions, time series are susceptible to nonlinearity and complexity, making accurate prediction difficult. This work proposes a machine-learning-based time series forecasting technique to improve the precision and computation performance of an induced time series forecasting model. The proposed technique, a multi-population particle swarm optimization-based nonlinear time series predictive model, decomposes the predictive task into three sub-tasks: observation window optimization, predictive model induction, and forecasting horizon prediction. Each sub-task is optimized by a particle swarm optimization sub-swarm, and the sub-swarms are executed in parallel. The proposed technique was experimentally evaluated on fifteen electric load time series using root mean square error, mean absolute percentage error, and computation time as performance measures. The results obtained show that the proposed technique effectively induced a forecasting model with improved predictive and computation performance, outperforming the benchmark techniques on all datasets. Also, the proposed algorithm was competitive with state-of-the-art techniques. Future work will consider an empirical analysis of the search and solution spaces of the proposed technique and a fitness landscape analysis.
    DATA AVAILABILITY: Data will be made available on request.
    https://www.elsevier.com/locate/eswa
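
    The sketch below illustrates the sub-task decomposition under heavy simplification: only the observation-window sub-task is optimized by a PSO sub-swarm, ordinary least squares stands in for the model-induction sub-swarm, and the forecasting-horizon sub-swarm and the parallel execution of the paper are omitted. Names, constants, and the synthetic series are illustrative assumptions.

```python
# Simplified sketch of the sub-task decomposition: one PSO sub-swarm searches the
# observation-window length; least squares stands in for the model-induction
# sub-swarm; the horizon-prediction sub-swarm and parallel execution are omitted.
import numpy as np

rng = np.random.default_rng(0)

def window_mse(series, window, split=400):
    """One-step-ahead holdout MSE of a least-squares linear model over `window` lags."""
    w = max(2, min(int(round(window)), 24))
    n = len(series)
    X = np.column_stack([series[w - 1 - k: n - 1 - k] for k in range(w)])
    y = series[w:]
    cut = split - w                              # fit on the head, score on the tail
    coefs, *_ = np.linalg.lstsq(X[:cut], y[:cut], rcond=None)
    return np.mean((X[cut:] @ coefs - y[cut:]) ** 2), coefs

def pso_step(pos, vel, pbest, pbest_f, gbest, fitness):
    """One global-best PSO update for a single sub-swarm."""
    r1, r2 = rng.random((2, *pos.shape))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    return pos, vel, pbest, pbest_f, pbest[pbest_f.argmin()].copy()

# synthetic load-like series: daily cycle + trend + noise
t = np.arange(500)
series = np.sin(2 * np.pi * t / 24) + 0.002 * t + rng.normal(0, 0.05, 500)

# sub-swarm for the observation-window sub-task (1-D particles)
w_fit = lambda p: window_mse(series, p[0])[0]
w_pos = rng.uniform(2, 24, (10, 1)); w_vel = np.zeros_like(w_pos)
w_pb, w_pbf = w_pos.copy(), np.array([w_fit(p) for p in w_pos])
w_gb = w_pb[w_pbf.argmin()].copy()
for _ in range(20):                  # the paper runs its sub-swarms in parallel
    w_pos, w_vel, w_pb, w_pbf, w_gb = pso_step(w_pos, w_vel, w_pb, w_pbf, w_gb, w_fit)

best_w = max(2, min(int(round(w_gb[0])), 24))
mse, _ = window_mse(series, best_w)
print(f"selected observation window: {best_w} lags, holdout one-step MSE: {mse:.4f}")
```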