3 research outputs found

    Boosting wavelet neural networks using evolutionary algorithms for short-term wind speed time series forecasting

    This paper addresses the nonlinear time series modelling and prediction problem using a class of wavelet neural networks whose basic building block is a ridge-type function. Training such a network is a nonlinear optimization problem. Evolutionary algorithms (EAs), including the genetic algorithm (GA) and particle swarm optimization (PSO), together with a new gradient-free algorithm (coordinate dictionary search optimization, CDSO), are used to train the network models. An example of real wind speed data modelling and prediction illustrates the performance of the proposed networks trained by these three optimization algorithms.
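    The model described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the network output is a sum of ridge-type wavelet units a_j * psi(w_j . x + b_j) with a Mexican-hat mother wavelet (the paper does not specify the wavelet here), and a plain accept-if-better random search stands in for the GA/PSO/CDSO trainers; all function and parameter names are invented for the sketch.

```python
import numpy as np

def mexican_hat(t):
    # Mexican-hat mother wavelet (second derivative of a Gaussian).
    return (1.0 - t**2) * np.exp(-0.5 * t**2)

def predict(params, X, n_units):
    # Ridge-type wavelet network: y_hat(x) = sum_j a_j * psi(w_j . x + b_j).
    d = X.shape[1]
    p = params.reshape(n_units, d + 2)
    W, b, a = p[:, :d], p[:, d], p[:, d + 1]
    return mexican_hat(X @ W.T + b) @ a

def mse(params, X, y, n_units):
    return float(np.mean((predict(params, X, n_units) - y) ** 2))

def random_search(X, y, n_units, iters=2000, step=0.3, seed=0):
    # Gradient-free trainer: keep one parameter vector and accept random
    # Gaussian perturbations whenever they lower the training MSE.
    rng = np.random.default_rng(seed)
    best = rng.normal(size=n_units * (X.shape[1] + 2))
    best_err = mse(best, X, y, n_units)
    for _ in range(iters):
        cand = best + step * rng.normal(size=best.shape)
        err = mse(cand, X, y, n_units)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err
```

    In the wind-speed setting, each row of X would hold lagged observations of the series and y the next value to forecast.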

    Structured Dimensionality Reduction for Additive Model Regression

    Additive models are regression methods which model the response variable as the sum of univariate transfer functions of the input variables. Key benefits of additive models are their accuracy and interpretability on many real-world tasks. Additive models are, however, not well suited to problems involving a large number (e.g., hundreds) of input variables, as they are prone to overfitting in addition to losing interpretability. In this paper, we introduce a novel framework for applying additive models to a large number of input variables. The key idea is to reduce the task dimensionality by deriving a small number of new covariates as linear combinations of the inputs, where the linear weights are estimated with regard to the regression problem at hand. The weights are moreover constrained to prevent overfitting and to facilitate the interpretation of the derived covariates. We establish identifiability of the proposed model under mild assumptions and present an efficient approximate learning algorithm. Experiments on synthetic and real-world data demonstrate that our approach compares favorably to baseline methods in terms of accuracy, while resulting in models of lower complexity and yielding practical insights into high-dimensional real-world regression tasks. Our framework broadens the applicability of additive models to high-dimensional problems while maintaining their interpretability and potential to provide practical insights.
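    The two-level structure described above can be sketched as follows. This is a simplified illustration, not the paper's algorithm: it alternates between least-squares fitting of polynomial transfer functions on the derived covariates z_j = x . w_j and a gradient step on the projection matrix W, with a unit-norm column constraint standing in for the paper's weight constraints; every name and hyperparameter here is an assumption made for the sketch.

```python
import numpy as np

def _design(X, W, degree):
    # Derived covariates Z = X @ W and stacked powers z_j^p as the basis
    # for the univariate polynomial transfer functions.
    Z = X @ W
    return Z, np.concatenate([Z**(p + 1) for p in range(degree)], axis=1)

def fit_additive_dr(X, y, k=2, degree=2, iters=200, lr=0.05, seed=0):
    # Alternate: (1) least-squares fit of the transfer-function coefficients
    # given W; (2) one gradient step on W given those coefficients.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(size=(d, k))
    W /= np.linalg.norm(W, axis=0)
    for _ in range(iters):
        Z, Phi = _design(X, W, degree)
        coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        C = coef.reshape(degree, k)          # C[p, j]: coeff of z_j**(p+1)
        r = Phi @ coef - y
        # d y_hat / d z_j = sum_p (p+1) * C[p, j] * z_j**p
        dZ = sum((p + 1) * C[p] * Z**p for p in range(degree))
        W -= lr * (X.T @ (r[:, None] * dZ)) / n
        W /= np.linalg.norm(W, axis=0)       # unit-norm constraint on weights
    # Refit the transfer functions for the final W and report training MSE.
    Z, Phi = _design(X, W, degree)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    err = float(np.mean((Phi @ coef - y) ** 2))
    return W, coef.reshape(degree, k), err
```

    Interpretability comes from inspecting the k weight vectors (which inputs drive each derived covariate) together with the k univariate transfer functions.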

    L1 regularized projection pursuit for additive model learning

    In this paper, we present an L1 regularized projection pursuit algorithm for additive model learning. Two new algorithms are developed, for regression and classification respectively: sparse projection pursuit regression and sparse Jensen-Shannon Boosting. The L1 regularized projection pursuit encourages sparse solutions, so the new algorithms are robust to overfitting and generalize better, especially in settings with many irrelevant input features and noisy data. To make optimization with L1 regularization more efficient, we develop an "informative feature first" sequential optimization algorithm. Extensive experiments demonstrate the effectiveness of the proposed approach.
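    The regression variant can be sketched greedily, stage by stage. This is a generic illustration, not the paper's method: each stage finds a sparse projection direction via ISTA (proximal gradient with soft-thresholding) on the L1-penalized linear fit to the current residual, then fits a polynomial ridge function along that projection; the "informative feature first" ordering is not reproduced here, and all names and settings are assumptions for the sketch.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the L1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_ppr(X, y, n_stages=3, lam=0.1, ista_steps=100, degree=3):
    # Greedy projection pursuit: at each stage, solve a lasso subproblem for
    # a sparse direction w on the current residual, then fit a univariate
    # polynomial ridge function along z = X @ w and subtract it.
    n, d = X.shape
    r = y.astype(float).copy()
    stages = []
    L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of the grad
    for _ in range(n_stages):
        w = np.zeros(d)
        for _ in range(ista_steps):          # ISTA for the lasso subproblem
            g = X.T @ (X @ w - r) / n
            w = soft_threshold(w - g / L, lam / L)
        if not np.any(w):                    # no informative direction left
            break
        z = X @ w
        Phi = np.vander(z, degree + 1)       # polynomial ridge function in z
        c, *_ = np.linalg.lstsq(Phi, r, rcond=None)
        r = r - Phi @ c
        stages.append((w, c))
    return stages, r
```

    Larger values of the hypothetical lam parameter zero out more coordinates of each direction w, which is how the L1 penalty suppresses irrelevant input features.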