    NARX-based nonlinear system identification using orthogonal least squares basis hunting

    An orthogonal least squares technique for basis hunting (OLS-BH) is proposed to construct sparse radial basis function (RBF) models for NARX-type nonlinear systems. Unlike most existing RBF or kernel modelling methods, which place the RBF or kernel centers at the training input data points and use a fixed common variance for all the regressors, the proposed OLS-BH technique tunes the RBF center and diagonal covariance matrix of each individual regressor by minimizing the training mean square error. An efficient optimization method is adopted for this basis hunting to select regressors in an orthogonal forward selection procedure. Experimental results obtained using this OLS-BH technique demonstrate that it offers a state-of-the-art method for constructing parsimonious RBF models with excellent generalization performance.
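    The orthogonal forward selection at the core of OLS-BH can be illustrated in miniature. The sketch below is an assumption-laden simplification: function names are illustrative, and the candidate regressors come from a fixed pool rather than the tuned center/diagonal-covariance basis hunting of the paper. It greedily selects Gaussian RBF regressors by their error-reduction ratio after Gram-Schmidt orthogonalisation:

```python
import numpy as np

def gaussian_rbf(X, center, widths):
    # Gaussian basis function with a diagonal covariance (per-dimension widths)
    d = (X - center) / widths
    return np.exp(-0.5 * np.sum(d * d, axis=1))

def ols_forward_selection(X, y, candidates, n_terms):
    # candidates: list of (center, widths) pairs -- a fixed pool here,
    # standing in for the tuned basis hunting described in the abstract
    Phi = np.column_stack([gaussian_rbf(X, c, w) for c, w in candidates])
    selected, Q = [], []
    y_energy = y @ y
    for _ in range(n_terms):
        best_j, best_err, best_q = None, -1.0, None
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            q = Phi[:, j].copy()
            for qk in Q:  # orthogonalise against already-chosen regressors
                q -= (qk @ Phi[:, j]) / (qk @ qk) * qk
            qq = q @ q
            if qq < 1e-12:  # candidate is (near-)linearly dependent; skip
                continue
            err = (q @ y) ** 2 / (qq * y_energy)  # error-reduction ratio
            if err > best_err:
                best_j, best_err, best_q = j, err, q
        selected.append(best_j)
        Q.append(best_q)
    # least-squares weights for the selected sparse model
    weights, *_ = np.linalg.lstsq(Phi[:, selected], y, rcond=None)
    return selected, weights
```

    At each step the candidate with the largest error-reduction ratio enters the model, which is what makes the resulting RBF model sparse rather than placing one kernel per training point.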

    Pareto repeated weighted boosting search for multiple-objective optimisation

    A guided stochastic search algorithm, known as the repeated weighted boosting search (RWBS), offers an effective means of solving difficult single-objective optimisation problems with non-smooth and/or multi-modal cost functions. Compared with other global optimisation solvers, such as genetic algorithms (GAs) and adaptive simulated annealing, RWBS is easier to implement, has fewer algorithmic parameters to tune, and has been shown to provide similar levels of performance on many benchmark problems. This contribution develops a novel Pareto RWBS (PRWBS) algorithm for multiple-objective optimisation applications. The performance of the proposed PRWBS algorithm is compared with the well-known non-dominated sorting GA (NSGA-II) on a range of multiple-objective benchmark problems, and the results obtained demonstrate that the proposed PRWBS algorithm offers competitive performance whilst retaining the benefits of the original RWBS algorithm.
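    The Pareto machinery that a multi-objective variant such as PRWBS builds on is the standard dominance relation between objective vectors. A minimal sketch of that component only (not the full PRWBS algorithm; function names are illustrative):

```python
def dominates(a, b):
    # True if objective vector a Pareto-dominates b (minimisation):
    # a is no worse in every objective and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # the non-dominated subset of a list of objective vectors
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

    In a multi-objective search there is no single best solution to return; instead the algorithm maintains and refines an approximation of this non-dominated front.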

    Learning from distributed data sources using random vector functional-link networks

    One of the main characteristics of many real-world big data scenarios is their distributed nature. In a machine learning context, distributed data, together with the requirements of preserving privacy and scaling up to large networks, brings the challenge of designing fully decentralized training protocols. In this paper, we explore the problem of distributed learning when the features of every pattern are spread across multiple agents (as happens, for example, in a distributed database scenario). We propose an algorithm for a particular class of neural networks, known as Random Vector Functional-Link (RVFL) networks, which is based on the Alternating Direction Method of Multipliers optimization algorithm. The proposed algorithm learns an RVFL network from multiple distributed data sources while restricting communication to the single operation of computing a distributed average. Our experimental simulations show that the algorithm achieves a generalization accuracy comparable to a fully centralized solution while at the same time being extremely efficient.
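    As a point of reference, the RVFL model class itself admits a very simple centralized training procedure: the hidden layer is random and fixed, and only the output weights are fitted, here by ridge regression. This is a hedged sketch of the centralized baseline only (the ridge solver and all names are illustrative assumptions); it does not reproduce the paper's distributed ADMM scheme:

```python
import numpy as np

def train_rvfl(X, y, n_hidden=50, reg=1e-3, seed=0):
    # Random Vector Functional-Link network: random fixed hidden layer,
    # direct input-output links, output weights fitted by ridge regression
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    D = np.hstack([H, X])  # hidden features plus direct links
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y)
    return (W, b, beta)

def predict_rvfl(model, X):
    W, b, beta = model
    D = np.hstack([np.tanh(X @ W + b), X])
    return D @ beta
```

    Because only the output weights are trained, the learning problem is convex, which is what makes a decentralized ADMM-style solution, with agents exchanging nothing more than averages, attractive for this model class.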