NARX-based nonlinear system identification using orthogonal least squares basis hunting
An orthogonal least squares technique for basis hunting (OLS-BH) is proposed to construct sparse radial basis function (RBF) models for NARX-type nonlinear systems. Unlike most existing RBF or kernel modelling methods, which place the RBF or kernel centers at the training input data points and use a fixed common variance for all the regressors, the proposed OLS-BH technique tunes the RBF center and diagonal covariance matrix of each individual regressor by minimizing the training mean square error. An efficient optimization method is adopted for this basis hunting to select regressors in an orthogonal forward selection procedure. Experimental results obtained using this OLS-BH technique demonstrate that it offers a state-of-the-art method for constructing parsimonious RBF models with excellent generalization performance.
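The orthogonal forward selection loop at the heart of this abstract can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: here the candidate (centre, width) pairs are supplied as a fixed pool rather than tuned by the paper's optimization method, and each Gaussian regressor uses a scalar width in place of a full diagonal covariance.

```python
import numpy as np

def rbf(X, centre, width):
    # Gaussian RBF regressor with its own centre and (scalar) width
    return np.exp(-np.sum((X - centre) ** 2, axis=1) / (2.0 * width ** 2))

def ols_forward_select(X, y, candidates, n_terms):
    """Greedy orthogonal forward selection: at each step pick the
    candidate (centre, width) whose orthogonalised regressor gives
    the largest reduction of the residual sum of squares."""
    selected, Q = [], []
    r = y.astype(float).copy()
    for _ in range(n_terms):
        best = None
        for c, w in candidates:
            q = rbf(X, c, w)
            for qs in Q:                       # orthogonalise against chosen terms
                q = q - (qs @ q) / (qs @ qs) * qs
            if q @ q < 1e-12:                  # numerically dependent, skip
                continue
            err_red = (q @ r) ** 2 / (q @ q)   # error-reduction contribution
            if best is None or err_red > best[0]:
                best = (err_red, c, w, q)
        if best is None:
            break
        _, c, w, q = best
        selected.append((c, w))
        Q.append(q)
        r = r - (q @ r) / (q @ q) * q          # deflate the residual
    return selected
```

In the paper the centre and covariance of each candidate are optimized rather than enumerated, but the selection criterion and the Gram-Schmidt deflation follow this pattern.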
Elastic net prefiltering for two class classification
A two-stage linear-in-the-parameter model construction algorithm is proposed, aimed at noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameter classifier. The prefiltering stage is a two-level process aimed at maximizing a model's generalization capability, in which a new elastic-net model identification algorithm using singular value decomposition is employed at the lower level, and then two regularization parameters are optimized using a particle-swarm-optimization algorithm at the upper level by minimizing the leave-one-out (LOO) misclassification rate. It is shown that the LOO misclassification rate based on the resultant prefiltered signal can be analytically computed without splitting the data set, and the associated computational cost is minimal due to orthogonality. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive simulations on noisy data sets illustrate the competitiveness of this approach for the classification of noisy data problems.
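The key claim that the LOO misclassification rate can be computed analytically without splitting the data rests on the classical leave-one-out residual identity for linear-in-the-parameter models. The sketch below shows that identity in its plain ridge-regularized form (a single regularization parameter, no elastic net or SVD speed-ups), which is an assumption-simplified stand-in for the paper's two-level procedure.

```python
import numpy as np

def loo_errors(Phi, y, lam=0.0):
    """Analytic leave-one-out residuals for a (ridge-)regularised
    linear-in-the-parameters model: e_loo_i = e_i / (1 - h_ii),
    where h_ii are the hat-matrix diagonals. No data splitting."""
    n, m = Phi.shape
    A = Phi.T @ Phi + lam * np.eye(m)
    theta = np.linalg.solve(A, Phi.T @ y)
    H = Phi @ np.linalg.solve(A, Phi.T)   # hat (smoother) matrix
    e = y - Phi @ theta                   # ordinary training residuals
    return e / (1.0 - np.diag(H))

def loo_misclassification(Phi, y, lam=0.0):
    """LOO misclassification rate for labels y in {-1, +1}: sample i
    is miscounted when its LOO prediction has the wrong sign."""
    yhat_loo = y - loo_errors(Phi, y, lam)
    return float(np.mean(y * yhat_loo <= 0.0))
```

In the paper the orthogonal decomposition makes the hat-matrix diagonals available at negligible extra cost, which is where the "minimal due to orthogonality" remark comes from.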
Modelling and inverting complex-valued Wiener systems
We develop a complex-valued (CV) B-spline neural network approach for efficient identification and inversion of CV Wiener systems. The CV nonlinear static function in the Wiener system is represented using the tensor product of two univariate B-spline neural networks. With the aid of a least squares parameter initialisation, the Gauss-Newton algorithm effectively estimates the model parameters, which include the CV linear dynamic model coefficients and B-spline neural network weights. The identification algorithm naturally incorporates the efficient De Boor algorithm with both the B-spline curve and first order derivative recursions. An accurate inverse of the CV Wiener system is then obtained, in which the inverse of the CV nonlinear static function of the Wiener system is calculated efficiently using the Gauss-Newton algorithm based on the estimated B-spline neural network model, with the aid of the De Boor recursions. The effectiveness of our approach for identification and inversion of CV Wiener systems is demonstrated using the application of digital predistorter design for high power amplifiers with memory.
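The De Boor recursion invoked above is a standard, numerically stable way to evaluate a B-spline curve. A minimal real-valued version is sketched below (the paper works with complex-valued tensor-product networks and also needs the companion first-derivative recursion, neither of which is shown here).

```python
import numpy as np

def de_boor(x, knots, coeffs, degree):
    """Evaluate a degree-`degree` B-spline with the given knot vector
    and coefficients at point x, via the De Boor recursion."""
    # locate the knot span: knots[k] <= x < knots[k+1], clamped to a valid range
    k = int(np.searchsorted(knots, x, side="right")) - 1
    k = min(max(k, degree), len(knots) - degree - 2)
    # working copy of the coefficients that influence this span
    d = [float(coeffs[j + k - degree]) for j in range(degree + 1)]
    for r in range(1, degree + 1):
        for j in range(degree, r - 1, -1):
            i = j + k - degree
            alpha = (x - knots[i]) / (knots[i + degree + 1 - r] - knots[i])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]  # convex combination
    return d[degree]
```

For degree 1 with a clamped knot vector this reduces to piecewise-linear interpolation of the coefficients, which makes it easy to sanity-check.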
Symmetric RBF classifier for nonlinear detection in multiple-antenna aided systems
In this paper, we propose a powerful symmetric radial basis function (RBF) classifier for nonlinear detection in the so-called "overloaded" multiple-antenna-aided communication systems. By exploiting the inherent symmetry property of the optimal Bayesian detector, the proposed symmetric RBF classifier is capable of approaching the optimal classification performance using noisy training data. The classifier construction process is robust to the choice of the RBF width and is computationally efficient. The proposed solution is capable of providing a signal-to-noise ratio (SNR) gain in excess of 8 dB against the powerful linear minimum bit error rate (BER) benchmark, when supporting four users with the aid of two receive antennas or seven users with four receive antenna elements.
Index Terms: classification, multiple-antenna system, orthogonal forward selection, radial basis function (RBF), symmetry.
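The symmetry being exploited is the odd symmetry of the optimal Bayesian detector, f(-x) = -f(x). A common way to build this into an RBF classifier, and presumably the structure intended here, is to pair every Gaussian basis centred at c with one centred at -c carrying the opposite sign; the sketch below illustrates that construction (the centre/weight values are hypothetical, and the paper's orthogonal-forward-selection training is not shown).

```python
import numpy as np

def symmetric_rbf_output(X, centres, weights, width):
    """Symmetric RBF classifier output: each basis is the difference
    of a Gaussian at centre c and one at -c, which enforces the odd
    symmetry f(-x) = -f(x) of the optimal Bayesian detector."""
    f = np.zeros(len(X))
    for c, w in zip(centres, weights):
        pos = np.exp(-np.sum((X - c) ** 2, axis=1) / (2.0 * width ** 2))
        neg = np.exp(-np.sum((X + c) ** 2, axis=1) / (2.0 * width ** 2))
        f += w * (pos - neg)
    return f
```

Because the symmetry is built into the basis functions themselves, it holds exactly for any trained weights, which is what lets noisy training data still yield near-Bayesian decisions.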
Magnetization states and switching in narrow-gapped ferromagnetic nanorings
We study permalloy nanorings that are lithographically fabricated with narrow gaps that break the rotational symmetry of the ring while retaining the vortex ground state, using both micromagnetic simulations and magnetic force microscopy (MFM). The vortex chirality in these structures can be readily set with an in-plane magnetic field and easily probed by MFM due to the field associated with the gap, suggesting such rings for possible applications in storage technologies. We find that the gapped ring edge characteristics (i.e., edge profile and gap shape) are critical in determining the magnetization switching field, thus elucidating an essential parameter in the control of devices that might incorporate such structures.
On-line Gaussian mixture density estimator for adaptive minimum bit-error-rate beamforming receivers
We develop an on-line Gaussian mixture density estimator (OGMDE) in the complex-valued domain to facilitate adaptive minimum bit-error-rate (MBER) beamforming receivers for multiple-antenna-based space-division multiple access systems. Specifically, the novel OGMDE is proposed to adaptively model the probability density function of the beamformer's output by tracking the incoming data sample by sample. With the aid of the proposed OGMDE, our adaptive beamformer is capable of updating the beamformer's weights sample by sample to directly minimize the achievable bit error rate (BER). We show that this OGMDE based MBER beamformer outperforms the existing on-line MBER beamformer, known as the least BER beamformer, in terms of both the convergence speed and the achievable BER.
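MBER beamforming rests on a density-based estimate of the BER: each decision variable is smoothed by a kernel and the estimated error probability is an average of Gaussian tail probabilities. The sketch below shows that underlying estimate with a fixed kernel width; the paper's contribution, the on-line mixture update that tracks the output density sample by sample, is not reproduced here.

```python
import numpy as np
from math import erfc, sqrt

def kde_ber_estimate(w, X, b, rho):
    """Kernel-density BER estimate for a beamformer weight vector w,
    received sample matrix X (one row per symbol), transmitted bits
    b in {-1, +1}, and kernel width rho: average the Gaussian tail
    probability (Q-function) of each correctly-signed output."""
    y = np.real(X @ np.conj(w))      # real part of the beamformer output
    signed = np.sign(b) * y          # positive when the decision is correct
    return float(np.mean([0.5 * erfc(s / (rho * sqrt(2.0))) for s in signed]))
```

Minimizing this smooth surrogate with respect to w, rather than the mean square error, is what distinguishes MBER beamformers from conventional MMSE designs.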
Pareto repeated weighted boosting search for multiple-objective optimisation
A guided stochastic search algorithm, known as the repeated weighted boosting search (RWBS), offers an effective means of solving difficult single-objective optimisation problems with non-smooth and/or multi-modal cost functions. Compared with other global optimisation solvers, such as genetic algorithms (GAs) and adaptive simulated annealing, RWBS is easier to implement, has fewer algorithmic parameters to tune, and has been shown to provide similar levels of performance on many benchmark problems. This contribution develops a novel Pareto RWBS (PRWBS) algorithm for multiple-objective optimisation applications. The performance of the proposed PRWBS algorithm is compared with the well-known non-dominated sorting GA (NSGA-II) for multiple-objective optimisation on a range of benchmark problems, and the results obtained demonstrate that the proposed PRWBS algorithm offers competitive performance whilst retaining the benefits of the original RWBS algorithm.
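Extending a single-objective searcher such as RWBS to the multi-objective case hinges on Pareto dominance: a candidate survives only if no other candidate is at least as good in every objective and strictly better in one. The snippet below sketches that dominance filter for a minimisation problem; the RWBS-specific weighted boosting updates are not shown.

```python
import numpy as np

def pareto_front(costs):
    """Return the indices of non-dominated points (minimisation).
    Point i is dominated if some other point is no worse in every
    objective and strictly better in at least one."""
    costs = np.asarray(costs, dtype=float)
    keep = []
    for i, c in enumerate(costs):
        dominated = any(
            np.all(d <= c) and np.any(d < c)
            for j, d in enumerate(costs) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep
```

In a Pareto-ranked search, this filter replaces the scalar cost comparison used by the single-objective RWBS when deciding which population members to retain.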
Nonlinear identification using orthogonal forward regression with nested optimal regularization
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross-validation. Each RBF kernel has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each pair associated with one kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike the previous LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and regularization parameters within a single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples demonstrate the effectiveness of this new approach in comparison with the well-known support vector machine and least absolute shrinkage and selection operator approaches, as well as the LROLS algorithm.
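The core loop, forward regression driven by the analytic LOO mean square error, can be sketched compactly. This is a hedged simplification: the candidate regressors are given as fixed columns, and the paper's per-term kernel-width and regularization-parameter optimization inside each step is omitted.

```python
import numpy as np

def ofr_loomse(Phi_cands, y, n_terms):
    """Orthogonal forward regression driven by the leave-one-out MSE:
    at each step, add the candidate column that yields the smallest
    analytic LOO error (via hat-matrix diagonals) for the enlarged model."""
    chosen, cols = [], []
    for _ in range(n_terms):
        best = None
        for j in range(Phi_cands.shape[1]):
            if j in chosen:
                continue
            P = np.column_stack(cols + [Phi_cands[:, j]])
            theta, *_ = np.linalg.lstsq(P, y, rcond=None)
            H = P @ np.linalg.pinv(P.T @ P) @ P.T      # hat matrix of trial model
            e_loo = (y - P @ theta) / (1.0 - np.diag(H))
            loomse = float(np.mean(e_loo ** 2))
            if best is None or loomse < best[0]:
                best = (loomse, j)
        chosen.append(best[1])
        cols.append(Phi_cands[:, best[1]])
    return chosen
```

Because the LOOMSE is computed analytically from the hat-matrix diagonals, no data splitting is needed, which is what keeps the selection criterion cheap enough to evaluate for every candidate at every step.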