20,868 research outputs found
An elastic net orthogonal forward regression algorithm
In this paper we propose an efficient two-level model identification method for a large class of linear-in-the-parameters models built from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularization parameters of the elastic net are optimized at the upper level using a particle swarm optimization (PSO) algorithm that minimizes the leave-one-out (LOO) mean square error (LOOMSE). Illustrative examples are included to demonstrate the effectiveness of the new approaches.
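The LOOMSE criterion used here can be evaluated without refitting the model n times: for the l2 (ridge) part of a regularized least-squares fit, the classic hat-matrix identity gives each leave-one-out residual as e_i / (1 - H_ii). The following is a minimal numpy sketch of that shortcut on toy data (the data, variable names, and the pure-ridge penalty are illustrative simplifications, not the paper's full ENOFR procedure), with a brute-force check:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=30)

lam = 0.1                                  # l2 regularization weight

# Regularized least-squares fit: w = (X'X + lam*I)^{-1} X'y
A = X.T @ X + lam * np.eye(X.shape[1])
w = np.linalg.solve(A, X.T @ y)

# Hat matrix H = X (X'X + lam*I)^{-1} X'; the LOO residual for sample i
# is e_i / (1 - H_ii), so the LOOMSE needs no refitting.
H = X @ np.linalg.solve(A, X.T)
loo_errors = (y - X @ w) / (1.0 - np.diag(H))
loomse = float(np.mean(loo_errors ** 2))

# Brute-force check: actually refit with each sample left out.
brute = []
for i in range(len(y)):
    m = np.arange(len(y)) != i
    Ai = X[m].T @ X[m] + lam * np.eye(X.shape[1])
    wi = np.linalg.solve(Ai, X[m].T @ y[m])
    brute.append(y[i] - X[i] @ wi)
assert np.allclose(loo_errors, brute)
```

Because the shortcut makes LOOMSE cheap to evaluate, it is a natural fitness function for an outer optimizer such as PSO over the regularization parameters.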
Modeling of complex-valued Wiener systems using B-spline neural network
In this brief, a new complex-valued B-spline neural network is introduced in order to model the complex-valued Wiener system using observational input/output data. The complex-valued nonlinear static function in the Wiener system is represented using the tensor product of two univariate B-spline neural networks, built from the real and imaginary parts of the system input. Following a simple least squares parameter initialization scheme, the Gauss–Newton algorithm is applied for the parameter estimation, which incorporates the De Boor algorithm, including the recursions for both the B-spline curve and its first-order derivatives. Numerical examples, including a nonlinear high-power amplifier model in communication systems, are used to demonstrate the efficacy of the proposed approaches.
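The De Boor recursion mentioned above evaluates a B-spline curve by repeated convex combinations of control points. A self-contained sketch of the standard univariate recursion (the knot vector and control points below are illustrative, not from the paper):

```python
def de_boor(k, x, t, c, p):
    """Evaluate a degree-p B-spline at x, where t[k] <= x < t[k+1].

    t: knot vector, c: control points, p: spline degree.
    """
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# A degree-1 spline linearly interpolates its control points, so
# midpoint values are easy to check by hand.
t = [0.0, 0.0, 1.0, 2.0, 3.0, 3.0]
c = [0.0, 1.0, 4.0, 9.0]
mid = de_boor(1, 0.5, t, c, 1)   # halfway between c[0]=0 and c[1]=1
```

The tensor-product construction in the paper would apply this univariate evaluation along the real-part and imaginary-part axes and combine the results.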
Special Algorithm for Stability Analysis of Multistable Biological Regulatory Systems
We consider the problem of counting (stable) equilibria of an important family of algebraic differential equations modeling multistable biological regulatory systems. The problem can be solved, in principle, using real quantifier elimination algorithms, in particular real root classification algorithms. However, it is well known that these can handle only very small cases due to their enormous computing time requirements. In this paper, we present a special algorithm which is much more efficient than the general methods. Its efficiency comes from the exploitation of certain interesting structures of the family of differential equations.
Comment: 24 pages, 5 algorithms, 10 figures
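For intuition about what "counting stable equilibria" means, consider a toy one-dimensional bistable system (an illustrative stand-in, not the paper's family of regulatory models): equilibria are the real roots of the right-hand side, and an equilibrium is stable when the derivative of the right-hand side is negative there.

```python
import numpy as np

# Toy bistable system dx/dt = f(x) = x - x**3.
coeffs = [-1.0, 0.0, 1.0, 0.0]             # -x^3 + x, highest degree first
roots = np.roots(coeffs)
equilibria = np.sort(roots.real[np.abs(roots.imag) < 1e-9])

def f_prime(x):
    return 1.0 - 3.0 * x ** 2

# Stable equilibria: f'(x) < 0.
stable = [x for x in equilibria if f_prime(x) < 0]
```

The paper's contribution is doing this kind of counting symbolically and efficiently for a whole parametric family, where numerical root finding on one instance does not suffice.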
Radial basis function classifier construction using particle swarm optimisation aided orthogonal forward regression
We develop a particle swarm optimisation (PSO) aided orthogonal forward regression (OFR) approach for constructing radial basis function (RBF) classifiers with tunable nodes. At each stage of the OFR construction process, the centre vector and diagonal covariance matrix of one RBF node are determined efficiently by minimising the leave-one-out (LOO) misclassification rate (MR) using a PSO algorithm. Compared with the state-of-the-art regularisation assisted orthogonal least squares algorithm based on the LOO MR for selecting fixed-node RBF classifiers, the proposed PSO aided OFR algorithm for constructing tunable-node RBF classifiers offers significant advantages in terms of better generalisation performance and smaller model size, and imposes lower computational complexity in the classifier construction process. Moreover, the proposed algorithm does not have any hyperparameter that requires costly tuning based on cross validation.
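The PSO inner loop used at each OFR stage is a generic global minimizer. A minimal numpy sketch of standard PSO (the inertia and acceleration constants, test function, and function names are illustrative choices, not the paper's exact settings):

```python
import numpy as np

def pso(f, bounds, n_particles=20, n_iter=100, seed=0):
    """Minimize f over box bounds = (lo, hi) with basic global-best PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    g_f = float(pbest_f.min())
    w, c1, c2 = 0.7, 1.5, 1.5        # inertia and acceleration weights
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        if fx.min() < g_f:
            g_f = float(fx.min())
            g = x[np.argmin(fx)].copy()
    return g, g_f

# Sanity check on a smooth surrogate objective (sphere function); in the
# paper the objective would instead be the LOO misclassification rate.
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)),
                     (np.full(2, -5.0), np.full(2, 5.0)))
```

In the paper's setting, each particle would encode one RBF node's centre vector and diagonal covariance entries, and `f` would be the LOO MR of the partially built classifier.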
Sparse kernel density estimation technique based on zero-norm constraint
A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used to achieve mathematical tractability and algorithmic efficiency. Under the mild condition of a positive definite design matrix, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the D-optimality based selection algorithm as a preprocessing step to select a small significant subset of the design matrix, the proposed zero-norm based approach offers an effective means for constructing very sparse kernel density estimates with excellent generalisation performance.
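The Parzen window estimate that serves as the desired response above is simply an equal-weight kernel mixture centred on the data. A minimal numpy sketch for the 1-D Gaussian case (the data, bandwidth, and function name are illustrative; the paper's contribution is the sparse reweighting on top of this, which is not reproduced here):

```python
import numpy as np

def parzen_kde(x_eval, data, h):
    """Gaussian Parzen-window estimate of a 1-D density at points x_eval."""
    u = (np.asarray(x_eval)[:, None] - np.asarray(data)[None, :]) / h
    kernels = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=1) / h

rng = np.random.default_rng(1)
data = rng.normal(size=500)
grid = np.linspace(-6.0, 6.0, 601)
density = parzen_kde(grid, data, h=0.3)

# The estimate is a proper density: nonnegative, total mass about 1.
mass = float(np.sum(density) * (grid[1] - grid[0]))
```

The zero-norm approach then seeks a nonnegative, sparse weight vector over these kernels whose mixture matches this full Parzen response.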