
    A Sparse Learning Machine for Real-Time SOC Estimation of Li-ion Batteries

    The state of charge (SOC) estimation of Li-ion batteries has attracted substantial interest in recent years. The Kalman filter has been widely used for real-time battery SOC estimation; however, building a suitable dynamic battery state-space model remains a key challenge, and most existing methods still rely on off-line modelling. This paper tackles the challenge by proposing a novel sparse learning machine for real-time SOC estimation. This is achieved first by developing a new learning machine, based on the traditional least squares support vector machine (LS-SVM), to capture the process dynamics of Li-ion batteries in real time. The LS-SVM is the least squares version of the conventional support vector machine (SVM) and suffers from low model sparseness. The proposed learning machine reduces the dimension of the projected high-dimensional feature space with no loss of input information, leading to improved model sparsity and accuracy. To accelerate computation, mapping functions in the high-dimensional feature space are selected using a fast recursive method. To further improve model accuracy, a weighted regularization scheme and the differential evolution (DE) method are used to optimize the parameters. An unscented Kalman filter (UKF) then performs real-time SOC estimation based on the proposed sparse learning machine model. Experimental results on the Federal Urban Drive Schedule (FUDS) test data show that the performance of the proposed algorithm is significantly enhanced: the maximum absolute error is only one sixth of that obtained by conventional LS-SVMs, the mean square error of the SOC estimates reaches 10^-7, and the proposed method executes nearly 10 times faster than conventional LS-SVMs.
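    The sparse variant is not specified in detail in the abstract, but the baseline it improves on, the standard dense LS-SVM regression fit, reduces to a single linear system in the dual variables. A minimal sketch follows; the Gaussian RBF kernel and the `gamma`/`sigma` defaults are generic assumptions, not the paper's settings:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Gaussian RBF kernel matrix between two sample sets.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    # Standard (dense) LS-SVM dual: solve the (n+1)x(n+1) system
    # [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]  # alpha (one weight per sample), bias b

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

    Note that `alpha` has one nonzero term per training point; that density is exactly the sparseness problem the paper addresses by selecting a reduced set of mapping functions.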

    Sparse multinomial kernel discriminant analysis (sMKDA)

    Dimensionality reduction via canonical variate analysis (CVA) is important for pattern recognition and has been extended in various ways to permit more flexibility, e.g. by "kernelizing" the formulation. This can lead to over-fitting, which is usually ameliorated by regularization. Here, a method for sparse multinomial kernel discriminant analysis (sMKDA) is proposed, using a sparse basis to control complexity. It is based on the connection between CVA and least squares, and uses forward selection via orthogonal least squares to approximate a basis, generalizing a similar approach for binomial problems. Classification can be performed directly via minimum Mahalanobis distance in the canonical variates. sMKDA achieves state-of-the-art performance in terms of accuracy and sparseness on 11 benchmark datasets.
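    The final classification step described above, minimum Mahalanobis distance in the canonical-variate space, can be sketched as follows; the pooled within-class covariance estimate and the small diagonal jitter are assumptions for numerical stability, not details from the paper:

```python
import numpy as np

def mahalanobis_classify(Z, labels, z_new):
    # Assign z_new to the class whose mean is nearest in Mahalanobis
    # distance, using the pooled within-class covariance; a small
    # diagonal jitter keeps the matrix inverse well defined.
    classes = np.unique(labels)
    means = {c: Z[labels == c].mean(axis=0) for c in classes}
    resid = np.vstack([Z[labels == c] - means[c] for c in classes])
    S = np.cov(resid, rowvar=False) + 1e-8 * np.eye(Z.shape[1])
    S_inv = np.linalg.inv(S)

    def d2(c):
        diff = z_new - means[c]
        return float(diff @ S_inv @ diff)

    return min(classes, key=d2)
```

    Here `Z` would be the data already projected onto the canonical variates; the sparse-basis construction that produces that projection is the paper's contribution and is not reproduced.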

    Least squares support vector machine with self-organizing multiple kernel learning and sparsity

    In recent years, least squares support vector machines (LSSVMs) with various kernel functions have been widely used in the field of machine learning. However, the selection of kernel functions is often ignored in practice. In this paper, an improved LSSVM method based on self-organizing multiple kernel learning is proposed for black-box problems. To strengthen the generalization ability of the LSSVM, appropriate kernel functions are selected and the corresponding model parameters are optimized using a differential evolution algorithm based on an improved mutation strategy. Owing to the large computational cost, a sparse selection strategy is developed to extract useful data and remove redundant data without loss of accuracy. To demonstrate the effectiveness of the proposed method, benchmark problems from the UCI machine learning repository are tested. The results show that the proposed method performs better than other state-of-the-art methods. In addition, to verify its practicability, the method is applied to a real-world converter steelmaking process. The results illustrate that the proposed model can precisely predict the molten steel quality and satisfy actual production demands.
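    The differential evolution step used to optimize model parameters can be illustrated with the classic DE/rand/1/bin scheme; the paper's improved mutation strategy is not reproduced here, and the control parameters (`F`, `CR`, population size) are generic defaults rather than the authors' choices:

```python
import numpy as np

def differential_evolution(loss, bounds, pop_size=20, F=0.8, CR=0.9,
                           iters=100, seed=0):
    # Classic DE/rand/1/bin minimizer over box constraints.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    d = len(lo)
    pop = lo + rng.random((pop_size, d)) * (hi - lo)
    fit = np.array([loss(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: combine three distinct members other than i.
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one mutant component.
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial if it is no worse.
            f = loss(trial)
            if f <= fit[i]:
                pop[i], fit[i] = trial, f
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])
```

    In the kernel-learning setting, `loss` would map a vector of kernel hyperparameters to a cross-validation error of the fitted LSSVM.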

    Symmetric RBF classifier for nonlinear detection in multiple-antenna aided systems

    In this paper, we propose a powerful symmetric radial basis function (RBF) classifier for nonlinear detection in so-called "overloaded" multiple-antenna-aided communication systems. By exploiting the inherent symmetry of the optimal Bayesian detector, the proposed symmetric RBF classifier is capable of approaching the optimal classification performance using noisy training data. The classifier construction process is robust to the choice of the RBF width and is computationally efficient. The proposed solution provides a signal-to-noise ratio (SNR) gain in excess of 8 dB over the powerful linear minimum bit error rate (BER) benchmark when supporting four users with the aid of two receive antennas or seven users with four receive antenna elements.
    Index Terms: classification, multiple-antenna system, orthogonal forward selection, radial basis function (RBF), symmetry.
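    The key structural idea, enforcing the odd symmetry f(-x) = -f(x) of the optimal Bayesian detector for antipodal signalling, can be sketched by letting each RBF centre contribute with both signs. The scoring function below is an illustrative construction under that assumption, not the authors' orthogonal-forward-selection training procedure:

```python
import numpy as np

def symmetric_rbf_score(centres, alpha, x, sigma=1.0):
    # Odd-symmetric RBF detector: each centre c contributes with both
    # signs, k(x, c) - k(x, -c), which enforces f(-x) = -f(x), the
    # symmetry of the optimal Bayesian detector for antipodal signals.
    def k(a, b):
        return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

    return sum(a * (k(x, c) - k(x, -c)) for a, c in zip(alpha, centres))
```

    Because the Gaussian kernel satisfies k(-x, c) = k(x, -c), the score is odd in `x` for any weights `alpha`, so the symmetry holds by construction rather than having to be learned from data.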

    Linear Time Feature Selection for Regularized Least-Squares

    We propose a novel algorithm for greedy forward feature selection for regularized least-squares (RLS) regression and classification, also known as the least-squares support vector machine or ridge regression. The algorithm, which we call greedy RLS, starts from the empty feature set and, on each iteration, adds the feature whose addition provides the best leave-one-out cross-validation performance. Our method is considerably faster than previously proposed ones, since its time complexity is linear in the number of training examples, the number of features in the original data set, and the desired size of the selected feature set. As a side effect, we obtain a new training algorithm for learning sparse linear RLS predictors, which can be used for large-scale learning. This speed is made possible by matrix-calculus-based shortcuts for leave-one-out and feature addition. We experimentally demonstrate the scalability of our algorithm and its ability to find good quality feature sets.
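    The selection criterion can be sketched naively using the closed-form leave-one-out residuals of ridge regression. This version recomputes the hat matrix from scratch for every candidate feature, so it deliberately lacks the linear-time update rules that are the paper's actual contribution:

```python
import numpy as np

def loo_mse(X, y, lam=1.0):
    # Closed-form leave-one-out residuals for ridge regression:
    # e_i = (y_i - yhat_i) / (1 - H_ii), H = X (X^T X + lam I)^-1 X^T
    n, d = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
    e = (y - H @ y) / (1.0 - np.diag(H))
    return float(np.mean(e ** 2))

def greedy_rls_naive(X, y, k, lam=1.0):
    # Forward selection: at each step add the feature whose inclusion
    # gives the lowest leave-one-out MSE.  Recomputing H each time is
    # what the paper's matrix-calculus shortcuts avoid.
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best_j = min(remaining,
                     key=lambda j: loo_mse(X[:, selected + [j]], y, lam))
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

    The closed-form LOO identity is what makes per-candidate evaluation cheap relative to actually refitting n times; the paper's contribution is updating it incrementally as features are added.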