47,201 research outputs found
Research on an Improved Method for Permanent Magnet Synchronous Motor
In the traditional vector control system for permanent magnet synchronous motors (PMSM), a PI regulator is used in the speed loop, but it has some defects. This paper proposes an improved PMSM vector control method. An active-disturbance-rejection control (ADRC) speed regulator is designed, taking the reference speed and the measured speed as inputs and producing the reference q-axis stator current component as output. Then, to optimize the ADRC controller, a least-squares support vector machine (LSSVM) optimal regression model is derived and successfully embedded in the ADRC controller. The observation precision of the ADRC and the dynamic response of the system are improved, the effect of load disturbances on the system is reduced to a large extent, and the anti-interference ability of the system is further improved. Finally, the current sensor CSNE151-100 is selected to sample the PMSM stator currents, the voltage sensor JLBV1 is used to sample the stator voltages, and the rotor speed of the PMSM is measured by a mechanical speed sensor of type BENTLY 330500. An experimental platform is constructed to verify the effectiveness of the proposed method.
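To make the ADRC idea concrete, here is a minimal sketch of a first-order linear ADRC speed loop on a toy integrator plant. All gains, the plant model, and the load step below are illustrative assumptions, not the design described in the abstract.

```python
# Minimal first-order linear ADRC sketch on a toy plant dx/dt = u + disturbance.
# Gains, plant, and disturbance are assumptions for illustration only.
dt = 0.001                       # integration step (s)
beta1, beta2 = 200.0, 10000.0    # extended state observer (ESO) gains
kp = 50.0                        # proportional gain of the control law
r = 100.0                        # speed reference
x = 0.0                          # plant state: rotor speed
z1, z2, u = 0.0, 0.0, 0.0        # ESO states and control input

for k in range(5000):            # simulate 5 s
    dist = 20.0 if k > 2500 else 0.0   # load disturbance step at t = 2.5 s
    x += dt * (u + dist)               # toy plant: dx/dt = u + disturbance
    e = z1 - x                         # observation error
    z1 += dt * (z2 + u - beta1 * e)    # z1 tracks the measured speed
    z2 += dt * (-beta2 * e)            # z2 estimates the total disturbance
    u = kp * (r - z1) - z2             # cancel the estimated disturbance

print(round(x, 2))  # settles at the reference: 100.0
```

The point of the structure is that the extended state z2 converges to the unknown load disturbance, so the control law can cancel it without a disturbance model.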
Sparse multinomial kernel discriminant analysis (sMKDA)
Dimensionality reduction via canonical variate analysis (CVA) is important for pattern recognition and has been variously extended to permit more flexibility, e.g. by "kernelizing" the formulation. This can lead to over-fitting, which is usually ameliorated by regularization. Here, a method for sparse multinomial kernel discriminant analysis (sMKDA) is proposed, using a sparse basis to control complexity. It is based on the connection between CVA and least squares, and uses forward selection via orthogonal least squares to approximate a basis, generalizing a similar approach for binomial problems. Classification can be performed directly via minimum Mahalanobis distance in the canonical variates. sMKDA achieves state-of-the-art performance in terms of accuracy and sparseness on 11 benchmark datasets.
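The basis-selection step this abstract relies on, greedy forward selection via orthogonal least squares, can be sketched as follows; the data, column count, and the two "true" columns are made up for illustration.

```python
import numpy as np

def ols_forward_select(Phi, Y, n_terms):
    """Greedily pick n_terms columns of Phi that most reduce the residual of Y."""
    n, m = Phi.shape
    selected, Q = [], []          # chosen indices, orthonormalized columns
    R = Y.copy()                  # current residual (n x outputs)
    for _ in range(n_terms):
        best_j, best_q, best_gain = None, None, -1.0
        for j in range(m):
            if j in selected:
                continue
            q = Phi[:, j].astype(float).copy()
            for qk in Q:                      # orthogonalize against chosen columns
                q -= (qk @ Phi[:, j]) * qk
            nq = np.linalg.norm(q)
            if nq < 1e-10:                    # column already spanned
                continue
            q /= nq
            gain = np.sum((q @ R) ** 2)       # residual energy this column explains
            if gain > best_gain:
                best_j, best_q, best_gain = j, q, gain
        selected.append(best_j)
        Q.append(best_q)
        R = R - np.outer(best_q, best_q @ R)  # deflate the residual
    return selected

rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 10))           # candidate basis columns
Y = Phi[:, [2, 7]] @ np.array([[1.0], [0.8]])  # Y built from columns 2 and 7 only
print(sorted(ols_forward_select(Phi, Y, 2)))   # recovers the two active columns
```

In the kernel setting of the abstract, the columns of Phi would be kernel evaluations at candidate basis vectors and Y the class-indicator targets; the greedy loop is the same.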
Symmetric RBF classifier for nonlinear detection in multiple-antenna aided systems
In this paper, we propose a powerful symmetric radial basis function (RBF) classifier for nonlinear detection in so-called "overloaded" multiple-antenna-aided communication systems. By exploiting the inherent symmetry property of the optimal Bayesian detector, the proposed symmetric RBF classifier is capable of approaching the optimal classification performance using noisy training data. The classifier construction process is robust to the choice of the RBF width and is computationally efficient. The proposed solution is capable of providing a signal-to-noise ratio (SNR) gain in excess of 8 dB over the powerful linear minimum bit error rate (BER) benchmark when supporting four users with the aid of two receive antennas, or seven users with four receive antenna elements. Index Terms: classification, multiple-antenna system, orthogonal forward selection, radial basis function (RBF), symmetry.
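The symmetry exploited here can be sketched on a toy detection problem: pairing each RBF centre c with its mirror -c forces the classifier output to satisfy f(-y) = -f(y), matching the sign symmetry of BPSK detection. The channel matrix, noise level, RBF width, and training scheme below are assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)
H = np.array([[1.0, 0.5], [0.3, 0.9]])  # assumed 2-user, 2-antenna channel

def make_data(n):
    b = rng.choice([-1.0, 1.0], size=(n, 2))         # BPSK bits for 2 users
    y = b @ H.T + 0.2 * rng.standard_normal((n, 2))  # noisy received signals
    return y, b[:, 0]                                # detect user 1's bit

def sym_kernel(Y, C, width=0.4):
    # antisymmetric RBF design matrix: K(y, c) - K(y, -c), so f(-y) = -f(y)
    d2 = lambda A, B: ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return (np.exp(-d2(Y, C) / (2 * width**2))
            - np.exp(-d2(Y, -C) / (2 * width**2)))

Ytr, btr = make_data(200)
K = sym_kernel(Ytr, Ytr)                 # centres = training points
w = np.linalg.solve(K.T @ K + 1e-3 * np.eye(len(K)), K.T @ btr)  # ridge fit

Yte, bte = make_data(500)
acc = np.mean(np.sign(sym_kernel(Yte, Ytr) @ w) == bte)
print(acc)  # high accuracy at this moderate noise level
```

The antisymmetric kernel means only half the decision surface has to be learned from data, which is the source of the noise robustness the abstract describes.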
Randomized Sketches of Convex Programs with Sharp Guarantees
Random projection (RP) is a classical technique for reducing storage and computational costs. We analyze RP-based approximations of convex programs, in which the original optimization problem is approximated by the solution of a lower-dimensional problem. Such dimensionality reduction is essential in computation-limited settings, since the complexity of general convex programming can be quite high (e.g., cubic for quadratic programs, and substantially higher for semidefinite programs). In addition to computational savings, random projection is also useful for reducing memory usage, and has useful properties for privacy-sensitive optimization. We prove that the approximation ratio of this procedure can be bounded in terms of the geometry of the constraint set. For a broad class of random projections, including those based on various sub-Gaussian distributions as well as randomized Hadamard and Fourier transforms, the data matrix defining the cost function can be projected down to the statistical dimension of the tangent cone of the constraints at the original solution, which is often substantially smaller than the original dimension. We illustrate consequences of our theory for various cases, including unconstrained and ℓ1-constrained least squares, support vector machines, and low-rank matrix estimation, and discuss implications for privacy-sensitive optimization and some connections with de-noising and compressed sensing.
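The simplest instance of the sketching idea, unconstrained least squares with a Gaussian (sub-Gaussian) projection, can be demonstrated in a few lines; the dimensions and noise level are illustrative.

```python
import numpy as np

# Sketch a tall least-squares problem down to m rows and solve the small one.
rng = np.random.default_rng(0)
n, d, m = 2000, 50, 400          # rows, columns, sketch size (d < m << n)
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star + 0.01 * rng.standard_normal(n)

x_full = np.linalg.lstsq(A, b, rcond=None)[0]       # full n x d solve

S = rng.standard_normal((m, n)) / np.sqrt(m)        # sub-Gaussian sketch matrix
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]  # cheaper m x d solve

print(np.linalg.norm(x_sketch - x_full))  # small: the two solutions nearly agree
```

For an unconstrained problem the tangent cone is all of R^d, so the theory says m need only scale with d, not n, which is exactly what the m = 400 versus n = 2000 choice above exploits.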
An Exponential Lower Bound on the Complexity of Regularization Paths
For a variety of regularized optimization problems in machine learning, algorithms that compute the entire solution path have been developed recently. Most of these methods address quadratic programs parameterized by a single regularization parameter, as for example the support vector machine (SVM). Solution-path algorithms compute not only the solution for one particular value of the regularization parameter but the entire path of solutions, making the selection of an optimal parameter much easier.
It has been assumed that these piecewise-linear solution paths have only linear complexity, i.e. linearly many bends. We prove that for the support vector machine this complexity can be exponential in the number of training points in the worst case. More strongly, we construct a single instance of n input points in d dimensions for an SVM such that at least Θ(2^{n/2}) = Θ(2^d) distinct subsets of support vectors occur as the regularization parameter changes.
A Survey on Potential of the Support Vector Machines in Solving Classification and Regression Problems
Kernel methods and support vector machines (SVMs) have become the most popular learning-from-examples paradigms. Several areas of applied research make use of SVM approaches, for instance handwritten character recognition, text categorization, face detection, pharmaceutical data analysis and drug design. Adapted SVMs have also been proposed for time series forecasting and, in computational neuroscience, as a tool for detecting symmetry when eye movement is connected with attention and visual perception. The aim of the paper is to investigate the potential of SVMs in solving classification and regression tasks, as well as to analyze the computational complexity corresponding to different methodologies aiming to solve a series of afferent arising sub-problems. Keywords: support vector machines, kernel-based methods, supervised learning, regression, classification.