20 research outputs found

    Phase correction for Learning Feedforward Control

    Intelligent mechatronics makes it possible to compensate for effects that are difficult to counter by construction or by linear control, by including some intelligence in the system. The compensation of state-dependent effects, e.g. friction, cogging and mass deviation, can be realised by learning feedforward control. This method identifies these disturbing effects as a function of their states and compensates for them before they introduce an error. Because the effects are learnt as a function of their states, the method can also be used for non-repetitive motions. The learning of state-dependent effects relies on the update signal that is used. In previous work, the feedback control signal was used as an error measure between the approximation and the true state-dependent effect. If the effects introduce a signal that contains frequencies near the bandwidth, the phase shift between this signal and the feedback signal may seriously degrade the performance of the approximation. The use of phase correction overcomes this problem. This is validated by a set of simulations and experiments that show the necessity of the phase-corrected scheme.
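    The feedback-as-error-measure idea can be sketched in a few lines. This is a hypothetical toy setup (the mass, gains, learning rate and RBF network are all assumptions, not the paper's): a mass with Coulomb friction tracks a sinusoid, and a normalised RBF network learns the friction as a function of the reference velocity, with the feedback control signal driving the weight update.

```python
import numpy as np

def rbf(v, centers, width=0.5):
    """Normalised Gaussian features evaluated at velocity v."""
    phi = np.exp(-((v - centers) ** 2) / (2 * width ** 2))
    return phi / (phi.sum() + 1e-12)

def run_trial(w, centers, eta=0.05):
    """One pass over the reference; returns the RMS of the feedback signal."""
    dt, m, kp, kd = 0.01, 1.0, 100.0, 20.0
    x, v = 0.0, 1.0                              # start on the reference
    fb = []
    for k in range(400):
        t = k * dt
        r, rd = np.sin(t), np.cos(t)             # reference position/velocity
        phi = rbf(rd, centers)
        u_fb = kp * (r - x) + kd * (rd - v)      # feedback part
        u = u_fb + w @ phi                       # plus learned feedforward
        w += eta * u_fb * phi                    # feedback signal as error measure
        a = (u - 0.5 * np.sign(v)) / m           # Coulomb friction disturbance
        v += a * dt
        x += v * dt
        fb.append(u_fb)
    return float(np.sqrt(np.mean(np.square(fb))))

centers = np.linspace(-1.5, 1.5, 21)
w = np.zeros_like(centers)
errors = [run_trial(w, centers) for _ in range(10)]
```

    Over repeated trials the feedback effort shrinks as the network takes over the friction compensation. The phase problem the abstract addresses arises when the disturbance excites frequencies near the closed-loop bandwidth, which this low-frequency reference deliberately avoids.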

    On Using a Support Vector Machine in Learning Feed-Forward Control

    For mechatronic motion systems, the performance increases significantly if, besides feedback control, feed-forward control is also used. This feed-forward part should contain the (stable part of the) inverse of the plant. This inverse is difficult to obtain if non-linear dynamics are present. To overcome this problem, learning feed-forward control can be applied. The properties of the learning mechanism are important in this setting. In the paper, a support vector machine is proposed as the learning mechanism. It is shown that this mechanism has several advantages over other learning techniques when applied to learning feed-forward control. The method is tested in simulation.

    On-line nonparametric regression to learn state-dependent disturbances

    A combination of recursive least squares and weighted least squares is made which can adapt its structure such that a relation between input and output can be approximated, even when the structure of this relation is unknown beforehand. This method can adapt its structure on-line while it preserves information offered by previous samples, making it applicable in a control setting. The method has been tested with computer-generated data, and it is used in a simulation to learn non-linear state-dependent effects, in both cases with good results.
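    The recursive part can be illustrated with a plain RLS sketch (a hypothetical toy problem, not the paper's combined RLS/weighted-LS scheme with structure adaptation): the weights of a linear-in-parameters model are updated sample by sample, so the approximation is available on-line while information from previous samples is retained through the matrix P.

```python
import numpy as np

def rls_update(w, P, phi, y, lam=1.0):
    """One recursive-least-squares step for the model y ≈ phi @ w."""
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    w = w + k * (y - phi @ w)                # correct with the prediction error
    P = (P - np.outer(k, phi @ P)) / lam     # update the covariance-like matrix
    return w, P

rng = np.random.default_rng(0)
w = np.zeros(2)
P = 1e6 * np.eye(2)                          # large P: no prior confidence
for _ in range(200):
    x = rng.uniform(-1, 1)
    phi = np.array([x, 1.0])                 # features: slope and offset
    y = 2.0 * x + 1.0                        # unknown relation to be learnt
    w, P = rls_update(w, P, phi, y)
```

    After a stream of samples the recursion recovers the underlying relation without ever storing the data set, which is what makes it usable inside a running control loop.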

    Pruning Error Minimization in Least Squares Support Vector Machines

    The support vector machine (SVM) is a method for classification and for function approximation. This method commonly makes use of an ε-insensitive cost function, meaning that errors smaller than ε are not penalised. As an alternative, a least squares support vector machine (LSSVM) uses a quadratic cost function. When the LSSVM method is used for function approximation, a non-sparse solution is obtained. Sparseness is imposed by pruning, i.e., recursively solving the approximation problem and subsequently omitting data that had a small error in the previous pass. However, a small approximation error in the previous pass does not reliably predict what the error will be after the sample has been omitted. In this paper, a procedure is introduced that selects from a data set the training sample whose omission will introduce the smallest approximation error. It is shown that this pruning scheme outperforms the standard one.
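    The difference between the two pruning criteria can be made concrete with a small sketch (the toy data, kernel and parameters are assumptions): an LSSVM is trained by solving its linear system; the standard scheme prunes the sample with the smallest |α| (i.e., the smallest previous-pass error), while the proposed scheme retrains once per candidate and prunes the sample whose removal actually yields the smallest approximation error.

```python
import numpy as np

def kernel(a, b, sigma=0.5):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

def train(x, y, gamma=100.0):
    """Solve the LSSVM linear system for the bias b and multipliers alpha."""
    n = len(x)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = kernel(x, x) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]

def predict(xq, x, b, alpha):
    return kernel(xq, x) @ alpha + b

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 4, 25))
y = np.sin(x)

def error_without(i):
    """Full-set RMS error after retraining with sample i omitted."""
    keep = np.delete(np.arange(len(x)), i)
    b, alpha = train(x[keep], y[keep])
    return np.sqrt(np.mean((predict(x, x[keep], b, alpha) - y) ** 2))

b, alpha = train(x, y)
i_std = int(np.argmin(np.abs(alpha)))                        # standard criterion
i_new = int(np.argmin([error_without(i) for i in range(len(x))]))
```

    By construction the selected sample `i_new` never removes more accuracy than the standard choice; the cost is one extra retraining per candidate in every pruning pass.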

    Allowed sloppy magnet placement due to learning control


    Comparison of four support-vector based function approximators

    One of the uses of the support vector machine (SVM), as introduced in V.N. Vapnik (2000), is as a function approximator. The SVM, and approximators based on it, approximate a relation in data by interpolating between so-called support vectors: a limited number of samples selected from the data. Several support-vector based function approximators are compared in this research. The comparison focuses on the following subjects: (i) how many support vectors are involved in achieving a certain approximation accuracy, (ii) how well noisy training samples are handled, and (iii) how ambiguous training data is dealt with. The comparison shows that the so-called key sample machine (KSM) outperforms the other schemes, specifically on aspects (i) and (ii). The distinctive features that explain this are the quadratic cost function and the use of all the training data to train the limited set of parameters.

    Support-vector-based least squares for learning non-linear dynamics

    A function approximator is introduced that is based on least squares support vector machines (LSSVM) and on least squares (LS). The potential indicators for the LS method are chosen as the kernel functions of all the training samples, similar to LSSVM. By selecting these as indicator functions, the indicators for LS can be interpreted in a support vector machine setting and the curse of dimensionality can be circumvented. The indicators are included by a forward selection scheme, which keeps the computational load of the training phase small. As long as the function is not yet approximated well enough, and the approximator is not overfitting the data, a new indicator is included. To test the approximator, the inverse non-linear dynamics of a linear motor are learnt. This is done by including the approximator as the learning mechanism in a learning feedforward controller.
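    The forward selection scheme can be sketched as follows (a hypothetical illustration with assumed data, kernel and stopping threshold, not the paper's exact algorithm): the candidate regressors are kernel functions centred on all training samples, and indicators are added one at a time, each time keeping the candidate that most reduces the least-squares residual, until the approximation is good enough.

```python
import numpy as np

def kernel(a, b, sigma=0.4):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 2 * np.pi, 60))
y = np.sin(x)

K = kernel(x, x)                  # every training sample defines a candidate
selected = []
residual = y.copy()
while len(selected) < len(x):
    # try each unselected candidate, keep the one with the smallest residual
    best_i, best_res = None, None
    for i in set(range(len(x))) - set(selected):
        cols = K[:, selected + [i]]
        w, *_ = np.linalg.lstsq(cols, y, rcond=None)
        res = y - cols @ w
        if best_res is None or res @ res < best_res @ best_res:
            best_i, best_res = i, res
    selected.append(best_i)
    residual = best_res
    if np.sqrt(np.mean(residual ** 2)) < 1e-3:   # stop when good enough
        break
```

    The approximation reaches the target accuracy with only a subset of the candidate kernels, which is the sparseness the scheme is after; all training samples still shape the fitted weights through the least-squares step.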