
    Stabilising Model Predictive Control for Discrete-time Fractional-order Systems

    In this paper we propose a model predictive control scheme for constrained fractional-order discrete-time systems. We prove that all constraints are satisfied at all time instants and we prescribe conditions for the origin to be an asymptotically stable equilibrium point of the controlled system. We employ a finite-dimensional approximation of the original infinite-dimensional dynamics, for which the approximation error can be made arbitrarily small. We use the approximate dynamics to design a tube-based model predictive controller which steers the system state to a neighbourhood of the origin whose size can be controlled. We finally derive stability conditions for the MPC-controlled system which are computationally tractable and account for the infinite-dimensional nature of the fractional-order system and for the state and input constraints. The proposed control methodology guarantees asymptotic stability of the discrete-time fractional-order system, satisfaction of the prescribed constraints, and recursive feasibility.
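    As a hedged illustration of the finite-dimensional approximation the abstract refers to, the sketch below truncates the Grünwald-Letnikov fractional difference to a fixed memory length and simulates the resulting dynamics under a saturated linear state feedback standing in for the tube-based MPC. The matrices A and B, the order alpha, the gain K, and the memory length L are illustrative assumptions, not values from the paper.

        # Truncated Grunwald-Letnikov model of a fractional-order discrete-time system
        # (a sketch under assumed parameters, not the paper's controller).
        import numpy as np

        def gl_coefficients(alpha, L):
            """Weights w_j = (-1)^j * binom(alpha, j) for j = 0..L."""
            w = np.empty(L + 1)
            w[0] = 1.0
            for j in range(1, L + 1):
                w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
            return w

        def step(x_hist, u, A, B, w):
            """x_{k+1} = A x_k + B u_k - sum_{j>=1} w_j x_{k+1-j}, memory truncated to L.
            x_hist holds past states, most recent first."""
            x_next = A @ x_hist[0] + B @ u
            for j in range(1, min(len(w) - 1, len(x_hist)) + 1):
                x_next -= w[j] * x_hist[j - 1]
            return x_next

        alpha, L = 0.7, 50                              # assumed order and memory length
        A = np.array([[0.9, 0.2], [0.0, 0.8]])
        B = np.array([[0.0], [1.0]])
        K = np.array([[0.1, 0.4]])                      # placeholder feedback gain
        w = gl_coefficients(alpha, L)

        x_hist = [np.array([1.0, -0.5])]                # initial state
        for k in range(100):
            u = np.clip(-K @ x_hist[0], -1.0, 1.0)      # input constraint |u| <= 1
            x_hist.insert(0, step(x_hist, u, A, B, w))
            x_hist = x_hist[:L]                         # keep only the truncated memory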

    A survey on fractional order control techniques for unmanned aerial and ground vehicles

    In recent years, numerous applications of fractional calculus to the modeling and control of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) have been realized. The extra fractional-order derivative terms allow the performance of these systems to be optimized. The review presented in this paper focuses on the control problems of UAVs and UGVs that have been addressed by fractional-order techniques over the last decade.
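    As a concrete, hedged example of the kind of controller such surveys cover, the sketch below implements a discrete fractional-order PI^lambda D^mu controller in which both the fractional integral and the fractional derivative of the tracking error are approximated by truncated Grünwald-Letnikov sums. The gains, orders, sampling time, and memory length are assumptions chosen only for illustration.

        # Discrete fractional-order PID sketch (assumed gains/orders, not from the survey).
        import numpy as np

        def gl_weights(order, L):
            """w_j = (-1)^j * binom(order, j); order > 0 differentiates, order < 0 integrates."""
            w = np.empty(L + 1)
            w[0] = 1.0
            for j in range(1, L + 1):
                w[j] = w[j - 1] * (1.0 - (order + 1.0) / j)
            return w

        class FractionalPID:
            def __init__(self, Kp, Ki, Kd, lam, mu, h, L=200):
                self.Kp, self.Ki, self.Kd = Kp, Ki, Kd
                self.lam, self.mu, self.h = lam, mu, h
                self.wi = gl_weights(-lam, L)    # weights of the order-lambda integral
                self.wd = gl_weights(mu, L)      # weights of the order-mu derivative
                self.e = []                      # error history, newest first

            def update(self, error):
                self.e.insert(0, error)
                self.e = self.e[:len(self.wi)]   # truncate the memory
                e = np.array(self.e)
                integral = self.h ** self.lam * float(self.wi[:len(e)] @ e)
                derivative = self.h ** (-self.mu) * float(self.wd[:len(e)] @ e)
                return self.Kp * error + self.Ki * integral + self.Kd * derivative

        ctrl = FractionalPID(Kp=1.2, Ki=0.8, Kd=0.3, lam=0.9, mu=0.7, h=0.01)
        u = ctrl.update(0.5)                     # control action for an error sample of 0.5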

    From Nonlinear Identification to Linear Parameter Varying Models: Benchmark Examples

    Linear parameter-varying (LPV) models form a powerful model class for analyzing and controlling a (nonlinear) system of interest. Identifying an LPV model of a nonlinear system can be challenging due to the difficulty of selecting the scheduling variable(s) a priori, especially when a first-principles understanding of the system is unavailable. This paper presents a systematic LPV embedding approach starting from nonlinear fractional representation models. A nonlinear system is first identified using a nonlinear block-oriented linear fractional representation (LFR) model. This nonlinear LFR model class is then embedded into the LPV model class by factorizing the static nonlinear block present in the model. The factorization yields an LPV-LFR or an LPV state-space model with affine dependency on the scheduling variable. This approach facilitates the selection of the scheduling variable from a data-driven perspective. Furthermore, the estimation is not affected by measurement noise on the scheduling variables, an issue that is often left untreated by LPV model identification methods. The proposed approach is illustrated on two well-established nonlinear modeling benchmark examples.
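    A minimal sketch of the factorization idea may help: a static nonlinearity f in the feedback path of a block-oriented LFR model is rewritten as f(z) = phi(z) * z, and phi(z) then serves as the (data-driven) scheduling signal of an affinely scheduled LPV model. The matrices and the choice f = tanh below are illustrative assumptions, not the benchmark systems used in the paper.

        # Embedding a nonlinear LFR model into an affine LPV model by factorization.
        import numpy as np

        # nonlinear LFR model: x+ = A x + B u + E w,  z = Cz x,  w = f(z)
        A  = np.array([[0.8, 0.1], [0.0, 0.7]])
        B  = np.array([[0.0], [1.0]])
        E  = np.array([[0.5], [0.2]])
        Cz = np.array([[1.0, 0.0]])

        f   = lambda z: np.tanh(z)                                  # static nonlinear block
        phi = lambda z: np.tanh(z) / z if abs(z) > 1e-9 else 1.0    # f(z)/z, with phi(0) = f'(0)

        def step_nonlinear(x, u):
            z = float(Cz @ x)
            return A @ x + B @ np.atleast_1d(u) + E.flatten() * f(z)

        def step_lpv(x, u):
            p = phi(float(Cz @ x))                  # scheduling variable, known from data
            # affine dependence on p: x+ = (A + p * E * Cz) x + B u
            return (A + p * E @ Cz) @ x + B @ np.atleast_1d(u)

        x_nl = x_lpv = np.array([0.5, -0.2])
        for k in range(50):
            u = np.sin(0.1 * k)
            x_nl, x_lpv = step_nonlinear(x_nl, u), step_lpv(x_lpv, u)
        print(np.allclose(x_nl, x_lpv))             # the LPV embedding reproduces the nonlinear response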

    Fractionally Predictive Spiking Neurons

    Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. Here, we show that the actual neural spike-train itself can be considered as the fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. Empirically, we find that the online approximation of signals with a sum of power-law kernels is beneficial for encoding signals with slowly varying components, like long-memory self-similar signals. For such signals, the online power-law kernel approximation typically required less than half the number of spikes to reach a similar SNR, compared with sums of similar but exponentially decaying kernels. As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spike-trains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel.
    Comment: 13 pages, 5 figures, in Advances in Neural Information Processing 201
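    Two ingredients from the abstract can be sketched in a few lines: a power-law kernel approximated by a weighted sum of exponentials (fitted by least squares over log-spaced time constants), and a simplified thresholding encoder that emits a spike whenever the signal exceeds the running sum of kernels placed at past spike times. The exponent, offset, threshold, and test signal are assumptions for illustration, and the refractory mechanism of the paper is omitted.

        # Power-law kernel via a sum of exponentials, plus a greedy threshold encoder
        # (a simplified sketch with assumed parameters, not the paper's neuron model).
        import numpy as np

        dt, T = 1e-3, 4.0
        t = np.arange(1, int(T / dt)) * dt

        # power-law kernel k(t) ~ (t + t0)^(-beta) and its sum-of-exponentials fit
        beta, t0 = 0.5, 1e-2
        kernel = (t + t0) ** (-beta)
        taus = np.logspace(-2, 0, 8)                     # 8 log-spaced time constants
        basis = np.exp(-np.outer(t, 1.0 / taus))         # columns exp(-t / tau_i)
        weights, *_ = np.linalg.lstsq(basis, kernel, rcond=None)
        kernel_hat = basis @ weights                     # exponential approximation of the kernel

        # threshold encoder: spike when signal - reconstruction > theta, then add one kernel
        signal = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t) # slowly varying test signal
        theta, recon, spikes = 0.1, np.zeros_like(signal), []
        for i in range(len(t)):
            if signal[i] - recon[i] > theta:
                spikes.append(i)
                n = len(t) - i
                recon[i:] += theta * kernel_hat[:n] / kernel_hat[0]   # unit-height kernel, scaled
        print(len(spikes), np.mean((signal - recon) ** 2))            # spike count, reconstruction MSE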