
    Robust online adaptive neural network control for the regulation of treadmill exercises

    The paper proposes a robust online adaptive neural network control scheme for an automated treadmill system. The scheme is based on the Feedback-Error Learning Approach (FELA), which avoids the plant Jacobian calculation problem. A modification of the learning algorithm is proposed to solve the overtraining issue, guaranteeing system stability and convergence. Because an adaptive neural network controller can adapt itself to deal with system uncertainties and external disturbances, this scheme is well suited to treadmill exercise regulation when the model of the exerciser is unknown or inaccurate. In this study, exercise intensity (measured by heart rate) is regulated by simultaneously manipulating both treadmill speed and gradient to achieve fast tracking, for which a single-input multi-output (SIMO) adaptive neural network controller has been designed. Real-time experimental results confirm that robust performance for a nonlinear multivariable system under model uncertainties and unknown external disturbances can indeed be achieved. © 2011 IEEE
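    The core of feedback-error learning is that the feedback controller's output, rather than a plant Jacobian, serves as the training error for the network. A minimal sketch of this idea, using a hypothetical first-order plant and a linear "network" (the plant, gains, and features are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical first-order plant standing in for the exerciser dynamics.
def plant(y, u):
    return 0.9 * y + 0.1 * u

def features(r):
    # Simple features of the reference for a linear "network".
    return np.array([r, 1.0])

w = np.zeros(2)          # network weights (learned feedforward controller)
kp = 2.0                 # fixed feedback gain
eta = 0.05               # learning rate

y = 0.0
errors = []
for step in range(2000):
    r = 1.0                      # constant reference (e.g. target heart rate)
    e = r - y
    u_fb = kp * e                # feedback controller output
    phi = features(r)
    u_ff = w @ phi               # learned feedforward term
    # Feedback-error learning: the feedback signal itself is the training
    # error for the network, so no plant Jacobian is ever computed.
    w += eta * u_fb * phi
    y = plant(y, u_fb + u_ff)
    errors.append(abs(e))
```

    As the network absorbs the steady-state control effort, the feedback signal (and hence the tracking error) is driven toward zero.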

    Learning Lipschitz Feedback Policies from Expert Demonstrations: Closed-Loop Guarantees, Generalization and Robustness

    In this work, we propose a framework to learn feedback control policies with guarantees on closed-loop generalization and adversarial robustness. These policies are learned directly from expert demonstrations, contained in a dataset of state-control input pairs, without any prior knowledge of the task or the system model. We use a Lipschitz-constrained loss minimization scheme to learn feedback policies with certified closed-loop robustness, wherein the Lipschitz constraint serves as a mechanism to tune the generalization performance and the robustness to adversarial disturbances. Our analysis exploits the Lipschitz property to obtain closed-loop guarantees on the generalization and robustness of the learned policies. In particular, we derive a finite-sample bound on the policy learning error and establish robust closed-loop stability under the learned control policy. We also derive bounds on the closed-loop regret with respect to the expert policy and on the deterioration of closed-loop performance under bounded (adversarial) disturbances to the state measurements. Numerical results validate our analysis and demonstrate the effectiveness of our robust feedback policy learning framework. Finally, our results suggest a potential tradeoff between nominal closed-loop performance and adversarial robustness: improvements in nominal closed-loop performance can only be made at the expense of robustness to adversarial perturbations.
    Comment: Submitted to the IEEE Open Journal of Control Systems
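    A Lipschitz constraint of this kind can be enforced, for instance, by projecting the policy parameters onto a norm ball during training. A minimal sketch for a linear policy learned from synthetic expert demonstrations (the expert, the data, and all constants are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical expert: a linear state-feedback policy u = K_expert x.
K_expert = np.array([[1.5, 0.8]])
X = rng.normal(size=(200, 2))        # sampled states
U = X @ K_expert.T                    # expert control inputs

L = 1.0                               # Lipschitz budget (the tuning knob)
K = np.zeros((1, 2))
lr = 0.05
for _ in range(500):
    # Gradient of the mean-squared imitation loss.
    grad = (X @ K.T - U).T @ X / len(X)
    K -= lr * grad
    # Projection step: clip the spectral norm of K to the Lipschitz budget,
    # so the learned policy x -> K x is L-Lipschitz by construction.
    s = np.linalg.norm(K, 2)
    if s > L:
        K *= L / s
```

    Shrinking L trades imitation accuracy (here the expert's own gain exceeds the budget) for a certified bound on how much the control input can change under a perturbation of the measured state.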

    Phase correction for Learning Feedforward Control

    Intelligent mechatronics makes it possible to compensate for effects that are difficult to counter by construction or by linear control, by including some intelligence in the system. The compensation of state-dependent effects, e.g. friction, cogging and mass deviation, can be realised by learning feedforward control (LFFC). This method identifies these disturbing effects as a function of their states and compensates for them before they introduce an error. Because the effects are learnt as a function of their states, the method can also be used for non-repetitive motions. The learning of state-dependent effects relies on the update signal that is used. In previous work, the feedback control signal was used as an error measure between the approximation and the true state-dependent effect. If the effects introduce a signal that contains frequencies near the bandwidth, the phase shift between this signal and the feedback signal may seriously degrade the performance of the approximation. The use of phase correction overcomes this problem. This is validated by a set of simulations and experiments that show the necessity of the phase-corrected scheme.

    Relaxing Fundamental Assumptions in Iterative Learning Control

    Iterative learning control (ILC) is perhaps best described as an open-loop feedforward control technique where the feedforward signal is learned through repetition of a single task. As the name suggests, given a dynamic system operating on a finite time horizon with the same desired trajectory, ILC aims to iteratively construct the inverse image (or its approximation) of the desired trajectory to improve transient tracking. In the literature, ILC is often interpreted as feedback control in the iteration domain, since learning controllers use information from past trials to drive the tracking error towards zero. However, despite the significant body of literature and powerful features, ILC is yet to reach widespread adoption by the control community, due to several assumptions that restrict its generality when compared to feedback control. In this dissertation, we relax some of these assumptions, mainly the fundamental invariance assumption, and move from the idea of learning through repetition to two-dimensional systems, specifically repetitive processes, which appear in the modeling of engineering applications such as additive manufacturing; we also sketch out future research directions for increased practicality. We develop an L1 adaptive feedback control based ILC architecture for increased robustness, fast convergence, and high performance under time-varying uncertainties and disturbances. Simulation studies of the behavior of this combined L1-ILC scheme under iteration-varying uncertainties lead us to the robust stability analysis of iteration-varying systems, where we show that these systems are guaranteed to be stable when the ILC update laws are designed to be robust, which can be done using existing methods from the literature.
    As a next step to the signal-space approach adopted in the analysis of iteration-varying systems, we shift the focus of our work to repetitive processes, and show that the exponential stability of a nonlinear repetitive system is equivalent to that of its linearization, and consequently to the uniform stability of the corresponding state space matrix.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/133232/1/altin_1.pd
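    The basic ILC mechanism the dissertation builds on can be sketched in a few lines: a fixed desired trajectory is tracked over repeated finite-horizon trials, and the feedforward input is updated from the previous trial's error. The plant, learning gain, and horizon below are illustrative assumptions, chosen so that a simple P-type update converges:

```python
import numpy as np

# Hypothetical stable first-order plant, run over a finite horizon each trial.
a, b, N = 0.8, 1.0, 50
t = np.arange(N)
y_d = np.sin(2 * np.pi * t / N)      # same desired trajectory every trial

def run_trial(u):
    y, x = np.zeros(N), 0.0
    for k in range(N):
        x = a * x + b * u[k]          # u[k] acts on y[k] in this discretization
        y[k] = x
    return y

u = np.zeros(N)
gamma = 0.3                           # learning gain, small enough for convergence
err = []
for trial in range(30):
    e = y_d - run_trial(u)
    err.append(np.linalg.norm(e))
    # P-type ILC: reuse the whole error profile from the last trial
    # to correct the feedforward input for the next one.
    u = u + gamma * e
```

    The invariance assumption is visible here: the plant, horizon, and y_d must be identical across trials, which is precisely what the dissertation relaxes.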

    Optimal Spectrum Access for Cognitive Radios

    In this paper, we investigate a time-slotted cognitive radio setting with buffered primary and secondary users. To alleviate the negative effects of the misdetection and false alarm probabilities, a novel design of the spectrum access mechanism is proposed. We propose two schemes. In the first, the secondary user (SU) senses the primary channel to exploit the periods of silence; if the primary user (PU) is declared to be idle, the SU randomly accesses the channel with some access probability a_s. In the second, in addition to accessing the channel when the PU is idle, the SU may also access the channel when it is declared to be busy, with some access probability b_s. The access probabilities, as functions of the misdetection probability, the false alarm probability and the average primary arrival rate, are obtained by solving an optimization problem designed to maximize the secondary service rate under a constraint on primary queue stability. In addition, we propose a variable sensing duration scheme, where the SU optimizes over the sensing time to achieve the maximum stable throughput of the network. The results reveal the performance gains of the proposed schemes over the conventional sensing scheme. We also propose a method to estimate the mean arrival rate and the outage probability of the PU based on the primary feedback channel, i.e., acknowledgment (ACK) and negative-acknowledgment (NACK) messages.
    Comment: arXiv admin note: substantial text overlap with arXiv:1206.615
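    The two-probability access rule can be illustrated with a short Monte Carlo sketch: in each slot the SU senses the channel imperfectly and transmits with probability a_s or b_s depending on the sensing outcome. All numerical values here are illustrative assumptions, not the optimized probabilities from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameters: sensing error and access probabilities.
p_md, p_fa = 0.1, 0.1     # misdetection / false-alarm probabilities
a_s, b_s = 0.9, 0.2       # SU access prob. when PU sensed idle / sensed busy
lam_p = 0.3               # probability the PU is active in a given slot

slots = 100_000
su_success = 0
for _ in range(slots):
    pu_active = rng.random() < lam_p
    # Imperfect sensing: an active PU may be missed (misdetection),
    # an idle PU may be declared busy (false alarm).
    sensed_busy = (rng.random() > p_md) if pu_active else (rng.random() < p_fa)
    p_access = b_s if sensed_busy else a_s
    su_tx = rng.random() < p_access
    # The SU's transmission succeeds only in a slot where the PU is silent.
    if su_tx and not pu_active:
        su_success += 1

su_rate = su_success / slots
```

    Setting b_s > 0 lets the SU recover throughput lost to false alarms, at the cost of occasional collisions with the PU after a misdetection, which is why the paper optimizes both probabilities under a primary-queue stability constraint.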