230,524 research outputs found

    State-Space Interpretation of Model Predictive Control

    A model predictive control technique based on a step response model is developed using state estimation techniques. The standard step response model is extended so that integrating systems can be treated within the same framework. Based on the modified step response model, it is shown how the state estimation techniques from stochastic optimal control can be used to construct the optimal prediction vector without introducing significant additional numerical complexity. In the case of integrated or double integrated white noise disturbances filtered through general first-order dynamics and white measurement noise, the optimal filter gain is parametrized explicitly in terms of a single parameter between 0 and 1, thus removing the requirement for solving a Riccati equation and equipping the control system with useful on-line tuning parameters. Parallels are drawn to existing MPC techniques such as Dynamic Matrix Control (DMC), Internal Model Control (IMC) and Generalized Predictive Control (GPC).
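The superposition of past input moves against step-response coefficients, the model form that DMC-type predictive controllers build their prediction vector from, can be sketched as follows. The first-order plant, model length, and input sequence are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical first-order plant x(k+1) = a*x(k) + b*u(k), y = x
a, b = 0.8, 0.2
N = 50                        # step-response model length (truncation horizon)

# step-response coefficients: s[m] = output m+1 steps after a unit step input
s = np.array([b * (1 - a**i) / (1 - a) for i in range(1, N + 1)])

# an arbitrary input sequence and its move increments du(k) = u(k) - u(k-1)
u = np.array([0.0, 1.0, 1.0, 0.5, 0.5, 0.5, 1.5, 1.5])
du = np.diff(u, prepend=0.0)

# step-response model: output is the superposition of all past moves,
# y(k) = sum_j s[k-j] * du(j)  -- a truncated convolution
y_model = np.convolve(du, s)[:len(u)]

# direct simulation of the plant for comparison
x, y_sim = 0.0, []
for uk in u:
    x = a * x + b * uk
    y_sim.append(x)

print(np.allclose(y_model, y_sim))   # the two representations agree
```

For a stable plant the truncation at N coefficients is what makes the finite step-response model usable; the paper's extension is precisely about lifting this restriction for integrating systems.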

    Predictive feedback control and Fitts' law

    Fitts’ law is a well-established empirical formula, known for encapsulating the “speed-accuracy trade-off”. For discrete, manual movements from a starting location to a target, Fitts’ law relates movement duration to the distance moved and target size. The widespread empirical success of the formula is suggestive of underlying principles of human movement control. There have been previous attempts to relate Fitts’ law to engineering-type control hypotheses, and it has been shown that the law is exactly consistent with the closed-loop step response of a time-delayed, first-order system. Assuming only the operation of closed-loop feedback, either continuous or intermittent, this paper asks whether such feedback must be predictive to be consistent with Fitts’ law. Since Fitts’ law is equivalent to a time delay separated from a first-order system, known control theory implies that the controller must be predictive. A predictive controller moves the time delay outside the feedback loop such that the closed-loop response can be separated into a time delay and a rational function, whereas a non-predictive controller retains a state delay within the feedback loop, which is not consistent with Fitts’ law. Using sufficient parameters, a high-order non-predictive controller could approximately reproduce Fitts’ law. However, such high-order, “non-parametric” controllers are essentially empirical in nature, without physical meaning, and therefore are conceptually inferior to the predictive controller. It is a new insight that, using closed-loop feedback, prediction is required to physically explain Fitts’ law. The implication is that prediction is an inherent part of the “speed-accuracy trade-off”.

    Bayesian Updating, Model Class Selection and Robust Stochastic Predictions of Structural Response

    A fundamental issue when predicting structural response by using mathematical models is how to treat both modeling and excitation uncertainty. A general framework for this is presented which uses probability as a multi-valued conditional logic for quantitative plausible reasoning in the presence of uncertainty due to incomplete information. The fundamental probability models that represent the structure’s uncertain behavior are specified by the choice of a stochastic system model class: a set of input-output probability models for the structure and a prior probability distribution over this set that quantifies the relative plausibility of each model. A model class can be constructed from a parameterized deterministic structural model by stochastic embedding utilizing Jaynes’ Principle of Maximum Information Entropy. Robust predictive analyses use the entire model class with the probabilistic predictions of each model being weighted by its prior probability, or if structural response data is available, by its posterior probability from Bayes’ Theorem for the model class. Additional robustness to modeling uncertainty comes from combining the robust predictions of each model class in a set of competing candidates weighted by the prior or posterior probability of the model class, the latter being computed from Bayes’ Theorem. This higher-level application of Bayes’ Theorem automatically applies a quantitative Ockham’s razor that penalizes the data-fit of more complex model classes that extract more information from the data. Robust predictive analyses involve integrals over high-dimensional spaces that usually must be evaluated numerically. Published applications have used Laplace’s method of asymptotic approximation or Markov Chain Monte Carlo algorithms.
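The higher-level application of Bayes’ Theorem can be sketched numerically: each model class is weighted by its evidence times its prior, normalized over the competing candidates. The evidence values below are made-up numbers, purely illustrative, not results from the paper:

```python
import numpy as np

# Hypothetical log-evidences log p(D | M_j) for three competing model
# classes; the Ockham penalty is already implicit in these values, since
# the evidence integrates data-fit against model-class complexity.
log_evidence = np.array([-120.3, -118.7, -119.5])
prior = np.array([1/3, 1/3, 1/3])            # equal prior plausibility

# posterior P(M_j | D) ∝ p(D | M_j) * P(M_j); work in log space
# for numerical stability, then normalize
log_post = log_evidence + np.log(prior)
log_post -= log_post.max()
posterior = np.exp(log_post)
posterior /= posterior.sum()

print(posterior)   # robust predictions weight each model class by these values
```

The subtraction of the maximum before exponentiating avoids underflow, which matters in practice because structural-response log-evidences are typically large negative numbers.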

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, which offers progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).

    Predict or classify: The deceptive role of time-locking in brain signal classification

    Several experimental studies claim to be able to predict the outcome of simple decisions from brain signals measured before subjects are aware of their decision. Often, these studies use multivariate pattern recognition methods with the underlying assumption that the ability to classify the brain signal is equivalent to predicting the decision itself. Here we show instead that it is possible to correctly classify a signal even if it does not contain any predictive information about the decision. We first define a simple stochastic model that mimics the random decision process between two equivalent alternatives, and generate a large number of independent trials that contain no choice-predictive information. The trials are first time-locked to the time point of the final event and then classified using standard machine-learning techniques. The resulting classification accuracy is above chance level long before the time point of time-locking. We then analyze the same trials using information theory. We demonstrate that the high classification accuracy is a consequence of time-locking and that its time behavior is simply related to the large relaxation time of the process. We conclude that when time-locking is a crucial step in the analysis of neural activity patterns, both the emergence and the timing of the classification accuracy are affected by structural properties of the network that generates the signal.
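A toy simulation in the spirit of this argument (not the paper’s exact stochastic model): trials from an unbiased random walk are time-locked to the threshold crossing that defines the “decision”, and a trivial sign classifier applied well before the crossing scores above chance, purely because time-locking couples pre-crossing samples to the outcome through the slow relaxation of the process:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(theta=10.0):
    """Unbiased random walk until it crosses +theta or -theta.
    The 'decision' is identified with the crossing itself."""
    x, path = 0.0, [0.0]
    while abs(x) < theta:
        x += rng.normal()
        path.append(x)
    return np.array(path), 1 if x > 0 else 0

lag = 20                      # classify the signal 20 steps before the crossing
correct = total = 0
for _ in range(500):
    path, choice = run_trial()
    if len(path) > lag:
        pred = 1 if path[-1 - lag] > 0 else 0   # naive sign classifier
        correct += (pred == choice)
        total += 1

accuracy = correct / total
print(accuracy)   # well above 0.5 long before the time-locking point
```

No decision has been made yet at the classified time point; the above-chance accuracy is an artifact of aligning trials to the final event, which is the paper’s central caution.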

    Finite-time behavior of inner systems

    In this paper, we investigate how nonminimum phase characteristics of a dynamical system affect its controllability and tracking properties. For the class of linear time-invariant dynamical systems, these characteristics are determined by transmission zeros of the inner factor of the system transfer function. The relation between nonminimum phase zeros and Hankel singular values of inner systems is studied, and it is shown how the singular value structure of a suitably defined operator provides relevant insight about system invertibility and achievable tracking performance. The results are used to solve various tracking problems both on finite as well as on infinite time horizons. A typical receding horizon control scheme is considered and new conditions are derived to guarantee stabilizability of a receding horizon controller.
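Hankel singular values of the kind studied here can be computed from the controllability and observability Gramians P and Q of a state-space realization, as σ_i = √λ_i(PQ). A minimal numerical sketch for a hypothetical stable plant with a nonminimum-phase zero at s = 1 (the inner-factor construction itself is omitted):

```python
import numpy as np

# G(s) = (s - 1) / ((s + 1)(s + 2)) in controllable canonical form
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[-1.0, 1.0]])

def lyap(A, W):
    """Solve A X + X A^T + W = 0 by vectorization (small systems only);
    relies on the symmetry of the solution for symmetric W."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(M, -W.flatten()).reshape(n, n)

P = lyap(A, B @ B.T)          # controllability Gramian:  A P + P A^T + B B^T = 0
Q = lyap(A.T, C.T @ C)        # observability Gramian:    A^T Q + Q A + C^T C = 0

hsv = np.sqrt(np.linalg.eigvals(P @ Q).real)
print(np.sort(hsv)[::-1])     # Hankel singular values, largest first
```

For a true inner (all-pass) transfer function the Hankel singular values are confined to the unit interval, which is what ties them to the nonminimum-phase zeros discussed in the abstract; the plant above is only a generic stable example.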