
    A Parallel Dual Fast Gradient Method for MPC Applications

    We propose a parallel adaptive constraint-tightening approach to solve a linear model predictive control problem for discrete-time systems, based on inexact numerical optimization algorithms and operator splitting methods. The underlying algorithm first splits the original problem into as many independent subproblems as the length of the prediction horizon. Then, our algorithm solves these subproblems in parallel, exploiting auxiliary tightened subproblems in order to certify the control law in terms of suboptimality and recursive feasibility, along with closed-loop stability of the controlled system. Compared to prior approaches based on constraint tightening, our algorithm computes the tightening parameter for each subproblem to handle the propagation of errors introduced by the parallelization of the original problem. Our simulations show the computational benefits of the parallelization, with positive impacts on performance and numerical conditioning compared with a recent nonparallel adaptive tightening scheme.
    Comment: This technical report is an extended version of the paper "A Parallel Dual Fast Gradient Method for MPC Applications" by the same authors, submitted to the 54th IEEE Conference on Decision and Control.
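
    The core mechanism, a fast (Nesterov-accelerated) gradient method applied to the dual of a problem split into independent subproblems, can be illustrated on a toy instance. The sketch below is not the paper's algorithm: it uses two hypothetical scalar subproblems f_i(x) = 0.5*a_i*(x - c_i)^2 coupled by a consistency constraint x1 = x2, standing in for the horizon-wise splitting; all data values are made up.

```python
import math

def solve_subproblem(a, c, sign_lam, lam):
    # Each subproblem min_x 0.5*a*(x - c)^2 + sign_lam*lam*x has a
    # closed-form minimizer; in the MPC setting these solves are
    # independent and can run in parallel across the horizon.
    return c - sign_lam * lam / a

def dual_fast_gradient(a1=1.0, c1=0.0, a2=1.0, c2=4.0, iters=50):
    """Toy splitting: min f1(x1) + f2(x2) s.t. x1 = x2, with
    f_i(x) = 0.5*a_i*(x - c_i)^2 (hypothetical data). Nesterov's
    fast gradient method is run on the dual multiplier lam."""
    L = 1.0 / a1 + 1.0 / a2   # Lipschitz constant of the dual gradient
    lam, y, t = 0.0, 0.0, 1.0
    for _ in range(iters):
        # Parallel subproblem solves at the extrapolated multiplier y.
        x1 = solve_subproblem(a1, c1, +1, y)
        x2 = solve_subproblem(a2, c2, -1, y)
        lam_next = y + (x1 - x2) / L   # dual gradient ascent step
        t_next = 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * t * t))
        y = lam_next + (t - 1.0) / t_next * (lam_next - lam)  # momentum
        lam, t = lam_next, t_next
    x1 = solve_subproblem(a1, c1, +1, lam)
    x2 = solve_subproblem(a2, c2, -1, lam)
    return x1, x2, lam
```

    At the optimum the two subproblem solutions agree (x1 = x2 = 2 for the default data), which is the consistency the multiplier enforces; the constraint tightening in the paper addresses the fact that in practice these dual iterations are stopped early, leaving residual disagreement.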

    Lattice Quantum Gravity and Asymptotic Safety

    We study the nonperturbative formulation of quantum gravity defined via Euclidean dynamical triangulations (EDT) in an attempt to make contact with Weinberg's asymptotic safety scenario. We find that a fine-tuning is necessary in order to recover semiclassical behavior. Such a fine-tuning is generally associated with the breaking of a target symmetry by the lattice regulator; in this case we argue that the target symmetry is the general coordinate invariance of the theory. After introducing and fine-tuning a nontrivial local measure term, we find no barrier to taking a continuum limit, and we find evidence that four-dimensional, semiclassical geometries are recovered at long distance scales in the continuum limit. We also find that the spectral dimension at short distance scales is consistent with 3/2, a value that could resolve the tension between asymptotic safety and the holographic entropy scaling of black holes. We argue that the number of relevant couplings in the continuum theory is one, once symmetry breaking by the lattice regulator is accounted for. Such a theory is maximally predictive, with no adjustable parameters. The cosmological constant in Planck units is the only relevant parameter, which serves to set the lattice scale. The cosmological constant in Planck units is of order 1 in the ultraviolet and undergoes renormalization group running to small values in the infrared. If these findings hold up under further scrutiny, the lattice may provide a nonperturbative definition of a renormalizable quantum field theory of general relativity with no adjustable parameters and a cosmological constant that is naturally small in the infrared.
    Comment: 69 pages, 25 figures. Revised discussion of target symmetry throughout paper. Numerical results unchanged and main conclusions largely unchanged. Added references and corrected typos. Conforms with version published in Phys. Rev.
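
    The spectral dimension quoted above is defined through the scaling of the return probability of a diffusion process, d_s = -2 d ln P(sigma) / d ln sigma. The sketch below illustrates the measurement on an ordinary 2D periodic lattice, where the answer should be d_s = 2; it uses a lazy random walk to avoid the even/odd parity of the simple walk. Lattice size and diffusion times are arbitrary illustrative choices, not the EDT ensembles of the paper.

```python
import math

def lazy_walk_return_prob(size=41, steps=20):
    """Evolve the exact occupation probabilities of a lazy random walk
    (stay put with prob 1/2, else hop to one of 4 neighbours) on a
    size x size periodic lattice; record p(origin) after each step."""
    p = [[0.0] * size for _ in range(size)]
    p[0][0] = 1.0
    returns = []
    for _ in range(steps):
        q = [[0.0] * size for _ in range(size)]
        for i in range(size):
            for j in range(size):
                w = p[i][j]
                if w == 0.0:
                    continue
                q[i][j] += 0.5 * w
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    q[(i + di) % size][(j + dj) % size] += 0.125 * w
        p = q
        returns.append(p[0][0])
    return returns

def spectral_dimension(s1=10, s2=20):
    r = lazy_walk_return_prob(steps=s2)
    # d_s = -2 * d ln P / d ln sigma, via a finite difference in log-log.
    return -2.0 * (math.log(r[s2 - 1]) - math.log(r[s1 - 1])) \
           / (math.log(s2) - math.log(s1))
```

    On EDT geometries the same estimator, applied at short diffusion times, is what yields the value near 3/2 cited in the abstract.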

    Sampling from a system-theoretic viewpoint

    This paper studies a system-theoretic approach to the problem of reconstructing an analog signal from its samples. The idea, borrowed from earlier treatments in the control literature, is to address the problem as a hybrid model-matching problem in which performance is measured by system norms.

    The paper is split into three parts. In Part I we present the paradigm and revise the lifting technique, which is our main technical tool. In Part II optimal samplers and holds are designed for various analog signal reconstruction problems. In some cases one component is fixed while the remaining ones are designed; in other cases all three components are designed simultaneously. No causality requirements are imposed in Part II, which allows the use of frequency-domain arguments, in particular the lifted frequency response introduced in Part I. In Part III the main emphasis is placed on a systematic incorporation of causality constraints into the optimal design of reconstructors. We consider reconstruction problems in which the sampling (acquisition) device is given and the performance is measured by the L^2-norm of the reconstruction error. The problem is solved under the constraint that the optimal reconstructor is l-causal for a given l ≥ 0, i.e., that its impulse response is zero on the time interval (-∞, -lh), where h is the sampling period. We derive a closed-form state-space solution of the problem, based on the spectral factorization of a rational transfer function.
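
    The notion of an l-causal reconstructor can be made concrete with the two simplest holds: a zero-order hold uses no future samples (l = 0), while linear interpolation needs one sample ahead (l = 1) and is rewarded with a smaller L^2 reconstruction error. A minimal numerical sketch, where the test signal, sampling period, and integration grid are arbitrary choices rather than anything from the paper:

```python
import math

def reconstruction_errors(h=0.1, dt=0.001, T=1.0):
    """Sample f(t) = sin(2*pi*t) with period h and compare the L2 error
    of a zero-order hold (0-causal) against linear interpolation
    (1-causal: it looks one sample, i.e. h seconds, into the future)."""
    f = lambda t: math.sin(2.0 * math.pi * t)
    n = int(round(T / h))
    samples = [f(k * h) for k in range(n + 1)]
    err_zoh, err_lin = 0.0, 0.0
    for m in range(int(round(T / dt))):
        t = m * dt
        k = min(int(t / h), n - 1)          # index of the last sample
        frac = (t - k * h) / h              # position inside the interval
        zoh = samples[k]                    # hold the most recent sample
        lin = samples[k] + frac * (samples[k + 1] - samples[k])
        err_zoh += dt * (f(t) - zoh) ** 2   # Riemann sum for the L2 norm
        err_lin += dt * (f(t) - lin) ** 2
    return math.sqrt(err_zoh), math.sqrt(err_lin)
```

    The gap between the two errors is the price of causality that Part III quantifies systematically, for optimal reconstructors rather than these two fixed holds.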

    Jet array impingement flow distributions and heat transfer characteristics. Effects of initial crossflow and nonuniform array geometry

    Two-dimensional arrays of circular air jets impinging on a heat transfer surface parallel to the jet orifice plate are considered. The jet flow, after impingement, is constrained to exit in a single direction along the channel formed by the jet orifice plate and the heat transfer surface. The configurations considered are intended to model those of interest in current and contemplated gas turbine airfoil midchord cooling applications. The effects of an initial crossflow which approaches the array through an upstream extension of the channel are considered. Flow distributions as well as heat transfer coefficients and adiabatic wall temperatures resolved to one streamwise hole spacing were measured as a function of the initial crossflow rate and temperature relative to the jet flow rate and temperature. Both Nusselt number profiles and dimensionless adiabatic wall temperature (effectiveness) profiles are presented and discussed. Special test results which show a significant reduction of jet orifice discharge coefficients owing to the effect of a confined crossflow are also presented, along with a flow distribution model which incorporates those effects. A nonuniform array flow distribution model is developed and validated.
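
    The structure of such a flow distribution model can be sketched as a row-by-row mass balance: each row's jet flow follows an orifice equation, the spent jets accumulate into channel crossflow, and the discharge coefficient degrades as the crossflow-to-jet ratio grows. The orifice area, pressure drop, and the degradation law Cd = Cd0/(1 + k*Gc/Gj) below are all hypothetical illustrations, not the paper's measured correlation.

```python
import math

def jet_array_crossflow(n_rows=10, area=1e-4, dp=2000.0, rho=1.2,
                        cd0=0.80, k=0.15, g_init=0.0):
    """Hypothetical successive mass-balance model for a jet array with
    a confined crossflow. Returns (jet flow, approaching crossflow,
    discharge coefficient) for each streamwise row."""
    gc = g_init                       # cumulative crossflow mass rate [kg/s]
    rows = []
    for _ in range(n_rows):
        gj = cd0 * area * math.sqrt(2.0 * rho * dp)   # initial guess
        for _ in range(20):           # fixed point: Cd depends on Gc/Gj
            ratio = gc / gj
            cd = cd0 / (1.0 + k * ratio)   # assumed Cd degradation law
            gj = cd * area * math.sqrt(2.0 * rho * dp)
        rows.append((gj, gc, cd))
        gc += gj                      # spent jet joins the channel crossflow
    return rows
```

    Even this crude sketch reproduces the qualitative trend in the abstract: downstream rows see ever more crossflow and therefore lower effective discharge coefficients.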

    Model predictive emissions control of a diesel engine airpath: Design and experimental evaluation

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/163480/2/rnc5188.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/163480/1/rnc5188_am.pd

    Whither discrete time model predictive control?

    This note proposes an efficient computational procedure for the continuous time, input constrained, infinite horizon, linear quadratic regulator problem (CLQR). To ensure satisfaction of the constraints, the input is approximated as a piecewise linear function on a finite time discretization. The solution of this approximate problem is a standard quadratic program. A novel lower bound on the infinite dimensional CLQR problem is developed, and the discretization is adaptively refined until a user supplied error tolerance on the CLQR cost is achieved. The offline storage of the required quadrature matrices at several levels of discretization tailors the method for online use as required in model predictive control (MPC). The performance of the proposed algorithm is then compared with the standard discrete time MPC algorithms. The proposed method is shown to be significantly more efficient than standard discrete time MPC that uses a sample time short enough to generate a cost close to the CLQR solution.

    Source Coding Optimization for Distributed Average Consensus

    Consensus is a common method for computing a function of the data distributed among the nodes of a network. Of particular interest is distributed average consensus, whereby the nodes iteratively compute the sample average of the data stored at all the nodes of the network using only near-neighbor communications. In real-world scenarios, these communications must undergo quantization, which introduces distortion to the internode messages. In this thesis, a model for the evolution of the network state statistics at each iteration is developed under the assumptions of Gaussian data and additive quantization error. It is shown that minimization of the communication load in terms of aggregate source coding rate can be posed as a generalized geometric program, for which an equivalent convex optimization can efficiently solve for the global minimum. Optimization procedures are developed for rate-distortion-optimal vector quantization, uniform entropy-coded scalar quantization, and fixed-rate uniform quantization. Numerical results demonstrate the performance of these approaches. For small numbers of iterations, the fixed-rate optimizations are verified using exhaustive search. Comparison to the prior art suggests competitive performance under certain circumstances but strongly motivates the incorporation of more sophisticated coding strategies, such as differential, predictive, or Wyner-Ziv coding.
    Comment: Master's Thesis, Electrical Engineering, North Carolina State University.
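
    The underlying iteration is easy to reproduce: each node repeatedly averages quantized versions of its neighbors' states. A minimal sketch on a 6-node ring with symmetric Metropolis weights and uniform fixed-step quantization; the network, step size delta, and iteration count are illustrative choices (the thesis optimizes the coding rates rather than fixing a quantizer).

```python
def quantized_consensus(x0, delta=0.001, iters=300):
    """Average consensus on a ring: each node exchanges uniformly
    quantized states with its two neighbours. Symmetric weights keep
    the node sum, hence the average, invariant under the update."""
    n = len(x0)
    w = 1.0 / 3.0   # Metropolis weight for a ring (every degree is 2)
    quant = lambda v: round(v / delta) * delta   # uniform quantizer
    x = list(x0)
    for _ in range(iters):
        q = [quant(v) for v in x]
        x = [x[i] + w * ((q[(i - 1) % n] - q[i]) + (q[(i + 1) % n] - q[i]))
             for i in range(n)]
    return x
```

    The nodes contract to within a quantization-limited neighborhood of the true average rather than to exact consensus, which is precisely the distortion effect the thesis models and whose communication cost it minimizes.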