
    Online constraint removal: Accelerating MPC with a Lyapunov function

    We show how to use a Lyapunov function to accelerate MPC for linear discrete-time systems with linear constraints and quadratic cost. Our method predicts, in the current time step, which constraints will be inactive in the next time step. These constraints can be removed from the online optimization problem of the next time step. The criterion for detecting inactive constraints is based on the decrease of the Lyapunov function along the trajectory of the controlled system. It is simple, easy to implement in existing MPC algorithms, and computationally cheap.
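
    A minimal sketch of the idea described above (not the authors' implementation): it assumes a quadratic Lyapunov function V(x) = x'Px that decreases along the closed loop by at least the current stage cost, and hypothetical per-constraint thresholds sigma[i] below which constraint i is guaranteed to be inactive.

        import numpy as np

        def removable_constraints(x_k, P, stage_cost_k, sigma):
            """Predict which constraints may be dropped from the next QP.

            Sketch only: assumes V(x) = x' P x satisfies
            V(x_{k+1}) <= V(x_k) - stage_cost_k along the closed loop, and
            that sigma[i] is a hypothetical threshold such that
            V(x) < sigma[i] implies constraint i is inactive at x.
            """
            V_next_bound = float(x_k @ P @ x_k) - stage_cost_k  # bound on V(x_{k+1})
            return [i for i, s in enumerate(sigma) if V_next_bound < s]

        # Hypothetical usage: remove the predicted-inactive rows from the stacked
        # inequality constraints A z <= b of the next time step's QP, e.g.
        #   drop = set(removable_constraints(x_k, P, stage_cost_k, sigma))
        #   keep = [i for i in range(A.shape[0]) if i not in drop]
        #   A_red, b_red = A[keep], b[keep]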

    Assessing the speedup achievable by online constraint removal in MPC

    We recently proposed to accelerate online MPC calculations by detecting and removing inactive constraints from the online optimization problems as a function of the current initial state. A number of variants of constraint removal (CR) have been explored, ranging from detecting inactive constraints based on precomputed regions of activity, or approximations thereof, to online methods that require no offline preparation. In typical applications, CR reduces the computing times required to evaluate the model predictive control law by 15% to 90%. Since CR is very easy to implement, requires no assumptions beyond the usual ones for stability, and can be combined with any optimization algorithm, the described acceleration is easy to obtain in practice. Moreover, CR may prove useful when an existing, established MPC implementation needs to be accelerated, e.g., to run it on an embedded processor, but replacing it altogether is not an option.

    Accelerating linear model predictive control by constraint removal

    Model predictive control (MPC) is computationally expensive because it requires solving an optimal control problem in every time step. We show how to reduce the computational cost of linear discrete-time MPC by detecting and removing inactive constraints from the optimal control problem. State-of-the-art MPC implementations detect constraints that are inactive for all times and all initial conditions and remove them from the underlying optimization problem. Our approach, in contrast, detects constraints that become inactive as a function of time. More specifically, we show how to find a bound σ_i^☆ for each constraint i such that a Lyapunov function value below σ_i^☆ implies constraint i is inactive. Since the bounds σ_i^☆ are independent of states and inputs, they can be determined offline. The proposed approach is easy to implement, requires simple and affordable preparatory calculations, and does not depend on the details of the underlying optimization algorithm. We apply it to two sample MPC problems of different sizes. The computational cost is reduced considerably in both cases.
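
    For illustration, a sketch (again not the paper's code) of the online step once the offline bounds have been computed: constraint rows whose bound exceeds the current Lyapunov value are dropped before solving the smaller problem. The condensed-QP form, the function names, and the use of cvxpy are assumptions.

        import cvxpy as cp
        import numpy as np

        def solve_reduced_qp(H, f, G, h, V_x, sigma):
            """Solve the condensed MPC QP min 0.5 z'Hz + f'z s.t. G z <= h,
            keeping only the rows that are not certified inactive.

            Rows i with V_x < sigma[i] are provably inactive (per the offline
            bounds) and are removed before the solve. Sketch under an assumed
            condensed formulation; H must be positive semidefinite.
            """
            keep = [i for i in range(G.shape[0]) if V_x >= sigma[i]]
            z = cp.Variable(H.shape[0])
            objective = cp.Minimize(0.5 * cp.quad_form(z, H) + f @ z)
            constraints = [G[keep] @ z <= h[keep]] if keep else []
            cp.Problem(objective, constraints).solve()
            return z.value, keep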

    On the maximal controller gain in linear MPC

    The paper addresses the computation of Lipschitz constants for model predictive control (MPC) laws. Such Lipschitz constants are useful for assessing the inherent robustness of nominal MPC for disturbed systems. It is shown that a Lipschitz constant can be computed by identifying the maximal controller gain of the MPC law. Given the explicit description of the MPC law, this gain can easily be read off; computing the explicit MPC may, however, be numerically demanding. The goal of the paper is therefore to overestimate the maximal controller gain without using the explicit control law.
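
    To make the link between the Lipschitz constant and the controller gain concrete (a sketch of the underlying observation, not of the paper's method, which avoids the explicit solution): the explicit solution of a linear MPC problem is piecewise affine, u(x) = K_r x + k_r on polyhedral regions r, and for a continuous piecewise-affine law a Lipschitz constant is the largest induced norm among the region gains.

        import numpy as np

        def pwa_lipschitz_constant(region_gains):
            """Lipschitz constant (in the induced 2-norm) of a continuous
            piecewise-affine control law u(x) = K_r x + k_r.

            `region_gains` is an iterable of the gain matrices K_r. For a
            continuous PWA map on a convex domain, the Lipschitz constant is
            the maximum induced norm of the gains of its affine pieces.
            """
            return max(np.linalg.norm(K, 2) for K in region_gains)

    The paper's contribution is precisely to bound this quantity from above without computing the regions and gains K_r in the first place.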

    Polarized tip-enhanced Raman spectroscopy at liquid He temperature in ultrahigh vacuum using an off-axis parabolic mirror

    Tip-enhanced Raman spectroscopy (TERS) combines inelastic light scattering, with spatial resolution well below the diffraction limit down to the nanometer range, with scanning probe microscopy and, possibly, spectroscopy. In this way, topographic and spectroscopic as well as single- and two-particle information may be collected simultaneously. While single molecules can now be studied successfully, bulk solids are still not meaningfully accessible. It is the purpose of the work presented here to outline approaches toward this objective. We describe a home-built, liquid-helium-cooled, ultrahigh-vacuum tip-enhanced Raman spectroscopy system (LHe-UHV-TERS). The setup is based on a scanning tunneling microscope and, as an innovation, an off-axis parabolic mirror having a high numerical aperture of approximately 0.85 and a large working distance. The system is equipped with a fast load-lock chamber, a chamber for the in situ preparation of tips, substrates, and samples, and a TERS chamber. Base pressure and temperature in the TERS chamber were approximately 3×10⁻¹¹ mbar and 15 K, respectively. Polarization-dependent tip-enhanced Raman spectra of the vibrational modes of carbon nanotubes were successfully acquired at cryogenic temperature. Enhancement factors in the range of 10⁷ were observed. The new features described here, including very low pressure and temperature and external access to the light polarizations, and thus the selection rules, may pave the way towards the investigation of bulk and surface materials.

    Evaluating the Suitability of Commercial Clouds for NASA's High Performance Computing Applications: A Trade Study

    NASA's High-End Computing Capability (HECC) Project is periodically asked whether it could be more cost effective through the use of commercial cloud resources. To answer the question, HECC's Application Performance and Productivity (APP) team undertook a performance and cost evaluation comparing three domains: two commercial cloud providers, Amazon and Penguin, and HECC's in-house resources, the Pleiades and Electra systems. In the study, the APP team used a combination of the NAS Parallel Benchmarks (NPB) and six full applications from NASA's workload on Pleiades and Electra to compare the performance of nodes based on three different generations of Intel Xeon processors: Haswell, Broadwell, and Skylake. Because of export control limitations, the most heavily used applications on Pleiades and Electra could not be run in the cloud; therefore, only one of the applications, OpenFOAM, represents work from the Aeronautics Research Mission Directorate and the Human Exploration and Operations Mission Directorate. The other five applications are from the Science Mission Directorate.

    Special fast diffusion with slow asymptotics. Entropy method and flow on a Riemannian manifold

    We consider the asymptotic behaviour of positive solutions $u(t,x)$ of the fast diffusion equation $u_t=\Delta(u^{m}/m)=\mathrm{div}(u^{m-1}\nabla u)$ posed for $x\in\mathbb{R}^d$, $t>0$, with the precise value of the exponent $m=(d-4)/(d-2)$. The space dimension is $d\ge 3$, so that $m<1$, and even $m=-1$ for $d=3$. This case had been left open in the general study \cite{BBDGV} since it requires quite different functional-analytic methods, due in particular to the absence of a spectral gap for the operator generating the linearized evolution. The linearization of this flow is interpreted here as the heat flow of the Laplace-Beltrami operator of a suitable Riemannian manifold $(\mathbb{R}^d,\mathbf{g})$, with a metric $\mathbf{g}$ which is conformal to the standard metric on $\mathbb{R}^d$. Studying the pointwise behaviour of the heat kernel allows us to prove suitable Gagliardo-Nirenberg inequalities associated with the generator. These inequalities in turn allow us to study the nonlinear evolution as well and to determine its asymptotics, which is identical to that satisfied by the linearization. In terms of the rescaled representation, which is a nonlinear Fokker-Planck equation, the convergence rate turns out to be polynomial in time. This result is in contrast with the known exponential decay of such a representation for all other values of $m$.
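
    As a quick numerical illustration of the exponent range described above (an added remark, not taken from the paper):

        % values of the special exponent m = (d-4)/(d-2) in low dimensions
        \[
          m=\frac{d-4}{d-2}:\qquad
          d=3 \;\Rightarrow\; m=-1,\qquad
          d=4 \;\Rightarrow\; m=0,\qquad
          d=5 \;\Rightarrow\; m=\tfrac{1}{3},\qquad
          m \to 1^{-}\ \text{ as } d\to\infty,
        \]
        so that $m<1$ in every dimension $d\ge 3$; for $d=3$ the equation reads
        $u_t=\operatorname{div}\!\left(u^{-2}\nabla u\right)=-\Delta\!\left(u^{-1}\right)$.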