
    Large Deployable Reflector (LDR) feasibility study update

    In 1982 a workshop was held to refine the science rationale for the Large Deployable Reflector (LDR) and to develop technology requirements that support that rationale. At the end of the workshop, a set of LDR consensus system requirements was established. The present study updates the initial LDR study using these new system requirements. The study included mirror material selection and configuration, thermal analysis, structural concept definition and analysis, dynamic control analysis, and recommendations for further study. The primary emphasis was on the dynamic control requirements and on the sophistication of the control system needed to meet LDR performance goals.

    Bellman equation and viscosity solutions for mean-field stochastic control problem

    We consider the stochastic optimal control problem for McKean-Vlasov stochastic differential equations whose coefficients may depend on the joint law of the state and control. Using feedback controls, we reformulate the problem as a deterministic control problem with only the marginal distribution of the process as controlled state variable, and prove that the dynamic programming principle holds in its general form. Then, relying on the notion of differentiability with respect to probability measures recently introduced by P.L. Lions in [32], and on a special Itô formula for flows of probability measures, we derive the (dynamic programming) Bellman equation for the mean-field stochastic control problem, and prove a verification theorem in our McKean-Vlasov framework. We give explicit solutions to the Bellman equation for the linear-quadratic mean-field control problem, with applications to mean-variance portfolio selection and a systemic risk model. We also consider a notion of lifted viscosity solutions for the Bellman equation, and show the viscosity property and uniqueness of the value function of the McKean-Vlasov control problem. Finally, we consider the McKean-Vlasov control problem with open-loop controls and discuss the associated dynamic programming equation, which we compare with the closed-loop case. Comment: to appear in ESAIM: COC
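As a schematic illustration (with notation not taken from the abstract, specializing to coefficients that depend only on the state's marginal law), a Bellman equation of the kind described, for a value function v(t, μ) on [0, T] × P₂(ℝⁿ), controlled dynamics dX_t = b(X_t, α_t, law(X_t)) dt + σ(X_t, α_t, law(X_t)) dW_t, and cost E[∫₀ᵀ f dt + g(X_T, law(X_T))], takes the form

```latex
\partial_t v(t,\mu)
  + \int_{\mathbb{R}^n} \inf_{a \in A} \Big[
      b(x,a,\mu) \cdot \partial_\mu v(t,\mu)(x)
      + \tfrac{1}{2} \operatorname{tr}\!\big(
          \sigma\sigma^{\top}(x,a,\mu)\,
          \partial_x \partial_\mu v(t,\mu)(x) \big)
      + f(x,a,\mu)
    \Big]\, \mu(dx) = 0,
\qquad
v(T,\mu) = \int_{\mathbb{R}^n} g(x,\mu)\, \mu(dx),
```

where ∂_μ v(t, μ)(x) denotes the Lions derivative with respect to the probability measure μ. The distinctive feature relative to the classical HJB equation is the integration of the minimized Hamiltonian against μ itself, reflecting that the marginal law is the controlled state variable.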

    Verification theorem and construction of epsilon-optimal controls for control of abstract evolution equations

    We study several aspects of the dynamic programming approach to optimal control of abstract evolution equations, including a class of semilinear partial differential equations. We introduce and prove a verification theorem which provides a sufficient condition for optimality. Moreover, we prove sub- and superoptimality principles of dynamic programming and give an explicit construction of ε-optimal controls.
    Keywords: optimal control of PDE; verification theorem; dynamic programming; ε-optimal controls; Hamilton-Jacobi-Bellman equations
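To fix ideas, here is a finite-dimensional schematic of the verification principle (notation illustrative, not from the abstract): to minimize J(α) = ∫₀ᵀ l(y_s, α_s) ds + h(y_T) subject to y' = f(y, α), suppose V is a sufficiently regular solution of the Hamilton-Jacobi-Bellman equation

```latex
-\partial_t V(t,y) = \inf_{a \in A} \big[ \nabla_y V(t,y) \cdot f(y,a) + l(y,a) \big],
\qquad V(T,y) = h(y).
```

Integrating d/dt V(t, y_t) along any admissible trajectory shows V(0, y₀) ≤ J(α), so V bounds the value function from below; a control attaining the infimum along its trajectory is optimal, and one attaining it within ε at each time yields cost at most V(0, y₀) + εT, i.e. an ε-optimal control in the sense constructed in the paper (there in the infinite-dimensional evolution-equation setting).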

    A Theory of Capital Controls as Dynamic Terms-of-Trade Manipulation

    This paper develops a simple theory of capital controls as dynamic terms-of-trade manipulation. We study an infinite horizon endowment economy with two countries. One country chooses taxes on international capital flows in order to maximize the welfare of its representative agent, while the other country is passive. We show that capital controls are not guided by the absolute desire to alter the intertemporal price of the goods produced in any given period, but rather by the relative strength of this desire between two consecutive periods. Specifically, it is optimal for the strategic country to tax capital inflows (or subsidize capital outflows) if it grows faster than the rest of the world, and to tax capital outflows (or subsidize capital inflows) if it grows more slowly. In the long run, if relative endowments converge to a steady state, taxes on international capital flows converge to zero. Although our theory emphasizes interest rate manipulation, the country's net financial position per se is irrelevant.

    Decentralized Learning for Optimality in Stochastic Dynamic Teams and Games with Local Control and Global State Information

    Stochastic dynamic teams and games are rich models for decentralized systems and challenging testing grounds for multi-agent learning. Previous work that guaranteed team optimality assumed stateless dynamics, an explicit coordination mechanism, or joint-control sharing. In this paper, we present an algorithm with guarantees of convergence to team-optimal policies in teams and common-interest games. The algorithm is a two-timescale method that uses a variant of Q-learning on the finer timescale to perform policy evaluation while exploring the policy space on the coarser timescale. Agents following this algorithm are "independent learners": they use only local controls, local cost realizations, and global state information, without access to the controls of other agents. The results presented here are, to our knowledge, the first to give formal guarantees of convergence to team optimality using independent learners in stochastic dynamic teams and common-interest games.
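The two-timescale idea can be sketched in a toy setting. The code below is an illustrative sketch, not the paper's algorithm: two independent learners in a stateless common-interest game each run Q-learning over their own actions on the fast timescale (observing only their own action and the shared reward), and on the slow timescale keep their policy when its evaluated value is close to the best value they have seen, re-randomizing otherwise. The payoff matrix, phase length, and thresholds are illustrative assumptions.

```python
import random

REWARD = [[1.0, 0.0],   # common reward r(a1, a2); team optimum is (1, 1)
          [0.0, 2.0]]

def run(num_phases=500, phase_len=100, eps=0.05, lam=0.3, tol=0.3, seed=0):
    rng = random.Random(seed)
    policy = [0, 0]                   # each agent's current (pure) policy
    best_val = [float("-inf")] * 2    # best phase value each agent has seen
    best_policy = [0, 0]              # agent's own action in its best phase

    for _ in range(num_phases):
        # Slow timescale, part 1: occasional independent experimentation.
        for i in range(2):
            if rng.random() < lam:
                policy[i] = rng.randrange(2)

        # Fast timescale: stateless Q-learning with 1/n stepsizes, using
        # only the agent's own action and the common reward realization.
        q = [[0.0, 0.0], [0.0, 0.0]]
        n = [[0, 0], [0, 0]]
        for _ in range(phase_len):
            acts = [policy[i] if rng.random() > eps else rng.randrange(2)
                    for i in range(2)]
            r = REWARD[acts[0]][acts[1]]
            for i in range(2):
                a = acts[i]
                n[i][a] += 1
                q[i][a] += (r - q[i][a]) / n[i][a]

        # Slow timescale, part 2: satisficing policy update — keep the
        # policy if its evaluated value is near the best seen, else resample.
        for i in range(2):
            val = q[i][policy[i]]
            if val > best_val[i]:
                best_val[i] = val
                best_policy[i] = policy[i]
            if val < best_val[i] - tol:
                policy[i] = rng.randrange(2)

    return best_policy

print(run())
```

In this toy game the agents only ever observe their own action and the shared reward, yet the joint policy they each record as best is the team-optimal action pair; the paper extends this kind of guarantee to stochastic dynamic teams with global state information.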

    Quick-response servo amplifies small hydraulic pressure differences

    A hydraulic servo, which quickly diverts fluid to either of two actuators, controls the flow rates and pressures within a hydraulic system so that the output force of the servo system is independent of the velocity of the mechanism that the system actuates. This servo is a dynamic feedback control device.