
    Backward Stackelberg Differential Game with Constraints: a Mixed Terminal-Perturbation and Linear-Quadratic Approach

    Full text link
    We discuss an open-loop backward Stackelberg differential game involving a single leader and a single follower. Unlike most of the Stackelberg game literature, the state to be controlled is characterized by a backward stochastic differential equation (BSDE), for which the terminal condition, instead of the initial condition, is specified a priori; the leader's decisions consist of a static terminal perturbation and a dynamic linear-quadratic control. In addition, the terminal control is subject to (convex-closed) pointwise and (affine) expectation constraints. Both constraints arise from real applications such as mathematical finance. Regarding the information pattern: the leader announces both the terminal and open-loop dynamic decisions at the initial time while taking into account the best response of the follower. Then, two interrelated optimization problems are solved sequentially by the follower (a backward linear-quadratic (BLQ) problem) and the leader (a mixed terminal-perturbation and backward-forward LQ (BFLQ) problem). Our open-loop Stackelberg equilibrium is represented by coupled backward-forward stochastic differential equations (BFSDEs) with mixed initial-terminal conditions. These BFSDEs also involve a nonlinear projection operator (due to the pointwise constraint) combined with a Karush-Kuhn-Tucker (KKT) system (due to the expectation constraint) via a Lagrange multiplier. The global solvability of such BFSDEs is also discussed in some nontrivial cases. Our results are applied to a financial example.
    Comment: 38 pages
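    The distinguishing feature described above, a state equation posed backward from a prescribed terminal value, can be written in generic form (symbols here are generic, not the paper's notation):

    ```latex
    \begin{equation*}
      dY_t = -f(t, Y_t, Z_t, u_t)\,dt + Z_t\,dW_t, \qquad Y_T = \xi,
    \end{equation*}
    ```

    where the pair $(Y, Z)$ is jointly part of the solution and the terminal value $\xi$ is specified a priori, in contrast to a forward SDE, whose initial state is given instead.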

    Continuous viscosity solutions to linear-quadratic stochastic control problems with singular terminal state constraint

    Get PDF
    This paper establishes the existence of a unique nonnegative continuous viscosity solution to the HJB equation associated with a Markovian linear-quadratic control problem with a singular terminal state constraint and possibly unbounded cost coefficients. The existence result is based on a novel comparison principle for semi-continuous viscosity sub- and supersolutions of PDEs with a singular terminal value. Continuity of the viscosity solution is enough to carry out the verification argument.

    Stochastic maximum principle and dynamic convex duality in continuous-time constrained portfolio optimization

    Get PDF
    This thesis seeks to gain further insight into the connection between stochastic optimal control and forward and backward stochastic differential equations, and its applications in solving continuous-time constrained portfolio optimization problems. Three topics are studied in this thesis. In the first part of the thesis, we focus on the stochastic maximum principle, which seeks to establish the connection between stochastic optimal control and backward stochastic differential equations coupled with a static optimality condition on the Hamiltonian. We prove a weak necessary and sufficient maximum principle for Markovian regime-switching stochastic optimal control problems. Instead of insisting on the maximum condition of the Hamiltonian, we show that 0 belongs to the sum of Clarke's generalized gradient of the Hamiltonian and Clarke's normal cone of the control constraint set at the optimal control. Under a joint concavity condition on the Hamiltonian and a convexity condition on the terminal objective function, the necessary condition becomes sufficient. We give four examples to demonstrate the weak stochastic maximum principle. In the second part of the thesis, we study a continuous-time stochastic linear-quadratic control problem arising from mathematical finance. We model the asset dynamics with random market coefficients and the portfolio strategies with convex constraints. Following the convex duality approach, we show that the necessary and sufficient optimality conditions for both the primal and dual problems can be written in terms of processes satisfying a system of FBSDEs together with other conditions. We characterise explicitly the optimal wealth and portfolio processes as functions of adjoint processes from the dual FBSDEs in a dynamic fashion, and vice versa.
    We apply the results to solve quadratic risk minimization problems with cone constraints and derive explicit representations of the solutions to the extended stochastic Riccati equations for such problems. In the final part of the thesis, we extend the previous results to utility maximization problems. After formulating the primal and dual problems, we construct the necessary and sufficient conditions for both the primal and dual problems in terms of FBSDEs plus additional conditions. This formulation then allows us to characterize explicitly the primal optimal control as a function of the adjoint processes coming from the dual FBSDEs in a dynamic fashion, and vice versa. Moreover, we find that the optimal primal wealth process coincides with the optimal adjoint process of the dual problem, and vice versa. Finally, we solve three constrained utility maximization problems and contrast the simplicity of the duality approach we propose with the technical complexity of solving the primal problem directly.
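    The weak optimality condition stated in the abstract can be written compactly (generic notation, not the thesis's exact symbols):

    ```latex
    \begin{equation*}
      0 \in \partial_u H\!\left(t, X^*_t, u^*_t, p_t, q_t\right) + N_U\!\left(u^*_t\right),
    \end{equation*}
    ```

    where $\partial_u H$ denotes Clarke's generalized gradient of the Hamiltonian in the control variable, and $N_U(u^*_t)$ is Clarke's normal cone to the control constraint set $U$ at the optimal control.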

    An Improved Constraint-Tightening Approach for Stochastic MPC

    Full text link
    The problem of achieving a good trade-off in Stochastic Model Predictive Control between the competing goals of improving the average performance and reducing conservativeness, while still guaranteeing recursive feasibility and low computational complexity, is addressed. We propose a novel, less restrictive scheme which is based on considering stability and recursive feasibility separately. Through an explicit first-step constraint we guarantee recursive feasibility. In particular, we guarantee the existence of a feasible input trajectory at each time instant, but we only require that the input sequence computed at time k remains feasible at time k+1 for most disturbances, though not necessarily for all, which suffices for stability. To overcome the computational complexity of probabilistic constraints, we propose an offline constraint-tightening procedure, which can be efficiently solved via a sampling approach to the desired accuracy. The online computational complexity of the resulting Model Predictive Control (MPC) algorithm is similar to that of a nominal MPC with terminal region. A numerical example, which provides a comparison with classical, recursively feasible Stochastic MPC and Robust MPC, shows the efficacy of the proposed approach.
    Comment: Paper has been submitted to ACC 201
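    A minimal sketch of the kind of offline, sample-based constraint tightening the abstract describes. All names and the half-normal disturbance are hypothetical illustrations, not the paper's formulation: the tightening margin is taken as an empirical quantile of sampled disturbance magnitudes, so the tightened constraint holds for at least the desired fraction of disturbances.

    ```python
    import numpy as np

    def offline_tightening(sample_disturbance, n_samples=10_000, eps=0.05, seed=0):
        """Estimate a constraint-tightening margin from disturbance samples.

        sample_disturbance : callable drawing one scalar disturbance from an rng.
        eps                : allowed violation fraction (e.g. 0.05 for 95% coverage).
        Returns the (1 - eps) empirical quantile of the disturbance magnitudes,
        computed once offline; the online MPC then uses the fixed margin.
        """
        rng = np.random.default_rng(seed)
        samples = np.array([sample_disturbance(rng) for _ in range(n_samples)])
        return np.quantile(np.abs(samples), 1.0 - eps)

    # Example: zero-mean Gaussian disturbance with standard deviation 0.1;
    # the 95% margin is close to 1.96 * 0.1.
    margin = offline_tightening(lambda rng: rng.normal(0.0, 0.1))
    ```

    Because the quantile is computed from samples, the margin converges to the desired accuracy as n_samples grows, which matches the abstract's point that the expensive step is done offline.
    
    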

    Stochastic Model Predictive Control with Discounted Probabilistic Constraints

    Full text link
    This paper considers linear discrete-time systems with additive disturbances, and designs a Model Predictive Control (MPC) law to minimise a quadratic cost function subject to a chance constraint. The chance constraint is defined as a discounted sum of violation probabilities on an infinite horizon. By penalising violation probabilities close to the initial time and ignoring violation probabilities in the far future, this form of constraint enables the feasibility of the online optimisation to be guaranteed without an assumption of boundedness of the disturbance. A computationally convenient MPC optimisation problem is formulated using Chebyshev's inequality and we introduce an online constraint-tightening technique to ensure recursive feasibility based on knowledge of a suboptimal solution. The closed loop system is guaranteed to satisfy the chance constraint and a quadratic stability condition.Comment: 6 pages, Conference Proceeding
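    A minimal sketch of how a Chebyshev-type bound turns a chance constraint into a deterministic one (generic scalar setting, not the paper's exact formulation). By the one-sided Chebyshev (Cantelli) inequality, P(x - mean >= t) <= std^2 / (std^2 + t^2), so enforcing the deterministic bound below guarantees P(x > b) <= prob using only the mean and standard deviation of x:

    ```python
    import math

    def cantelli_bound(mean, std, prob):
        """Deterministic surrogate for the chance constraint P(x > b) <= prob.

        Cantelli's inequality gives P(x - mean >= t) <= std**2 / (std**2 + t**2);
        solving std**2 / (std**2 + t**2) = prob for t yields
        t = std * sqrt((1 - prob) / prob).  Requiring
            mean + std * sqrt((1 - prob) / prob) <= b
        therefore implies the chance constraint for any distribution of x.
        """
        return mean + std * math.sqrt((1.0 - prob) / prob)

    # With mean 0 and std 1, allowing a 20% violation probability requires
    # the bound b to exceed cantelli_bound(0.0, 1.0, 0.2) == 2.0.
    ```

    The bound is distribution-free, which is why such tightenings remain valid under the unbounded disturbances the abstract allows.
    
    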