8 research outputs found

    Numerical Solution of Hamilton-Jacobi-Bellman Equations by an Upwind Finite Volume Method

    In this paper we present a finite volume method for solving Hamilton-Jacobi-Bellman (HJB) equations governing a class of optimal feedback control problems. The method is based on a finite volume discretization in state space coupled with an upwind finite difference technique, and on implicit backward Euler finite differencing in time, which is absolutely stable. It is shown that the system matrix of the resulting discrete equation is an M-matrix. To show the effectiveness of this approach, numerical experiments were performed on test problems with up to three states and two control variables. The numerical results show that the method yields accurate approximate solutions for both the control and the state variables.
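    As a rough illustration of this type of scheme (not the paper's own discretization), the Python sketch below applies implicit backward Euler time stepping with upwind differencing to a one-dimensional HJB equation over a small discrete control set; the dynamics f(x,a) = a, the running cost x^2 + a^2, the grid sizes and the crude boundary treatment are assumptions, chosen so that the implicit system matrix is an M-matrix.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Sketch: solve V_t + min_a [ a V_x + x^2 + a^2 ] = 0, V(T, x) = 0, backward
    # in time.  Space uses upwind one-sided differences, time uses implicit
    # backward Euler, so each step is a sparse linear solve with an M-matrix
    # (positive diagonal, non-positive off-diagonals).

    N, M = 201, 100                          # space / time grid sizes (assumed)
    x = np.linspace(-1.0, 1.0, N); dx = x[1] - x[0]
    T = 1.0; dt = T / M
    controls = np.linspace(-1.0, 1.0, 21)    # small discrete control set

    def upwind_grad(V, speed):
        """One-sided difference of V chosen according to the sign of the speed."""
        fwd = np.empty_like(V); bwd = np.empty_like(V)
        fwd[:-1] = (V[1:] - V[:-1]) / dx; fwd[-1] = 0.0
        bwd[1:]  = (V[1:] - V[:-1]) / dx; bwd[0]  = 0.0
        return np.where(speed > 0, fwd, bwd)

    V = np.zeros(N)                          # terminal condition V(T, x) = 0
    for n in range(M):
        # Freeze the control using the known value at the previous time level.
        H = np.stack([a * upwind_grad(V, np.full(N, a)) + x**2 + a**2
                      for a in controls])
        a_star = controls[np.argmin(H, axis=0)]

        # Assemble the implicit upwind operator for the frozen control.
        diag = np.ones(N); lower = np.zeros(N - 1); upper = np.zeros(N - 1)
        pos = a_star > 0; neg = ~pos
        diag[pos] += dt * a_star[pos] / dx
        diag[neg] -= dt * a_star[neg] / dx
        upper[pos[:-1]] = -dt * a_star[:-1][pos[:-1]] / dx   # forward difference
        lower[neg[1:]]  =  dt * a_star[1:][neg[1:]] / dx     # backward difference
        A = sp.diags([lower, diag, upper], offsets=[-1, 0, 1], format="csc")

        rhs = V + dt * (x**2 + a_star**2)
        V = spla.spsolve(A, rhs)             # one backward-Euler step

    Because the control is frozen from the known value at the previous time level, each step reduces to a single sparse linear solve.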

    Mathematical Models and Numerical Methods for Pricing Options on Investment Projects under Uncertainties

    In this work, we focus on establishing partial differential equation (PDE) models for pricing flexibility options on investment projects under uncertainty, and on numerical methods for solving these models. We develop a finite difference method and an advanced fitted finite volume scheme, combined with an interior penalty method, together with their convergence analyses, to solve the PDE and linear complementarity problem (LCP) models developed. A MATLAB program implements and tests the numerical algorithms developed.
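    The thesis's fitted finite volume scheme and interior penalty formulation are not reproduced here; as a rough stand-in, the following Python sketch prices an American put by replacing the LCP with a penalised Black-Scholes PDE, discretised with ordinary central differences and implicit Euler and solved with a standard penalty iteration. All parameter values are assumed for illustration.

    import numpy as np

    # Penalised LCP:  V_tau = 0.5*sigma^2*S^2*V_SS + r*S*V_S - r*V
    #                         + rho * max(payoff - V, 0),
    # central differences in S, implicit Euler in tau, penalty iteration per step.

    K, r, sigma, T = 100.0, 0.05, 0.3, 1.0
    rho = 1e4                                   # penalty parameter (assumed)
    N, M = 200, 200
    S = np.linspace(0.0, 4 * K, N + 1); dS = S[1] - S[0]; dt = T / M
    payoff = np.maximum(K - S, 0.0)

    # Interior tridiagonal Black-Scholes operator (nodes 1..N-1).
    i = np.arange(1, N)
    a = 0.5 * sigma**2 * S[i]**2 / dS**2 - 0.5 * r * S[i] / dS
    b = -sigma**2 * S[i]**2 / dS**2 - r
    c = 0.5 * sigma**2 * S[i]**2 / dS**2 + 0.5 * r * S[i] / dS

    A = np.zeros((N - 1, N - 1))
    A[np.arange(N - 1), np.arange(N - 1)] = 1.0 - dt * b
    A[np.arange(1, N - 1), np.arange(N - 2)] = -dt * a[1:]
    A[np.arange(N - 2), np.arange(1, N - 1)] = -dt * c[:-1]

    V = payoff.copy()
    for n in range(M):
        rhs_base = V[i].copy()
        rhs_base[0] += dt * a[0] * K            # boundary V(0, tau) = K (put)
        Vi = V[i].copy()
        for _ in range(10):                     # penalty iterations per time step
            P = (Vi < payoff[i]).astype(float)
            Ak = A + dt * rho * np.diag(P)
            rhs = rhs_base + dt * rho * P * payoff[i]
            Vi_new = np.linalg.solve(Ak, rhs)
            if np.max(np.abs(Vi_new - Vi)) < 1e-8:
                Vi = Vi_new; break
            Vi = Vi_new
        V[i] = Vi
        V[0], V[-1] = K, 0.0                    # boundary values

    print("American put value at S = K:", np.interp(K, S, V))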

    A study of optimization problems involving stochastic systems with jumps

    Optimization problems involving stochastic systems are often encountered in financial systems, network design and routing, supply-chain management, actuarial science, telecommunications systems, statistical pattern recognition associated with electronic commerce, and medical diagnosis. This thesis aims to develop computational methods for solving three optimization problems whose dynamical systems are described by three different classes of stochastic systems with jumps.

    In Chapter 1, a brief review of optimization problems involving stochastic systems with jumps is given, followed by the introduction of the three optimization problems, whose dynamical systems are described by three different classes of stochastic systems with jumps. These three stochastic optimization problems are studied in detail in Chapters 2, 3 and 4, respectively, and the literature reviews for them are presented in the final sections of those chapters.

    In Chapter 2, an optimization problem involving nonparametric regression with jump points is considered, and a two-stage method is proposed. In the first stage, we identify the rough locations of all possible jump points of the unknown regression function. In the second stage, we map the yet-to-be-decided jump points onto pre-assigned fixed points, so that the time domain is divided into several sections. A spline function is then used to approximate each section of the unknown regression function; these approximation problems are formulated and solved as optimization problems. The inverse time-scaling transformation is then carried out, giving rise to an approximation of the nonparametric regression with jump points. For illustration, several examples are solved using this method, and the results obtained are highly satisfactory.

    In Chapter 3, the optimization problem involving nonparametric regression with jump curves is studied. A two-stage method is presented to construct an approximating surface with a jump location curve from a set of observed data corrupted by noise. In the first stage, we obtain an estimate of the jump location curve of the surface. In the second stage, we shift the jump location curve onto a row or column of pixels. The shifted region is then divided into two disjoint subregions by the jump location row of pixels, and these subregions are expanded into two overlapping subregions, each of which includes the jump location row of pixels. We calculate artificial values at the newly added pixels from the observed data and then approximate the surface on each expanded subregion, using the artificial values at the jump location pixels of that subregion. The curve with minimal distance between the two surfaces is chosen as the curve dividing the region. Subsequently, two nonoverlapping tensor product cubic spline surfaces are obtained, and by carrying out the inverse space-scaling transformation, the two fitted smooth surfaces in the original space are obtained. For illustration, a numerical example is solved using the proposed method.

    In Chapter 4, a class of stochastic optimal parameter selection problems described by linear Ito stochastic differential equations with state jumps, subject to probabilistic constraints on the state, is considered, where the times at which the jumps occur as well as their heights are decision variables. We show that this constrained stochastic impulsive optimal parameter selection problem is equivalent to a deterministic impulsive optimal parameter selection problem subject to continuous state inequality constraints, where the jump times and heights remain decision variables. We then show that this problem can be transformed into an equivalent constrained deterministic impulsive optimal parameter selection problem with fixed jump times. The continuous state inequality constraints are approximated by a sequence of canonical inequality constraints, leading to a sequence of approximate deterministic impulsive optimal parameter selection problems subject to canonical inequality constraints. For each of these approximate problems, we derive gradient formulas for the cost function and the constraint functions. On this basis, an efficient computational method is developed, and a numerical example is solved for illustration.

    Finally, Chapter 5 contains some concluding remarks and suggestions for future studies.
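    A minimal Python sketch of the two-stage idea of Chapter 2 (jump detection followed by piecewise spline fitting) is given below; the test signal, the window-based jump detector and the smoothing parameters are assumptions made for the illustration and are not taken from the thesis.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # Two-stage sketch: (1) roughly locate a jump point of a noisy signal,
    # (2) fit a separate smoothing spline on each section between jumps.

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 400)
    true = np.sin(2 * np.pi * t) + np.where(t > 0.5, 2.0, 0.0)   # one jump at t = 0.5
    y = true + 0.1 * rng.standard_normal(t.size)

    # Stage 1: rough jump location from the difference of one-sided local means.
    h = 10                                                       # window half-width
    diff = np.array([y[k:k + h].mean() - y[k - h:k].mean()
                     for k in range(h, t.size - h)])
    k_star = h + int(np.argmax(np.abs(diff)))
    jumps = [t[k_star]] if np.abs(diff).max() > 0.4 else []

    # Stage 2: fit a cubic smoothing spline on each section between jumps.
    edges = [t[0]] + jumps + [t[-1] + 1e-9]
    fit = np.empty_like(y)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (t >= lo) & (t < hi)
        spl = UnivariateSpline(t[mask], y[mask], k=3, s=mask.sum() * 0.1**2)
        fit[mask] = spl(t[mask])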

    Optimal guidance and control in space technology

    In this thesis, we deal with several optimal guidance and control problems for spacecraft arising from the study of lunar exploration. The research is composed of three parts: 1. optimal guidance for the lunar module soft landing; 2. spacecraft attitude control system design based on double gimbal control moment gyroscopes (DGCMGs); and 3. synchronization motion control for a class of nonlinear systems.

    To achieve a precise pinpoint lunar module soft landing, we first derive three-dimensional dynamics describing the motion of the module during the powered descent phase by introducing three coordinate frames and taking the moon's rotation into account. We then construct an optimal guidance law for the soft landing, which is treated as a continuously powered descent process with a constraint on the angle between the module's longitudinal axis and the lunar surface. When the module reaches the landing target, its terminal attitude should be within an allowable small deviation from vertical with respect to the lunar surface, and the fuel consumption and terminal time should be minimized. The optimal descent trajectory of the lunar module is calculated using the control parameterization technique in conjunction with a time-scaling transform. By these two methods, the optimal control problem is approximated by a sequence of optimal parameter selection problems, which can be solved by existing gradient-based optimization methods. MISER 3.3, a general-purpose optimal control software package developed on the basis of these methods, is used to solve our problem. The optimal trajectory tracking problem, where a desired trajectory is to be tracked with the least fuel consumption in minimum time, is also considered and solved.

    To account for unpredicted situations, such as initial point perturbations, we then construct a nonlinear optimal feedback control law for the powered deceleration phase of the lunar module soft landing. The motion of the lunar module is described in the three-dimensional coordinate system. Based on the nonlinear dynamics of the module, we obtain the form of an optimal closed-loop control law involving a feedback gain matrix, and show that this gain matrix satisfies a Riccati-like matrix differential equation. The optimal control problem is first solved as an open-loop optimal control problem using the time-scaling transform and the control parameterization method. By virtue of the relationship between the optimal open-loop control and the optimal closed-loop control along the optimal trajectory, we present a practical method for calculating an approximate optimal feedback gain matrix without having to solve an optimal control problem involving the complex Riccati-like matrix differential equation coupled with the original system dynamics.

    To realize large-angle spacecraft attitude maneuvers, we derive an exact general mathematical description of spacecraft attitude motion driven by a DGCMG system. A nonlinear control law is then designed based on the second method of Lyapunov, and the stability of the attitude control system is established during the design process. A singularity robustness plus null motion steering law is designed to realize the control law. The principle of DGCMG singularity is proved, and singularity analyses of the orthogonally mounted three-DGCMG system and of the parallel mounted four-DGCMG system are presented.

    Finally, we consider a new class of nonlinear optimal tracking and synchronizing control problems subject to control constraints, where the motions of two distinct objects are required to achieve synchronization in minimum time while optimally tracking a reference target. We first provide a rigorous mathematical formulation for this class of optimal control problems, and a new result ensuring synchronization of the two objects is obtained. On this basis, a computational method is developed for constructing an optimal switching control law under which the motions of the two objects achieve synchronization in minimum time while optimally tracking the reference target. This computational method is based on novel applications of the control parameterization method and a time-scaling transform. A practical problem arising from the study of angular velocity tracking and synchronization of two spacecraft during formation flight is formulated and solved by the proposed method.
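    The following Python sketch illustrates control parameterization with a time-scaling transform on a much simplified one-dimensional soft-landing model; the dynamics, cost weights and the use of a derivative-free optimizer (in place of the gradient formulas used by MISER 3.3) are all assumptions made for the sketch.

    import numpy as np
    from scipy.optimize import minimize

    # The thrust acceleration is piecewise constant over N_SEG segments.  The
    # segment durations and control values are the decision variables, so the
    # switching times are fixed in the scaled time horizon, and the cost
    # penalises terminal altitude/velocity error, fuel use and final time.

    N_SEG = 5
    G = 1.62                                   # lunar gravity, m/s^2

    def simulate(z):
        """Propagate altitude and velocity segment by segment (exact for constant thrust)."""
        durations = np.abs(z[:N_SEG]) + 1e-6   # keep durations positive
        u = z[N_SEG:]
        h, v = 100.0, -10.0                    # initial altitude and velocity (assumed)
        for d, uk in zip(durations, u):
            acc = uk - G
            h, v = h + v * d + 0.5 * acc * d**2, v + acc * d
        return np.array([h, v]), durations.sum(), float(np.sum(np.abs(u) * durations))

    def cost(z):
        xT, tf, fuel = simulate(z)
        return 100.0 * (xT[0]**2 + xT[1]**2) + 0.1 * fuel + 0.1 * tf

    z0 = np.concatenate([np.full(N_SEG, 2.0), np.full(N_SEG, 2.0)])
    res = minimize(cost, z0, method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-10})
    xT, tf, fuel = simulate(res.x)
    print("terminal (altitude, velocity):", xT, " final time:", tf, " fuel:", fuel)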

    Optimal control problems involving constrained, switched, and delay systems

    In this thesis, we develop numerical methods for solving five nonstandard optimal control problems. The main idea of each method is to reformulate the optimal control problem as, or approximate it by, a nonlinear programming problem. The decision variables in this nonlinear programming problem influence its cost function (and constraints, if it has any) implicitly through the dynamic system, so deriving the gradients of the cost and constraint functions is a difficult task. A major focus of this thesis is on developing methods for computing these gradients, which can then be used in conjunction with a gradient-based optimization technique to solve the optimal control problem efficiently.

    The first optimal control problem that we consider has nonlinear inequality constraints that depend on the state at two or more discrete time points. These time points are decision variables that, together with a control function, should be chosen in an optimal manner. To tackle this problem, we first approximate the control by a piecewise constant function whose values and switching times (the times at which it changes value) are decision variables. We then apply a novel time-scaling transformation that maps the switching times to fixed points in a new time horizon, yielding an approximate dynamic optimization problem with a finite number of decision variables. We develop a new algorithm, which involves integrating an auxiliary dynamic system forward in time, for computing the gradients of the cost and constraints in this approximate problem.

    The second optimal control problem that we consider has nonlinear continuous inequality constraints, which restrict both the state and the control at every point in the time horizon. As with the first problem, we approximate the control by a piecewise constant function and then transform the time variable. This yields an approximate semi-infinite programming problem, which can be solved using a penalty function algorithm. A solution of this problem immediately furnishes a suboptimal control for the original optimal control problem. By repeatedly increasing the number of parameters used in the approximation, we can generate a sequence of suboptimal controls, and our main result shows that the cost of these suboptimal controls converges to the minimum cost.

    The third optimal control problem that we consider is an applied problem from electrical engineering. Its aim is to determine an optimal operating scheme for a switched-capacitor DC-DC power converter, an electronic device that transforms one DC voltage into another by periodically switching between several circuit topologies. Specifically, the optimal control problem is to choose the times at which the topology switches occur so that the output voltage ripple is minimized and the load regulation is maximized. This problem is governed by a switched system with linear subsystems (each subsystem models one of the power converter's topologies), and its cost function is non-smooth. By introducing an auxiliary dynamic system and transforming the time variable (so that the topology switching times become fixed), we derive an equivalent semi-infinite programming problem. This semi-infinite programming problem, like the one that approximates the continuously-constrained optimal control problem, can be solved using a penalty function algorithm.

    The fourth optimal control problem that we consider involves a general switched system, which includes the model of a switched-capacitor DC-DC power converter as a special case. This switched system evolves by switching between several subsystems of nonlinear ordinary differential equations, and each subsystem switch is accompanied by an instantaneous change in the state. These instantaneous changes, so-called state jumps, are influenced by control variables that, together with the subsystem switching times, should be selected in an optimal manner. As with the previous optimal control problems, we tackle this problem by transforming the time variable to obtain an equivalent problem in which the switching times are fixed. However, the functions governing the state jumps in this new problem are discontinuous. To overcome this difficulty, we introduce an approximate problem whose state jumps are governed by smooth functions; this approximate problem can be solved using a nonlinear programming algorithm. We prove an important convergence result that links the approximate problem's solution with the original problem's solution.

    The final optimal control problem that we consider is a parameter identification problem, whose aim is to use given experimental data to identify unknown state-delays in a nonlinear delay-differential system. More precisely, the optimal control problem involves choosing the state-delays to minimize a cost function measuring the discrepancy between predicted and observed system output. We show that the gradient of this cost function can be computed by solving an auxiliary delay-differential system. On the basis of this result, the optimal control problem can be formulated, and hence solved, as a standard nonlinear programming problem.
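    As a small illustration of the final (state-delay identification) problem, the Python sketch below fits the delay of an assumed delayed logistic equation to noisy data by direct search over candidate delays; the thesis instead computes the cost gradient through an auxiliary delay-differential system, which is not reproduced here.

    import numpy as np

    # Identify the delay tau in x'(t) = x(t) * (1 - x(t - tau)) from noisy
    # observations by minimising the squared output discrepancy.

    dt, T = 0.01, 10.0
    steps = int(T / dt)

    def simulate(tau):
        """Fixed-step Euler integration with constant history x(t) = 0.5 for t <= 0."""
        lag = max(int(round(tau / dt)), 1)
        x = np.empty(steps + 1); x[0] = 0.5
        for n in range(steps):
            x_delayed = x[n - lag] if n - lag >= 0 else 0.5
            x[n + 1] = x[n] + dt * x[n] * (1.0 - x_delayed)
        return x

    true_tau = 0.8
    data = simulate(true_tau) + 0.01 * np.random.default_rng(1).standard_normal(steps + 1)

    candidates = np.arange(0.1, 2.0, 0.01)
    errors = [np.sum((simulate(tau) - data)**2) for tau in candidates]
    tau_hat = candidates[int(np.argmin(errors))]
    print("identified delay:", tau_hat)       # close to the true value 0.8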

    Fuzzy EOQ Model with Trapezoidal and Triangular Functions Using Partial Backorder

    The fuzzy EOQ model is an EOQ model that can estimate costs from the available information. Trapezoidal fuzzy membership functions can be used to estimate the existing costs; a trapezoidal membership function has several points with different membership values. The fuzzy total relevant cost TR̃C obtained with the trapezoidal model is higher than the TRC of the usual EOQ model. This paper aims to determine the optimal inventory quantities for the company, namely the optimal Q and the optimal V; using the partial backorder model, the optimal Q and V, that is, the optimal number of units per order, can be found. The EOQ model is closely tied to inventory management, here through a fuzzy EOQ model with triangular and trapezoidal membership functions and partial backordering. The optimal Q and V values of the fuzzy models increase because the trapezoidal and triangular membership functions take different values depending on the requirements of each membership function. Therefore, the fuzzy model can help solve the company's problem of estimating costs for the next period.
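    A minimal Python sketch of the crisp-versus-fuzzy EOQ comparison is given below; the trapezoidal fuzzy inputs, the graded mean integration defuzzification and all parameter values are assumptions for illustration, and the partial backorder extension of the paper is omitted.

    import math

    # A trapezoidal fuzzy number (a, b, c, d) is defuzzified here by its graded
    # mean integration value (a + 2b + 2c + d) / 6, then used in the classical
    # EOQ formulas in place of the crisp parameters.

    def graded_mean(tfn):
        a, b, c, d = tfn
        return (a + 2 * b + 2 * c + d) / 6.0

    # Crisp parameters: annual demand D, ordering cost K, holding cost h.
    D, K, h = 1000.0, 50.0, 2.0
    Q_crisp = math.sqrt(2 * D * K / h)            # classical EOQ
    TRC_crisp = D * K / Q_crisp + h * Q_crisp / 2

    # Trapezoidal fuzzy estimates of the same parameters (assumed spreads).
    D_f = graded_mean((900, 980, 1050, 1150))
    K_f = graded_mean((45, 48, 53, 60))
    h_f = graded_mean((1.8, 1.9, 2.1, 2.4))

    Q_fuzzy = math.sqrt(2 * D_f * K_f / h_f)
    TRC_fuzzy = D_f * K_f / Q_fuzzy + h_f * Q_fuzzy / 2

    print(f"crisp EOQ:  Q* = {Q_crisp:7.2f}, TRC = {TRC_crisp:7.2f}")
    print(f"fuzzy EOQ:  Q* = {Q_fuzzy:7.2f}, TRC = {TRC_fuzzy:7.2f}")

    With these assumed spreads the defuzzified cost comes out slightly above the crisp TRC, in line with the comparison described in the abstract.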