
    A recursively feasible and convergent Sequential Convex Programming procedure to solve non-convex problems with linear equality constraints

    A computationally efficient method to solve non-convex programming problems with linear equality constraints is presented. The proposed method is based on a recursively feasible and descending sequential convex programming procedure proven to converge to a locally optimal solution. Assuming that the first convex problem in the sequence is feasible, these properties are obtained by convexifying the non-convex cost and inequality constraints with inner-convex approximations. Additionally, a computationally efficient method is introduced to obtain inner-convex approximations based on Taylor series expansions. These Taylor-based inner-convex approximations provide the overall algorithm with a quadratic rate of convergence. The proposed method is capable of solving problems of practical interest in real-time. This is illustrated with a numerical simulation of an aerial vehicle trajectory optimization problem on commercial off-the-shelf embedded computers.
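    The abstract does not spell out the Taylor-based inner-convex construction, but the overall loop can be illustrated with a minimal sketch: a proximal linearization of a toy non-convex cost, where each convex subproblem keeps the linear equality constraints exactly, so every iterate remains feasible. The cost function, constraint, and proximal weight `mu` below are illustrative assumptions, not the paper's.

```python
import numpy as np

# Toy non-convex cost with a linear equality constraint x0 + x1 = 1.
def f(x):
    return x[0]**4 - 3.0*x[0]**2 + x[1]**2

def grad_f(x):
    return np.array([4.0*x[0]**3 - 6.0*x[0], 2.0*x[1]])

A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def scp_step(x, mu=10.0):
    # Convex subproblem: minimize grad_f(x)^T d + (mu/2)||d||^2
    # subject to A(x + d) = b.  The proximal quadratic acts as a
    # convex majorant of the neglected curvature (valid when mu
    # dominates it).  Solved directly via the KKT system.
    g = grad_f(x)
    n = x.size
    KKT = np.block([[mu*np.eye(n), A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([-g, b - A @ x])
    return x + np.linalg.solve(KKT, rhs)[:n]

x = np.array([0.2, 0.8])        # feasible start: A @ x = b
for _ in range(200):
    x = scp_step(x)             # every iterate satisfies A @ x = b
```

    Because each subproblem enforces A(x + d) = b exactly, recursive feasibility is immediate; convergence of this simplified scheme relies on `mu` exceeding the cost's curvature along the feasible set.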

    DYNAMIC PROGRAMMING: HAS ITS DAY ARRIVED?

    Research Methods / Statistical Methods

    Contact-Implicit Trajectory Optimization Based on a Variable Smooth Contact Model and Successive Convexification

    In this paper, we propose a contact-implicit trajectory optimization (CITO) method based on a variable smooth contact model (VSCM) and successive convexification (SCvx). The VSCM facilitates the convergence of gradient-based optimization without compromising physical fidelity. On the other hand, the proposed SCvx-based approach combines the advantages of direct and shooting methods for CITO. For evaluations, we consider non-prehensile manipulation tasks. The proposed method is compared to a version based on iterative linear quadratic regulator (iLQR) on a planar example. The results demonstrate that both methods can find physically-consistent motions that complete the tasks without a meaningful initial guess owing to the VSCM. The proposed SCvx-based method outperforms the iLQR-based method in terms of convergence, computation time, and the quality of motions found. Finally, the proposed SCvx-based method is tested on a standard robot platform and shown to perform efficiently for a real-world application.
    Comment: Accepted for publication in ICRA 201
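    The abstract does not give the VSCM's functional form, but the idea of smoothing hard contact for gradient-based optimization can be sketched with a generic softplus-smoothed normal-force model; the function name and all constants below are assumptions for illustration.

```python
import math

def smooth_contact_force(gap, stiffness=100.0, alpha=0.01):
    """Softplus-smoothed normal contact force.

    Hard (rigid) contact is non-smooth: zero force for gap > 0 and a
    stiff penalty for penetration (gap < 0).  The softplus surrogate
    is smooth everywhere; alpha sets how sharply the force turns on
    near gap = 0 and recovers the hard model as alpha -> 0.

    Note: for very deep penetration exp() can overflow; production
    code would use a numerically stable softplus.
    """
    return stiffness * alpha * math.log1p(math.exp(-gap / alpha))
```

    One reading of the "variable" in VSCM is that such a smoothness parameter can be tightened across successive convexification iterations, trading early gradient information for late physical fidelity; that scheduling is our assumption, not a detail taken from the abstract.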

    Regularized Decomposition of High-Dimensional Multistage Stochastic Programs with Markov Uncertainty

    We develop a quadratic regularization approach for the solution of high-dimensional multistage stochastic optimization problems characterized by a potentially large number of time periods/stages (e.g. hundreds), a high-dimensional resource state variable, and a Markov information process. The resulting algorithms are shown to converge to an optimal policy after a finite number of iterations under mild technical assumptions. Computational experiments are conducted using the setting of optimizing energy storage over a large transmission grid, which motivates both the spatial and temporal dimensions of our problem. Our numerical results indicate that the proposed methods exhibit significantly faster convergence than their classical counterparts, with greater gains observed for higher-dimensional problems.
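    The flavor of quadratic regularization in a decomposition method can be shown with a hedged one-dimensional proximal cutting-plane sketch: cuts build a piecewise-linear model of a convex recourse function, and a quadratic term anchored at the incumbent damps the oscillation of the unregularized method. The recourse function, `rho`, and the grid-based master solve are all illustrative stand-ins; real implementations use serious/null-step tests and multistage value functions.

```python
import numpy as np

# One-dimensional stand-in for a convex expected-recourse function.
def Q(x):  return (x - 3.0)**2
def dQ(x): return 2.0*(x - 3.0)

cuts = []            # (intercept, slope) of the cut Q(y) + dQ(y)*(x - y)
x_hat, rho = 0.0, 1.0
grid = np.linspace(-10.0, 10.0, 20001)

for _ in range(30):
    # Add a new cut at the incumbent, then solve the regularized master:
    #     min_x  max_i cut_i(x) + (rho/2) * (x - x_hat)^2
    # (here by brute force on a grid, to keep the sketch dependency-free)
    cuts.append((Q(x_hat) - dQ(x_hat)*x_hat, dQ(x_hat)))
    model = np.max([a + s*grid for a, s in cuts], axis=0)
    x_hat = grid[np.argmin(model + 0.5*rho*(grid - x_hat)**2)]
```

    Without the quadratic term, the first master problem here would be unbounded below; the proximal anchor keeps each iterate near the incumbent, which is the stabilizing effect regularized decomposition exploits.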

    Calculating Value-at-Risk

    The market risk of a portfolio refers to the possibility of financial loss due to the joint movement of systematic economic variables such as interest and exchange rates. Quantifying market risk is important to regulators in assessing solvency and to risk managers in allocating scarce capital. Moreover, market risk is often the central risk faced by financial institutions. The standard method for measuring market risk places a conservative, one-sided confidence interval on portfolio losses for short forecast horizons. This bound on losses is often called capital-at-risk or value-at-risk (VAR), for obvious reasons. Calculating the VAR or any similar risk metric requires a probability distribution of changes in portfolio value. In most risk management models, this distribution is derived by placing assumptions on (1) how the portfolio function is approximated, and (2) how the state variables are modeled. Using this framework, we first review four methods for measuring market risk. We then develop and illustrate two new market risk measurement models that use a second-order approximation to the portfolio function and a multivariate GARCH(1,1) model for the state variables. We show that when changes in the state variables are modeled as conditional or unconditional multivariate normal, first-order approximations to the portfolio function yield a univariate normal for the change in portfolio value while second-order approximations yield a quadratic normal. Using equity return data and a hypothetical portfolio of options, we then evaluate the performance of all six models by examining how accurately each calculates the VAR on an out-of-sample basis. We find that our most general model is superior to all others in predicting the VAR.
In additional empirical tests focusing on the error contribution of each of the two model components, we find that the superior performance of our most general model is largely attributable to the use of the second-order approximation, and that the first-order approximations favored by practitioners perform quite poorly. Empirical evidence on the modeling of the state variables is mixed but supports usage of a model which reflects non-linearities in state variable return distributions. This paper was presented at the Financial Institutions Center's October 1996 conference on "
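    The first-order versus second-order distinction above can be made concrete with a small Monte Carlo sketch of delta-gamma VaR for a single risk factor. All numbers (delta, gamma, sigma) are illustrative, and a static normal distribution stands in for the paper's multivariate GARCH(1,1) dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Second-order (delta-gamma) approximation to an option position's P&L:
#     dV  ~=  delta * dS  +  0.5 * gamma * dS**2
delta, gamma = 0.6, 0.05      # illustrative position sensitivities
sigma = 2.0                   # one-day std of the risk-factor move
dS = rng.normal(0.0, sigma, size=1_000_000)   # stand-in for GARCH(1,1)
pnl = delta*dS + 0.5*gamma*dS**2

# 99% VaR: the loss exceeded on only 1% of days, reported as a positive number.
var_quadratic = -np.quantile(pnl, 0.01)

# First-order (delta-only) VaR is just a normal quantile.
z_01 = -2.326                 # 1% quantile of the standard normal
var_linear = -delta * sigma * z_01
```

    For this long-gamma position the quadratic term cushions the left tail, so the delta-gamma VaR comes in below the delta-only figure; that gap is exactly what the paper's out-of-sample comparison of first- and second-order models measures.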

    Newton-based maximum likelihood estimation in nonlinear state space models

    Maximum likelihood (ML) estimation using Newton's method in nonlinear state space models (SSMs) is a challenging problem due to the analytical intractability of the log-likelihood and its gradient and Hessian. We estimate the gradient and Hessian using Fisher's identity in combination with a smoothing algorithm. We explore two approximations of the log-likelihood and of the solution of the smoothing problem. The first is a linearization approximation which is computationally cheap, but the accuracy typically varies between models. The second is a sampling approximation which is asymptotically valid for any SSM but is more computationally costly. We demonstrate our approach for ML parameter estimation on simulated data from two different SSMs with encouraging results.
    Comment: 17 pages, 2 figures. Accepted for the 17th IFAC Symposium on System Identification (SYSID), Beijing, China, October 201
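    Fisher's identity, the key device above, can be checked on a toy one-latent-variable model where the smoothing posterior is available in closed form; the model and all numbers below are assumptions chosen so the exact score is known.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy latent-variable model: x ~ N(theta, 1), y | x ~ N(x, 1).
# Marginally y ~ N(theta, 2), so the score is known in closed form:
#     d/dtheta log p(y | theta) = (y - theta) / 2
# Fisher's identity writes the same score as a smoothed expectation:
#     d/dtheta log p(y | theta) = E[ d/dtheta log p(x, y | theta) | y ]
#                               = E[ x - theta | y ]
theta, y = 0.5, 2.0

# Exact smoothing posterior here: p(x | y, theta) = N((theta + y)/2, 1/2),
# so the "sampling approximation" reduces to plain Monte Carlo.
x_smooth = rng.normal((theta + y)/2.0, np.sqrt(0.5), size=500_000)
score_fisher = np.mean(x_smooth - theta)
score_exact = (y - theta) / 2.0
```

    In an actual nonlinear SSM the smoothing samples would come from a particle smoother rather than a closed-form posterior, which is where the paper's computational trade-off between the linearization and sampling approximations arises.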