
    A study of optimization and optimal control computation : exact penalty function approach

    In this thesis, we propose new computational algorithms and methods for solving four classes of constrained optimization and optimal control problems. In Chapter 1, we present a brief review of optimization and optimal control.

    In Chapter 2, we consider a class of continuous inequality constrained optimization problems. The continuous inequality constraints are first approximated by smooth functions in integral form. We then construct a new exact penalty function in which the sum of these integral approximations, called the constraint violation, is appended to the objective function. In this way, we obtain a sequence of approximate unconstrained optimization problems. It is shown that if the penalty parameter is sufficiently large, then any local minimizer of the corresponding unconstrained optimization problem is a local minimizer of the original problem. For illustration, three examples are solved using the proposed method (a minimal numerical sketch of this penalty construction is given after this abstract). From the solutions obtained, we observe that the objective values are among the smallest compared with those obtained by other existing methods in the literature. More importantly, our method finds solutions that satisfy the continuous inequality constraints.

    In Chapter 3, we consider a general class of nonlinear mixed discrete programming problems. By introducing continuous variables to replace the discrete variables, the problem is first transformed into an equivalent nonlinear continuous optimization problem subject to the original constraints and additional linear and quadratic constraints. However, existing gradient-based optimization techniques have difficulty solving this equivalent problem effectively because of the new quadratic inequality constraint. Thus, an exact penalty function is employed to construct a sequence of unconstrained optimization problems, each of which can be solved effectively by unconstrained optimization techniques such as conjugate gradient or quasi-Newton methods. It is shown that any local optimal solution of the unconstrained optimization problem is a local optimal solution of the transformed nonlinear constrained continuous optimization problem when the penalty parameter is sufficiently large. Numerical experiments are carried out to test the efficiency of the proposed method.

    In Chapter 4, we investigate the optimal design of allpass variable fractional delay (VFD) filters with coefficients expressed as sums of signed powers-of-two terms, where the weighted integral squared error is minimized. A new optimization procedure is proposed to generate a reduced discrete search region. A new exact penalty function method is then developed to solve the optimal design of allpass VFD filters with signed powers-of-two coefficients. Design examples show that the proposed method is highly effective: compared with the conventional quantization method, the solutions obtained by our method are of much higher accuracy, and the computational complexity is low.

    In Chapter 5, we consider an optimal control problem in which the control takes values from a discrete set and the state and control are subject to continuous inequality constraints. By introducing auxiliary controls and applying a time-scaling transformation, we transform this problem into an equivalent optimal control problem subject to the original constraints and additional linear and quadratic constraints, where the decision variables take values from a feasible region that is the union of some continuous sets. However, due to the new quadratic constraints, standard optimization techniques do not perform well when applied to the transformed problem directly. We introduce a novel exact penalty function to penalize constraint violations and append it to the objective function, forming a penalized objective function. This leads to a sequence of approximate optimal control problems, each of which can be solved by optimal control techniques, and consequently many optimal control software packages, such as MISER 3.4, can be used. Convergence results show that when the penalty parameter is sufficiently large, any local solution of the approximate problem is also a local solution of the original problem. We conclude this chapter with numerical results for two train control problems.

    In Chapter 6, some concluding remarks and suggestions for future research directions are made.
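
    The penalty construction described in Chapter 2 can be illustrated with a small numerical sketch. The two-variable objective, the single continuous inequality constraint over t in [0, 1], the quadrature grid, the smoothing parameter eps, and the increasing penalty schedule below are illustrative assumptions rather than examples from the thesis, and the quadratic penalty on the integral constraint violation is a simplified stand-in for the exact penalty function actually proposed.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem (not from the thesis):
#   minimize   f(x) = (x1 - 2)^2 + (x2 - 1)^2
#   subject to g(x, t) = x1 * sin(pi * t) + x2 - 1 <= 0   for all t in [0, 1].

def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def g(x, t):
    return x[0] * np.sin(np.pi * t) + x[1] - 1.0

t_grid = np.linspace(0.0, 1.0, 201)
dt = t_grid[1] - t_grid[0]

def constraint_violation(x, eps=1e-6):
    # Smooth approximation of max(g, 0), integrated over [0, 1] by quadrature.
    gv = g(x, t_grid)
    smooth_pos = 0.5 * (gv + np.sqrt(gv ** 2 + eps))
    return np.sum(smooth_pos ** 2) * dt

def penalized(x, rho):
    # Penalized objective: the constraint violation is appended to f
    # with penalty parameter rho.
    return f(x) + rho * constraint_violation(x)

# Solve a sequence of unconstrained problems with increasing penalty parameter.
x = np.zeros(2)
for rho in [1.0, 10.0, 100.0, 1000.0]:
    x = minimize(lambda z: penalized(z, rho), x, method="BFGS").x

print("approximate solution:", x)
print("worst constraint value on grid:", g(x, t_grid).max())
```

    As the penalty parameter grows, the minimizers of the unconstrained problems approach a point at which the continuous inequality constraint is satisfied over the whole grid.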

    Successive Convexification of Non-Convex Optimal Control Problems and Its Convergence Properties

    This paper presents an algorithm to solve non-convex optimal control problems, where non-convexity can arise from nonlinear dynamics and non-convex state and control constraints. Assuming the state and control constraints are already convex or have been convexified, the proposed algorithm convexifies the nonlinear dynamics via successive linearization, so that at each iteration a convex optimal control subproblem is solved. Since the dynamics are linearized and the other constraints are convex, after discretization the subproblem can be expressed as a finite-dimensional convex programming problem. Because convex optimization problems can be solved very efficiently, especially with custom solvers, the subproblem can be solved in time-critical applications such as real-time path planning for autonomous vehicles. Several safeguarding techniques, namely virtual control and trust regions, are incorporated into the algorithm and add another layer of algorithmic robustness. A convergence analysis is presented in a continuous-time setting, so that the convergence results are independent of any numerical scheme used for discretization. Numerical simulations are performed for an illustrative trajectory optimization example.
    Comment: Updates: corrected wording for LICQ. This is the full version; a brief version of this paper was published in the 2016 IEEE 55th Conference on Decision and Control (CDC). http://ieeexplore.ieee.org/document/7798816
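
    A minimal sketch of a successive convexification loop of this kind follows, assuming the cvxpy package is available. The scalar drag dynamics, horizon, control bound, trust-region radius, and virtual-control weight are illustrative choices, not taken from the paper, and re-linearization with an l1-penalized virtual control is only one simple variant of the safeguarding scheme; the paper's exact algorithm and its convergence guarantees are not reproduced here.

```python
import numpy as np
import cvxpy as cp

# Toy problem (illustrative): steer a scalar state with nonlinear drag
# dynamics  x_{k+1} = x_k + dt * (u_k - x_k**2)  from x0 to 0 while
# minimizing control effort. The nonlinear term is linearized around the
# previous iterate; a trust region and a penalized virtual control keep each
# convex subproblem feasible and close to the linearization point.

N, dt = 30, 0.1
x0 = 1.0
w_virtual, radius = 1e3, 0.5     # virtual-control weight, trust-region radius

# Initial reference trajectory (straight line to the target) and zero control.
x_ref = np.linspace(x0, 0.0, N + 1)
u_ref = np.zeros(N)

for it in range(10):
    x = cp.Variable(N + 1)
    u = cp.Variable(N)
    v = cp.Variable(N)           # virtual control absorbing linearization error

    cons = [x[0] == x0, x[N] == 0.0, cp.abs(u) <= 2.0]
    for k in range(N):
        # f(x, u) = u - x^2 linearized around (x_ref[k], u_ref[k])
        f_ref = u_ref[k] - x_ref[k] ** 2
        A = -2.0 * x_ref[k]      # df/dx at the reference point
        cons += [x[k + 1] == x[k] + dt * (f_ref
                                          + A * (x[k] - x_ref[k])
                                          + (u[k] - u_ref[k])
                                          + v[k])]
        cons += [cp.abs(x[k] - x_ref[k]) <= radius]   # trust region

    obj = cp.Minimize(cp.sum_squares(u) + w_virtual * cp.norm(v, 1))
    cp.Problem(obj, cons).solve()

    # Use the solution of the convex subproblem as the next linearization point.
    x_ref, u_ref = x.value, u.value

print("final state:", x_ref[-1])
print("max |virtual control| at last iteration:", np.max(np.abs(v.value)))
```

    When the virtual control is driven to (numerically) zero, the linearized dynamics constraint is satisfied without artificial slack, which is the usual sign that the iterates have settled onto a trajectory consistent with the nonlinear dynamics.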

    A Riemannian low-rank method for optimization over semidefinite matrices with block-diagonal constraints

    We propose a new algorithm to solve optimization problems of the form $\min f(X)$ for a smooth function $f$, under the constraints that $X$ is positive semidefinite and the diagonal blocks of $X$ are small identity matrices. Such problems often arise as the result of relaxing a rank constraint (lifting). In particular, many estimation tasks involving phases, rotations, orthonormal bases or permutations fit in this framework, and so do certain relaxations of combinatorial problems such as Max-Cut. The proposed algorithm exploits the facts that (1) such formulations admit low-rank solutions, and (2) their rank-restricted versions are smooth optimization problems on a Riemannian manifold. Combining insights from both the Riemannian and the convex geometries of the problem, we characterize when second-order critical points of the smooth problem reveal KKT points of the semidefinite problem. We compare against state-of-the-art, mature software and find that, on certain interesting problem instances, what we call the staircase method is orders of magnitude faster, more accurate, and scales better. Code is available.
    Comment: 37 pages, 3 figures
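
    The low-rank factorization underlying this approach can be sketched in plain NumPy for diagonal blocks of size one, as in the Max-Cut relaxation mentioned above. The random cost matrix, the fixed rank p, the step size, and the use of simple Riemannian gradient descent are illustrative assumptions; the staircase method itself adapts the rank and certifies second-order critical points, which this sketch does not do.

```python
import numpy as np

# Illustrative sketch of the low-rank idea on a Max-Cut-type relaxation
#   min <C, X>  s.t.  X is PSD and diag(X) = 1  (diagonal blocks of size 1),
# factored as X = Y Y^T with each row of Y constrained to the unit sphere.

rng = np.random.default_rng(0)
n, p = 50, 6
A = rng.standard_normal((n, n))
C = 0.5 * (A + A.T)                      # symmetric cost matrix

def cost(Y):
    return np.sum((Y @ Y.T) * C)         # <C, Y Y^T>

def riemannian_grad(Y):
    G = 2.0 * C @ Y                      # Euclidean gradient of <C, Y Y^T>
    # Project each row onto the tangent space of the unit sphere at that row.
    return G - np.sum(G * Y, axis=1, keepdims=True) * Y

def retract(Y):
    # Map back to the manifold by renormalizing each row.
    return Y / np.linalg.norm(Y, axis=1, keepdims=True)

Y = retract(rng.standard_normal((n, p)))
step = 1e-2
for _ in range(2000):
    Y = retract(Y - step * riemannian_grad(Y))

print("cost <C, Y Y^T>:", cost(Y))
print("Riemannian gradient norm:", np.linalg.norm(riemannian_grad(Y)))
```

    The factorization keeps only n*p variables instead of a full n-by-n semidefinite matrix, which is where the scalability reported in the abstract comes from.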