
    A sequential quadratic programming algorithm using an incomplete solution of the subproblem

    We analyze sequential quadratic programming (SQP) methods for solving nonlinear constrained optimization problems that are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically, we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set. This research was supported by National Science Foundation grant DDMo9204208, Department of Energy grant DE-FG03-92ER25117, Office of Naval Research grant N00014-90-J-1242, and the Bank of Spain.
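
    For illustration only, the following minimal Python/NumPy sketch (not the algorithm of the paper) shows the flavor of an SQP iteration in which the QP subproblem is solved only incompletely and the step is accepted via an augmented Lagrangian merit function. The toy problem, the identity Hessian model, the Krylov iteration budget, and the penalty parameter rho are all assumptions made for the sketch.

        import numpy as np
        from scipy.sparse.linalg import gmres

        # Toy equality-constrained problem: minimize f(x) subject to c(x) = 0.
        f = lambda x: (x[0] - 1.0)**2 + 100.0*(x[1] - x[0]**2)**2
        g = lambda x: np.array([2*(x[0] - 1) - 400*x[0]*(x[1] - x[0]**2),
                                200*(x[1] - x[0]**2)])
        c = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0])
        J = lambda x: np.array([[2*x[0], 2*x[1]]])

        def merit(x, lam, rho):
            # augmented Lagrangian merit function used to accept or reject the step
            return f(x) - lam @ c(x) + 0.5*rho*(c(x) @ c(x))

        x, lam, rho = np.array([0.5, 0.5]), np.zeros(1), 10.0
        H = np.eye(2)                                    # crude Hessian model (assumption)
        for it in range(100):
            A, rhs = J(x), np.concatenate([-g(x), -c(x)])
            # KKT system of the QP subproblem: [[H, A'], [A, 0]] [p; y] = [-g; -c]
            K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
            # "incomplete" solution: cap the Krylov iterations instead of solving exactly
            # (the cap is not binding on this tiny system, but that is the point at scale)
            z, _ = gmres(K, rhs, restart=10, maxiter=1)
            p, lam_new = z[:2], z[2:]
            # backtracking on the merit function with a crude sufficient-decrease test
            t = 1.0
            while merit(x + t*p, lam_new, rho) > merit(x, lam_new, rho) - 1e-4*t*(p @ p):
                t *= 0.5
                if t < 1e-8:
                    break
            x, lam = x + t*p, lam_new
            if np.linalg.norm(g(x) - J(x).T @ lam) + np.linalg.norm(c(x)) < 1e-6:
                break
        print(it, x, c(x))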

    GMRES-Accelerated ADMM for Quadratic Objectives

    We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined within a Krylov subspace, and the General Minimum RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a $\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in $O(\kappa^{1/4})$ iterations, an order-of-magnitude reduction, despite a worst-case bound of $O(\sqrt{\kappa})$ iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like $O(1/k^{2})$, instead of $O(1/k)$, where $k$ is the iteration index. Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on Optimization (SIOPT).
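
    The linearity of the ADMM updates is what makes Krylov acceleration possible: one ADMM sweep is an affine map v -> Mv + d, so the fixed point solves (I - M)v = d, which GMRES can attack directly. The sketch below is an illustration under its own assumptions, not the paper's SeDuMi implementation: it applies SciPy's GMRES to the ADMM fixed-point equation for a random equality-constrained strongly convex QP, with the splitting, penalty rho, and problem data all assumed for the example.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        rng = np.random.default_rng(0)
        n, m, rho = 40, 10, 1.0
        B = rng.standard_normal((n, n)); Q = B @ B.T + np.eye(n)   # strongly convex objective
        q = rng.standard_normal(n)
        A = rng.standard_normal((m, n)); b = rng.standard_normal(m)
        Qinv = np.linalg.inv(Q + rho*np.eye(n))
        AAT_inv = np.linalg.inv(A @ A.T)

        def admm_sweep(v):
            # one ADMM iteration on the stacked variable v = (z, u); the map is affine in v
            z, u = v[:n], v[n:]
            x = Qinv @ (rho*(z - u) - q)                 # x-update (quadratic objective)
            w = x + u
            z_new = w - A.T @ (AAT_inv @ (A @ w - b))    # z-update: project onto {Az = b}
            u_new = u + x - z_new                        # scaled dual update
            return np.concatenate([z_new, u_new])

        d = admm_sweep(np.zeros(2*n))                    # constant part of the affine map
        I_minus_M = LinearOperator((2*n, 2*n), matvec=lambda v: v - (admm_sweep(v) - d))
        v_star, info = gmres(I_minus_M, d, atol=1e-10)   # GMRES jumps to the ADMM fixed point
        z_star, u_star = v_star[:n], v_star[n:]
        x_star = Qinv @ (rho*(z_star - u_star) - q)
        print(info, np.linalg.norm(A @ x_star - b))      # constraint residual at the solution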

    First-order sequential convex programming using approximate diagonal QP subproblems

    Optimization algorithms based on convex separable approximations for optimal structural design often use reciprocal-like approximations in a dual setting; CONLIN and the method of moving asymptotes (MMA) are well-known examples of such sequential convex programming (SCP) algorithms. We have previously demonstrated that replacing these nonlinear (reciprocal) approximations by their own second-order Taylor series expansions provides a powerful new algorithmic option within the SCP class of algorithms. This note shows that the quadratic treatment of the original nonlinear approximations also enables the restatement of the SCP algorithm as a series of Lagrange-Newton QP subproblems. This results in a diagonal trust-region SQP type of algorithm, in which the second-order diagonal terms are estimated from the nonlinear (reciprocal) intervening variables, rather than from historic information using an exact or a quasi-Newton Hessian approach. The QP formulation seems particularly attractive for problems with far more constraints than variables (when pure dual methods are at a disadvantage), or when both the number of design variables and the number of (active) constraints are very large.
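
    A hedged sketch of the diagonal QP idea follows: each response's reciprocal approximation is replaced by its own second-order Taylor expansion, which gives an exact gradient and a diagonal curvature term c_i = -2 (dg/dx_i)/x_i at the current iterate. The toy weight/compliance-like problem, the move limits standing in for the trust region, and the use of SLSQP merely to solve each diagonal subproblem are assumptions of the sketch, not the authors' implementation.

        import numpy as np
        from scipy.optimize import minimize

        def diagonal_qp_model(fun, grad, xk):
            # second-order Taylor expansion of the reciprocal approximation about xk:
            # exact gradient, diagonal curvature c_i = -2*g_i/x_i (clipped to stay convex)
            f0, g0 = fun(xk), grad(xk)
            c = np.maximum(-2.0*g0/xk, 0.0)
            return lambda x: f0 + g0 @ (x - xk) + 0.5*np.sum(c*(x - xk)**2)

        # toy sizing problem: minimize "weight" subject to a compliance-like constraint
        w   = lambda x: np.sum(x);             dw  = lambda x: np.ones_like(x)
        gc  = lambda x: np.sum(1.0/x) - 4.0;   dgc = lambda x: -1.0/x**2
        xk  = np.full(3, 1.0)
        for it in range(10):
            f_model, g_model = diagonal_qp_model(w, dw, xk), diagonal_qp_model(gc, dgc, xk)
            bounds = list(zip(0.7*xk, 1.3*xk))           # move limits play the trust-region role
            res = minimize(f_model, xk, method="SLSQP", bounds=bounds,
                           constraints=[{"type": "ineq", "fun": lambda x: -g_model(x)}])
            if np.linalg.norm(res.x - xk) < 1e-6:
                break
            xk = res.x
        print(xk, gc(xk))                                # approaches x_i = 0.75 with g_c near 0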

    Mobile Unmanned Aerial Vehicles (UAVs) for Energy-Efficient Internet of Things Communications

    In this paper, the efficient deployment and mobility of multiple unmanned aerial vehicles (UAVs), used as aerial base stations to collect data from ground Internet of Things (IoT) devices, is investigated. In particular, to enable reliable uplink communications for IoT devices with a minimum total transmit power, a novel framework is proposed for jointly optimizing the three-dimensional (3D) placement and mobility of the UAVs, device-UAV association, and uplink power control. First, given the locations of active IoT devices at each time instant, the optimal UAVs' locations and associations are determined. Next, to dynamically serve the IoT devices in a time-varying network, the optimal mobility patterns of the UAVs are analyzed. To this end, based on the activation process of the IoT devices, the time instants at which the UAVs must update their locations are derived. Moreover, the optimal 3D trajectory of each UAV is obtained in a way that the total energy used for the mobility of the UAVs is minimized while serving the IoT devices. Simulation results show that, using the proposed approach, the total transmit power of the IoT devices is reduced by 45% compared to a case in which stationary aerial base stations are deployed. In addition, the proposed approach can yield a maximum of 28% enhanced system reliability compared to the stationary case. The results also reveal an inherent tradeoff between the number of update times, the mobility of the UAVs, and the transmit power of the IoT devices. In essence, a higher number of updates can lead to lower transmit powers for the IoT devices at the cost of increased mobility for the UAVs. Comment: Accepted in IEEE Transactions on Wireless Communications, Sept. 201
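
    The joint placement, association, and power-control problem can be made concrete with a deliberately simplified Python sketch. It is not the framework of the paper: it assumes a fixed UAV altitude, a k-means-style placement heuristic, nearest-UAV association, and uplink power obtained by inverting free-space path loss to reach an assumed received-power target; all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        devices = rng.uniform(0.0, 1000.0, size=(200, 2))   # active IoT devices on the ground (m)
        n_uav, height = 3, 100.0                             # assumed fleet size and fixed altitude (m)

        # k-means-style placement: each UAV hovers above the centroid of its associated devices
        uavs = devices[rng.choice(len(devices), n_uav, replace=False)].copy()
        for _ in range(50):
            d2 = ((devices[:, None, :] - uavs[None, :, :])**2).sum(-1)
            assoc = d2.argmin(axis=1)                        # device-UAV association
            for j in range(n_uav):
                if np.any(assoc == j):
                    uavs[j] = devices[assoc == j].mean(axis=0)

        # uplink power control: invert free-space path loss to hit an assumed received-power target
        fc, c0, p_rx_dbm = 2.4e9, 3e8, -80.0
        dist = np.sqrt(((devices - uavs[assoc])**2).sum(-1) + height**2)
        fspl_db = 20*np.log10(dist) + 20*np.log10(fc) + 20*np.log10(4*np.pi/c0)
        p_tx_dbm = p_rx_dbm + fspl_db
        print("total uplink transmit power (mW):", np.sum(10**(p_tx_dbm/10)))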

    On the conditional acceptance of iterates in SAO algorithms based on convex separable approximations

    We reflect on the convergence and termination of optimization algorithms based on convex and separable approximations using two recently proposed strategies, namely a trust region with filtered acceptance of the iterates, and conservatism. We then propose a new strategy for convergence and termination, denoted filtered conservatism, in which the acceptance or rejection of an iterate is determined using the nonlinear acceptance filter. However, if an iterate is rejected, we increase the conservatism of every unconservative approximation, rather than reducing the trust region. Filtered conservatism aims to combine the salient features of trust-region strategies with nonlinear acceptance filters on the one hand, and conservatism on the other. In filtered conservatism, the nonlinear acceptance filter is used to decide whether an iterate is accepted or rejected. This allows for the acceptance of infeasible iterates, which would not be accepted in a method based on conservatism. If, however, an iterate is rejected, the trust region need not be decreased; it may be kept constant. Convergence is then effected by increasing the conservatism of only the unconservative approximations in the (large, constant) trust region, until the iterate becomes acceptable to the filter. Numerical results corroborate the accuracy and robustness of the method.
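
    The acceptance logic can be sketched compactly. The Python fragment below is an illustration under assumed names, not the authors' code: filter_acceptable implements one common form of the sloping acceptance filter on (constraint violation h, objective f) pairs, and filtered_conservatism_step keeps the trust region fixed on rejection and instead doubles the conservatism parameter rho of every approximation found to be unconservative (approximated value below the true response) at the trial point.

        import numpy as np

        def filter_acceptable(filt, h, f, gamma=1e-5):
            # (h, f) is acceptable if no stored pair dominates it, with a small sloping margin
            return all(h <= hi - gamma*hi or f <= fi - gamma*hi for hi, fi in filt)

        def filtered_conservatism_step(filt, h_new, f_new, rho, approx_vals, true_vals,
                                       factor=2.0):
            # accept: add the new (h, f) pair to the filter; reject: keep the trust region
            # and increase conservatism of only the unconservative approximations
            if filter_acceptable(filt, h_new, f_new):
                filt.append((h_new, f_new))
                return True, rho
            unconservative = approx_vals < true_vals
            return False, np.where(unconservative, factor*rho, rho)

        # tiny demonstration with made-up numbers
        filt = [(1.0, 10.0)]                             # stored (violation, objective) pairs
        ok, rho = filtered_conservatism_step(filt, h_new=2.0, f_new=10.5, rho=np.ones(3),
                                             approx_vals=np.array([1.0, 2.0, 3.0]),
                                             true_vals=np.array([1.5, 2.0, 3.5]))
        print(ok, rho)                                   # rejected; rho doubled where approx < true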