991 research outputs found

    Generalized Gradient Method for Dynamic Linear Programming

    A general scheme for applying nondifferentiable optimization (NDO) methods to dynamic linear programming (DLP) problems is considered.

    A Non-Monotone Conjugate Subgradient Type Method for Minimization of Convex Functions

    We suggest a conjugate subgradient type method without any line search for the minimization of convex nondifferentiable functions. Unlike the usual methods of this class, it does not require a monotone decrease of the goal function and substantially reduces the implementation cost of each iteration. At the same time, its step-size procedure takes into account the behavior of the method along the iteration points. Preliminary results of computational experiments confirm the efficiency of the proposed modification. Comment: 11 pages
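    To make the setting concrete, the following is a minimal sketch of a plain subgradient method with a precomputed diminishing step size and no line search, which tracks the best value seen so far rather than enforcing monotone decrease; the objective, the subgradient oracle, and the step-size rule are illustrative assumptions, not the paper's specific conjugate variant.

```python
import numpy as np

def subgradient_method(f, subgrad, x0, n_iter=500):
    """Plain subgradient method with diminishing steps 1/(k+1).

    No line search is performed and f is not forced to decrease
    monotonically; the best point encountered is tracked instead.
    """
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(n_iter):
        g = subgrad(x)                      # any subgradient of f at x
        step = 1.0 / (k + 1)                # diminishing, non-summable step size
        x = x - step * g / max(np.linalg.norm(g), 1e-12)
        fx = f(x)
        if fx < best_f:                     # record value; f itself may oscillate
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Example: minimize the nondifferentiable function f(x) = ||x||_1
x_best, f_best = subgradient_method(
    f=lambda x: np.abs(x).sum(),
    subgrad=lambda x: np.sign(x),
    x0=np.array([2.0, -3.0, 1.5]),
)
```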

    Nondifferentiable Optimization Promotes Health Care

    An example of a health resource allocation model, solved previously by piecewise linear approximation with data from Devon, U.K., is solved using nondifferentiable optimization (NDO). The example illustrates a new application for NDO, and the novel approach makes the workings of the model clearer.

    A Unified Successive Pseudo-Convex Approximation Framework

    In this paper, we propose a successive pseudo-convex approximation algorithm to efficiently compute stationary points for a large class of possibly nonconvex optimization problems. The stationary points are obtained by solving a sequence of successively refined approximate problems, each of which is much easier to solve than the original problem. To achieve convergence, the approximate problem only needs to exhibit a weak form of convexity, namely, pseudo-convexity. We show that the proposed framework not only includes as special cases a number of existing methods, for example, the gradient method and the Jacobi algorithm, but also leads to new algorithms which enjoy easier implementation and faster convergence. We also propose a novel line search method for nondifferentiable optimization problems, which is carried out over a properly constructed differentiable function, with the benefit of a simplified implementation compared to state-of-the-art line search techniques that operate directly on the original nondifferentiable objective function. The advantages of the proposed algorithm are shown, both theoretically and numerically, by several example applications, namely, MIMO broadcast channel capacity computation, energy efficiency maximization in massive MIMO systems, and LASSO in sparse signal recovery. Comment: submitted to IEEE Transactions on Signal Processing; original title: A Novel Iterative Convex Approximation Method
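    For orientation, here is a minimal sketch of the successive-approximation idea in its simplest special case: at each iterate a convex quadratic surrogate is minimized in closed form, which recovers a damped gradient-type step. The surrogate, the fixed damping factor `gamma`, and the test function are illustrative assumptions and do not reproduce the paper's pseudo-convex approximations or its line search over a differentiable function.

```python
import numpy as np

def successive_approximation(grad_f, x0, tau=0.2, gamma=0.5, n_iter=200):
    """At iterate x_k, minimize the convex surrogate
        f(x_k) + grad_f(x_k)^T (x - x_k) + ||x - x_k||^2 / (2 * tau),
    whose minimizer is x_k - tau * grad_f(x_k), then move a fraction
    gamma of the way toward it (no exact line search)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x_hat = x - tau * grad_f(x)      # exact minimizer of the surrogate
        x = x + gamma * (x_hat - x)      # damped update toward the surrogate solution
    return x

# Example: a smooth nonconvex test function f(x) = sum_i (x_i^2 + sin(x_i))
grad_f = lambda x: 2.0 * x + np.cos(x)
x_stationary = successive_approximation(grad_f, x0=np.array([3.0, -2.0]))
```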

    Optimization of shallow arches against instability using sensitivity derivatives

    The author discusses the problem of optimizing shallow frame structures, which involve a coupling of axial and bending responses. A shallow arch of a given shape and given weight is optimized such that its limit point load is maximized. The cross-sectional area A(x) and the moment of inertia I(x) of the arch obey the relationship I(x) = ρ A(x)^n, where n = 1, 2, or 3 and ρ is a specified constant. Analysis of the arch for its limit point involves a geometrically nonlinear analysis, which is performed using a corotational formulation. The optimization is carried out using a second-order projected Lagrangian algorithm, and the sensitivity derivatives of the critical load parameter with respect to the areas of the finite elements of the arch are calculated using implicit differentiation. Results are presented for an arch of a specified rise-to-span ratio under two different loadings, and the limitations of the approach for intermediate-rise arches are addressed.
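    In symbols, the design problem described above can be stated in roughly the following generic form; expressing the fixed weight through a prescribed material volume V_0 is an assumption made here for illustration:

```latex
\max_{A(\cdot)} \; \lambda_{\mathrm{cr}}(A)
\quad \text{s.t.} \quad
\int_0^{L} A(x)\,dx = V_0,
\qquad
I(x) = \rho\, A(x)^{n}, \quad n \in \{1,2,3\},
```

    where \lambda_{\mathrm{cr}} is the limit-point load parameter obtained from the geometrically nonlinear analysis.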

    Nondifferentiable Optimization with Epsilon Subgradient Methods

    The development of optimization methods is of significant importance for systems analysis. Optimization methods provide working tools for quantitative decision making based on a correct specification of the problem and appropriately chosen solution methods. Not all problems of systems analysis are optimization problems, of course, but in any systems problem optimization methods are useful and important tools. The power of these methods and their ability to handle different problems make it possible to analyze and construct very complicated systems. Economic planning, for instance, would be much more limited without linear programming techniques, which are very specific optimization methods. LP methods have had a great impact on the theory and practice of systems analysis, not only as a computing aid but also in providing a general model or structure for systems problems. LP techniques, however, are not the only possible optimization methods. The consideration of uncertainty, partial knowledge of the system's structure and characteristics, conflicting goals, and unknown exogenous factors leads to more sophisticated models and, consequently, more sophisticated methods to work with these models. Nondifferentiable optimization methods seem better suited to handle these features than other techniques at the present time. The theory of nondifferentiable optimization studies extremum problems of complex structure involving interactions of subproblems, stochastic factors, multi-stage decisions, and other difficulties. This publication covers one particular, but unfortunately common, situation in which estimating the outcome of a given decision requires the solution of a difficult auxiliary (internal) extremum problem. Solving this auxiliary problem may be very time-consuming and may therefore hinder a wide analysis of different decisions. The aim of the author is to develop methods of optimal decision making that avoid direct comparison of different decisions and use only information that is easily accessible from the computational point of view.
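    The flavor of such methods can be illustrated with a small, hypothetical example: the outer objective is evaluated through an inner maximization that is solved only approximately, and the resulting inexact subgradient (an epsilon-subgradient) is still used to update the decision. The functions and the tolerance schedule below are illustrative assumptions, not the methods developed in the publication.

```python
import numpy as np

def eps_subgradient_method(inexact_subgrad, x0, n_iter=200):
    """Epsilon-subgradient descent: each step uses a subgradient obtained from an
    approximately solved inner problem, so it is only an eps-subgradient of the
    true outer objective, with the inexactness shrinking over the iterations."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        g = inexact_subgrad(x, tol=1.0 / (k + 1))   # looser tolerance early on
        x = x - (1.0 / (k + 1)) * g
    return x

# Hypothetical outer objective F(x) = max_{|y| <= 1} y * (x1 + x2) = |x1 + x2|.
# The inner maximization is solved only to tolerance tol, and the resulting
# y_approx yields the eps-subgradient (y_approx, y_approx) of F at x.
def inexact_subgrad(x, tol):
    s = x[0] + x[1]
    y_exact = np.sign(s) if s != 0 else 0.0
    y_approx = (1.0 - tol) * y_exact            # inner problem solved inexactly
    return np.array([y_approx, y_approx])

x_opt = eps_subgradient_method(inexact_subgrad, x0=np.array([2.0, -0.5]))
```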

    Methods of Nondifferentiable and Stochastic Optimization and Their Applications

    Optimization methods are of great practical importance in systems analysis. They allow us to find the best behavior of a system, determine its optimal structure, compute the optimal parameters of the control system, etc. The development of nondifferentiable optimization, as well as of differentiable and nondifferentiable stochastic optimization, allows us to state and effectively solve new, complex optimization problems that were impossible to solve by classical optimization methods. The main purpose of this article is to review briefly some important applications of nondifferentiable and stochastic optimization and to characterize the principal directions of research. Clearly, the interests of the author have influenced the content of this article.

    Asymptotic Behavior of Statistical Estimators and of Optimal Solutions of Stochastic Optimization Problems, II

    This paper supplements the results of a new statistical approach to the problem of incomplete information in stochastic programming. The tools of nondifferentiable optimization used here help to prove the consistency and asymptotic normality of (approximate) optimal solutions without unnatural smoothness assumptions. This allows the theory to take into account the presence of constraints.

    Total variation regularization of multi-material topology optimization

    This work is concerned with the determination of the diffusion coefficient from distributed data of the state. This problem is related to homogenization theory on the one hand and to regularization theory on the other hand. An approach is proposed which involves total variation regularization combined with a suitably chosen cost functional that promotes the diffusion coefficient assuming prespecified values at each point of the domain. The main difficulty lies in the delicate functional-analytic structure of the resulting nondifferentiable optimization problem with pointwise constraints for functions of bounded variation, which makes the derivation of useful pointwise optimality conditions challenging. To cope with this difficulty, a novel reparametrization technique is introduced. Numerical examples using a regularized semismooth Newton method illustrate the structure of the obtained diffusion coefficient.
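    A generic formulation of this kind of problem reads roughly as follows; the specific multi-well term promoting the prespecified values a_1, ..., a_m is an illustrative assumption and differs in detail from the cost functional analyzed in the paper:

```latex
\min_{a \in BV(\Omega),\; \underline{a} \le a \le \overline{a}} \;
\frac{1}{2}\,\bigl\| u(a) - u_{\mathrm{obs}} \bigr\|_{L^2(\Omega)}^{2}
\; + \; \alpha \, \mathrm{TV}(a)
\; + \; \beta \int_{\Omega} \prod_{i=1}^{m} \bigl( a(x) - a_i \bigr)^{2} \, dx,
```

    where u(a) is the state solving the diffusion equation with coefficient a, TV(a) is the total variation of a, and the last term drives a(x) toward one of the prespecified values a_1, ..., a_m at each point of the domain.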