    Properly optimal elements in vector optimization with variable ordering structures

    In this paper, proper optimality concepts in vector optimization with variable ordering structures are introduced for the first time, and characterization results via scalarizations are given. New scalarizing functionals are presented and their properties are discussed. The scalarization approach suggested in the paper does not require convexity or boundedness conditions.
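
    For orientation, this literature typically builds on nonlinear (Gerstewitz/Tammer-type) scalarization; a minimal sketch follows, with the caveat that the paper's new functionals are not reproduced here. For a fixed ordering cone C and a direction k in the interior of C,

        \varphi_{C,k}(y) \;=\; \inf\{\, t \in \mathbb{R} : y \in t\,k - C \,\}.

    Under a variable ordering structure, the fixed cone C is replaced by a cone-valued map C(\cdot); how exactly the paper modifies the functional for that setting is not shown here.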

    A Generalization of a Theorem of Arrow, Barankin and Blackwell to a Nonconvex Case

    The paper presents a generalization of a known density theorem of Arrow, Barankin, and Blackwell for properly efficient points, defined as support points of sets with respect to monotonically increasing sublinear functions. This result is shown to hold for nonconvex sets of a reflexive Banach space partially ordered by a Bishop-Phelps cone.
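
    For background, a sketch of the two objects named above, using the standard definitions (which may differ in detail from the paper's hypotheses). The classical Arrow-Barankin-Blackwell theorem states that for a compact convex set A \subset \mathbb{R}^n ordered by the nonnegative orthant, the points of A maximizing some strictly positive linear functional are dense in the set of Pareto-maximal points of A. A Bishop-Phelps cone in a normed space X is

        K(\varphi, \alpha) \;=\; \{\, x \in X : \alpha \lVert x \rVert \le \varphi(x) \,\}, \qquad \varphi \in X^{*},\ \lVert \varphi \rVert = 1,\ \alpha > 0.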


    On weak subdifferentials, directional derivatives, and radial epiderivatives for nonconvex functions

    In this paper we study relations between the directional derivatives, the weak subdifferentials, and the radial epiderivatives of nonconvex real-valued functions. We generalize to the nonconvex case the well-known theorem representing the directional derivative of a convex function as the pointwise maximum over its subgradients. Using the notion of the weak subgradient, we establish conditions that guarantee equality of the directional derivative to the pointwise supremum over the weak subgradients of a nonconvex real-valued function. A similar representation is established for the radial epiderivative of a nonconvex function. Finally, the equality between the directional derivatives and the radial epiderivatives of a nonconvex function is proved, and an analogue of the well-known theorem on necessary and sufficient conditions for optimality is derived without any convexity assumptions.
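
    To fix notation, a sketch of the objects involved, under the standard definitions. The directional derivative and the weak subdifferential of f at x are

        f'(x; d) \;=\; \lim_{t \downarrow 0} \frac{f(x + t d) - f(x)}{t}, \qquad
        \partial^{w} f(x) \;=\; \{\, (v, c) : f(y) \ge f(x) + \langle v, y - x \rangle - c \lVert y - x \rVert \ \ \forall y \,\}.

    The convex-case theorem referred to above reads f'(x; d) = \max\{\langle v, d \rangle : v \in \partial f(x)\}; the nonconvex analogue studied here is, schematically,

        f'(x; d) \;=\; \sup\{\, \langle v, d \rangle - c \lVert d \rVert : (v, c) \in \partial^{w} f(x) \,\},

    with the exact hypotheses under which this identity holds established in the paper and not reproduced here.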

    The modified subgradient algorithm based on feasible values

    In this article, we continue to study the modified subgradient (MSG) algorithm previously suggested by Gasimov for solving sharp augmented Lagrangian dual problems. The most important features of this algorithm are that it guarantees a global optimum for a wide class of non-convex optimization problems, that it generates a strictly increasing sequence of dual values (a property not shared by other subgradient methods), and that its convergence is guaranteed. The main drawbacks of the MSG algorithm, which are typical of many subgradient algorithms, are that it uses an unconstrained global minimum of the augmented Lagrangian function and requires an approximate upper bound on the optimal value of the initial problem to update the stepsize parameters. In this study we introduce a new algorithm based on so-called feasible values and give convergence theorems. The new algorithm does not require the optimal value to be known at the outset; it seeks it iteratively, beginning with an arbitrary number. Nor is it necessary to find a global minimum of the augmented Lagrangian to update the stepsize parameters in the new algorithm. A collection of test problems is used to demonstrate the performance of the new algorithm. © 2009 Taylor & Francis
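
    A schematic sketch of one dual iteration in the spirit described above: minimize the sharp augmented Lagrangian L(x,u,c) = f(x) + c||h(x)|| - <u, h(x)> in x, then update (u, c) with a Polyak-type step. The toy problem, the inner solver, the fixed upper bound, and the stepsize constants are illustrative assumptions, not the paper's exact rules; the paper's point is precisely to replace the fixed bound by iteratively generated feasible values.

        import numpy as np
        from scipy.optimize import minimize

        def f(x):                                   # toy objective
            return x[0]**2 + x[1]**2

        def h(x):                                   # toy equality constraint, h(x) = 0
            return np.array([x[0] + x[1] - 1.0])

        def sharp_L(x, u, c):
            # sharp augmented Lagrangian: f(x) + c*||h(x)|| - <u, h(x)>
            hx = h(x)
            return f(x) + c * np.linalg.norm(hx) - u @ hx

        u, c = np.zeros(1), 1.0                     # dual pair (u, c)
        upper = 10.0                                # assumed upper bound on the optimal value
        xk = np.zeros(2)
        for k in range(50):
            # inner step: (approximately) minimize the sharp Lagrangian in x
            res = minimize(sharp_L, xk, args=(u, c), method='Nelder-Mead')
            xk, hk = res.x, h(res.x)
            Hk = sharp_L(xk, u, c)                  # current dual value H(u, c)
            if np.linalg.norm(hk) < 1e-8:           # feasible minimizer: stop
                break
            s = 0.5 * (upper - Hk) / np.linalg.norm(hk)**2   # Polyak-type stepsize
            u = u - s * hk                          # multiplier update
            c = c + 1.5 * s * np.linalg.norm(hk)    # penalty update (c grows faster than ||u||)
        print(xk, Hk)

    The dual values Hk increase monotonically toward the optimal value as long as the stepsize stays below the duality gap, which is what the stepsize rule above is driving at.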

    A sharp augmented Lagrangian-based method in constrained non-convex optimization

    In this paper, a novel sharp augmented Lagrangian-based global optimization method is developed for solving constrained non-convex optimization problems. The algorithm consists of outer and inner loops. At each inner iteration, the discrete gradient method is applied to minimize the sharp augmented Lagrangian function. Depending on the solution found, the algorithm either stops, updates the dual variables in the inner loop, or updates the upper or lower bounds by passing to the outer loop. Convergence results for the proposed method are presented. The performance of the method is demonstrated on a wide range of smooth and non-smooth constrained nonlinear optimization test problems from the literature.
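
    A structural sketch of the outer/inner loop described above, with the inner minimizer abstracted behind a hypothetical inner_min argument (the paper uses the discrete gradient method there). The function name, bound-update logic, stepsize rule, and iteration limits are all illustrative assumptions, not the paper's method.

        def sharp_al_global(f, h, inner_min, lower, upper, tol=1e-6):
            # Outer loop: shrink [lower, upper], an interval assumed to contain
            # the optimal value. Inner loop: dual updates on the sharp Lagrangian
            # L(x, u, c) = f(x) + c*|h(x)| - u*h(x), driven toward a trial value.
            u, c = 0.0, 1.0
            for _ in range(30):                          # outer iterations
                if upper - lower <= tol:
                    break
                target = 0.5 * (lower + upper)           # trial value between the bounds
                for _ in range(100):                     # inner iterations
                    x = inner_min(lambda y: f(y) + c * abs(h(y)) - u * h(y))
                    dual = f(x) + c * abs(h(x)) - u * h(x)   # dual value at the minimizer
                    if abs(h(x)) < tol:                  # feasible point found:
                        upper = min(upper, f(x))         #   its value is an upper bound
                        break
                    if dual >= target:                   # weak duality: optimum >= dual,
                        lower = target                   #   so the trial value is a lower bound
                        break
                    s = (target - dual) / h(x)**2        # Polyak-type step toward the target
                    u, c = u - s * h(x), c + 1.5 * s * abs(h(x))
            return lower, upper

    Here inner_min stands for any routine returning an (approximate) unconstrained minimizer of the given function, e.g. a wrapper around scipy.optimize.minimize with a fixed starting point.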

    An incremental piecewise linear classifier based on polyhedral conic separation

    In this paper, a piecewise linear classifier based on polyhedral conic separation is developed. This classifier builds nonlinear boundaries between classes using polyhedral conic functions. Since the number of polyhedral conic functions separating classes is not known a priori, an incremental approach is proposed to build separating functions. These functions are found by minimizing an error function which is nonsmooth and nonconvex. A special procedure is proposed to generate starting points to minimize the error function and this procedure is based on the incremental approach. The discrete gradient method, which is a derivative-free method for nonsmooth optimization, is applied to minimize the error function starting from those points. The proposed classifier is applied to solve classification problems on 12 publicly available data sets and compared with some mainstream and piecewise linear classifiers. © 2014, The Author(s)
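
    A minimal sketch of how polyhedral conic functions (PCFs) of the form g(x) = <w, x - a> + xi*||x - a||_1 - gamma yield a piecewise linear classifier: a point is assigned to the target class when some PCF in the pool is non-positive at it. The parameter values below are hand-picked for illustration, not learned by the paper's nonsmooth, nonconvex error minimization.

        import numpy as np

        def pcf(x, w, xi, gamma, a):
            # polyhedral conic function with vertex a; its sublevel sets are polyhedra
            d = x - a
            return w @ d + xi * np.sum(np.abs(d)) - gamma

        def classify(x, pcfs):
            # class +1 if any conic piece covers x, else -1; the min over the
            # pieces produces a piecewise linear decision boundary
            return 1 if min(pcf(x, *p) for p in pcfs) <= 0 else -1

        # two hand-picked PCFs covering two clusters of a toy positive class
        pcfs = [(np.zeros(2), 1.0, 1.0, np.array([0.0, 0.0])),
                (np.zeros(2), 1.0, 1.0, np.array([3.0, 3.0]))]
        print(classify(np.array([0.2, -0.3]), pcfs))   # -> 1  (near the first vertex)
        print(classify(np.array([1.5, 1.5]), pcfs))    # -> -1 (midway between the clusters)

    The incremental approach described above would grow this pool one PCF at a time, fitting each new piece to the training points the current pool still misclassifies.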

    Preface: Special issue of JOGO MEC EurOPT 2010-Izmir


    An Overview of Advances in Combinatorial Optimization Related Topics

    This volume of Optimization is devoted to the ECCO XXV Conference, held at the Green Palace Hotel in Antalya, Turkey, from April 26 to April 28, 2012. The conference attracted nearly 200 delegates from across the OR community in European countries for three days of lively discussion and debate. It focused on combinatorial optimization and on experiences in solving real-world problems, discussing recent advances in theory and applications and reporting on the development and implementation of appropriate models and efficient solution methods for combinatorial optimization problems. The conference provided a forum for researchers and practitioners to promote their work on combinatorial optimization to the broader scientific community, to identify challenging research problems for the field as well as promising research developments in both theory and applications, and to promote interaction with researchers in related areas.