1,432 research outputs found

    A nesting conformation of the 5′-bromo-1′,3′-xylyl-18-crown-5·tert-butylammonium hexafluorophosphate complex; the correlation of the structures of crown ether complexes in the solid state and in solution

    Single-crystal X-ray analysis of the 5′-bromo-1′,3′-xylyl-18-crown-5·tert-butylammonium hexafluorophosphate complex shows that the complex is of the "nesting" type, in which the cation and the aryl group are on the same face of the macroring, and that the macroring has a (ag+a)(ag−a)(ag+a)(ag−a)(ag+a)(ag−a) conformation.

    On the Complexity of Optimization over the Standard Simplex

    We review complexity results for minimizing polynomials over the standard simplex and the unit hypercube. In addition, we show that there exists a polynomial-time approximation scheme (PTAS) for minimizing Lipschitz continuous functions and functions with uniformly bounded Hessians over the standard simplex. This extends an earlier result by De Klerk, Laurent and Parrilo [A PTAS for the minimization of polynomials of fixed degree over the simplex, Theoretical Computer Science, to appear]. Keywords: global optimization; standard simplex; PTAS; multivariate Bernstein approximation; semidefinite programming.
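
    The approximation schemes referenced here rest on evaluating the objective on the regular grid of the simplex with denominator k and keeping the best grid point. The sketch below (Python; function names and the sample objective are illustrative, not taken from the paper) shows that grid-search idea for a generic objective.

```python
# Sketch of the grid-search idea behind simplex PTAS results: evaluate the
# objective on the regular grid Delta(n, k) = { x in the simplex : k*x integer }
# and return the best grid point. For polynomials of fixed degree (and for
# Lipschitz functions), the grid minimum approaches the true minimum as k grows.
# Names and the example objective are illustrative, not from the paper.
from itertools import combinations

def simplex_grid(n, k):
    """Yield all points of the regular grid Delta(n, k) on the standard simplex."""
    # Compositions of k into n nonnegative parts via "stars and bars".
    for bars in combinations(range(k + n - 1), n - 1):
        parts, prev = [], -1
        for b in bars:
            parts.append(b - prev - 1)
            prev = b
        parts.append(k + n - 2 - prev)
        yield tuple(p / k for p in parts)

def grid_minimize(f, n, k):
    """Best grid point and value: an approximation of min f over the simplex."""
    return min(((f(x), x) for x in simplex_grid(n, k)), key=lambda t: t[0])

if __name__ == "__main__":
    f = lambda x: x[0] ** 2 + 2 * x[1] ** 2 - x[0] * x[2]   # made-up test objective
    val, point = grid_minimize(f, n=3, k=50)
    print("approximate minimum", val, "attained at", point)
```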

    Discrete Least-norm Approximation by Nonnegative (Trigonometric) Polynomials and Rational Functions

    Polynomials, trigonometric polynomials, and rational functions are widely used for the discrete approximation of functions or simulation models. Often it is known beforehand that the underlying unknown function has certain properties, e.g. is nonnegative or increasing on a certain region. However, the approximation may not inherit these properties automatically. We present some methodology (using semidefinite programming and results from real algebraic geometry) for least-norm approximation by polynomials, trigonometric polynomials, and rational functions that preserve nonnegativity. Keywords: (trigonometric) polynomials; rational functions; semidefinite programming; regression; (Chebyshev) approximation.
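
    As a hedged illustration of the semidefinite-programming approach (not the paper's own code or exact setup), the sketch below fits a polynomial that is nonnegative on the whole real line by writing it as a sum of squares, p(x) = m(x)^T Q m(x) with Q positive semidefinite, and minimizing the least-squares error on data. cvxpy is assumed as the modeling layer and the data are made up; nonnegativity on a subregion would need the weighted sum-of-squares certificates from real algebraic geometry mentioned in the abstract.

```python
# Minimal sketch: least-squares polynomial fit with a built-in nonnegativity
# certificate. Parameterize p(x) = m(x)^T Q m(x), m(x) = (1, x, ..., x^d),
# with Q positive semidefinite, so p >= 0 on all of R by construction.
# Data, degree, and names are illustrative assumptions.
import numpy as np
import cvxpy as cp

xs = np.linspace(-1.0, 1.0, 40)
ys = np.abs(xs) + 0.05 * np.random.default_rng(0).standard_normal(xs.size)

d = 3                                        # fitted polynomial has degree 2*d
M = np.vander(xs, d + 1, increasing=True)    # row i is m(x_i) = (1, x_i, ..., x_i^d)

Q = cp.Variable((d + 1, d + 1), PSD=True)
fitted = cp.sum(cp.multiply(M @ Q, M), axis=1)       # p(x_i) = m(x_i)^T Q m(x_i)
problem = cp.Problem(cp.Minimize(cp.sum_squares(fitted - ys)))
problem.solve()                                      # solved as a semidefinite program

print("residual norm:", float(np.sqrt(problem.value)))
print("p(0) =", float(Q.value[0, 0]))                # m(0) = (1, 0, ..., 0), so p(0) = Q[0,0] >= 0
```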

    Robust Solutions of Optimization Problems Affected by Uncertain Probabilities

    In this paper we focus on robust linear optimization problems with uncertainty regions defined by φ-divergences (for example, chi-squared, Hellinger, Kullback-Leibler). We show how uncertainty regions based on φ-divergences arise in a natural way as confidence sets if the uncertain parameters contain elements of a probability vector. Such problems frequently occur in, for example, optimization problems in inventory control or finance that involve terms containing moments of random variables, expected utility, etc. We show that the robust counterpart of a linear optimization problem with φ-divergence uncertainty is tractable for most of the choices of φ typically considered in the literature. We extend the results to problems that are nonlinear in the optimization variables. Several applications, including an asset pricing example and a numerical multi-item newsvendor example, illustrate the relevance of the proposed approach. Keywords: robust optimization; φ-divergence; goodness-of-fit statistics.
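
    As a small illustration (not the paper's formulation), the sketch below computes the worst-case expected loss over a Kullback-Leibler ball around a nominal probability vector using cvxpy; the scenario data and radius are made up. The paper's contribution is the tractable robust counterpart of such problems, obtained via the convex conjugate of the φ-divergence, rather than solving the inner adversarial problem numerically as done here.

```python
# Illustrative sketch: worst-case expected loss over a Kullback-Leibler
# uncertainty set around a nominal probability vector p0, i.e.
#   max_p { loss^T p : sum(p) = 1, p >= 0, KL(p, p0) <= rho }.
# Data and the radius rho are hypothetical; the paper would derive rho from
# goodness-of-fit quantiles and robustify an outer optimization problem.
import numpy as np
import cvxpy as cp

loss = np.array([1.0, 3.0, 0.5, 2.0])   # per-scenario losses (made up)
p0 = np.array([0.4, 0.2, 0.3, 0.1])     # nominal probabilities (made up)
rho = 0.05                              # divergence budget (made up)

p = cp.Variable(4, nonneg=True)
kl = cp.sum(cp.kl_div(p, p0))           # equals KL(p, p0) when both sum to 1
problem = cp.Problem(cp.Maximize(loss @ p), [cp.sum(p) == 1, kl <= rho])
problem.solve()

print("nominal expected loss   :", float(loss @ p0))
print("worst-case expected loss:", problem.value)
print("worst-case distribution :", np.round(p.value, 3))
```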

    The impact of the existence of multiple adjustable robust solutions

    In this note we show that multiple solutions exist for the production-inventory example in the seminal paper on adjustable robust optimization by Ben-Tal et al. (Math Program 99(2):351–376, 2004). All these optimal robust solutions have the same worst-case objective value, but their mean objective values differ by up to 21.9%, and for individual realizations the difference can be up to 59.4%. We show via additional experiments that these differences in performance become negligible when a folding-horizon approach is used. The aim of this paper is to convince users of adjustable robust optimization to check for the existence of multiple solutions. Using the production-inventory example and an illustrative toy example, we deduce three important implications of the existence of multiple optimal robust solutions. First, if one neglects this multiplicity, one can wrongly conclude that the adjustable robust solution does not outperform the nonadjustable robust solution. Second, even when it is known a priori that the adjustable and nonadjustable robust solutions have the same worst-case objective value, they may still differ in mean objective value. Third, even if affine decision rules are known to yield (near-)optimal performance in the adjustable robust optimization setting, nonlinear decision rules can still yield much better mean objective values.
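
    A hedged toy example of the phenomenon (not the paper's production-inventory model, nor necessarily its illustrative example): below, every feasible decision attains the same worst-case cost, so all are robust-optimal, yet their mean costs under a uniform distribution on the uncertain parameter range from 0 to 1.

```python
# Toy illustration of multiple robust optima: cost(x, u) = x1 + u * x2 with
# x >= 0, x1 + x2 = 1 and u in [-1, 1]. The worst case over u equals x1 + x2 = 1
# for every feasible x, so all are robust-optimal, but mean costs differ widely.
import numpy as np

rng = np.random.default_rng(0)
u_samples = rng.uniform(-1.0, 1.0, 100_000)      # scenarios for the uncertain parameter

def cost(x, u):
    return x[0] + u * x[1]

candidates = {"x = (1, 0)": (1.0, 0.0), "x = (0.5, 0.5)": (0.5, 0.5), "x = (0, 1)": (0.0, 1.0)}
for name, x in candidates.items():
    worst = max(cost(x, 1.0), cost(x, -1.0))     # cost is linear in u, so check the endpoints
    mean = cost(x, u_samples).mean()
    print(f"{name}: worst-case = {worst:.2f}, mean = {mean:.3f}")
# Output: identical worst-case costs (1.00) but mean costs near 1.0, 0.5 and 0.0.
```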

    Technical Note—Dual Approach for Two-Stage Robust Nonlinear Optimization

    Adjustable robust minimization problems in which the objective or constraints depend in a convex way on the adjustable variables are generally difficult to solve. In this paper, we reformulate the original adjustable robust nonlinear problem with a polyhedral uncertainty set into an equivalent adjustable robust linear problem, for which all existing approaches for adjustable robust linear problems can be used. The reformulation is obtained by first dualizing over the adjustable variables and then over the uncertain parameters. The polyhedral structure of the uncertainty set then appears in the linear constraints of the dualized problem, and the nonlinear functions of the adjustable variables in the original problem appear in the uncertainty set of the dualized problem. We show how to recover linear decision rules for the original primal problem and how to generate bounds on its optimal objective value.
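
    A schematic instance of the two dualization steps, assuming the adjustable part is convex, strong duality holds, and the uncertainty set is polyhedral; this is a hedged illustration of the idea, not the paper's general formulation.

```latex
% Schematic two-step dualization (illustrative assumptions: g convex, strong
% duality, Z polyhedral); not the paper's exact general reformulation.
\begin{align*}
&\min_{x}\ \max_{z\in Z}\ \min_{y}\ \bigl\{\, g(y) : By \ge z - Ax \,\bigr\},
 \qquad Z=\{z : Dz\le d\} \\
&\quad=\min_{x}\ \max_{z\in Z}\ \max_{\lambda\ge 0}\ \bigl\{\, \lambda^{\top}(z-Ax)-g^{*}(B^{\top}\lambda) \,\bigr\}
 \qquad\text{(dualize over the adjustable $y$)} \\
&\quad=\min_{x}\ \max_{(\lambda,t)\in\Lambda}\ \min_{\substack{w\ge 0 \\ D^{\top}w=\lambda}}\ \bigl\{\, d^{\top}w-\lambda^{\top}Ax-t \,\bigr\}
 \qquad\text{(dualize $\max_{z\in Z}\lambda^{\top}z$ by LP duality)} \\
&\text{with}\quad \Lambda=\bigl\{(\lambda,t) : \lambda\ge 0,\ g^{*}(B^{\top}\lambda)\le t\bigr\},
 \qquad g^{*}\ \text{the convex conjugate of } g.
\end{align*}
```

    In this sketch the polyhedral data (D, d) end up in linear constraints on the new adjustable variable w, while the nonlinearity of g enters only through its conjugate g* inside the new uncertainty set Λ, which mirrors the structure described in the abstract.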

    Optimization of Univariate Functions on Bounded Intervals by Interpolation and Semidefinite Programming

    AMS classifications: 65D05; 65K05; 90C22.