
    Power Saving Experiments for Large Scale Global Optimization

    Green computing, an emerging field of research that seeks to reduce excess power consumption in high performance computing (HPC), is gaining popularity among researchers. Research in this field often relies on simulation or uses only a small cluster, typically 8 or 16 nodes, because of the lack of hardware support. In contrast, System G at Virginia Tech is a 2592 processor supercomputer equipped with power aware components suitable for large scale green computing research. DIRECT is a deterministic global optimization algorithm, implemented in the mathematical software package VTDIRECT95. This paper explores the potential energy savings for the parallel implementation of DIRECT, called pVTdirect, when used with a large scale computational biology application, parameter estimation for a budding yeast cell cycle model, on System G. Two power aware approaches for pVTdirect are developed and compared against the CPUSPEED power saving system tool. The results show that knowledge of the parallel workload of the underlying application is beneficial for power management.
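The DIRECT (DIviding RECTangles) strategy behind VTDIRECT95 can be illustrated with a one-dimensional toy: repeatedly split the most promising interval and sample its center. This is a simplified sketch, not the VTDIRECT95/pVTdirect implementation; the scalar selection rule below is a crude stand-in for DIRECT's Pareto-style choice of potentially optimal boxes.

```python
# Toy 1-D DIRECT-style search: keep (center_value, lo, hi) intervals,
# trisect the one whose center value and width make it most promising.

def direct_1d(f, lo, hi, iters=50):
    intervals = [(f((lo + hi) / 2.0), lo, hi)]
    for _ in range(iters):
        # Favor low center values but also wide intervals (exploration);
        # real DIRECT selects a Pareto front over (value, size) instead.
        best = min(intervals, key=lambda t: t[0] - 0.5 * (t[2] - t[1]))
        intervals.remove(best)
        _, a, b = best
        third = (b - a) / 3.0
        for j in range(3):  # trisect and evaluate each new center
            na, nb = a + j * third, a + (j + 1) * third
            intervals.append((f((na + nb) / 2.0), na, nb))
    return min(intervals)[0]  # best sampled objective value

# Minimize (x - 1)^2 on [-5, 5]; the global minimum value is 0.
val = direct_1d(lambda x: (x - 1.0) ** 2, -5.0, 5.0)
```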

    Portfolio Decisions with Higher Order Moments

    In this paper, we address the global optimization of two interesting nonconvex problems in finance. We relax the normality assumption underlying the classical Markowitz mean-variance portfolio optimization model and consider the incorporation of skewness (third moment) and kurtosis (fourth moment). The investor seeks to maximize the expected return and the skewness of the portfolio and minimize its variance and kurtosis, subject to budget and no short selling constraints. In the first model, it is assumed that asset statistics are exact. The second model allows for uncertainty in asset statistics. We consider rival discrete estimates for the mean, variance, skewness and kurtosis of asset returns. A robust optimization framework is adopted to compute the best investment portfolio, maximizing return and skewness and minimizing variance and kurtosis, in view of the worst-case asset statistics. In both models, the resulting optimization problems are nonconvex. We introduce a computational procedure for their global optimization.
    Keywords: Mean-variance portfolio selection, robust portfolio selection, skewness, kurtosis, decomposition methods, polynomial optimization problems.
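The four moments traded off in this model are straightforward to compute from return scenarios. A minimal sketch, with made-up illustrative numbers (the function name and data are not from the paper):

```python
# Compute the four portfolio moments from discrete return scenarios.

def portfolio_moments(weights, scenarios):
    """scenarios: list of per-period asset-return vectors."""
    # Portfolio return in each scenario: r_t = sum_i w_i * r_{i,t}.
    rets = [sum(w * r for w, r in zip(weights, row)) for row in scenarios]
    n = len(rets)
    mean = sum(rets) / n
    var = sum((r - mean) ** 2 for r in rets) / n
    skew = sum((r - mean) ** 3 for r in rets) / n / var ** 1.5
    kurt = sum((r - mean) ** 4 for r in rets) / n / var ** 2
    return mean, var, skew, kurt

# Two assets, four scenarios, equal weights (illustrative numbers only).
scenarios = [[0.02, -0.01], [0.01, 0.03], [-0.02, 0.00], [0.03, 0.02]]
m, v, s, k = portfolio_moments([0.5, 0.5], scenarios)
```

The investor's objective then combines these: maximize `m` and `s`, minimize `v` and `k`, which is nonconvex because of the cubic and quartic terms.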

    Adjusting process count on demand for petascale global optimization

    There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
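The sizing decision described above can be sketched as a simple capacity calculation: grow the process count until each process's share of the data fits under the node's memory cap. The function name, safety factor, and figures below are illustrative placeholders, not values from the paper.

```python
# Decide how many processes are needed so that no process exceeds the
# per-node memory budget (avoiding thrashing); a real implementation
# would then spawn the extra processes, e.g. via MPI_Comm_spawn.

def processes_needed(total_work_bytes, per_node_mem_bytes, safety=0.8):
    """Smallest process count keeping each process under the memory cap."""
    cap = int(per_node_mem_bytes * safety)   # leave headroom below the limit
    return -(-total_work_bytes // cap)       # ceiling division

# 100 GiB of search data on nodes with 16 GiB each -> 8 processes.
n = processes_needed(100 * 2**30, 16 * 2**30)
```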

    Guaranteed Global Deterministic Optimization and Constraint Programming for Complex Dynamic Problems

    In this article we focus on particular multi-physics (mechanical, magnetic, electronic, ...) dynamic problems. These problems contain differential constraints that model dynamic behaviors. The goal is to solve them with guarantees, that is, to obtain a proof that all constraints are satisfied, without any approximation caused by binary representations or by the rounding modes of the processor's arithmetic unit. The idea of obtaining guarantees on arithmetic operations was introduced via interval arithmetic. Computers keep getting faster, increasing the number of operations computable per unit of time. Since computed results are usually rounded to the nearest representable values, the accumulated global error grows as well, without any control over it.
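Interval arithmetic, the foundation mentioned above, replaces each number by an enclosing range and propagates the ranges through every operation. A minimal sketch (directed outward rounding is omitted for brevity, so the enclosure here is only exact up to floating-point error; a real guaranteed implementation would round lower bounds down and upper bounds up):

```python
# Minimal interval arithmetic on (lo, hi) pairs: the true result of an
# operation on any values inside the inputs lies inside the output.

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    # The product of two intervals is bounded by the extreme corner products.
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

x = (-1.0, 2.0)
y = (3.0, 4.0)
z = imul(x, y)  # (-4.0, 8.0): every x*y with x in [-1,2], y in [3,4] lies here
```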

    Solving the median problem with continuous demand on a network

    Where to locate one or several facilities on a network so as to minimize the expected transportation cost between users and their closest facility is a problem well studied in the OR literature under the name of the median problem. In the median problem users are usually identified with nodes of the network. In many situations, however, such an assumption is unrealistic, since users are better considered to be distributed also along the edges of the transportation network. In this paper we address the median problem with demand distributed along edges and nodes. This leads to a global optimization problem, which can be solved to optimality by means of a branch-and-bound with DC bounds. Our computational experience shows that the problem is solved in a short time even for large instances.
    Funding: Ministerio de Educación, Cultura y Deporte; Junta de Andalucía; European Regional Development Fund.
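The branch-and-bound scheme mentioned above can be sketched generically on a single edge segment: split the segment and prune parts whose lower bound cannot beat the incumbent. The Lipschitz lower bound used here is a stand-in for the paper's DC bounds, and all names and numbers are illustrative.

```python
# Generic 1-D branch-and-bound: pop a segment, update the incumbent at
# its midpoint, and keep splitting only while the lower bound leaves
# room for improvement.

def branch_and_bound(f, lo, hi, lipschitz, tol=1e-4):
    best = f((lo + hi) / 2.0)
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        mid = (a + b) / 2.0
        best = min(best, f(mid))
        # Valid lower bound on f over [a, b] from the Lipschitz constant.
        lower = f(mid) - lipschitz * (b - a) / 2.0
        if lower < best - tol and (b - a) > tol:
            stack.append((a, mid))
            stack.append((mid, b))
    return best

# Minimize |x - 0.3| + |x - 0.7| on [0, 1] (Lipschitz constant 2);
# the minimum value is 0.4, attained anywhere on [0.3, 0.7].
v = branch_and_bound(lambda x: abs(x - 0.3) + abs(x - 0.7), 0.0, 1.0, 2.0)
```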

    Optimizing radial basis functions by D.C. programming and its use in direct search for global derivative-free optimization

    In this paper we address the global optimization of functions subject to bound and linear constraints without using derivatives of the objective function. We investigate the use of derivative-free models based on radial basis functions (RBFs) in the search step of direct-search methods of directional type. We also study the application of algorithms based on difference of convex (d.c.) functions programming to solve the resulting subproblems, which consist of the minimization of the RBF models subject to simple bounds on the variables. Extensive numerical results are reported with a test set of bound and linearly constrained problems.
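The RBF surrogates used in the search step interpolate the sampled points, so the cheap model can be explored instead of the expensive objective. A minimal one-dimensional sketch with a Gaussian basis (the basis choice, shape parameter, and all names are illustrative, not the paper's exact setup):

```python
import math

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for small systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_model(xs, ys, gamma=1.0):
    phi = lambda r: math.exp(-gamma * r * r)        # Gaussian basis
    A = [[phi(xi - xj) for xj in xs] for xi in xs]  # interpolation matrix
    w = solve(A, ys)                                 # A w = ys
    return lambda x: sum(wi * phi(x - xi) for wi, xi in zip(w, xs))

# The model interpolates: it reproduces the sampled values exactly.
m = rbf_model([0.0, 1.0, 2.0], [3.0, 1.0, 2.0])
```

Minimizing this smooth model over the bound constraints is the subproblem the paper attacks with d.c. programming.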

    A Global Optimization Algorithm for Generalized Quadratic Programming

    We present a global optimization algorithm for solving generalized quadratic programming (GQP), that is, nonconvex quadratic programming with nonconvex quadratic constraints. By utilizing a new linearizing technique, the initial nonconvex programming problem (GQP) is reduced to a sequence of linear programming relaxations. To improve the computational efficiency of the algorithm, a range reduction technique is employed in the branch and bound procedure. The proposed algorithm converges to the global minimum of (GQP) through the solutions of this series of linear programming relaxations. Finally, numerical results show the robustness and effectiveness of the proposed algorithm.
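The general idea of linearizing a nonconvex quadratic term over a box can be illustrated with McCormick envelopes, a standard linearization for a bilinear term w = x*y (the paper's own technique differs; this is a generic textbook example with illustrative names):

```python
# McCormick envelope: four linear estimators that bound x*y over the
# box [xl, xu] x [yl, yu]; replacing x*y by a variable constrained to
# lie between them yields a linear relaxation.

def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Lower/upper linear estimators of x*y at the point (x, y)."""
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper

# At any feasible point the true product lies inside the envelope:
# here x = y = 0.5 on [0,1] x [0,1], so x*y = 0.25 is enclosed.
lo, up = mccormick_bounds(0.5, 0.5, 0.0, 1.0, 0.0, 1.0)
```

Shrinking the box (as range reduction does inside branch and bound) tightens these bounds, which is what drives convergence to the global minimum.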