
    Scaling Sparse Matrices for Optimization Algorithms

    To iteratively solve large-scale optimization problems in contexts such as planning, operations, and design, we need to generate descent directions based on linear system solutions. Irrespective of the optimization algorithm or the solution method employed for the linear systems, ill conditioning introduced by the problem characteristics, the algorithm, or both needs to be addressed. In [GL01] we used an intuitive heuristic approach to scaling linear systems that significantly improved the performance of a large-scale interior point algorithm; we saw improvements of a factor of 10^3 in condition number estimates. In this paper, drawing on our experience with optimization problems from a variety of application backgrounds such as economics, finance, engineering, and planning, we examine the theoretical basis for scaling while solving the linear systems. Our goal is to develop reasonably "good" scaling schemes with a sound theoretical basis. We introduce concepts and define "good" scaling schemes in section (1), and review related work in this area. Scaling has been studied extensively, and though there is broad agreement on its importance, the same cannot be said about what constitutes good scaling. A theoretical framework for scaling an m x n real matrix is established in section (2). We use the first-order conditions associated with the Euclidean metric to develop iterative schemes in section (2.3) that approximate the solution in O(mn) time for real matrices. We discuss symmetry-preserving scale factors for an n x n symmetric matrix in section (3). The importance of symmetry preservation is discussed in section (3.1). An algorithm to directly compute symmetry-preserving scale factors in O(n^2) time based on the Euclidean metric is presented in section (3.4). We also suggest scaling schemes based on the rectilinear norm in section (2.4). Though all p-norms are theoretically equivalent, the importance of outliers increases as p increases; for barrier methods, due to large diagonal corrections, we believe the taxicab metric (p = 1) may be more appropriate. We develop a linear programming model for it and look at a "reduced" dual that can be formulated as a minimum cost flow problem on networks. We are investigating algorithms to solve it in the O(mn) time that we require for an efficient scaling procedure. We hope that in the future the special structure of the "reduced" dual can be exploited to solve it quickly; the dual information can then be used to compute the required scale factors. We discuss the Manhattan metric for symmetric matrices in section (3.5) and, as in the case of real matrices, we are unable to propose an efficient computational scheme for this metric. We look at a linearized ideal penalty function that only uses deviations outside the desired range in section (2.5). If we could use such a metric to generate an efficient solution, we would like to see the impact of changing the range on the numerical behavior.
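
    The Euclidean-metric schemes of sections (2.3) and (3.4) are not spelled out in the abstract, so the following is only an illustrative sketch of a generic alternating (Curtis-Reid style) least-squares scaling in the log domain: row and column factors are updated in turn so that the logarithms of the scaled nonzero magnitudes are driven toward zero, at a cost of O(mn) per sweep for a dense m x n matrix. The function name and update rules here are assumptions for illustration, not the schemes developed in the paper.

```python
import numpy as np

def euclidean_log_scaling(A, sweeps=10):
    """Alternating least-squares scaling in the log domain (Curtis-Reid style sketch).

    Finds row factors r and column factors c so that the nonzero entries of
    diag(r) @ A @ diag(c) have magnitudes clustered around 1, by reducing the
    sum of squared logs of the scaled entries. Each sweep touches every
    nonzero once, so a fixed number of sweeps costs O(mn) for a dense matrix.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    mask = A != 0.0
    L = np.zeros_like(A)
    L[mask] = np.log(np.abs(A[mask]))            # log-magnitudes of the nonzeros
    logr, logc = np.zeros(m), np.zeros(n)        # logs of row/column scale factors
    row_nnz = np.maximum(mask.sum(axis=1), 1)
    col_nnz = np.maximum(mask.sum(axis=0), 1)
    for _ in range(sweeps):
        # Row step: choose log r_i minimizing sum_j (L_ij + log c_j + log r_i)^2.
        logr = -((L + logc) * mask).sum(axis=1) / row_nnz
        # Column step: symmetric update for log c_j.
        logc = -((L + logr[:, None]) * mask).sum(axis=0) / col_nnz
    return np.exp(logr), np.exp(logc)

# Example: a badly scaled 2 x 2 matrix; the scaled entries end up near unit magnitude.
A = np.array([[1.0e6, 2.0], [3.0, 4.0e-6]])
r, c = euclidean_log_scaling(A)
print(np.diag(r) @ A @ np.diag(c))
```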

    Scaling Sparse Constrained Nonlinear Problems for Iterative Solvers

    We look at scaling a nonlinear optimization problem for iterative solvers that use at least first derivatives. These derivatives are either computed analytically or by differencing. We ignore iterative methods that are based on function evaluations only and that do not use any derivative information. We also exclude methods where the full problem structure is unknown, such as variants of delayed column generation. We review related work in section (1). Despite its importance, as evidenced in widely used implementations of nonlinear programming algorithms, scaling has not received enough attention from a theoretical point of view. What is meant by scaling a nonlinear problem is itself not very clear. In this paper we attempt to define a scaling framework. We start with a description of a nonlinear problem in section (2); various authors prefer different forms, but all forms can be converted to the form we show. We then describe our scaling framework in section (3) and show the equivalence between the original problem and the scaled problem. The correctness results of section (3.3) play an important role in the dynamic scaling scheme suggested. In section (4), we develop a prototypical algorithm that can be used to represent a variety of iterative solution methods. Using this, we examine the impact of scaling in section (5). In the last section (6), we look at what the goal should be for an ideal scaling scheme and make some implementation suggestions for nonlinear solvers.
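
    As background for the framework of section (3), the sketch below shows the standard diagonal scaling of variables and constraints and the chain-rule effect on first derivatives that any gradient- or Jacobian-based solver must account for. The wrapper, its function names, and the choice of fixed scale vectors are illustrative assumptions, not the paper's framework.

```python
import numpy as np

def scale_problem(f, grad_f, c, jac_c, dx, dc):
    """Diagonal scaling of  min f(x)  s.t.  c(x) = 0  for derivative-based solvers.

    Variables are scaled as x = dx * y (elementwise) and constraints as dc * c(x),
    with dx, dc positive vectors. The returned callables work in the scaled
    variable y, and their first derivatives follow from the chain rule:
        grad_y f = dx * grad_x f,   J_y = diag(dc) @ J_x @ diag(dx).
    How dx and dc should be chosen is exactly what a scaling scheme must decide;
    here they are simply given.
    """
    dx = np.asarray(dx, dtype=float)
    dc = np.asarray(dc, dtype=float)

    f_s = lambda y: f(dx * y)
    grad_f_s = lambda y: dx * grad_f(dx * y)                       # chain rule
    c_s = lambda y: dc * c(dx * y)
    jac_c_s = lambda y: dc[:, None] * jac_c(dx * y) * dx[None, :]  # diag(dc) J diag(dx)
    return f_s, grad_f_s, c_s, jac_c_s

# Tiny usage example with a badly scaled quadratic objective and one linear constraint.
f = lambda x: 0.5 * (x[0] ** 2 + 1e6 * x[1] ** 2)
grad_f = lambda x: np.array([x[0], 1e6 * x[1]])
c = lambda x: np.array([x[0] + 1e3 * x[1] - 1.0])
jac_c = lambda x: np.array([[1.0, 1e3]])
f_s, grad_f_s, c_s, jac_c_s = scale_problem(f, grad_f, c, jac_c, dx=[1.0, 1e-3], dc=[1.0])
print(grad_f_s(np.array([1.0, 1.0])), jac_c_s(np.array([1.0, 1.0])))  # entries near 1
```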

    A simplified implementation of the least squares solution for pairwise comparisons matrices

    This is a follow-up to "Solution of the least squares method problem of pairwise comparisons matrix" by Bozóki, published by this journal in 2008. Familiarity with that paper is essential and assumed. For lower inconsistency and at the cost of decreased accuracy, our proposed solutions run in seconds instead of days. As such, they may be useful for researchers willing to use the least squares method (LSM) instead of the geometric means (GM) method.
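
    For context, the geometric means vector and the least squares objective for a pairwise comparison matrix are standard definitions; the sketch below computes the GM weights and then refines them with a generic local minimizer started from the GM vector. It is only a rough illustration of the GM/LSM trade-off, not the simplified implementation proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def geometric_mean_weights(A):
    """Geometric means (GM) priority vector of a pairwise comparison matrix A."""
    w = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return w / w.sum()

def least_squares_weights(A):
    """Least squares method (LSM): minimize sum_ij (a_ij - w_i/w_j)^2 over positive
    weights summing to 1, starting the local search from the GM vector.
    This is a generic local refinement, not the paper's simplified algorithm."""
    def objective(v):
        w = np.exp(v)                    # exponential reparametrization keeps w > 0
        w = w / w.sum()
        R = A - np.outer(w, 1.0 / w)     # residuals a_ij - w_i / w_j
        return np.sum(R * R)

    v0 = np.log(geometric_mean_weights(A))
    res = minimize(objective, v0, method="Nelder-Mead")
    w = np.exp(res.x)
    return w / w.sum()

# A small consistent 3 x 3 reciprocal matrix: GM and LSM coincide here.
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
print(geometric_mean_weights(A))
print(least_squares_weights(A))
```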

    Testing of linear models for optimal control of second-order dynamical system based on model-reality differences

    In this paper, the testing of linear models with different parameter values is conducted for solving the optimal control problem of a second-order dynamical system. The purpose of this testing is to provide a solution with the same structure but different parameter values in the model used. To do so, adjusted parameters are added to each model in order to measure the differences between the model used and the plant dynamics. On this basis, an expanded optimal control problem, which combines system optimization and parameter estimation, is introduced. Then, the Hamiltonian function is defined and a set of necessary conditions is derived; consequently, a modified model-based optimal control problem is obtained. Following from this, an equivalent unconstrained optimization problem is formulated. During the calculation procedure, the conjugate gradient algorithm is employed to solve the optimization problem and, in turn, to update the adjusted parameters repeatedly so as to obtain the optimal solution of the model used. Within a given tolerance, the iterative solution of the model used approximates the correct optimal solution of the original linear optimal control problem despite model-reality differences. The results obtained show the applicability of models with the same structure and different parameter values for solving the original linear optimal control problem. In conclusion, the efficiency of the proposed approach is verified.
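
    The abstract does not spell out the conjugate gradient iteration it employs, so the sketch below is only a generic nonlinear conjugate gradient (Fletcher-Reeves) loop with a backtracking line search and periodic restarts: the kind of unconstrained solver used to update adjusted parameters repeatedly. It is not the paper's integrated system-optimization/parameter-estimation scheme, and the test problem is made up.

```python
import numpy as np

def conjugate_gradient(f, grad, x0, tol=1e-8, max_iter=500):
    """Nonlinear conjugate gradient (Fletcher-Reeves) with backtracking line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search along the search direction d.
        alpha, fx, slope = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves coefficient
        if (k + 1) % x.size == 0:
            d = -g_new                        # periodic restart to steepest descent
        else:
            d = -g_new + beta * d
        g = g_new
    return x

# Quadratic test problem: minimize 0.5 x'Ax - b'x, whose gradient is Ax - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(conjugate_gradient(f, grad, np.zeros(2)))   # approx. A^{-1} b = [0.2, 0.4]
```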

    Example of function optimization via hybrid computation

    Iterative techniques for function optimization have been considered extensively for use in all-digital computation. Relatively little has been done to take advantage of the much higher integration speed of hybrid computation systems. This paper demonstrates the application of one simple procedure in a hybrid environment and compares the results to those obtained by an efficient digital procedure. Even though a much more efficient procedure was used on the digital computer, time-saving factors between 2 and 8 were obtained via the simpler hybrid implementation. Since the dollar cost of the hybrid is much less than that of the digital computer, the hybrid has a large advantage per solution. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/68324/2/10.1177_003754977302100204.pd

    Can Spreadsheet Solvers Solve Demanding Optimization Problems?

    Practicing engineers resort to modular simulators or to algebraic tools such as GAMS or AMPL for performing complex process optimizations. These tools, however, have a significant learning curve unless they have been introduced at the undergraduate level beforehand. In this work we show how the Solver feature of the Excel spreadsheet can be used for the optimization of a fairly complex system, namely a classic solvent extraction/pollution prevention process with heat integration. The specific goal was the design optimization for continuous recovery of organic solvents (VOCs) using a gas absorption tower with solvent recovery in a stripper. It is shown that the Solver feature of the Excel spreadsheet can be used to converge on local optima for these complex systems, provided proper care is taken in the solution procedure. The complexities of optimization can also be demonstrated with this tool, as can several common pitfalls encountered during optimization.
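
    Excel's Solver is a local optimizer, so the caution about "proper care in the solution procedure" matters: a local solver converges to whichever optimum lies nearest its starting guess. The sketch below reproduces that pitfall on a deliberately toy, made-up one-dimensional cost function using SciPy's bounded local solver with a simple multi-start; it bears no relation to the absorber/stripper model in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy multimodal "design cost": several local minima on the bounded interval.
cost = lambda x: np.sin(3.0 * x[0]) + 0.1 * (x[0] - 2.0) ** 2

# Different starting guesses land in different local optima; the multi-start
# then keeps the best of the local solutions found.
starts = [np.array([s]) for s in (0.0, 2.0, 4.0)]
results = [minimize(cost, x0, method="L-BFGS-B", bounds=[(-1.0, 6.0)]) for x0 in starts]
for x0, r in zip(starts, results):
    print(f"start {x0[0]:4.1f} -> x* = {r.x[0]:6.3f}, cost = {r.fun:6.3f}")
best = min(results, key=lambda r: r.fun)
print("best of multi-start:", best.x[0], best.fun)
```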

    Multi-item capacitated lot-sizing problems with setup times and pricing decisions

    We study a multi-item capacitated lot-sizing problem with setup times and pricing (CLSTP) over a finite and discrete planning horizon. In this class of problems, the demand for each independent item in each time period is affected by pricing decisions. The corresponding demands are then satisfied through production in a single capacitated facility or from inventory, and the goal is to set prices and determine a production plan that maximizes total profit. In contrast with many traditional lot-sizing problems with fixed demands, we cannot, without loss of generality, restrict ourselves to instances without initial inventories, which greatly complicates the analysis of the CLSTP. We develop two alternative Dantzig–Wolfe decomposition formulations of the problem, and propose to solve their relaxations using column generation and the overall problem using branch-and-price. The associated pricing problem is studied under both dynamic and static pricing strategies. Through a computational study, we analyze both the efficacy of our algorithms and the benefits of allowing item prices to vary over time. © 2009 Wiley Periodicals, Inc. Naval Research Logistics, 2010. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/65027/1/20394_ftp.pd
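
    The subproblems arising in Dantzig–Wolfe decompositions of capacitated lot-sizing models are typically per-item lot-sizing problems; the paper's versions additionally carry pricing decisions and the duals of the restricted master, which are not modeled here. As a building-block illustration only, the sketch below implements the classic fixed-demand single-item uncapacitated lot-sizing recursion (Wagner-Whitin) on made-up data.

```python
def wagner_whitin(demand, setup, hold):
    """Single-item uncapacitated lot-sizing via the Wagner-Whitin recursion, O(T^2).

    demand[t] : demand of period t+1, setup[t] : fixed setup cost of a production
    run placed in period t+1, hold[t] : unit cost of carrying inventory from
    period t+1 into period t+2.  F[t] is the cheapest cost of covering the first
    t periods; a run placed in period s covers a block of consecutive periods s..t.
    """
    T = len(demand)
    F = [0.0] + [float("inf")] * T
    for s in range(1, T + 1):                 # s: period of the last production run
        cost = F[s - 1] + setup[s - 1]        # the run in s covers at least period s
        cum_hold = 0.0
        for t in range(s, T + 1):
            if t > s:
                cum_hold += hold[t - 2]       # carry one more period forward
                cost += cum_hold * demand[t - 1]
            F[t] = min(F[t], cost)            # best way to cover periods 1..t
    return F[T]

# Made-up six-period instance: trades setup cost against holding cost.
demand = [20, 50, 10, 50, 50, 10]
setup = [100, 100, 100, 100, 100, 100]
hold = [1, 1, 1, 1, 1, 1]
print(wagner_whitin(demand, setup, hold))     # minimal total setup + holding cost
```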