    Discrete optimization via simulation with stochastic constraints

    In this thesis, we first develop a new method called the penalty function with memory (PFM). PFM combines a penalty parameter with a measure of constraint violation and converts a discrete optimization via simulation (DOvS) problem with stochastic constraints into a series of DOvS problems without stochastic constraints. PFM determines the penalty of a visited solution based on the past results of feasibility checks on that solution. Specifically, assuming a minimization problem, the penalty parameter of PFM, namely the penalty sequence, diverges to infinity for an infeasible solution but converges to zero almost surely for any strictly feasible solution under certain conditions. For a feasible solution located on the boundary of the feasible and infeasible regions, the sequence converges to zero either with high probability or almost surely. As a result, a DOvS algorithm combined with PFM performs well even when optimal solutions are tight or nearly tight. Second, we design an optimal water-quality monitoring network for river systems. The problem is to find the optimal locations of a finite number of monitoring devices, minimizing the expected detection time of a contaminant spill event while guaranteeing good detection reliability. When uncertainties in spill and rain events are considered, both the expected detection time and the detection reliability need to be estimated by stochastic simulation. This problem is formulated as a stochastic DOvS problem with the objective of minimizing expected detection time and a stochastic constraint on detection reliability, and it is solved by a DOvS algorithm combined with PFM. Finally, we improve PFM by combining it with an approximate budget allocation procedure. We revise an existing optimal budget allocation procedure so that it can handle active constraints and satisfy the conditions necessary for the convergence of PFM.
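    To make the idea of a penalty driven by past feasibility checks concrete, the following is a minimal Python sketch, not the thesis's actual PFM update rule: the class name PenaltyWithMemory, the 0.5 threshold, and the growth and decay rates are illustrative assumptions. It only shows how a per-solution history can push the penalty toward infinity for solutions that repeatedly look infeasible and toward zero for solutions that repeatedly look feasible.

```python
import math
from collections import defaultdict

class PenaltyWithMemory:
    """Illustrative memory-based penalty: each visited solution keeps a history
    of past feasibility checks, and its penalty parameter is driven by that history."""

    def __init__(self):
        self.checks = defaultdict(int)       # feasibility checks performed on x so far
        self.infeasible = defaultdict(int)   # checks that declared x infeasible

    def record_check(self, x, was_feasible):
        key = tuple(x)
        self.checks[key] += 1
        if not was_feasible:
            self.infeasible[key] += 1

    def penalty(self, x):
        key = tuple(x)
        n = self.checks[key]
        if n == 0:
            return 1.0
        infeasible_rate = self.infeasible[key] / n
        if infeasible_rate > 0.5:
            # solution looks infeasible: let the penalty grow with the evidence
            return n * infeasible_rate
        # solution looks feasible: let the penalty vanish as evidence accumulates
        return 1.0 / math.sqrt(n)


def penalized_objective(objective_estimate, violation_estimate, pfm, x):
    """Surrogate objective for an unconstrained DOvS subproblem (illustrative)."""
    return objective_estimate + pfm.penalty(x) * max(0.0, violation_estimate)
```

    An unconstrained DOvS algorithm would then minimize the penalized objective rather than the original constrained problem, with the penalty updated each time a solution is revisited.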

    On the use of biased-randomized algorithms for solving non-smooth optimization problems

    Soft constraints are quite common in real-life applications. For example, in freight transportation, the fleet size can be enlarged by outsourcing part of the distribution service, and some deliveries to customers can be postponed; in inventory management, it is possible to consider stock-outs generated by unexpected demands; and in manufacturing processes and project management, some deadlines frequently cannot be met due to delays in critical steps of the supply chain. However, capacity-, size-, and time-related limitations are included in many optimization problems as hard constraints, although it would usually be more realistic to consider them as soft ones, i.e., constraints that can be violated to some extent at a penalty cost. Most of the time, this penalty cost will be nonlinear and even discontinuous, which might transform the objective function into a non-smooth one. Despite their many practical applications, non-smooth optimization problems are quite challenging, especially when the underlying optimization problem is NP-hard in nature. In this paper, we propose the use of biased-randomized algorithms as an effective methodology to cope with NP-hard and non-smooth optimization problems in many practical applications. Biased-randomized algorithms extend constructive heuristics by introducing a nonuniform randomization pattern into them. Hence, they can be used to explore promising areas of the solution space without the limitations of gradient-based approaches, which assume the existence of smooth objective functions. Moreover, biased-randomized algorithms can be easily parallelized, thus requiring short computing times while exploring a large number of promising regions. This paper discusses these concepts in detail, reviews existing work in different application areas, and highlights current trends and open research lines.
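    As an illustration of the biased-randomization idea, here is a small Python sketch, assuming a skewed (truncated geometric) selection rule of the kind commonly used in the biased-randomized literature; the function names, the beta parameter, and the multi-start loop are illustrative choices, not the paper's specific algorithm. The greedy ranking is perturbed so that better-ranked candidates are selected more often, yet any candidate can still be chosen, and the cost function may freely include non-smooth penalty terms for violated soft constraints.

```python
import math
import random

def biased_pick(ranked, beta=0.3):
    """Pick a candidate from a greedily ranked list using a truncated geometric
    distribution: better-ranked candidates are more likely, but none is excluded."""
    idx = int(math.log(1.0 - random.random()) / math.log(1.0 - beta)) % len(ranked)
    return ranked[idx]

def biased_randomized_search(items, greedy_key, cost_fn, n_iters=1000, beta=0.3):
    """Multi-start, biased-randomized version of a greedy constructive heuristic.
    cost_fn may be non-smooth, e.g. it can add penalty costs for violated
    soft constraints; no gradient information is needed."""
    best_solution, best_cost = None, float("inf")
    for _ in range(n_iters):
        candidates = sorted(items, key=greedy_key)   # greedy ranking
        solution = []
        while candidates:
            choice = biased_pick(candidates, beta)   # biased, not purely greedy
            candidates.remove(choice)
            solution.append(choice)
        cost = cost_fn(solution)
        if cost < best_cost:
            best_solution, best_cost = solution, cost
    return best_solution, best_cost
```

    Because each multi-start iteration is independent, the outer loop parallelizes trivially, which is what keeps wall-clock times short while many promising regions are explored.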

    Time and nodal decomposition with implicit non-anticipativity constraints in dynamic portfolio optimization

    We propose a decomposition method for the solution of a dynamic portfolio optimization problem that fits the formulation of a multistage stochastic programming problem. The method yields a time and nodal decomposition of the problem in its arborescent formulation by applying a discrete version of the Pontryagin Maximum Principle. The solution of the decomposed problems is coordinated through a fixed-point weighted iterative scheme. The introduction of an optimization step in the choice of the weights at each iteration makes it possible to solve the original problem very efficiently.
    Keywords: stochastic programming, discrete-time optimal control problem, iterative scheme, portfolio optimization
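    The coordination step can be pictured as a damped (weighted) fixed-point iteration. The sketch below is a generic Python illustration, not the paper's scheme: the map G stands in for one pass over the decomposed subproblems, and the grid-based choice of the weight plays the role of the optimization step in the choice of the weights; the merit function and the weight grid are assumptions made purely for the example.

```python
import numpy as np

def weighted_fixed_point(G, x0, merit=None, default_weight=0.5, tol=1e-8, max_iter=200):
    """Damped fixed-point iteration x_{k+1} = (1 - w) x_k + w G(x_k).
    If a merit function is given, the weight at each iteration is chosen by a
    small grid search on that merit (the 'optimization step' for the weights)."""
    x = np.asarray(x0, dtype=float)
    weight_grid = np.linspace(0.05, 1.0, 20)
    for _ in range(max_iter):
        gx = G(x)
        if merit is not None:
            # pick the weight that most reduces the merit function
            w = min(weight_grid, key=lambda w: merit((1 - w) * x + w * gx))
        else:
            w = default_weight
        x_new = (1 - w) * x + w * gx
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```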

    Enabling the “Easy Button” for Broad, Parallel Optimization of Functions Evaluated by Simulation

    Java Optimization by Simulation (JOBS) is presented: an open-source, object-oriented Java library designed to enable the study, research, and use of optimization for models evaluated by simulation. JOBS includes several novel design features that make it easy for a simulation modeler, without extensive expertise in optimization or parallel computation, to define an optimization model with deterministic and/or stochastic constraints, choose one or more metaheuristics to solve it, and run them using massively parallel function evaluation to reduce wall-clock time. JOBS is supported by a new language-independent application programming interface (API) for remote simulation-model evaluation and a serverless computing environment that provides massively parallel function evaluation on demand. Dynamic loop scheduling methods are evaluated in the serverless environment, where there is significant potential for resource contention over master-node computing power and network bandwidth. JOBS implements several population-based and single-solution improvement metaheuristics (solvers) for real, discrete, and mixed problems. The object-oriented design is extensible with classes that drastically reduce the amount of code required to implement a new solver and encourage re-use of solvers as building blocks for creating new multi-stage solvers or memetic algorithms.
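    The core computational idea, evaluating many candidate solutions of a metaheuristic concurrently because each simulation run is independent, can be sketched in a few lines. The snippet below is a language-agnostic Python illustration, not the JOBS API: simulate stands in for a remote simulation-model evaluation, which in a serverless deployment would be an independent remote invocation rather than a local worker process.

```python
from concurrent.futures import ProcessPoolExecutor

def evaluate_population(simulate, population, max_workers=None):
    """Evaluate every candidate solution of a metaheuristic population in parallel.
    simulate must be a picklable, top-level function; in a serverless setting each
    call would instead be dispatched as an independent remote invocation."""
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        # one simulation run per candidate, executed concurrently
        return list(pool.map(simulate, population))
```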