
    Stochastic convex optimization with multiple objectives

    In this paper, we are interested in the development of efficient algorithms for convex optimization problems in the simultaneous presence of multiple objectives and stochasticity in the first-order information. We cast the stochastic multiple-objective optimization problem as a constrained optimization problem by choosing one function as the objective and bounding the remaining objectives by appropriate thresholds. We first examine a two-stage exploration-exploitation algorithm that approximates the stochastic objectives by sampling and then solves a constrained stochastic optimization problem by the projected gradient method. This method attains a suboptimal convergence rate even under strong assumptions on the objectives. Our second approach is an efficient primal-dual stochastic algorithm. It leverages the theory of the Lagrangian method in constrained optimization and attains the optimal convergence rate of O(1/√T) in high probability for general Lipschitz-continuous objectives.
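    The primal-dual approach lends itself to a compact illustration. The following is a minimal sketch, not the authors' code: it assumes a Euclidean-ball feasible set, toy quadratic objectives, and Gaussian noise on the first-order oracle, and it combines a projected primal gradient step with dual ascent on the noisy constraint violations, returning the averaged iterate as suggested by the O(1/√T) guarantee. All function and parameter names are illustrative.

    import numpy as np

    def primal_dual_sgd(grad_est, val_est, gammas, x0, radius, eta, T):
        """Averaged iterate of a primal-dual stochastic gradient method for
        minimize f_0(x) s.t. f_i(x) <= gammas[i-1], i = 1..m, given noisy oracles:
        grad_est(i, x) -> noisy gradient of f_i, val_est(i, x) -> noisy value of f_i."""
        m = len(gammas)
        x = x0.copy()
        lam = np.zeros(m)                 # dual variables, one per bounded objective
        x_sum = np.zeros_like(x0)
        for _ in range(T):
            # stochastic gradient of the Lagrangian with respect to x
            g = grad_est(0, x) + sum(lam[i] * grad_est(i + 1, x) for i in range(m))
            x = x - eta * g
            nrm = np.linalg.norm(x)
            if nrm > radius:              # project back onto the ball ||x|| <= radius
                x *= radius / nrm
            for i in range(m):            # dual ascent on noisy constraint violations
                lam[i] = max(0.0, lam[i] + eta * (val_est(i + 1, x) - gammas[i]))
            x_sum += x
        return x_sum / T                  # averaging yields the O(1/sqrt(T))-type rate

    if __name__ == "__main__":
        # Toy instance (illustrative, not from the paper): two quadratics in R^2.
        rng = np.random.default_rng(0)
        A = [np.eye(2), np.diag([2.0, 1.0])]
        b = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
        f = lambda i, x: 0.5 * x @ A[i] @ x + b[i] @ x
        grad_est = lambda i, x: A[i] @ x + b[i] + 0.1 * rng.standard_normal(2)
        val_est = lambda i, x: f(i, x) + 0.1 * rng.standard_normal()
        x_bar = primal_dual_sgd(grad_est, val_est, gammas=[0.5], x0=np.zeros(2),
                                radius=5.0, eta=0.05, T=20000)
        print("averaged iterate:", x_bar, "f_0:", f(0, x_bar), "f_1:", f(1, x_bar))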

    A Hierarchical Evolutionary Algorithm for Multiobjective Optimization in IMRT

    Purpose: Current inverse planning methods for IMRT are limited because they are not designed to explore the trade-offs between the competing objectives of the tumor and normal tissues. Our goal was to develop an efficient multiobjective optimization algorithm that was flexible enough to handle any form of objective function and that resulted in a set of Pareto optimal plans. Methods: We developed a hierarchical evolutionary multiobjective algorithm designed to quickly generate a diverse Pareto optimal set of IMRT plans that meet all clinical constraints and reflect the trade-offs in the plans. The top level of the hierarchical algorithm is a multiobjective evolutionary algorithm (MOEA). The genes of the individuals generated in the MOEA are the parameters that define the penalty function minimized during an accelerated deterministic IMRT optimization, which represents the bottom level of the hierarchy. The MOEA incorporates clinical criteria to restrict the search space through protocol objectives and then uses Pareto optimality among the fitness objectives to select individuals. Results: Acceleration techniques implemented on both levels of the hierarchical algorithm resulted in short, practical runtimes for optimizations. The MOEA improvements were evaluated on example prostate cases with one target and two OARs. The modified MOEA dominated 11.3% of plans generated using a standard genetic algorithm package. By implementing domination advantage and protocol objectives, small diverse populations of clinically acceptable plans that were only dominated 0.2% by the Pareto front could be generated in a fraction of an hour. Conclusions: Our MOEA produces a diverse Pareto optimal set of plans that meet all dosimetric protocol criteria in a feasible amount of time. It optimizes not only beamlet intensities but also objective function parameters on a patient-specific basis.
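    To make the two-level structure concrete, here is a minimal sketch, not the authors' implementation: the outer evolutionary loop evolves penalty-function weights (the genes), a placeholder inner_optimize stands in for the accelerated deterministic IMRT optimization and maps those weights to two toy fitness objectives, and Pareto non-domination drives selection. All names, the toy trade-off curve, and the population settings are assumptions made for illustration.

    import numpy as np

    def inner_optimize(weights, rng):
        """Placeholder for the bottom-level deterministic optimization: maps two
        penalty weights to a point on a toy trade-off curve plus small noise."""
        t = weights[0] / (weights[0] + weights[1])
        return np.array([t, 1.0 - t]) + 0.02 * rng.standard_normal(2)

    def dominates(a, b):
        """True if plan 'a' is at least as good as 'b' in every objective and
        strictly better in at least one (minimization)."""
        return np.all(a <= b) and np.any(a < b)

    def pareto_front(pop_fitness):
        """Indices of non-dominated individuals in the population."""
        return [i for i, fi in enumerate(pop_fitness)
                if not any(dominates(fj, fi)
                           for j, fj in enumerate(pop_fitness) if j != i)]

    def hierarchical_moea(pop_size=20, generations=30, seed=0):
        rng = np.random.default_rng(seed)
        genes = rng.uniform(0.1, 10.0, size=(pop_size, 2))        # penalty weights
        for _ in range(generations):
            fitness = [inner_optimize(g, rng) for g in genes]     # bottom level
            keep = pareto_front(fitness)                          # selection
            parents = genes[keep]
            children = parents[rng.integers(len(parents), size=pop_size - len(parents))]
            children = children * rng.lognormal(0.0, 0.2, size=children.shape)  # mutate
            genes = np.vstack([parents, children])
        fitness = [inner_optimize(g, rng) for g in genes]
        return genes[pareto_front(fitness)]

    if __name__ == "__main__":
        front_genes = hierarchical_moea()
        print(f"{len(front_genes)} non-dominated penalty-weight settings found")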

    Markov Decision Processes with Multiple Long-run Average Objectives

    We study Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) functions. We consider two different objectives, namely, expectation and satisfaction objectives. Given an MDP with k limit-average functions, in the expectation objective the goal is to maximize the expected limit-average value, and in the satisfaction objective the goal is to maximize the probability of runs such that the limit-average value stays above a given vector. We show that under the expectation objective, in contrast to the case of one limit-average function, both randomization and memory are necessary for strategies even for epsilon-approximation, and that finite-memory randomized strategies are sufficient for achieving Pareto optimal values. Under the satisfaction objective, in contrast to the case of one limit-average function, infinite memory is necessary for strategies achieving a specific value (i.e. randomized finite-memory strategies are not sufficient), whereas memoryless randomized strategies are sufficient for epsilon-approximation, for all epsilon > 0. We further prove that the decision problems for both expectation and satisfaction objectives can be solved in polynomial time and the trade-off curve (Pareto curve) can be epsilon-approximated in time polynomial in the size of the MDP and 1/epsilon, and exponential in the number of limit-average functions, for all epsilon > 0. Our analysis also reveals flaws in previous work for MDPs with multiple mean-payoff functions under the expectation objective, corrects the flaws, and allows us to obtain improved results.
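    For the expectation objective, one standard way to check whether a vector of expected limit-average values is achievable is a linear program over long-run state-action frequencies. The sketch below makes a strong simplifying assumption (a unichain MDP) and is not the paper's general construction, which also handles end components and strategy synthesis; the two-state MDP, reward functions, and target vectors are invented for illustration.

    import numpy as np
    from scipy.optimize import linprog

    def achievable(P, R, v):
        """Feasibility check for a unichain MDP: P[s, a, s'] transition kernel,
        R[k, s, a] reward functions, v target vector. Variables x[s, a] are
        long-run state-action frequencies, constrained by
            sum_a x[s, a] = sum_{s', a'} P[s', a', s] * x[s', a']   (stationarity)
            sum x = 1,  x >= 0,  sum_{s, a} R[k, s, a] * x[s, a] >= v[k] for each k."""
        S, A, _ = P.shape
        n = S * A
        A_eq = np.zeros((S + 1, n))
        for s in range(S):
            for a in range(A):
                A_eq[s, s * A + a] += 1.0             # outflow of state s
            A_eq[s, :] -= P[:, :, s].reshape(n)       # inflow into state s
        A_eq[S, :] = 1.0                              # frequencies sum to one
        b_eq = np.append(np.zeros(S), 1.0)
        A_ub = -R.reshape(len(v), n)                  # -R x <= -v  <=>  R x >= v
        b_ub = -np.asarray(v, dtype=float)
        res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                      A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
        return res.success

    if __name__ == "__main__":
        # Two states, two actions: action 0 always moves to state 0, action 1 to state 1.
        P = np.zeros((2, 2, 2))
        P[:, 0, 0] = 1.0
        P[:, 1, 1] = 1.0
        R = np.zeros((2, 2, 2))
        R[0, :, 0] = 1.0    # reward function 0 pays 1 for action 0
        R[1, :, 1] = 1.0    # reward function 1 pays 1 for action 1
        print(achievable(P, R, v=[0.4, 0.4]))   # True: mixing the two actions suffices
        print(achievable(P, R, v=[0.7, 0.7]))   # False: the two frequencies cannot exceed 1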