
    Robust inverse optimization

    Given an observation of a decision-maker’s uncertain behavior, we develop a robust inverse optimization model for imputing an objective function that is robust against misspecifications of the behavior. We characterize the inversely optimized cost vectors for uncertainty sets that may or may not intersect the feasible region, and propose tractable solution methods for special cases. We demonstrate the proposed model in the context of diet recommendation.
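
    For readers unfamiliar with the mechanics, the classical (non-robust) inverse linear program that models like this build on can be sketched as follows: given a single observed decision and a known feasible region, LP duality yields linear conditions under which a candidate cost vector renders the observation optimal. The snippet below is a minimal illustration with made-up diet-style data and a nonnegativity/normalization convention of my choosing; it is not the robust model proposed in the paper, which replaces the single observation with an uncertainty set.

        import numpy as np
        from scipy.optimize import linprog

        # Forward (diet-style) LP:  min c^T x  s.t.  A x >= b, x >= 0
        # Hypothetical data: 2 foods, 2 nutrient requirements.
        A = np.array([[2.0, 1.0],
                      [1.0, 3.0]])
        b = np.array([8.0, 9.0])
        x_obs = np.array([3.0, 2.0])   # observed decision (a vertex of the feasible set)

        n, m = 2, 2
        # Inverse LP variables: z = (c_1, c_2, y_1, y_2), with y the forward dual.
        # Conditions that make x_obs optimal for cost c (assuming c >= 0, sum(c) = 1):
        #   dual feasibility:  A^T y <= c,  y >= 0
        #   strong duality:    b^T y  = c^T x_obs
        A_ub = np.hstack([-np.eye(n), A.T])                 # A^T y - c <= 0
        b_ub = np.zeros(n)
        A_eq = np.array([np.r_[-x_obs, b],                  # b^T y - c^T x_obs = 0
                         np.r_[np.ones(n), np.zeros(m)]])   # sum(c) = 1 (rules out c = 0)
        b_eq = np.array([0.0, 1.0])

        res = linprog(c=np.zeros(n + m), A_ub=A_ub, b_ub=b_ub,
                      A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + m),
                      method="highs")
        c_hat, y_hat = res.x[:n], res.x[n:]
        print("imputed cost vector:", c_hat)   # any vector in the optimality cone of x_obs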

    Data-driven Inverse Optimization with Imperfect Information

    In data-driven inverse optimization, an observer aims to learn the preferences of an agent who solves a parametric optimization problem depending on an exogenous signal. Thus, the observer seeks the agent's objective function that best explains a historical sequence of signals and corresponding optimal actions. We focus here on situations where the observer has imperfect information, that is, where the agent's true objective function is not contained in the search space of candidate objectives, where the agent suffers from bounded rationality or implementation errors, or where the observed signal-response pairs are corrupted by measurement noise. We formalize this inverse optimization problem as a distributionally robust program minimizing the worst-case risk that the predicted decision (i.e., the decision implied by a particular candidate objective) differs from the agent's actual response to a random signal. We show that our framework offers rigorous out-of-sample guarantees for different loss functions used to measure prediction errors and that the emerging inverse optimization problems can be exactly reformulated as (or safely approximated by) tractable convex programs when a new suboptimality loss function is used. We show through extensive numerical tests that the proposed distributionally robust approach to inverse optimization often attains better out-of-sample performance than state-of-the-art approaches.
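
    As background on the quantity being robustified, a suboptimality-type loss compares the cost of the observed response with the optimal cost of the forward problem under a candidate objective. The sketch below evaluates that loss on hypothetical signal-response pairs in which the signal shifts the constraint right-hand side; it illustrates the loss only, not the distributionally robust reformulation developed in the paper, and the signal model is an assumption of this sketch.

        import numpy as np
        from scipy.optimize import linprog

        A = np.array([[2.0, 1.0], [1.0, 3.0]])
        b0 = np.array([8.0, 9.0])

        def suboptimality_loss(c, signal, x_resp):
            """c^T x_resp - min{ c^T x : A x >= b(signal), x >= 0 } for one pair."""
            b = b0 + signal                      # hypothetical signal model: shifts requirements
            fwd = linprog(c=c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * len(c), method="highs")
            return float(c @ x_resp - fwd.fun)

        # Hypothetical observed (signal, response) pairs; responses are mildly noisy.
        pairs = [(np.array([0.0, 0.0]), np.array([3.1, 2.0])),
                 (np.array([2.0, 0.0]), np.array([4.0, 2.1])),
                 (np.array([0.0, 3.0]), np.array([2.9, 3.2]))]

        c_candidate = np.array([0.4, 0.6])       # candidate objective to be scored
        avg_loss = np.mean([suboptimality_loss(c_candidate, s, x) for s, x in pairs])
        print("average suboptimality loss:", avg_loss)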

    Reducing Conservatism in Pareto Robust Optimization

    Robust optimization (RO) is a sub-field of optimization theory with set-based uncertainty. A criticism of this field is that it determines optimal decisions only for the worst-case realizations of uncertainty. Several methods have been introduced to reduce this conservatism. However, none of these methods can guarantee that no other solution improves upon the optimal solution for all non-worst-case realizations. Pareto robust optimization ensures that non-worst-case scenarios are accounted for and that the solution cannot be dominated across all scenarios. The problem with Pareto robust optimization (PRO) is that a Pareto robust optimal solution may still be improved by another solution over a given subset of the uncertainty set. Also, Pareto robust optimal solutions remain conservative with respect to optimality for the worst-case scenario. In this thesis, we first apply the concept of PRO to the Intensity Modulated Radiation Therapy (IMRT) problem. We present a Pareto robust optimization model for four types of IMRT problems. Using several hypothetical breast cancer data sets, we show that PRO solutions decrease the side effects of overdosing while delivering the same dose to the organs at risk that RO solutions deliver. Next, we present methods to reduce the conservatism of PRO solutions. We present a method for generating alternative RO solutions for any linear robust optimization problem. We also demonstrate a method for determining whether an RO solution is PRO, and we use it to determine the set of all PRO solutions, which we call the "Pareto robust frontier" of a linear robust optimization problem. Afterward, we characterize the set of uncertainty realizations for which a given PRO solution is optimal. Using this approach, we compare all PRO solutions to determine the one that is optimal for the maximum number of realizations in a given set; we call this solution a "superior" PRO solution for that set. Finally, we introduce a method to generate a PRO solution while slightly relaxing optimality for the worst-case scenario. We call these solutions "light PRO" solutions. We illustrate the application of our approach to the IMRT problem for breast cancer. The numerical results demonstrate the significant impact of our method in reducing the side effects of radiation therapy.
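
    To make the dominance idea concrete, the check below searches for a feasible point that is no worse than a given robustly optimal solution in every cost realization of a finite scenario set and strictly better in aggregate; if none exists, the solution is Pareto robust optimal over those scenarios. This is a minimal illustration of a standard PRO dominance test on toy data, not the thesis's IMRT models or its frontier-generation and "light PRO" methods.

        import numpy as np
        from scipy.optimize import linprog

        # Feasible region { x : A x >= b, x >= 0 } and a finite set of cost scenarios.
        A = np.array([[2.0, 1.0], [1.0, 3.0]])
        b = np.array([8.0, 9.0])
        scenarios = [np.array([1.0, 2.0]), np.array([2.0, 1.0])]   # toy cost realizations

        x_star = np.array([3.0, 2.0])   # robustly optimal: minimizes the worst-case cost here

        # Search for feasible y with c_s^T y <= c_s^T x_star for every scenario s,
        # maximizing the aggregate improvement sum_s c_s^T (x_star - y).
        agg = np.sum(scenarios, axis=0)
        A_ub = np.vstack([-A] + [c[None, :] for c in scenarios])
        b_ub = np.r_[-b, [c @ x_star for c in scenarios]]
        res = linprog(c=agg, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")

        improvement = agg @ x_star - res.fun
        if improvement > 1e-8:
            print("x_star is dominated by", res.x, "-> not Pareto robust optimal")
        else:
            print("no dominating point exists: x_star is Pareto robust optimal "
                  "over these scenarios")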

    Wasserstein Distributionally Robust Inverse Multiobjective Optimization

    Inverse multiobjective optimization provides a general framework for the unsupervised learning task of inferring parameters of a multiobjective decision making problem (DMP), based on a set of observed decisions from the human expert. However, the performance of this framework relies critically on the availability of an accurate DMP, sufficient decisions of high quality, and a parameter space that contains enough information about the DMP. To hedge against the uncertainties in the hypothetical DMP, the data, and the parameter space, we investigate in this paper the distributionally robust approach for inverse multiobjective optimization. Specifically, we leverage the Wasserstein metric to construct a ball centered at the empirical distribution of these decisions. We then formulate a Wasserstein distributionally robust inverse multiobjective optimization problem (WRO-IMOP) that minimizes a worst-case expected loss function, where the worst case is taken over all distributions in the Wasserstein ball. We show that the excess risk of the WRO-IMOP estimator has a sub-linear convergence rate. Furthermore, we propose semi-infinite reformulations of the WRO-IMOP and develop a cutting-plane algorithm that converges to an approximate solution in finitely many iterations. Finally, we demonstrate the effectiveness of our method on both a synthetic multiobjective quadratic program and a real-world portfolio optimization problem.
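
    The cutting-plane idea referenced here can be illustrated in a much simpler setting: a robust LP with box uncertainty in the objective, where a master problem is solved over a finite set of realizations and a pessimization step keeps adding the currently worst realization until nothing is violated. This is only a generic scenario-generation skeleton on toy data; the paper's algorithm operates on the semi-infinite WRO-IMOP reformulation and Wasserstein ambiguity sets, which are not reproduced below.

        import numpy as np
        from scipy.optimize import linprog

        # Toy robust LP:  min_x  max_{q in [-1,1]^2} (c + P q)^T x   s.t.  A x >= b, x >= 0
        A = np.array([[2.0, 1.0], [1.0, 3.0]])
        b = np.array([8.0, 9.0])
        c = np.array([1.0, 2.0])
        P = 0.5 * np.eye(2)
        n = 2

        Q = [np.zeros(n)]                      # initial finite set of realizations
        for it in range(20):
            # Master: min t  s.t. (c + P q)^T x <= t for all q in Q,  A x >= b,  x >= 0.
            obj = np.r_[np.zeros(n), 1.0]                       # variables (x, t)
            A_ub = np.vstack([np.r_[c + P @ q, -1.0] for q in Q] +
                             [np.c_[-A, np.zeros((len(b), 1))]])
            b_ub = np.r_[np.zeros(len(Q)), -b]
            res = linprog(c=obj, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(0, None)] * n + [(None, None)], method="highs")
            x, t = res.x[:n], res.x[n]

            # Pessimization: the worst box realization for the current x is sign(P^T x).
            q_worst = np.sign(P.T @ x)
            violation = (c + P @ q_worst) @ x - t
            if violation <= 1e-8:
                break
            Q.append(q_worst)

        print(f"converged in {it + 1} iterations: x = {x}, worst-case cost = {t:.3f}")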

    Machine Learning and Inverse Optimization for Estimation of Weighting Factors in Multi-Objective Production Scheduling Problems

    In recent years, scheduling optimization has been utilized in production systems. To construct a suitable mathematical model of a production scheduling problem, modeling techniques that can automatically select an appropriate objective function from historical data are necessary. This paper presents two methods for estimating the weighting factors of the objective function in the scheduling problem from historical data, given information on operation times and setup costs. We propose a machine learning-based method and an inverse optimization-based method that use the input/output data of the scheduling problems when the weighting factors of the objective function are unknown. These two methods are applied to a multi-objective parallel machine scheduling problem and a real-world chemical batch plant scheduling problem. The results of the estimation accuracy evaluation show that the proposed methods for estimating the weighting factors of the objective function are effective.
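
    To give a flavor of the inverse-optimization route (the machine-learning route is not shown), the sketch below treats the objective as a weighted sum of two known criteria and uses LP duality to find the range of weights under which an observed decision is exactly optimal. The two-criteria setup, the single observation, and all the numbers are hypothetical simplifications, not the scheduling models studied in the paper.

        import numpy as np
        from scipy.optimize import linprog

        # Forward problem:  min  (w*c1 + (1-w)*c2)^T x   s.t.  A x >= b, x >= 0,
        # with known criteria c1, c2 and unknown weight w to be estimated.
        A = np.array([[2.0, 1.0], [1.0, 3.0]])
        b = np.array([8.0, 9.0])
        c1 = np.array([1.0, 3.0])          # e.g. total operation time (toy numbers)
        c2 = np.array([4.0, 1.0])          # e.g. total setup cost (toy numbers)
        x_obs = np.array([3.0, 2.0])       # observed (historical) decision

        # Variables z = (w, y1, y2); y is the dual of the forward LP.  x_obs is optimal iff
        #   A^T y <= w*c1 + (1-w)*c2,   b^T y = (w*c1 + (1-w)*c2)^T x_obs,   y >= 0.
        A_ub = np.column_stack([-(c1 - c2), A.T])            # A^T y - w*(c1 - c2) <= c2
        b_ub = c2
        A_eq = np.r_[-(c1 - c2) @ x_obs, b][None, :]         # strong duality as an equality
        b_eq = np.array([c2 @ x_obs])
        bounds = [(0.0, 1.0), (0, None), (0, None)]

        w_range = []
        for sense in (+1.0, -1.0):                           # minimize, then maximize w
            res = linprog(c=[sense, 0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
                          A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
            w_range.append(res.x[0])
        print("observed schedule is optimal for any weight w in "
              f"[{w_range[0]:.3f}, {w_range[1]:.3f}]")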

    You Are What You Eat: A Preference-Aware Inverse Optimization Approach

    A key challenge in the emerging field of precision nutrition entails providing diet recommendations that reflect both the (often unknown) dietary preferences of different patient groups and known dietary constraints specified by human experts. Motivated by this challenge, we develop a preference-aware constrained-inference approach in which the objective function of an optimization problem is not pre-specified and can differ across various segments. Among existing methods, clustering models from machine learning are not naturally suited to recovering constrained optimization problems, whereas constrained inference models such as inverse optimization do not explicitly address non-homogeneity in given datasets. By harnessing the strengths of both clustering and inverse optimization techniques, we develop a novel approach that recovers the utility functions of a constrained optimization process across clusters while providing optimal diet recommendations as cluster representatives. Using a dataset of patients' daily food intakes, we show how our approach generalizes stand-alone clustering and inverse optimization approaches in terms of adherence to dietary guidelines and partitioning observations, respectively. The approach makes diet recommendations by incorporating both patient preferences and expert recommendations for healthier diets, leading to structural improvements in both patient partitioning and nutritional recommendations for each cluster. An appealing feature of our method is its ability to consider infeasible but informative observations for a given set of dietary constraints. The resulting recommendations correspond to a broader range of dietary options, even when they limit unhealthy choices.
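
    As a schematic of how clustering and objective inference can interact, the sketch below assigns each observed intake vector to whichever of a small set of candidate cost vectors explains it best, measured by suboptimality over a shared constraint set. This is my own simplified reading: in the paper the cluster-level objectives are recovered by inverse optimization rather than chosen from a fixed dictionary, and infeasible observations are handled explicitly; the data and candidate utilities here are hypothetical.

        import numpy as np
        from scipy.optimize import linprog

        # Shared dietary constraints  { x : A x >= b, x >= 0 }  (toy nutrient requirements).
        A = np.array([[2.0, 1.0], [1.0, 3.0]])
        b = np.array([8.0, 9.0])

        def suboptimality(c, x):
            """How far x is from being optimal for cost c over the shared constraints."""
            fwd = linprog(c=c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2, method="highs")
            return float(c @ x - fwd.fun)

        # Observed daily intakes (two latent preference groups) and candidate cost vectors.
        observations = [np.array([0.2, 8.1]), np.array([0.0, 8.4]),   # look like group 1
                        np.array([3.1, 2.2]), np.array([2.9, 2.3])]   # look like group 2
        candidates = [np.array([0.75, 0.25]), np.array([0.5, 0.5])]   # hypothetical utilities

        # Assignment step: each observation joins the candidate objective that explains it
        # best; a full method would then re-estimate each cluster's objective by inverse
        # optimization over its members and iterate until the partition stabilizes.
        labels = [int(np.argmin([suboptimality(c, x) for c in candidates]))
                  for x in observations]
        for k in range(len(candidates)):
            members = [tuple(observations[i]) for i in range(len(observations)) if labels[i] == k]
            print(f"cluster {k} (cost {candidates[k]}): {members}")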

    Inverse Learning: A Data-driven Framework to Infer Optimization Models

    We consider the problem of inferring optimal solutions and unknown parameters of a partially known constrained problem using a set of past decisions. We assume that the constraints of the original optimization problem are known, while the optimal decisions and the objective are to be inferred. In such situations, the quality of the optimal solution is evaluated in relation to the existing observations and the known parameters of the constrained problem. A method previously used in such settings is inverse optimization, which can be used to infer the utility functions of a decision-maker and, indirectly, to find optimal solutions based on these inferred parameters. However, little effort has been made to generalize the inverse optimization methodology to data-driven settings that address the quality of the inferred optimal solutions. In this work, we present a data-driven inverse linear optimization framework (Inverse Learning) that aims to infer the optimal solution to an optimization problem directly from the observed data and the known parameters of the problem. We validate our model on a dataset in the diet recommendation problem setting to find personalized diets for prediabetic patients with hypertension. Our results show that our model obtains optimal personalized daily food intakes that preserve the original data trends while providing a range of options to patients and providers. The results show that our proposed model captures optimal solutions with minimal perturbation from the given observations while achieving the inherent objectives of the original problem. We show an inherent trade-off in the quality of the inferred solutions under different metrics and provide insights into how a range of optimal solutions can be inferred in constrained environments.
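
    One way to read the "infer the optimal solution directly" idea is as a two-step construction: fit a cost vector that minimizes the duality gap of the (averaged) observations, then return the point on the resulting optimal face that perturbs them the least. The sketch below implements that reading on toy data; the averaging, the L1 projection, and the data are my own simplifications and are not claimed to match the paper's framework or metrics.

        import numpy as np
        from scipy.optimize import linprog

        A = np.array([[2.0, 1.0], [1.0, 3.0]])   # known constraints: A x >= b, x >= 0
        b = np.array([8.0, 9.0])
        obs = np.array([[3.2, 2.1], [2.9, 2.4], [3.4, 2.0]])   # past (noisy) decisions
        x_bar = obs.mean(axis=0)
        n, m = A.shape[1], A.shape[0]

        # Step 1: impute a cost vector c (sum(c)=1, c>=0) minimizing the duality gap
        # e = c^T x_bar - b^T y subject to dual feasibility A^T y <= c.
        # Variables z = (c, y, e).
        A_ub = np.hstack([-np.eye(n), A.T, np.zeros((n, 1))])
        A_eq = np.array([np.r_[x_bar, -b, -1.0],                 # c^T x_bar - b^T y - e = 0
                         np.r_[np.ones(n), np.zeros(m), 0.0]])   # sum(c) = 1
        res1 = linprog(c=np.r_[np.zeros(n + m), 1.0], A_ub=A_ub, b_ub=np.zeros(n),
                       A_eq=A_eq, b_eq=np.array([0.0, 1.0]),
                       bounds=[(0, None)] * (n + m + 1), method="highs")
        c_hat = res1.x[:n]

        # Step 2: forward-optimal value under the imputed cost.
        fwd = linprog(c=c_hat, A_ub=-A, b_ub=-b, bounds=[(0, None)] * n, method="highs")
        v_star = fwd.fun

        # Step 3: among the optimal solutions under c_hat, pick the one closest (in L1)
        # to the averaged observation.  Variables (x, t) with |x - x_bar| <= t.
        A_ub3 = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),
                           np.hstack([-np.eye(n), -np.eye(n)]),
                           np.hstack([-A, np.zeros((m, n))])])
        b_ub3 = np.r_[x_bar, -x_bar, -b]
        A_eq3 = np.r_[c_hat, np.zeros(n)][None, :]
        res3 = linprog(c=np.r_[np.zeros(n), np.ones(n)], A_ub=A_ub3, b_ub=b_ub3,
                       A_eq=A_eq3, b_eq=np.array([v_star]),
                       bounds=[(0, None)] * (2 * n), method="highs")
        print("imputed cost:", c_hat, " inferred optimal decision:", res3.x[:n])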

    Inverse Integer Optimization With an Application in Recommender Systems

    In typical (forward) optimization, the goal is to obtain optimal values for the decision variables given known values of the optimization model parameters. However, in practice, it may be challenging to determine appropriate values for these parameters. Assuming the availability of historical observations that represent past decisions made by an optimizing agent, the goal of inverse optimization is to impute the unknown model parameters that would make these observations optimal (or approximately optimal) solutions to the forward optimization problem. Inverse optimization has many applications, including geology, healthcare, transportation, and production planning. In this dissertation, we study inverse optimization with integer observation(s), focusing on the cost coefficients as the unknown parameters. Furthermore, we demonstrate an application of inverse optimization to recommender systems. First, we address inverse optimization with a single imperfect integer observation. The aim is to identify the unknown cost vector that makes the given imperfect observation approximately optimal by minimizing the optimality error. We develop a cutting plane algorithm for this problem. Results show that the proposed cutting plane algorithm works well for small instances. To reduce computational time, we propose an LP relaxation heuristic method. Furthermore, to obtain an optimal solution in a shorter amount of time, we combine both methods into a hybrid approach by initializing the cutting plane algorithm with a solution from the heuristic method. In the second study, we generalize the previous approach to inverse optimization with multiple imperfect integer observations that are all feasible solutions to one optimization problem. A cutting plane algorithm is proposed and then compared with an LP heuristic method. The results show the value of using multiple data points instead of a single observation. Finally, we apply the proposed methods in the setting of recommender systems. Using past user preferences, we identify through inverse optimization the unknown model parameters that minimize an aggregate of the optimality errors over multiple points. Once the unknown parameters are imputed, the recommender system can recommend the best items to the users. An advantage of using inverse optimization is that, when users optimize their decisions, there is no need for a large amount of data to impute the recommender system's model parameters. We demonstrate the accuracy of our approach on a real data set for a restaurant recommender system.
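
    As a self-contained illustration of the cutting-plane idea for the single-observation case, the loop below alternates between a master LP that proposes a normalized cost vector minimizing the optimality error of the observed integer point, and a forward integer program that generates the point violating that error bound the most. The toy instance and the sum-to-one normalization are assumptions of this sketch; the dissertation's algorithm and its LP-relaxation heuristic and hybrid variants are not reproduced here.

        import numpy as np
        from scipy.optimize import linprog, milp, LinearConstraint, Bounds

        # Forward integer program:  min c^T x  s.t.  A x >= b,  x >= 0 integer.
        A = np.array([[2.0, 1.0], [1.0, 3.0]])
        b = np.array([8.0, 9.0])
        x_obs = np.array([4.0, 2.0])    # imperfect integer observation (feasible, suboptimal)
        n = 2

        def forward_ip(c):
            res = milp(c=c, constraints=LinearConstraint(A, b, np.inf),
                       integrality=np.ones(n), bounds=Bounds(0, np.inf))
            return res.x

        cuts = []                        # integer points generated so far
        for it in range(50):
            # Master LP over (c, e): minimize the optimality error e subject to
            # c^T x_obs - c^T x_k <= e for every generated point x_k, sum(c)=1, c>=0.
            A_ub = (np.vstack([np.r_[x_obs - xk, -1.0] for xk in cuts])
                    if cuts else None)
            b_ub = np.zeros(len(cuts)) if cuts else None
            res = linprog(c=np.r_[np.zeros(n), 1.0], A_ub=A_ub, b_ub=b_ub,
                          A_eq=np.r_[np.ones(n), 0.0][None, :], b_eq=[1.0],
                          bounds=[(0, None)] * (n + 1), method="highs")
            c_hat, e_hat = res.x[:n], res.x[n]

            # Pessimization: the forward IP under c_hat yields the most violated cut.
            x_new = forward_ip(c_hat)
            if c_hat @ (x_obs - x_new) <= e_hat + 1e-8:
                break                    # no integer point violates the current error bound
            cuts.append(x_new)

        print(f"imputed cost: {c_hat}, optimality error of x_obs: {e_hat:.3f}, "
              f"iterations: {it + 1}")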