
    Inverse Parametric Optimization For Learning Utility Functions From Optimal and Satisficing Decisions

    Inverse optimization is a method for determining optimization model parameters from observed decisions. Despite being a learning method, inverse optimization is not part of a data scientist's toolkit in practice, especially as many general-purpose machine learning packages are widely available as alternatives. In this dissertation, we examine and remedy two aspects of inverse optimization that prevent it from being more widely used by practitioners: the alternative-based approach to inverse optimization modeling and the assumption that observations must be optimal. The first part of the dissertation positions inverse optimization as a learning method analogous to supervised machine learning and provides a starting point toward identifying the characteristics that make inverse optimization more efficient than general out-of-the-box supervised machine learning approaches, focusing on the problem of imputing the objective function of a parametric convex optimization problem. The second part provides an attribute-based perspective on inverse optimization modeling. Inverse attribute-based optimization imputes the importance of the decision attributes that result in minimally suboptimal decisions, rather than the importance of the decisions themselves. This perspective expands the range of problems to which inverse optimization applies; we demonstrate that it facilitates the application of inverse optimization to assortment optimization, where changing product selections is a defining feature and accurate demand predictions are essential. Finally, the third part expands inverse parametric optimization to a more general setting in which the assumption that observations are optimal is relaxed to requiring only feasibility. The proposed inverse satisfaction method can handle both feasible and minimally suboptimal solutions. We mathematically prove that it provides statistically consistent estimates of the unknown parameters and can learn from both optimal and merely feasible decisions.
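
    As a rough illustration of the imputation problem studied in the first part, the sketch below recovers cost weights for a small linear forward problem from a single observed decision by minimizing its suboptimality gap. The forward problem, the observed decision, and the grid search over the weight simplex are hypothetical simplifications for illustration only, not the dissertation's parametric convex formulation.

    # A minimal sketch of inverse optimization for a linear forward problem:
    # given an observed decision assumed (near-)optimal, find cost weights
    # under which that decision is as close to optimal as possible.
    # The forward problem, weights, and data below are hypothetical.
    import numpy as np
    from scipy.optimize import linprog

    # Forward problem: min c^T x  s.t.  x1 + 2*x2 >= 2,  0 <= x <= 1
    A_ub = np.array([[-1.0, -2.0]])   # encoded as -x1 - 2*x2 <= -2
    b_ub = np.array([-2.0])
    bounds = [(0.0, 1.0), (0.0, 1.0)]

    x_obs = np.array([0.0, 1.0])      # observed decision, assumed near-optimal

    def suboptimality(c):
        """Gap between the observed decision and the optimum under cost c."""
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return c @ x_obs - res.fun

    # Search cost weights on the 2-simplex (c1 + c2 = 1, c >= 0) and keep the
    # weights under which the observation is least suboptimal.
    grid = np.linspace(0.0, 1.0, 101)
    best_c = min(([w, 1.0 - w] for w in grid), key=lambda c: suboptimality(np.array(c)))
    print("imputed cost weights:", best_c)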

    Développement d'une technique d'acquisition de contraintes basée sur le nombre de solutions

    Several programming paradigms exist to help solve combinatorial optimization problems, one of them being constraint programming. The idea of this paradigm is to model the problem to be solved using constraints, i.e. statements that force the variables of the problem to respect a mathematical relation. The constraints of a problem usually have parameters, which specify the mathematical relation to be respected, and decision variables, which represent the variables to which the relation must apply. Although interesting in itself, constraint programming also extends to other concepts, notably the automation of the modeling process. Constraint acquisition consists in learning the constraints, including their parameter values, that can explain a set of provided examples. Constraint acquisition can be useful in many situations, such as learning the structure of hospital schedules from past schedules. Constraint acquisition is still a young field for which strategies must still be adapted or developed. Existing acquisition techniques vary widely in style, including methods that create artificial solutions to interact with a user and approaches that rely on rigorous mathematical analyses of real solutions to make choices without ever communicating with the user. In this thesis, we explore a new method for performing constraint acquisition. The main criterion of the method is based on the number of solutions of the considered model and relies on model-counting tools. Our technique performs well on the problems tested and opens the door to a new way of approaching constraint acquisition problems.
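
    As a rough illustration of the counting criterion, the sketch below filters a pool of candidate constraints against example solutions and greedily keeps the candidates that shrink the model's solution count the most. The variables, domains, candidate constraints, examples, and brute-force enumeration are hypothetical stand-ins for the thesis's model-counting tools.

    # A minimal sketch of counting-based constraint acquisition: from a pool of
    # candidate constraints, keep those consistent with all examples, preferring
    # candidates that reduce the model's solution count the most. All data here
    # is hypothetical, and brute-force enumeration replaces real counting tools.
    from itertools import product

    DOMAIN = range(1, 5)            # x1, x2, x3 each take values in 1..4
    N_VARS = 3

    examples = [(1, 2, 3), (2, 3, 4), (1, 3, 4)]   # observed example solutions

    # Candidate constraints: (name, predicate over an assignment tuple)
    candidates = [
        ("x1 < x2",        lambda a: a[0] < a[1]),
        ("x2 < x3",        lambda a: a[1] < a[2]),
        ("x1 + x2 == x3",  lambda a: a[0] + a[1] == a[2]),
        ("all_different",  lambda a: len(set(a)) == len(a)),
    ]

    def count_solutions(constraints):
        """Brute-force model counting over the full Cartesian domain."""
        return sum(all(c(a) for _, c in constraints)
                   for a in product(DOMAIN, repeat=N_VARS))

    # Keep only candidates satisfied by every example, then greedily add the
    # one that reduces the solution count the most, until no candidate helps.
    consistent = [c for c in candidates if all(c[1](e) for e in examples)]
    model = []
    while consistent:
        best = min(consistent, key=lambda c: count_solutions(model + [c]))
        if count_solutions(model + [best]) == count_solutions(model):
            break
        model.append(best)
        consistent.remove(best)

    print("learned constraints:", [name for name, _ in model])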