8 research outputs found

    Bayesian decision support for complex systems with many distributed experts

    Complex decision support systems often consist of component modules, each encoding the judgements of a panel of domain experts about a particular sub-domain of the overall system. Ideally these modules are pasted together to provide a comprehensive picture of the whole process. The challenge in building such an integrated system is that, whilst the overall qualitative features are common knowledge to all, the explicit forecasts and their associated uncertainties are expressed only by the individual panels, each drawing on its own analysis. The structure of the integrated system therefore needs to facilitate the coherent piecing together of these separate evaluations. If such a structure is not available, there is a serious danger that decision makers will be driven to incoherent, and so indefensible, policy choices. In this paper we develop a graphically based framework that embeds a set of conditions, consisting of agreements commonly made in practice about certain probability and utility models, which, if satisfied in a given context, are sufficient to ensure that the composite system is truly coherent. Furthermore, we develop new message passing algorithms, entailing the transmission of expected utility scores between the panels, that enable the uncertainties within each module to be fully accounted for when evaluating the available alternatives in these composite systems.
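
    The message-passing idea can be illustrated with a small sketch. The Python fragment below is not the paper's algorithm; it only shows the flavour of two panels exchanging expected-utility scores over a hypothetical two-panel chain, with the panel structure, variable names, and numbers all invented for illustration.

```python
# A minimal sketch, not the paper's algorithm: two panels exchange
# expected-utility scores. Panel B owns the conditional model p(y | x) and
# the utility table U(d, y); Panel A owns p(x | d). The two-panel chain,
# the discrete state spaces and all numbers are illustrative assumptions.

def panel_b_message(p_y_given_x, utility, decisions, x_states, y_states):
    """Panel B reports E[U(d, Y) | x] for every decision d and upstream state x."""
    return {(d, x): sum(p_y_given_x[(y, x)] * utility[(d, y)] for y in y_states)
            for d in decisions for x in x_states}

def panel_a_evaluate(p_x_given_d, message, decisions, x_states):
    """Panel A folds its own uncertainty over X into the scores it received."""
    return {d: sum(p_x_given_d[(x, d)] * message[(d, x)] for x in x_states)
            for d in decisions}

# Toy example: two candidate policies, binary X and Y.
decisions, x_states, y_states = ["d1", "d2"], [0, 1], [0, 1]
p_y_given_x = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.3, (1, 1): 0.7}   # keyed (y, x)
utility = {("d1", 0): 10, ("d1", 1): 2, ("d2", 0): 6, ("d2", 1): 6}  # keyed (d, y)
p_x_given_d = {(0, "d1"): 0.5, (1, "d1"): 0.5, (0, "d2"): 0.9, (1, "d2"): 0.1}

msg = panel_b_message(p_y_given_x, utility, decisions, x_states, y_states)
scores = panel_a_evaluate(p_x_given_d, msg, decisions, x_states)
print(max(scores, key=scores.get), scores)  # policy with the highest expected utility
```

    Each panel keeps ownership of its own model here: the conditional probabilities and utilities never leave the function of the panel that assessed them, only the expected-utility scores are transmitted.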

    Inverse Parametric Optimization For Learning Utility Functions From Optimal and Satisficing Decisions

    Inverse optimization is a method for determining the parameters of an optimization model from observed decisions. Despite being a learning method, inverse optimization is rarely part of a data scientist's toolkit in practice, especially as many general-purpose machine learning packages are widely available as an alternative. In this dissertation, we examine and remedy two aspects of inverse optimization that prevent it from being used more widely by practitioners: the alternative-based approach to inverse optimization modeling, and the assumption that observations are optimal. In the first part of the dissertation, we position inverse optimization as a learning method, in analogy to supervised machine learning. This part provides a starting point for identifying the characteristics that make inverse optimization more efficient than general out-of-the-box supervised machine learning approaches, focusing on the problem of imputing the objective function of a parametric convex optimization problem. The second part of the dissertation provides an attribute-based perspective on inverse optimization modeling. Inverse attribute-based optimization imputes the importance of decision attributes that render the observed decisions minimally suboptimal, instead of imputing the importance of the decisions themselves. This perspective expands the range of applicability of inverse optimization; we demonstrate that it facilitates the application of inverse optimization in assortment optimization, where changing product selections is a defining feature and accurate predictions of demand are essential. Finally, in the third part of the dissertation, we extend inverse parametric optimization to a more general setting in which the assumption that observations are optimal is relaxed to requiring only feasibility. The proposed inverse satisfaction method can deal with both feasible and minimally suboptimal solutions. We prove mathematically that the inverse satisfaction method provides statistically consistent estimates of the unknown parameters and can learn from both optimal and feasible decisions.
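
    To make the inverse-optimization idea concrete, here is a deliberately simplified sketch rather than the dissertation's formulation: it works over a finite set of alternatives instead of a parametric convex program, and it imputes a weight vector by coarse grid search rather than by solving an inverse problem exactly. All function names and data are hypothetical.

```python
# A toy sketch of the inverse-optimization idea, not the dissertation's method:
# given a finite feasible set of alternatives (attribute vectors) and an
# observed choice assumed to be (near-)optimal, grid-search the weight
# simplex for the parameters under which the choice is least suboptimal.
from itertools import product

def suboptimality(weights, chosen, feasible):
    """Gap between the best achievable weighted score and the chosen one."""
    score = lambda x: sum(w * a for w, a in zip(weights, x))
    return max(score(x) for x in feasible) - score(chosen)

def impute_weights(chosen, feasible, step=0.1):
    """Coarse grid search over weight vectors on the simplex (3 attributes)."""
    levels = [i * step for i in range(int(round(1 / step)) + 1)]
    grid = [w for w in product(levels, repeat=3) if abs(sum(w) - 1.0) < 1e-9]
    return min(grid, key=lambda w: suboptimality(w, chosen, feasible))

feasible = [(3, 1, 0), (1, 3, 0), (0, 0, 4), (2, 2, 1)]  # attribute vectors
observed = (1, 3, 0)                                     # the decision we saw
print(impute_weights(observed, feasible))                # imputed attribute weights
```

    The grid search stands in for the exact inverse formulations discussed in the dissertation; the point is only that the learned object is a parameter vector of an optimization model rather than a generic predictive model.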

    Minimality and comparison of sets of multi-attribute vectors

    In a decision-making problem, there is often some uncertainty regarding the user's preferences. We assume a parameterised utility model, in which each scenario corresponds to a utility function over alternatives and represents a possible user preference model consistent with the input preference information. Given a set A of alternatives available to the decision-maker, we can consider the associated utility function, which expresses, for each scenario, the maximum utility among the alternatives in A. We consider two main problems. Firstly, we seek a minimal subset of A that is equivalent to A, i.e., that has the same utility function; we show that for important classes of preference models, the set of possibly strictly optimal alternatives is the unique minimal equivalent subset. Secondly, we consider how to compare A with another set of alternatives B, where A and B correspond to different initial decision choices; this is closely related to the problem of computing setwise max regret. We derive mathematical results that enable different computational techniques for these problems, using linear programming and, in particular, a novel approach based on the extreme points of the epigraph of the utility function.
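
    The notion of a possibly strictly optimal alternative can be illustrated with a small sketch under simplifying assumptions: a finite set of scenarios given as linear utility weight vectors, rather than the continuous parameter sets handled in the paper via linear programming and the epigraph's extreme points. All names and numbers are invented.

```python
# An illustrative sketch, not the paper's algorithms: with a finite set of
# scenarios, each a linear utility weight vector, an alternative is
# "possibly strictly optimal" if some scenario makes it strictly better than
# every other alternative in A.

def utility(weights, alt):
    return sum(w * a for w, a in zip(weights, alt))

def possibly_strictly_optimal(alternatives, scenarios):
    """Alternatives that are the unique maximiser under at least one scenario."""
    return [x for x in alternatives
            if any(all(utility(w, x) > utility(w, y)
                       for y in alternatives if y != x)
                   for w in scenarios)]

A = [(4, 1), (1, 4), (2, 2), (3, 3)]              # multi-attribute vectors
scenarios = [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]  # candidate user weightings
print(possibly_strictly_optimal(A, scenarios))    # [(4, 1), (1, 4), (3, 3)]
```

    Under the conditions identified in the paper, the possibly strictly optimal alternatives form the unique minimal equivalent subset; the sketch merely enumerates scenarios to flag unique maximisers.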

    Preference Elicitation and Generalized Additive Utility

    Any automated decision support software must tailor its actions or recommendations to the preferences of different users. It therefore requires some representation of user preferences, as well as a means of eliciting or otherwise learning the preferences of the specific user on whose behalf it is acting. While additive preference models offer a compact representation of multiattribute utility functions and ease of elicitation, they are often overly restrictive. The more flexible generalized additive independence (GAI) model retains much of the intuitive nature of additive models, but comes at the cost of considerably more complex elicitation. In this article, we summarize the key contributions of our earlier paper (UAI 2005): (a) the first elaboration of the semantic foundations of GAI …
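
    As a hedged illustration of what a GAI representation looks like (not the elicitation techniques the article is about), the sketch below evaluates a utility that decomposes into subutility tables over possibly overlapping subsets of attributes; the factors and numbers are invented.

```python
# A sketch of evaluating a GAI-decomposed utility, u(x) = sum_k u_k(x[I_k]),
# where each I_k is a subset of the attributes. The factor structure and the
# numbers are illustrative assumptions, not taken from the article.

# Each factor: (attribute names, local utility table over those attributes)
gai_factors = [
    (("meal", "wine"), {("fish", "white"): 8, ("fish", "red"): 3,
                        ("steak", "white"): 2, ("steak", "red"): 9}),
    (("wine", "dessert"), {("white", "cake"): 4, ("white", "fruit"): 6,
                           ("red", "cake"): 7, ("red", "fruit"): 2}),
]

def gai_utility(outcome, factors):
    """Sum the local subutilities at the outcome's attribute values."""
    return sum(table[tuple(outcome[a] for a in attrs)] for attrs, table in factors)

outcome = {"meal": "steak", "wine": "red", "dessert": "cake"}
print(gai_utility(outcome, gai_factors))  # 9 + 7 = 16
```

    A purely additive model is the special case in which each factor involves a single attribute; the interaction between meal and wine within one factor is what the more general GAI form captures.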