    Decomposition Strategies for Constructive Preference Elicitation

    We tackle the problem of constructive preference elicitation, that is, the problem of learning user preferences over very large decision problems involving a combinatorial space of possible outcomes. In this setting, the suggested configuration is synthesized on-the-fly by solving a constrained optimization problem, while the preferences are learned iteratively by interacting with the user. Previous work has shown that Coactive Learning is a suitable method for learning user preferences in constructive scenarios. In Coactive Learning the user provides feedback to the algorithm in the form of an improvement to a suggested configuration. When the problem involves many decision variables and constraints, this type of interaction poses a significant cognitive burden on the user. We propose a decomposition technique for large preference-based decision problems relying exclusively on inference and feedback over partial configurations. This has the clear advantage of drastically reducing the user's cognitive load. Additionally, part-wise inference can be (up to exponentially) less computationally demanding than inference over full configurations. We discuss the theoretical implications of working with parts and present promising empirical results on one synthetic and two realistic constructive problems. Accepted at the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18).
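The Coactive Learning interaction described above can be sketched as a preference-perceptron update: when the user improves a suggested configuration, the weight vector is shifted toward the improvement's features. This is a minimal illustration, not the paper's algorithm; the feature vectors and learning rate below are made up for the example.

```python
import numpy as np

def coactive_update(w, phi_suggested, phi_improved, eta=1.0):
    """Shift the utility weights toward the user's improved configuration."""
    return w + eta * (phi_improved - phi_suggested)

w = np.zeros(3)
phi_suggested = np.array([1.0, 0.0, 2.0])  # features of the system's suggestion
phi_improved  = np.array([0.0, 1.0, 2.0])  # features of the user's improvement
w = coactive_update(w, phi_suggested, phi_improved)
# after the update, w scores the improved configuration higher than the suggestion
```

Repeating this update over many interactions drives the learned utility toward the user's true preferences; the decomposition proposed in the abstract applies the same idea to partial configurations.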

    Decision-Making with Belief Functions: a Review

    Approaches to decision-making under uncertainty in the belief function framework are reviewed. Most methods are shown to blend criteria for decision under ignorance with the maximum expected utility principle of Bayesian decision theory. A distinction is made between methods that construct a complete preference relation among acts and those that allow incomparability of some acts due to lack of information. Methods developed in the imprecise probability framework are applicable in the Dempster-Shafer context and are also reviewed. Shafer's constructive decision theory, which substitutes the notion of goal for that of utility, is described and contrasted with other approaches. The paper ends by pointing out the need for deeper investigation of fundamental issues related to decision-making with belief functions and for assessing the descriptive, normative and prescriptive values of the different approaches.

    A Study in Preference Elicitation under Uncertainty

    In many areas of Artificial Intelligence (AI), we are interested in helping people make better decisions. This help can result in two advantages. First, computers can process large amounts of data and perform quick calculations, leading to better decisions. Second, if a user does not have to think about some decisions, they have more time to focus on other things they find important. Since users' preferences are private, in order to make intelligent decisions, we need to elicit an accurate model of the users' preferences for different outcomes. We are specifically interested in outcomes involving a degree of risk or uncertainty. A common goal in AI preference elicitation is minimizing regret, or loss of utility. We are often interested in minimax regret, or minimizing the worst-case regret. This thesis examines three important aspects of preference elicitation and minimax regret. First, the standard elicitation process in AI assumes users' preferences follow the axioms of Expected Utility Theory (EUT). However, there is strong evidence from psychology that people may systematically deviate from EUT. Cumulative prospect theory (CPT) is an alternative model to expected utility theory which has been shown empirically to better explain humans' decision-making in risky settings. We show that the standard elicitation process can be incompatible with CPT. We develop a new elicitation process that is compatible with both CPT and minimax regret. Second, since minimax regret focuses on the worst-case regret, minimax regret is often an overly cautious estimate of the actual regret. As a result, using minimax regret can often create an unnecessarily long elicitation process. We create a new measure of regret that can be a more accurate estimate of the actual regret. Our measurement of regret is especially well suited for eliciting preferences from multiple users. Finally, we examine issues of multiattribute preferences. 
Multiattribute preferences provide a natural way for people to reason about preferences. Unfortunately, in the worst case, the complexity of a user's preferences grows exponentially with the number of attributes. Several models have been proposed to create compact representations of multiattribute preferences. We compare both the worst-case and average-case relative compactness of these models.
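The minimax-regret criterion discussed in this abstract can be illustrated on a toy problem: given a finite set of outcomes and a finite set of utility functions still consistent with the user's elicited preferences, pick the outcome whose worst-case regret is smallest. The outcomes and utility values below are invented for illustration; in practice the uncertainty set is typically a polytope handled by optimization rather than enumeration.

```python
def max_regret(choice, outcomes, utility_set):
    """Worst-case loss of picking `choice` instead of the best outcome,
    over all utility functions consistent with the elicited preferences."""
    return max(max(u[y] for y in outcomes) - u[choice] for u in utility_set)

def minimax_regret_choice(outcomes, utility_set):
    """Outcome minimizing the worst-case regret."""
    return min(outcomes, key=lambda x: max_regret(x, outcomes, utility_set))

outcomes = ["a", "b", "c"]
utility_set = [  # two candidate utility functions still consistent with feedback
    {"a": 0.9, "b": 0.5, "c": 0.1},
    {"a": 0.2, "b": 0.6, "c": 0.8},
]
best = minimax_regret_choice(outcomes, utility_set)
# "a" risks regret 0.6 and "c" risks 0.8, while "b" caps regret at 0.4
```

As the abstract notes, this worst-case bound can be overly cautious: "b" is never anyone's favorite outcome, yet it is the minimax-regret choice, which is one motivation for the alternative regret measure the thesis proposes.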

    On the decomposition of Generalized Additive Independence models

    The GAI (Generalized Additive Independence) model proposed by Fishburn is a generalization of the additive utility model which need not satisfy mutual preferential independence. Its great generality, however, makes its application and study difficult. We consider a significant subclass of GAI models, namely the discrete 2-additive GAI models, and provide for this class a decomposition into nonnegative monotone terms. This decomposition allows a reduction from exponential to quadratic complexity in any optimization problem involving discrete 2-additive models, making them usable in practice.
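The structure of a discrete 2-additive GAI utility can be sketched directly: the overall utility is a sum of terms over single attributes and over attribute pairs, so evaluating it touches at most a quadratic number of tables rather than one exponential-size table. The attribute values and utility tables below are illustrative, not taken from the paper.

```python
def gai_2additive_utility(x, singleton_terms, pair_terms):
    """Evaluate U(x) = sum_i u_i(x_i) + sum_{i<j} u_ij(x_i, x_j)."""
    u = sum(table[x[i]] for i, table in singleton_terms.items())
    u += sum(table[(x[i], x[j])] for (i, j), table in pair_terms.items())
    return u

# Two binary attributes, indexed 0 and 1 (toy values)
singleton_terms = {0: {0: 0.0, 1: 1.0},
                   1: {0: 0.5, 1: 0.0}}
pair_terms = {(0, 1): {(0, 0): 0.0, (0, 1): 0.2,
                       (1, 0): 0.0, (1, 1): 0.7}}

u = gai_2additive_utility((1, 1), singleton_terms, pair_terms)
# 1.0 (attr 0) + 0.0 (attr 1) + 0.7 (interaction) = 1.7
```

With n attributes of domain size d, this representation stores O(n^2 d^2) numbers instead of the d^n entries of a full joint utility table, which is the complexity reduction the abstract refers to.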