4 research outputs found

    Scheduling with Structured Preferences

    A Scheduling Tool for Conditionally Independent Temporal Preferences

    Decomposed Utility Functions and Graphical Models for Reasoning about Preferences

    Recently, Brafman and Engel (2009) proposed new concepts of marginal and conditional utility that obey additive analogues of the chain rule and Bayes rule, which they employed to obtain a directed graphical model of utility functions that resembles Bayes nets. In this paper we carry this analogy a step further by showing that the notion of utility independence, built on conditional utility, satisfies properties identical to those of probabilistic independence. This allows us to formalize the construction of graphical models for utility functions, directed and undirected, and to place them on the firm foundation of Pearl and Paz's axioms of semi-graphoids. With this strong equivalence in place, we show how algorithms used for probabilistic reasoning, such as Belief Propagation (Pearl 1988), can be replicated for reasoning about utilities with the same formal guarantees, and we open the way to the adaptation of additional algorithms.
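    The additive analogue of the chain rule mentioned in this abstract can be illustrated with a small sketch: total utility is the sum of conditional utility terms, one per node of a directed graph, mirroring the product factorization of a Bayes net. The variables, values, and utility numbers below are hypothetical, not taken from the paper.

    ```python
    # Each entry: variable -> (parents, conditional utility table).
    # The table maps (own value, tuple of parent values) -> marginal utility,
    # so u(meal, wine) = u(meal) + u(wine | meal).
    model = {
        "meal": ((), {("steak", ()): 5.0, ("pasta", ()): 3.0}),
        "wine": (("meal",), {("red", ("steak",)): 4.0,
                             ("red", ("pasta",)): 1.0,
                             ("white", ("steak",)): 1.0,
                             ("white", ("pasta",)): 3.0}),
    }

    def total_utility(assignment, model):
        """Sum the conditional utilities u(x_i | parents(x_i)) over all variables."""
        total = 0.0
        for var, (parents, table) in model.items():
            parent_vals = tuple(assignment[p] for p in parents)
            total += table[(assignment[var], parent_vals)]
        return total

    print(total_utility({"meal": "steak", "wine": "red"}, model))    # 9.0
    print(total_utility({"meal": "pasta", "wine": "white"}, model))  # 6.0
    ```

    The directed structure is what makes message-passing schemes such as Belief Propagation transferable: sums of local terms replace products of local factors.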

    A Study in Preference Elicitation under Uncertainty

    In many areas of Artificial Intelligence (AI), we are interested in helping people make better decisions. This help offers two advantages. First, computers can process large amounts of data and perform quick calculations, leading to better decisions. Second, if a user does not have to think about some decisions, they have more time to focus on other things they find important. Since users' preferences are private, in order to make intelligent decisions we need to elicit an accurate model of the users' preferences over different outcomes. We are specifically interested in outcomes involving a degree of risk or uncertainty. A common goal in AI preference elicitation is minimizing regret, or loss of utility; we are often interested in minimax regret, i.e., minimizing the worst-case regret. This thesis examines three important aspects of preference elicitation and minimax regret. First, the standard elicitation process in AI assumes users' preferences follow the axioms of Expected Utility Theory (EUT). However, there is strong evidence from psychology that people may systematically deviate from EUT. Cumulative prospect theory (CPT) is an alternative to expected utility theory that has been shown empirically to better explain human decision-making in risky settings. We show that the standard elicitation process can be incompatible with CPT, and we develop a new elicitation process that is compatible with both CPT and minimax regret. Second, because minimax regret focuses on the worst-case regret, it is often an overly cautious estimate of the actual regret; as a result, using minimax regret can create an unnecessarily long elicitation process. We create a new measure of regret that can be a more accurate estimate of the actual regret, and which is especially well suited for eliciting preferences from multiple users. Finally, we examine issues of multiattribute preferences. Multiattribute preferences provide a natural way for people to reason about preferences. Unfortunately, in the worst case, the complexity of a user's preferences grows exponentially with the number of attributes. Several models have been proposed to create compact representations of multiattribute preferences. We compare both the worst-case and average-case relative compactness of these models.
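    The minimax-regret criterion described in this abstract can be sketched in a few lines. Here, uncertainty about the user's utility is modeled as a finite set of candidate utility functions; the regret of recommending an option is the utility lost relative to the best option under each candidate, and minimax regret picks the option whose worst-case loss is smallest. The options and numbers are illustrative, not from the thesis.

    ```python
    options = ["a", "b", "c"]
    # Two candidate utility functions consistent with what has been elicited so far.
    candidate_utils = [
        {"a": 10, "b": 6, "c": 4},  # one possible user
        {"a": 2,  "b": 7, "c": 9},  # another possible user
    ]

    def max_regret(x, utils, options):
        """Worst-case regret of recommending x across all candidate utilities."""
        return max(max(u[y] for y in options) - u[x] for u in utils)

    def minimax_regret_choice(options, utils):
        """Option minimizing the worst-case regret."""
        return min(options, key=lambda x: max_regret(x, utils, options))

    for x in options:
        print(x, max_regret(x, candidate_utils, options))  # a: 7, b: 4, c: 6
    print("choice:", minimax_regret_choice(options, candidate_utils))  # choice: b
    ```

    Note that the minimax-regret choice here is "b", which is optimal for neither candidate user: the criterion hedges against the worst case, which is exactly the cautiousness the abstract says can prolong elicitation.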