
    Optimal social choice functions: A utilitarian view.

    We adopt a utilitarian perspective on social choice, assuming that agents have (possibly latent) utility functions over some space of alternatives. For many reasons one might consider mechanisms, or social choice functions, that have access only to the ordinal rankings of alternatives by the individual agents rather than to their utility functions. In this context, one possible objective for a social choice function is the maximization of (expected) social welfare relative to the information contained in these rankings. We study such optimal social choice functions under three different models, and underscore the important role played by scoring functions. In our worst-case model, no assumptions are made about the underlying distribution, and we analyze the worst-case distortion (the degree to which the selected alternative fails to maximize social welfare) of optimal social choice functions. In our average-case model, we derive optimal functions under neutral (or impartial culture) distributional models. Finally, a very general learning-theoretic model allows for the computation of optimal social choice functions (i.e., those that maximize expected social welfare) under arbitrary, sampleable distributions. In the latter case, we provide both algorithms and sample complexity results for the class of scoring functions, and further validate the approach empirically.
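    The worst-case notion above can be made concrete with a small sketch (not code from the paper): given a utility profile, apply a scoring function to the induced rankings and measure how far the winner's social welfare falls short of the optimum. The Borda scoring vector and the example utilities are illustrative choices.

    ```python
    def distortion(utilities, scores):
        """utilities: per-agent utility lists over m alternatives.
        scores: scoring vector of length m (rank position -> points)."""
        m = len(utilities[0])
        totals = [0.0] * m
        for u in utilities:
            # Each agent reveals only a ranking: alternatives sorted by utility.
            ranking = sorted(range(m), key=lambda a: -u[a])
            for pos, alt in enumerate(ranking):
                totals[alt] += scores[pos]
        winner = max(range(m), key=lambda a: totals[a])
        welfare = [sum(u[a] for u in utilities) for a in range(m)]
        # Distortion: optimal social welfare over the welfare actually achieved.
        return max(welfare) / welfare[winner]

    # Borda scores for 3 alternatives: (2, 1, 0).
    profile = [[1.0, 0.4, 0.0], [0.0, 0.9, 1.0], [0.0, 0.9, 1.0]]
    d = distortion(profile, [2, 1, 0])  # Borda elects a welfare-suboptimal winner here
    ```

    On this profile Borda elects the third alternative even though the second has higher total utility, so the distortion exceeds 1.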

    Assessing Regret-based Preference Elicitation with the UTPREF Recommendation System

    Product recommendation and decision support systems must generally develop a model of user preferences by querying or otherwise interacting with a user. Recent approaches to elicitation using minimax regret have proven to be very powerful in simulation. In this work, we test both the effectiveness of regret-based elicitation and user comprehension and acceptance of minimax regret in user studies. We report on a study involving 40 users interacting with the UTPREF Recommendation System, which helps students navigate and find rental accommodation. UTPREF maintains an explicit (but incomplete) generalized additive utility (GAI) model of user preferences, and uses minimax regret for recommendation. We assess the following general questions: How effective is regret-based elicitation in finding optimal or near-optimal products? Do users understand and accept the minimax regret criterion in practice? Do decision-theoretically valid queries for GAI models result in more accurate assessment than simpler, ad hoc queries? On the first two issues, we find that the minimax regret decision criterion is effective, understandable, and intuitively appealing. On the third issue, we find that simple, semantically ambiguous query types perform as well as more demanding, semantically valid queries for GAI models. We also assess the relative difficulty of specific query types.
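    The minimax-regret criterion used above can be sketched in a few lines (this is illustrative, not UTPREF itself): the system's uncertainty about the user is represented by a set of candidate utility functions, and it recommends the item whose worst-case utility loss, against an adversarial choice of utility function and alternative item, is smallest.

    ```python
    def minimax_regret(items, candidate_utils):
        """Return (item, regret) for the item minimizing worst-case regret.
        candidate_utils: utility functions (dicts) consistent with the
        user's answers so far."""
        def max_regret(x):
            # Adversary picks the utility u and alternative y maximizing u(y) - u(x).
            return max(u[y] - u[x] for u in candidate_utils for y in items)
        return min(((x, max_regret(x)) for x in items), key=lambda p: p[1])

    items = [0, 1, 2]
    # Two utility functions still consistent with the user's responses (illustrative).
    candidate_utils = [{0: 0.9, 1: 0.5, 2: 0.1},
                       {0: 0.2, 1: 0.6, 2: 0.8}]
    best, regret = minimax_regret(items, candidate_utils)
    ```

    Note that the minimax-optimal item here is not the top choice under either candidate utility function: it is the safest recommendation given the remaining uncertainty, which is exactly the property elicitation exploits to decide which query to ask next.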

    A Study in Preference Elicitation under Uncertainty

    In many areas of Artificial Intelligence (AI), we are interested in helping people make better decisions. This help can yield two advantages. First, computers can process large amounts of data and perform quick calculations, leading to better decisions. Second, if a user does not have to think about some decisions, they have more time to focus on other things they find important. Since users' preferences are private, in order to make intelligent decisions we need to elicit an accurate model of the users' preferences for different outcomes. We are specifically interested in outcomes involving a degree of risk or uncertainty. A common goal in AI preference elicitation is minimizing regret, or loss of utility. We are often interested in minimax regret, or minimizing the worst-case regret. This thesis examines three important aspects of preference elicitation and minimax regret. First, the standard elicitation process in AI assumes users' preferences follow the axioms of Expected Utility Theory (EUT). However, there is strong evidence from psychology that people may systematically deviate from EUT. Cumulative prospect theory (CPT) is an alternative model to expected utility theory which has been shown empirically to better explain humans' decision-making in risky settings. We show that the standard elicitation process can be incompatible with CPT, and we develop a new elicitation process that is compatible with both CPT and minimax regret. Second, since minimax regret focuses on the worst-case regret, it is often an overly cautious estimate of the actual regret. As a result, using minimax regret can create an unnecessarily long elicitation process. We create a new measure of regret that can be a more accurate estimate of the actual regret; our measure is especially well suited to eliciting preferences from multiple users. Finally, we examine issues of multiattribute preferences. Multiattribute preferences provide a natural way for people to reason about preferences. Unfortunately, in the worst case, the complexity of a user's preferences grows exponentially with respect to the number of attributes. Several models have been proposed to help create compact representations of multiattribute preferences. We compare both the worst-case and average-case relative compactness of these models.
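    The EUT/CPT divergence mentioned above can be illustrated with the standard Tversky-Kahneman functional forms, using their commonly cited parameter estimates (alpha=0.88, lambda=2.25, gamma=0.61). This is background on CPT generally, not code from the thesis.

    ```python
    def cpt_value(x, alpha=0.88, lam=2.25):
        # Gains and losses are valued asymmetrically around a reference point of 0;
        # lam > 1 encodes loss aversion.
        return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

    def weight(p, gamma=0.61):
        # Inverse-S probability weighting: small probabilities are overweighted,
        # large ones underweighted.
        return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

    # A 1% chance of winning 100: expected utility treats the 0.01 linearly,
    # while CPT overweights it, so the two models can rank gambles differently.
    p, x = 0.01, 100.0
    eu_style = p * x
    cpt_style = weight(p) * cpt_value(x)
    ```

    Because the weighting function distorts probabilities nonlinearly, queries that are decision-theoretically equivalent under EUT need not be equivalent under CPT, which is the kind of incompatibility the thesis addresses.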