    Fair social decision under uncertainty and belief disagreements

    This paper addresses two issues in the simultaneous aggregation of utilities and beliefs. The first is how to integrate both inequality and uncertainty considerations into social decision making. The second is how social decisions should take disagreements in beliefs into account. To this end, individuals are assumed to abide by Savage's model of subjective expected utility, while society is assumed to prescribe, either to each individual when ex ante individual well-being is favored or to itself when ex post individual well-being is favored, acting in accordance with the maximin expected utility theory of Gilboa and Schmeidler (J Math Econ 18:141–153, 1989). Furthermore, the paper adapts an ex ante Pareto-type condition proposed by Gayer et al. (J Legal Stud 43:151–171, 2014), which says that one prospect Pareto dominates another if the former gives a higher expected utility than the latter, for each individual, under all individuals' beliefs. When ex ante individual welfare is favored, this Pareto-type condition is shown to be equivalent to social utility taking the form of a MaxMinMin social welfare function, and to the individual sets of priors being contained within the range of individual beliefs. When ex post individual welfare is favored, the same condition is shown to be equivalent to social utility taking the form of a MaxMinMin social welfare function, and to the social set of priors containing only weighted averages of individual beliefs.
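Read computationally, a MaxMinMin-style criterion evaluates a prospect by its worst expected utility across a set of priors and across individuals, and society then chooses the prospect maximizing that value. A minimal sketch, with illustrative prospects and priors that are not taken from the paper:

```python
# A minimal sketch of MaxMinMin-style social evaluation over a finite state
# space. The utilities and the prior set below are illustrative assumptions.

def expected_utility(utility, prior):
    """Expected utility of a state-contingent utility vector under one belief."""
    return sum(p * u for p, u in zip(prior, utility))

def maxminmin_value(individual_utilities, priors):
    """Social value of one prospect: the worst expected utility taken over
    both the set of priors and the set of individuals."""
    return min(
        expected_utility(u, p)
        for p in priors
        for u in individual_utilities
    )

# Two individuals, two states, two candidate priors.
utilities = [[1.0, 0.0], [0.0, 1.0]]
priors = [[0.5, 0.5], [0.7, 0.3]]
print(maxminmin_value(utilities, priors))  # worst case: 0.3
```

A social planner comparing prospects would compute this value for each prospect and pick the maximizer, which is the "Max" in MaxMinMin.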

    Q-Strategy: A Bidding Strategy for Market-Based Allocation of Grid Services

    The application of autonomous agents to the provisioning and usage of computational services is an attractive research field. Methods and technologies from artificial intelligence, statistics, and economics come together to achieve (i) autonomic provisioning and usage of Grid services, (ii) competitive bidding strategies for widely used market mechanisms, and (iii) incentives for consumers and providers to use such market-based systems. The contributions of the paper are threefold. First, we present a bidding agent framework for implementing artificial bidding agents that supports consumers and providers in technical and economic preference elicitation as well as in automated bid generation when requesting and provisioning Grid services. Second, we introduce a novel consumer-side bidding strategy that enables goal-oriented, strategic behavior in generating and submitting consumer service requests and selecting provider offers. Third, we evaluate and compare the Q-strategy, implemented within the presented framework, against the Truth-Telling bidding strategy in three mechanisms: a centralized CDA, a decentralized online machine scheduling mechanism, and a FIFO scheduling mechanism.
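As a rough illustration of what a Q-learning-based bidding rule can look like, here is a hedged sketch; the single-state encoding, the toy market, and the reward function are invented for illustration and are not the paper's actual Q-strategy design:

```python
# Hypothetical sketch of a Q-learning bidder: one state, discrete bid levels,
# epsilon-greedy exploration. Not the Q-strategy as specified in the paper.
import random

random.seed(0)  # reproducible toy run

class QBidder:
    def __init__(self, bid_levels, alpha=0.1, epsilon=0.1):
        self.q = {b: 0.0 for b in bid_levels}  # value estimate per bid level
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration probability

    def choose_bid(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.q))  # explore a random bid
        return max(self.q, key=self.q.get)      # exploit best-known bid

    def update(self, bid, reward):
        # Single-state Q-update: move the estimate toward the observed reward.
        self.q[bid] += self.alpha * (reward - self.q[bid])

bidder = QBidder(bid_levels=[1, 2, 3, 4, 5])
for _ in range(1000):
    bid = bidder.choose_bid()
    # Toy market: the request is served only if bid >= 3; surplus shrinks
    # as the bid rises, so overbidding is wasteful.
    reward = (6 - bid) if bid >= 3 else 0
    bidder.update(bid, reward)
print(max(bidder.q, key=bidder.q.get))  # best-known bid after training
```

In this toy market the lowest winning bid (3) yields the highest surplus, so the learned Q-values come to favor it over both losing bids and overbids.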

    Intrusiveness, Trust and Argumentation: Using Automated Negotiation to Inhibit the Transmission of Disruptive Information

    No full text
    The question of how to promote the growth and diffusion of information has been extensively addressed by a wide research community. A common assumption underpinning most studies is that the information to be transmitted is useful and of high quality. In this paper, we endorse a complementary perspective: we investigate how the growth and diffusion of high-quality information can be managed and maximized by preventing, dampening and minimizing the diffusion of low-quality, unwanted information. To this end, we focus on the conflict between pervasive computing environments and the joint activities undertaken in parallel local social contexts. When technologies for distributed activities (e.g. mobile technology) develop, both artifacts and services that enable people to participate in non-local contexts are likely to intrude on local situations. As a mechanism for minimizing the intrusion of the technology, we develop a computational model of argumentation-based negotiation among autonomous agents. A key role in the model is played by trust: which arguments are used and how they are evaluated depend on how trustworthy the agents judge one another. To gain insight into the implications of the model, we conduct a number of virtual experiments. The results enable us to explore how intrusiveness is affected by trust, the negotiation network, and the agents' ability to conduct argumentation.
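One way to make trust-dependent argument evaluation concrete is a simple numeric sketch in which an argument's force is scaled by the hearer's trust in its sender; the function names, the linear scaling, and the acceptance rule below are hypothetical and are not the paper's actual model:

```python
# Hypothetical trust-weighted argument evaluation: each argument carries an
# intrinsic strength in [0, 1], discounted by trust in the agent advancing it.

def argument_weight(strength, trust):
    """Discount an argument's intrinsic strength by trust in its sender."""
    return strength * trust

def accept_interruption(arguments_for, arguments_against, trust):
    """Admit the potentially intrusive message only if trusted support
    outweighs trusted objections. Arguments are (agent, strength) pairs."""
    support = sum(argument_weight(s, trust[agent]) for agent, s in arguments_for)
    objection = sum(argument_weight(s, trust[agent]) for agent, s in arguments_against)
    return support > objection

trust = {"caller": 0.9, "local_peer": 0.5}
print(accept_interruption([("caller", 0.8)], [("local_peer", 0.6)], trust))
# True: 0.8 * 0.9 = 0.72 outweighs 0.6 * 0.5 = 0.30
```

The same strong objection from a highly trusted local peer would flip the outcome, which is the qualitative behavior the abstract attributes to trust in the negotiation.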

    Learning the Preferences of Ignorant, Inconsistent Agents

    An important use of machine learning is to learn what people value. What posts or photos should a user be shown? Which jobs or activities would a person find rewarding? In each case, observations of people's past choices can inform our inferences about their likes and preferences. If we assume that choices are approximately optimal according to some utility function, we can treat preference inference as Bayesian inverse planning. That is, given a prior on utility functions and some observed choices, we invert an optimal decision-making process to infer a posterior distribution on utility functions. However, people often deviate from approximate optimality. They have false beliefs, their planning is sub-optimal, and their choices may be temporally inconsistent due to hyperbolic discounting and other biases. We demonstrate how to incorporate these deviations into algorithms for preference inference by constructing generative models of planning for agents who are subject to false beliefs and time inconsistency. We explore the inferences these models make about preferences, beliefs, and biases. We present a behavioral experiment in which human subjects perform preference inference given the same observations of choices as our model. Results show that human subjects (like our model) explain choices in terms of systematic deviations from optimal behavior and suggest that they take such deviations into account when inferring preferences.
    Comment: AAAI 201
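The inverse-planning idea in the abstract can be sketched by inverting a softmax (noisy-rational) choice model over a small hypothesis grid: each candidate utility function assigns a likelihood to the observed choices, and Bayes' rule turns those likelihoods into a posterior. The candidate utilities and the noise parameter below are illustrative assumptions, not the paper's setup:

```python
# A minimal Bayesian inverse-planning sketch: infer a posterior over candidate
# utility functions from choices, assuming a softmax (noisy-rational) chooser.
import math

def softmax_choice_prob(utilities, chosen, beta=2.0):
    """Probability that a softmax-rational agent picks option `chosen`.
    beta is an assumed rationality/noise parameter."""
    exps = [math.exp(beta * u) for u in utilities]
    return exps[chosen] / sum(exps)

def posterior_over_utilities(observed_choices, candidate_utils, prior):
    """Posterior P(utility function | choices), by direct enumeration."""
    weights = []
    for utils, p in zip(candidate_utils, prior):
        likelihood = 1.0
        for chosen in observed_choices:
            likelihood *= softmax_choice_prob(utils, chosen)
        weights.append(p * likelihood)
    z = sum(weights)
    return [w / z for w in weights]

# Two hypotheses about a two-option menu: "prefers option 0" vs "prefers option 1".
candidates = [[1.0, 0.0], [0.0, 1.0]]
posterior = posterior_over_utilities([1, 1, 1], candidates, prior=[0.5, 0.5])
print(posterior)  # mass shifts strongly toward the "prefers option 1" hypothesis
```

The paper's contribution goes further than this sketch: instead of a fixed softmax chooser, the generative choice model itself includes false beliefs and hyperbolic discounting, and those bias parameters are inferred jointly with the utilities.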