37 research outputs found

    Boolean Negotiation Games

    No full text
    We propose a new strategic model of negotiation, called Boolean negotiation games. Our model is inspired by Boolean games and the alternating offers model of bargaining. It offers a computationally grounded model for studying properties of negotiation protocols in a qualitative setting. Boolean negotiation games can yield agreements that are more beneficial than stable solutions (Nash equilibria) of the underlying Boolean game.
    Interactive Intelligence
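    For readers unfamiliar with the underlying Boolean game model, the following is a minimal, hypothetical sketch, not the authors' formalism: each agent controls some propositional variables and has a goal formula, and we enumerate pure strategy profiles to find Nash equilibria. The variables, goals, and agent names are illustrative assumptions.

```python
from itertools import product

# A toy two-agent Boolean game (illustrative only, not the paper's model).
# Agent 0 controls variable p, agent 1 controls variable q.
# Each agent's goal happens to depend only on the *other* agent's variable.
controls = {0: ["p"], 1: ["q"]}
goals = {
    0: lambda a: a["q"],  # agent 0 wants q to be true
    1: lambda a: a["p"],  # agent 1 wants p to be true
}

def strategies(agent):
    """All truth-value assignments to the variables this agent controls."""
    vs = controls[agent]
    return [dict(zip(vs, bits)) for bits in product([False, True], repeat=len(vs))]

def outcome(profile):
    """Merge each agent's choice into one full assignment."""
    assignment = {}
    for choice in profile:
        assignment.update(choice)
    return assignment

def is_nash(profile):
    """No agent can satisfy its goal by unilaterally changing its own variables."""
    current = outcome(profile)
    for i in controls:
        if goals[i](current):
            continue  # agent i is already satisfied
        for alt in strategies(i):
            deviated = list(profile)
            deviated[i] = alt
            if goals[i](outcome(deviated)):
                return False  # profitable unilateral deviation exists
    return True

for profile in product(strategies(0), strategies(1)):
    a = outcome(profile)
    print(a, "Nash" if is_nash(profile) else "not Nash",
          "goals satisfied:", [i for i in goals if goals[i](a)])
```

    In this toy instance every profile is an equilibrium, because no agent's goal depends on its own variable. The all-true profile satisfies both goals, while the all-false equilibrium satisfies neither, which is the kind of gap between stable solutions and negotiated agreements the abstract refers to.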

    Special Issue on ‘Human Factors and Computational Models in Negotiation’

    No full text
    Intelligent Systems, Electrical Engineering, Mathematics and Computer Science

    Opponent Modelling in Automated Multi-Issue Negotiation Using Bayesian Learning (extended abstract)

    No full text
    Although an opponent’s preferences are typically private in automated negotiation, we show in this paper that it is nonetheless possible to construct an opponent model, i.e. a model of the opponent’s preferences that can be effectively used to improve negotiation outcomes. We provide a generic framework for learning both the preferences associated with issue values and the weights that rank the importance of issues to an agent. The main idea is to exploit certain structural features and rationality principles to guide the learning process and to focus the algorithm on the most likely preference profiles of an opponent. We present a learning algorithm based on Bayesian learning techniques that computes the probability that an opponent has a particular preference profile. Our approach can be integrated into various negotiating agents using different strategies.
    Intelligent Systems, Electrical Engineering, Mathematics and Computer Science
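    The sketch below illustrates the general idea of such a Bayesian update in Python. The candidate profiles, the linear additive utility, the Gaussian likelihood, and the linear concession assumption are all illustrative assumptions, not the paper's exact framework.

```python
# A minimal, illustrative Bayesian opponent model: maintain a distribution over
# a small set of hypothesised opponent preference profiles and update it after
# every received bid, assuming the opponent concedes roughly linearly over time.
import math

# Hypothetical 2-issue domain: each bid assigns one value per issue.
CANDIDATE_PROFILES = [
    # Each profile: issue weights + utilities of the issue values (all made up).
    {"weights": {"price": 0.8, "delivery": 0.2},
     "values": {"price": {"high": 1.0, "low": 0.2},
                "delivery": {"fast": 0.3, "slow": 1.0}}},
    {"weights": {"price": 0.3, "delivery": 0.7},
     "values": {"price": {"high": 0.6, "low": 0.4},
                "delivery": {"fast": 1.0, "slow": 0.1}}},
]

def utility(profile, bid):
    """Weighted sum of value utilities (a standard linear additive model)."""
    return sum(profile["weights"][i] * profile["values"][i][v] for i, v in bid.items())

def likelihood(profile, bid, t, sigma=0.15):
    """P(bid | profile): a Gaussian around an assumed linear concession curve."""
    expected = 1.0 - 0.3 * t  # utility the opponent is assumed to demand at time t in [0, 1]
    diff = utility(profile, bid) - expected
    return math.exp(-diff * diff / (2 * sigma * sigma))

def update(posterior, bid, t):
    """One Bayesian update step: posterior is proportional to prior * likelihood."""
    unnorm = {k: p * likelihood(CANDIDATE_PROFILES[k], bid, t) for k, p in posterior.items()}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

# Start from a uniform prior and observe a few (made-up) opponent bids.
posterior = {k: 1.0 / len(CANDIDATE_PROFILES) for k in range(len(CANDIDATE_PROFILES))}
observed = [({"price": "high", "delivery": "slow"}, 0.1),
            ({"price": "high", "delivery": "fast"}, 0.4)]
for bid, t in observed:
    posterior = update(posterior, bid, t)
    print(f"t={t:.1f} posterior={posterior}")
```

    After the first bid the posterior already leans heavily towards the first profile, because that profile is the one under which the observed bid has high utility for the opponent.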

    Accepting Optimally in Automated Negotiation with Incomplete Information (abstract)

    No full text
    Intelligent Systems, Electrical Engineering, Mathematics and Computer Science

    On the Effects of Team Size and Communication Load on the Performance in Exploration Games

    No full text
    Exploration games are games in which agents (or robots) need to search for resources and retrieve them. In principle, performance in such games can be improved either by adding more agents or by exchanging more messages. However, neither measure is free of cost, and it is important to be able to assess the trade-off between these costs and the potential performance gain. The focus of this paper is on improving our understanding of the performance gain that can be achieved either by adding more agents or by increasing the communication load. Moreover, performance gain is studied by taking several other important factors into account, such as environment topology and size, resource redundancy, and task size. Our results suggest that there does not exist a decision function that dominates all other decision functions, i.e. one that is optimal for all conditions. Instead we find that (i) for different team sizes and communication strategies different agent decision functions perform optimally, and that (ii) the optimality of decision functions also depends on environment and task parameters. We also find that it pays off to optimize for environment topologies.
    Interactive Intelligence

    Acceptance conditions in automated negotiation

    No full text
    In every negotiation with a deadline, one of the negotiating parties has to accept an offer to avoid a break off. A break off is usually an undesirable outcome for both parties; therefore it is important that a negotiator employs a proficient mechanism to decide under which conditions to accept. When designing such conditions one is faced with the acceptance dilemma: accepting the current offer may be suboptimal, as better offers may still be presented. On the other hand, accepting too late may prevent an agreement from being reached, resulting in a break off with no gain for either party. Motivated by the challenges of bilateral negotiations between automated agents and by the results and insights of the automated negotiating agents competition (ANAC), we classify and compare state-of-the-art generic acceptance conditions. We focus on decoupled acceptance conditions, i.e. conditions that do not depend on the bidding strategy that is used. We performed extensive experiments to compare the performance of acceptance conditions in combination with a broad range of bidding strategies and negotiation domains. Furthermore, we propose new acceptance conditions and demonstrate that they outperform the other conditions that we study. In particular, it is shown that they outperform the standard acceptance condition of comparing the current offer with the offer the agent is ready to send out. We also provide insight into why some conditions work better than others and investigate correlations between the properties of the negotiation environment and the efficacy of acceptance conditions.
    Mediamatics, Electrical Engineering, Mathematics and Computer Science
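    As a rough illustration of what decoupled acceptance conditions look like, here is a Python sketch. The names, thresholds, and the way the conditions are combined are assumptions for illustration; only the idea of comparing the current offer with the offer the agent is about to send out is taken directly from the abstract.

```python
# Illustrative sketches of decoupled acceptance conditions: accept/reject
# decisions that only look at utilities and the clock, not at how bids are
# generated by the underlying bidding strategy.

def ac_next(opponent_offer_utility: float, my_next_bid_utility: float) -> bool:
    """Accept if the offer on the table is at least as good as the bid
    we are about to send out (the 'standard' condition from the abstract)."""
    return opponent_offer_utility >= my_next_bid_utility

def ac_time(time: float, deadline_fraction: float = 0.98) -> bool:
    """Accept anything once the normalised negotiation time passes a threshold,
    to avoid a break off near the deadline."""
    return time >= deadline_fraction

def ac_const(opponent_offer_utility: float, alpha: float = 0.9) -> bool:
    """Accept any offer above a fixed utility threshold alpha."""
    return opponent_offer_utility >= alpha

def accept(opponent_offer_utility, my_next_bid_utility, time) -> bool:
    """A combined condition: take the 'safe' acceptances early, and fall back
    to a time-based condition when the deadline is imminent."""
    return (ac_next(opponent_offer_utility, my_next_bid_utility)
            or ac_const(opponent_offer_utility)
            or ac_time(time))

# Example: late in the negotiation, a mediocre offer is still accepted.
print(accept(opponent_offer_utility=0.55, my_next_bid_utility=0.80, time=0.99))  # True
```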

    Exploring Heuristic Action Selection in Agent Programming (extended abstract)

    Get PDF
    Rational agents programmed in agent programming languages derive their choice of action from their beliefs and goals. One of the main benefits of such programming languages is that they facilitate a high-level and conceptually elegant specification of agent behaviour. Qualitative concepts alone, however, are not sufficient to specify that this behaviour is also nearly optimal, a quality typically also associated with rational agents. Optimality in this context refers to the costs and rewards associated with action execution. In this paper we extend the agent programming language GOAL with primitives that allow the specification of near-optimal behaviour and illustrate the use of these constructs by extending a GOAL Blocks World agent with various strategies to optimize its behaviour.
    Intelligent Systems, Electrical Engineering, Mathematics and Computer Science
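    The paper works at the level of the GOAL language itself; the Python sketch below only illustrates the underlying idea of heuristic action selection among qualitatively applicable actions, with made-up action names, costs, and rewards.

```python
# Sketch: among the actions whose qualitative preconditions hold (beliefs/goals),
# pick one that heuristically maximises estimated reward minus cost.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Action:
    name: str
    precondition: Callable[[Dict], bool]   # qualitative applicability test
    cost: float                            # estimated execution cost
    reward: float                          # estimated contribution to the agent's goals

def select_action(beliefs: Dict, actions: List[Action]) -> Action:
    applicable = [a for a in actions if a.precondition(beliefs)]
    if not applicable:
        raise RuntimeError("no applicable action")
    # Heuristic tie-breaking over the qualitatively allowed actions.
    return max(applicable, key=lambda a: a.reward - a.cost)

# Toy Blocks World-style choice: move a misplaced block to the table or
# directly to its target position (the numbers are illustrative).
beliefs = {"block_clear": True, "target_clear": True}
actions = [
    Action("move_to_table", lambda b: b["block_clear"], cost=1.0, reward=0.5),
    Action("move_to_target", lambda b: b["block_clear"] and b["target_clear"], cost=1.0, reward=2.0),
]
print(select_action(beliefs, actions).name)  # -> move_to_target
```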

    Reasoning About Multi-Attribute Preferences (extended abstract)

    No full text
    Intelligent Systems, Electrical Engineering, Mathematics and Computer Science

    Active Affordance Learning in Continuous State and Action Spaces

    No full text
    Learning object affordances and manipulation skills is essential for developing cognitive service robots. We propose an active affordance learning approach in continuous state and action spaces, without manual discretization of states or exploratory motor primitives. During exploration in the action space, the robot learns a forward model to predict action effects. It simultaneously updates the active exploration policy through reinforcement learning, whereby the prediction error serves as the intrinsic reward. By using the learned forward model, motor skills are obtained in a bottom-up manner to achieve goal states of an object. We demonstrate that the humanoid robot NAO is able to learn how to manipulate garbage cans with different lids by using different motor skills.
    Intelligent Systems, Electrical Engineering, Mathematics and Computer Science
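    A minimal sketch of the curiosity-driven loop described above, under strong simplifying assumptions: a toy one-dimensional state and action space, a linear forward model fitted online, and a Gaussian exploration policy updated with a plain REINFORCE rule, using the forward model's prediction error as intrinsic reward. None of these modelling choices are taken from the paper.

```python
import math
import random

random.seed(0)

# Unknown environment dynamics the agent interacts with (a stand-in for the robot's world).
def step(state: float, action: float) -> float:
    return 0.5 * state + math.sin(action)

# Forward model: next_state ~ w0*state + w1*action + w2, fitted online by LMS.
w = [0.0, 0.0, 0.0]

def predict(state: float, action: float) -> float:
    return w[0] * state + w[1] * action + w[2]

def update_forward_model(state, action, next_state, lr=0.02):
    err = next_state - predict(state, action)
    w[0] += lr * err * state
    w[1] += lr * err * action
    w[2] += lr * err
    return err

# Gaussian exploration policy over actions, updated with the squared
# prediction error as intrinsic reward.
mu, sigma, policy_lr = 0.0, 0.5, 0.01
baseline = 0.0
state = 0.0
for t in range(2000):
    action = random.gauss(mu, sigma)
    next_state = step(state, action)
    err = update_forward_model(state, action, next_state)
    intrinsic_reward = err * err
    baseline += 0.05 * (intrinsic_reward - baseline)   # running reward baseline
    # REINFORCE on a Gaussian policy: grad of log N(a; mu, sigma) w.r.t. mu
    mu += policy_lr * (intrinsic_reward - baseline) * (action - mu) / (sigma ** 2)
    mu = max(-3.0, min(3.0, mu))  # keep the toy example numerically well behaved
    state = next_state
    if t % 500 == 0:
        print(f"t={t} model weights={[round(x, 2) for x in w]} policy mean={mu:.2f}")
```

    The exploration policy drifts towards actions the linear forward model predicts poorly, which is the essence of using prediction error as an intrinsic reward.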

    An argumentation framework for qualitative multi-criteria preferences

    No full text
    Preferences are derived in part from knowledge. Knowledge, however, may be defeasible. We present an argumentation framework for deriving qualitative, multi-attribute preferences that incorporates defeasible reasoning about knowledge. Intuitively, preferences based on defeasible conclusions are not as strong as preferences based on certain conclusions, since defeasible conclusions may turn out not to hold. This introduces risk when such knowledge is used in practical reasoning. Typically, a risk-prone attitude will result in different preferences than a risk-averse attitude. In this paper we introduce qualitative strategies for deriving risk-sensitive preferences.
    Mediamatics, Electrical Engineering, Mathematics and Computer Science
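    The toy Python sketch below only illustrates how a risk attitude can change which preference is derived when some pros rest on defeasible conclusions; the options, attributes, and counting strategy are made-up assumptions, not the paper's argumentation framework.

```python
# Each desirable attribute of an option is either 'certain' (follows from
# strict knowledge) or 'defeasible' (follows only from defeasible conclusions).
options = {
    "apartment_A": {"close_to_work": "certain", "quiet": "defeasible"},
    "apartment_B": {"close_to_work": "certain"},
}

def count_pros(option: str, risk_prone: bool) -> int:
    """Count the desirable attributes we are willing to rely on."""
    accepted = {"certain", "defeasible"} if risk_prone else {"certain"}
    return sum(1 for status in options[option].values() if status in accepted)

def prefer(a: str, b: str, risk_prone: bool) -> str:
    ca, cb = count_pros(a, risk_prone), count_pros(b, risk_prone)
    if ca > cb:
        return f"{a} preferred over {b}"
    if cb > ca:
        return f"{b} preferred over {a}"
    return f"{a} and {b} equally preferred"

# A risk-prone agent also counts the defeasibly established attribute 'quiet';
# a risk-averse agent does not, so the two attitudes derive different preferences.
print("risk-prone :", prefer("apartment_A", "apartment_B", risk_prone=True))
print("risk-averse:", prefer("apartment_A", "apartment_B", risk_prone=False))
```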