
    What to bid and when to stop

    Negotiation is an important activity in human society and is studied by various disciplines, ranging from economics and game theory to electronic commerce, social psychology, and artificial intelligence. Traditionally, negotiation is a necessary but also time-consuming and expensive activity. Therefore, in recent decades there has been considerable interest in the automation of negotiation, for example in the setting of e-commerce. This interest is fueled by the promise of automated agents eventually being able to negotiate on behalf of human negotiators.

    Every year, automated negotiation agents improve in various ways, and there is now a large body of negotiation strategies available, all with their unique strengths and weaknesses. For example, some agents are able to predict the opponent's preferences very well, while others focus more on having a sophisticated bidding strategy. The problem, however, is that there is little incremental improvement in agent design, as the agents are tested in varying negotiation settings, using a diverse set of performance measures. This makes it very difficult to meaningfully compare the agents, let alone their underlying techniques. As a result, we lack a reliable way to pinpoint the most effective components in a negotiating agent.

    There are two major advantages of distinguishing between the different components of a negotiating agent's strategy: first, it allows the study of the behavior and performance of the components in isolation. For example, it becomes possible to compare the preference learning component of all agents and to identify the best among them. Second, we can proceed to mix and match different components to create new negotiation strategies, e.g., replacing the preference learning technique of an agent and then examining whether this makes a difference. Such a procedure enables us to combine the individual components to systematically explore the space of possible negotiation strategies.

    To develop a compositional approach to evaluate and combine the components, we identify structure in most agent designs by introducing the BOA architecture, in which we can develop and integrate the different components of a negotiating agent. We identify three main components of a general negotiation strategy: a bidding strategy (B), possibly an opponent model (O), and an acceptance strategy (A). The bidding strategy considers what concessions it deems appropriate given its own preferences, and takes the opponent into account by using an opponent model. The acceptance strategy decides whether offers proposed by the opponent should be accepted.

    The BOA architecture is integrated into a generic negotiation environment called Genius, a software environment for designing and evaluating negotiation strategies. To explore the negotiation strategy space of the negotiation research community, we extend the Genius repository with various existing agents and scenarios from the literature. Additionally, we organize a yearly international negotiation competition (ANAC) to harvest even more strategies and scenarios. ANAC also acts as an evaluation tool for negotiation strategies and encourages the design of new negotiation strategies and scenarios.

    We re-implement agents from the literature and ANAC and decouple them to fit into the BOA architecture without introducing any changes in their behavior. For each of the three components, we find and analyze the best ones for specific cases, as described below.
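    To make the decomposition concrete, here is a minimal sketch of how a BOA-style agent might be assembled from interchangeable components; the class and method names are illustrative assumptions, not the actual Genius/BOA API.

```python
# A minimal sketch of the BOA decomposition; names are illustrative
# assumptions, not the actual Genius/BOA API.
from abc import ABC, abstractmethod

class BiddingStrategy(ABC):          # B: decides what concession to offer
    @abstractmethod
    def next_bid(self, time, opponent_model): ...

class OpponentModel(ABC):            # O: estimates the opponent's preferences
    @abstractmethod
    def update(self, opponent_bid): ...

class AcceptanceStrategy(ABC):       # A: decides whether to accept an offer
    @abstractmethod
    def accepts(self, opponent_bid, planned_bid, time): ...

class BOAAgent:
    """An agent assembled from interchangeable B, O, and A components."""
    def __init__(self, bidding, opponent_model, acceptance):
        self.bidding = bidding
        self.opponent_model = opponent_model
        self.acceptance = acceptance

    def respond(self, opponent_bid, time):
        self.opponent_model.update(opponent_bid)
        planned = self.bidding.next_bid(time, self.opponent_model)
        if self.acceptance.accepts(opponent_bid, planned, time):
            return ("ACCEPT", None)
        return ("OFFER", planned)
```

    Mixing and matching then amounts to passing different component instances to the constructor, which is what makes a systematic exploration of the strategy space possible.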
    We show that the BOA framework leads to significant improvements in agent design by winning ANAC 2013, which had 19 participating teams from 8 international institutions, with an agent that was designed using the BOA framework and informed by a preliminary analysis of the different components.

    In every negotiation, one of the negotiating parties must accept an offer to reach an agreement. Therefore, it is important that a negotiator employs a proficient mechanism to decide under which conditions to accept. When contemplating whether to accept an offer, the agent faces the acceptance dilemma: accepting the offer may be suboptimal, as better offers may still be presented before time runs out. On the other hand, accepting too late may prevent an agreement from being reached, resulting in a break-off with no gain for either party. We classify and compare state-of-the-art generic acceptance conditions. We propose new acceptance strategies and demonstrate that they outperform the existing conditions. We also provide insight into why some conditions work better than others and investigate correlations between the properties of the negotiation scenario and the efficacy of acceptance conditions.

    Later, we adopt a more principled approach by applying optimal stopping theory to calculate the optimal decision on the acceptance of an offer. We treat the decision of whether to accept as a sequential decision problem, modeling the bids received as a stochastic process. We determine the optimal acceptance policies for particular opponent classes and present an approach to estimate the expected range of offers when the type of opponent is unknown. We show that the proposed approach is able to find the optimal time to accept, and improves upon all existing acceptance strategies.

    Another principal component of a negotiating agent's strategy is its ability to take the opponent's preferences into account. The quality of an opponent model can be measured in two different ways. One is to use the agent's performance as a benchmark for the model's quality. We evaluate and compare the performance of a selection of state-of-the-art opponent modeling techniques in negotiation. We provide an overview of the factors influencing the quality of a model and analyze how the performance of opponent models depends on the negotiation setting. We identify a class of simple and surprisingly effective opponent modeling techniques that did not receive much previous attention in the literature.

    The other way to measure the quality of an opponent model is to directly evaluate its accuracy by using similarity measures. We review all methods to measure the accuracy of an opponent model and then analyze how changes in accuracy translate into performance differences. Moreover, we pinpoint the best predictors of good performance. This leads to new insights concerning how to construct an opponent model, and what we need to measure when optimizing performance.

    Finally, we take two different approaches to gain more insight into effective bidding strategies. We present a new classification method for negotiation strategies, based on their pattern of concession making against different kinds of opponents. We apply this technique to classify some well-known negotiating strategies, and we formulate guidelines on how agents should bid in order to be successful, which gives insight into the bidding strategy space of negotiating agents.
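    As a toy illustration of the optimal-stopping view of acceptance described above, the sketch below computes acceptance thresholds by backward induction, under the simplifying assumption (ours, for illustration) that offers arrive as i.i.d. draws from a known distribution; the work itself treats richer opponent classes.

```python
# A toy backward-induction sketch of optimal acceptance: accept an offer
# iff its utility exceeds the expected value of continuing to negotiate.
# Assumes i.i.d. offers from a known distribution (a simplification).
import random

def acceptance_thresholds(n_offers, sample_utility, n_samples=10_000):
    """thresholds[i] applies to the i-th offer received (0-based):
    accept iff its utility >= thresholds[i]."""
    continuation = 0.0          # value of a break-off: no agreement, no gain
    thresholds = []
    for _ in range(n_offers):
        thresholds.append(continuation)
        # Value of one more pending offer: E[max(utility, continuation)],
        # estimated here by Monte Carlo sampling.
        continuation = sum(
            max(sample_utility(), continuation) for _ in range(n_samples)
        ) / n_samples
    return list(reversed(thresholds))

# Example: 10 incoming offers with utilities uniform on [0, 1].
print([round(t, 2) for t in acceptance_thresholds(10, random.random)])
# Thresholds fall toward the deadline: early offers must be near-optimal,
# while the final offer is accepted as long as it beats disagreement.
```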
    Furthermore, we apply optimal stopping theory again, this time to find the concessions that maximize utility for the bidder against particular opponents. We show that there is an interesting connection between optimal bidding and optimal acceptance strategies, in the sense that they are mirrored versions of each other.

    Lastly, after analyzing all components separately, we put the pieces back together. We take all BOA components accumulated so far, including the best ones, and combine them to explore the space of negotiation strategies. We compute the contribution of each component to the overall negotiation result, and we study the interaction between components. We find that combining the best agent components indeed makes the strongest agents. This shows that the component-based view of the BOA architecture not only provides a useful basis for developing negotiating agents but also serves as a useful analytical tool. By varying the BOA components we are able to demonstrate the contribution of each component to the negotiation result, and thus analyze the significance of each. The bidding strategy is by far the most important component, followed by the acceptance conditions, and finally the opponent model. Our results validate the analytical approach of the BOA framework: first optimize the individual components, and then recombine them into a negotiating agent.
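    A schematic of this mix-and-match exploration might look as follows, reusing the hypothetical BOAAgent sketch above; the evaluation function stands in for running a full negotiation tournament.

```python
# Exhaustive mix-and-match over B, O, and A components (schematic).
# BOAAgent is the hypothetical class sketched earlier; `evaluate` stands
# in for running a full tournament and returning a mean utility.
from itertools import product

def explore_strategy_space(biddings, models, acceptances, evaluate):
    scores = {}
    for b, o, a in product(biddings, models, acceptances):
        combo = (type(b).__name__, type(o).__name__, type(a).__name__)
        scores[combo] = evaluate(BOAAgent(b, o, a))
    # Ranking the combinations exposes each component's contribution.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```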

    Modeling of Cellular Automata and Agent-Based Complex Systems

    The term 'complex systems' may sound terrifying whenever you come across it, as it depicts an overall collective structure which can indeed live up to its name; but when you comprehend the system at its fundamental level, by stripping it down to its simpler, multiple interacting individual parts, the insights it provides may be used to describe and understand problems ranging from atomic particles to the economics of societies and evolution. Simple laws can be used to simulate the behaviors of disparate complex systems. In this thesis, a brief study is done emulating a few such complex systems through programming techniques such as cellular automata and neural networks. The patterns of complex behavior obtained are classified with the help of Conway's Game of Life; the working of an autonomous, self-organizing organism is simulated in a program that shows the complex patterns formed by a virtual ant. An important aspect of competition and cooperation among these agents is then shown through game theory and its dilemmas, which throws light on the essence of survival in complex systems. A formal study is also done on the uses of artificial neural networks as associative memories and pattern recognizers.
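    To give a flavor of the kind of simulation described, here is a minimal version of the virtual ant (Langton's ant); the grid size and wrap-around boundary are our own simplifying choices.

```python
# A minimal virtual ant (Langton's ant): on a white cell turn right, on a
# black cell turn left, flip the cell's colour, then step forward.
def langtons_ant(steps, size=11):
    grid = [[0] * size for _ in range(size)]
    r = c = size // 2
    dr, dc = -1, 0                      # start in the centre, facing "up"
    for _ in range(steps):
        if grid[r][c] == 0:             # white cell: turn right
            dr, dc = dc, -dr
        else:                           # black cell: turn left
            dr, dc = -dc, dr
        grid[r][c] ^= 1                 # flip the cell's colour
        r, c = (r + dr) % size, (c + dc) % size   # step on a wrapped grid
    return grid

for row in langtons_ant(100):
    print("".join(".#"[cell] for cell in row))
```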

    Proceedings of the 8th MIT/ONR Workshop on C³ Systems, held at Massachusetts Institute of Technology, Cambridge, Massachusetts, June 24 to 28, 1985

    "December 1985."Includes bibliographies and index.Office of Naval Research Contract no. ONR/N00014-77-C-0532 NR-041-519edited by Michael Athans and Alexander H. Levis

    Legal Bargaining Theory's New Prospecting Agenda: It May Be Social Science, But Is It News?

    In the good old days, legal bargaining scholarship was based mostly on negotiator war stories exuberantly told. The social-scientific study of the subject did not begin in earnest until the nineteen-seventies. Since then, however, the literature of storytelling has gone into a pronounced eclipse and social-scientific study is now the principal scholarly game in town. This article questions the wisdom of this shift, almost seismic in its proportions, and argues that it is too soon to jump on the social science bandwagon. Discussion focuses on the uses made of the Prospect Theory of Daniel Kahneman and Amos Tversky and the Theory's central concept of Anchoring. Anchoring is the most thoroughly analyzed of the Prospect Theory concepts, and difficulties encountered in incorporating it into legal bargaining theory will recur many times over in working with other parts of the Prospect Theory framework. It is an exemplary test case.

    A Tax Policy Analysis of Bob Jones University v. United States

    This Article questions the tax policy model that the Court articulated in Bob Jones University. The authors believe that the Court's recognition of the primacy of IRS rulemaking is undesirable because the IRS, as an executive agency, is susceptible to the influence of the incumbent administration's policy objectives. Further, even though the life-tenured status of judges insulates the courts from external political pressures, significant problems are also present in a model in which the courts occupy a primary role in formulating tax policy. In sum, Congress is better suited than either the courts or the IRS to determine tax policy because it is institutionally organized to gather social and economic data, to define policy objectives, and to legislate to achieve these objectives, which often have repercussions beyond the circumstances of a particular case. This Article's criticism of Bob Jones University also extends to the Court's public benefit and common community conscience standards for charitable organizations seeking to qualify for tax exemptions. Although restrictive standards suggest that a tax exemption is a form of government aid, the Court declined to make this holding explicit. Moreover, the majority's recognition of the IRS's broad rulemaking authority seems inconsistent with the community conscience standard, for in the exercise of its broad administrative discretion the IRS need not strictly follow this standard. Finally and most significantly, the public benefit and community conscience standards may discourage organizations that provide a healthy diversity of views in a pluralistic society. This Article begins with a general discussion in part II of the role of the courts in the development of federal tax policy. A critical analysis of the Bob Jones University decision, focusing upon both the specific tax exemption issue and the Court's general model for tax policy decision making, follows in part III. Part IV concludes the Article with a recommendation that Congress act definitively to take the lead in formulating tax policy.

    Scientific, Technical, and Forensic Evidence

    Materials from the conference on Scientific, Technical, and Forensic Evidence held by UK/CLE in February 2002.

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway's Life according to a general MR streaming pattern. We chose Life because it is simple enough to serve as a testbed for MR's applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms' performance on Amazon's Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
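    As a rough reconstruction of the general pattern (our sketch, not the authors' code), one MR streaming round for a Life generation can be emulated as follows: the map phase replicates each row to its neighbouring row keys, so after the shuffle every reduce key holds the three rows needed to update one row.

```python
# One Game of Life generation in MapReduce style (schematic reconstruction;
# the shuffle is emulated in-process instead of on Hadoop/Elastic MR).
from itertools import groupby

def map_row(idx, cells):
    # Replicate the row to itself and both neighbours ("ghost rows").
    for target in (idx - 1, idx, idx + 1):
        yield target, (idx, cells)

def reduce_row(key, values):
    rows = dict(values)
    if key not in rows:                 # key fell outside the grid
        return None
    row = rows[key]
    dead = "0" * len(row)
    above, below = rows.get(key - 1, dead), rows.get(key + 1, dead)
    out = []
    for c in range(len(row)):
        live = -int(row[c])             # the cell is not its own neighbour
        for r in (above, row, below):
            live += sum(int(r[cc]) for cc in (c - 1, c, c + 1)
                        if 0 <= cc < len(row))
        out.append("1" if live == 3 or (row[c] == "1" and live == 2) else "0")
    return key, "".join(out)

def life_generation(grid):
    # map -> shuffle (sort + group by key) -> reduce
    pairs = sorted(kv for idx, cells in grid for kv in map_row(idx, cells))
    grouped = groupby(pairs, key=lambda kv: kv[0])
    results = (reduce_row(k, (v for _, v in g)) for k, g in grouped)
    return [r for r in results if r is not None]

blinker = [(0, "000"), (1, "111"), (2, "000")]
print(life_generation(blinker))  # -> [(0, '010'), (1, '010'), (2, '010')]
```

    Strip partitioning, as we understand it, cuts the shuffle volume of this pattern by assigning contiguous blocks of rows to a single task, so only strip boundaries need to be replicated.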

    Eye quietness and quiet eye in expert and novice golf performance: an electrooculographic analysis

    Quiet eye (QE) is the final ocular fixation on the target of an action (e.g., the ball in golf putting). Camera-based eye-tracking studies have consistently found longer QE durations in experts than novices; however, the mechanisms underlying QE are not known. To offer a new perspective, we examined the feasibility of measuring the QE using electrooculography (EOG) and developed an index to assess ocular activity across time: eye quietness (EQ). Ten expert and ten novice golfers putted 60 balls to a 2.4 m distant hole. Horizontal EOG (2 ms resolution) was recorded from two electrodes placed on the outer sides of the eyes. QE duration was measured using an EOG voltage threshold and comprised the sum of the pre-movement and post-movement initiation components. EQ was computed as the standard deviation of the EOG in 0.5 s bins from –4 to +2 s, relative to backswing initiation: lower values indicate less movement of the eyes, hence greater quietness. Finally, we measured club-ball address and swing durations. T-tests showed that total QE did not differ between groups (p = .31); however, experts had marginally shorter pre-movement QE (p = .08) and longer post-movement QE (p < .001) than novices. A group × time ANOVA revealed that experts had less EQ before backswing initiation and greater EQ after backswing initiation (p = .002). QE durations were inversely correlated with EQ from –1.5 to 1 s (rs = –.48 to –.90, ps = .03 to .001). Experts had longer swing durations than novices (p = .01) and, importantly, swing durations correlated positively with post-movement QE (r = .52, p = .02) and negatively with EQ from 0.5 to 1 s (r = –.63, p = .003). This study demonstrates the feasibility of measuring ocular activity using EOG and validates EQ as an index of ocular activity. Its findings challenge the dominant perspective on QE and provide new evidence that expert-novice differences in ocular activity may reflect differences in the kinematics of how experts and novices execute skills.
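    For concreteness, the EQ index as described can be computed along the following lines; the bin width, analysis window, and 2 ms sampling follow the description above, while the synthetic recording and variable names are our own illustration.

```python
# Sketch of the EQ index: SD of the horizontal EOG in 0.5 s bins over
# -4 to +2 s around backswing initiation (constants from the description
# above; the synthetic signal below is illustrative only).
import numpy as np

FS = 500                 # 2 ms resolution -> 500 samples per second
BIN_S = 0.5              # bin width in seconds
WINDOW = (-4.0, 2.0)     # seconds relative to backswing initiation

def eye_quietness(eog, backswing_idx):
    """Return one SD per 0.5 s bin; lower values = quieter eyes."""
    start = backswing_idx + int(WINDOW[0] * FS)
    stop = backswing_idx + int(WINDOW[1] * FS)
    segment = eog[start:stop]
    bins = segment.reshape(-1, int(BIN_S * FS))   # 12 bins x 250 samples
    return bins.std(axis=1, ddof=1)

# Example on a synthetic 10 s recording with backswing at t = 5 s.
rng = np.random.default_rng(0)
eog = rng.normal(0.0, 5.0, size=10 * FS)          # microvolts, white noise
print(eye_quietness(eog, backswing_idx=5 * FS).round(2))
```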

    Full Issue
