
    What to bid and when to stop

    Negotiation is an important activity in human society, and is studied by various disciplines, ranging from economics and game theory to electronic commerce, social psychology, and artificial intelligence. Traditionally, negotiation is a necessary but also time-consuming and expensive activity. Therefore, in recent decades there has been a large interest in the automation of negotiation, for example in the setting of e-commerce. This interest is fueled by the promise of automated agents eventually being able to negotiate on behalf of human negotiators.

Every year, automated negotiation agents are improving in various ways, and there is now a large body of negotiation strategies available, all with their unique strengths and weaknesses. For example, some agents are able to predict the opponent's preferences very well, while others focus more on having a sophisticated bidding strategy. The problem, however, is that there is little incremental improvement in agent design, as the agents are tested in varying negotiation settings, using a diverse set of performance measures. This makes it very difficult to meaningfully compare the agents, let alone their underlying techniques. As a result, we lack a reliable way to pinpoint the most effective components in a negotiating agent.

There are two major advantages of distinguishing between the different components of a negotiating agent's strategy: first, it allows the study of the behavior and performance of the components in isolation. For example, it becomes possible to compare the preference learning component of all agents, and to identify the best among them. Second, we can proceed to mix and match different components to create new negotiation strategies, e.g. by replacing the preference learning technique of an agent and then examining whether this makes a difference. Such a procedure enables us to combine the individual components to systematically explore the space of possible negotiation strategies.

To develop a compositional approach to evaluate and combine the components, we identify structure in most agent designs by introducing the BOA architecture, in which we can develop and integrate the different components of a negotiating agent. We identify three main components of a general negotiation strategy, namely a bidding strategy (B), possibly an opponent model (O), and an acceptance strategy (A). The bidding strategy considers what concessions it deems appropriate given its own preferences, and takes the opponent into account by using an opponent model. The acceptance strategy decides whether offers proposed by the opponent should be accepted.

The BOA architecture is integrated into a generic negotiation environment called Genius, which is a software environment for designing and evaluating negotiation strategies. To explore the negotiation strategy space of the negotiation research community, we extend the Genius repository with various existing agents and scenarios from the literature. Additionally, we organize a yearly international negotiation competition (ANAC) to harvest even more strategies and scenarios. ANAC also acts as an evaluation tool for negotiation strategies, and encourages the design of negotiation strategies and scenarios.

We re-implement agents from the literature and ANAC and decouple them to fit into the BOA architecture without introducing any changes in their behavior. For each of the three components, we manage to find and analyze the best ones for specific cases, as described below.
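The component decomposition described in this abstract can be pictured as three interchangeable interfaces behind a single agent. The sketch below is a minimal, hypothetical rendering of that idea in Python; it is not the Genius/BOA Java API, and every class, method, and placeholder value is illustrative only.

```python
# A minimal, hypothetical sketch of the BOA decomposition (not the Genius Java API).
# All names and placeholder values are illustrative.

from typing import Callable, Dict, List, Optional, Tuple

Bid = Dict[str, str]  # an assignment of a value to each negotiation issue


class OpponentModel:
    """O: estimates the opponent's preferences from the offers it makes."""

    def __init__(self) -> None:
        self.history: List[Bid] = []

    def update(self, bid: Bid) -> None:
        self.history.append(bid)

    def estimated_opponent_utility(self, bid: Bid) -> float:
        # Placeholder; a real model might use frequency counting or Bayesian learning.
        return 0.5


class BiddingStrategy:
    """B: chooses the next concession, optionally guided by the opponent model."""

    def next_bid(self, time: float, model: Optional[OpponentModel]) -> Bid:
        # Placeholder; e.g. a time-dependent concession curve.
        return {}


class AcceptanceStrategy:
    """A: decides whether the opponent's latest offer should be accepted."""

    def accept(self, offered_utility: float, planned_utility: float, time: float) -> bool:
        # Placeholder; e.g. accept when the offer is at least as good as our own next bid.
        return offered_utility >= planned_utility


class BOAAgent:
    """A negotiating agent assembled from interchangeable B, O, and A components."""

    def __init__(self, utility: Callable[[Bid], float], b: BiddingStrategy,
                 o: OpponentModel, a: AcceptanceStrategy) -> None:
        self.utility, self.b, self.o, self.a = utility, b, o, a

    def respond(self, opponent_bid: Bid, time: float) -> Tuple[str, Bid]:
        """On receiving an offer: update O, plan the next bid with B, then let A decide."""
        self.o.update(opponent_bid)
        planned = self.b.next_bid(time, self.o)
        if self.a.accept(self.utility(opponent_bid), self.utility(planned), time):
            return "accept", opponent_bid
        return "offer", planned
```

With this shape, swapping one component (say, the opponent model) while holding the other two fixed is a one-line change, which is what makes the isolated comparisons described above possible.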
We show that the BOA framework leads to significant improvements in agent design by winning ANAC 2013, which had 19 participating teams from 8 international institutions, with an agent that is designed using the BOA framework and is informed by a preliminary analysis of the different components.

In every negotiation, one of the negotiating parties must accept an offer to reach an agreement. Therefore, it is important that a negotiator employs a proficient mechanism to decide under which conditions to accept. When contemplating whether to accept an offer, the agent is faced with the acceptance dilemma: accepting the offer may be suboptimal, as better offers may still be presented before time runs out. On the other hand, accepting too late may prevent an agreement from being reached, resulting in a break-off with no gain for either party. We classify and compare state-of-the-art generic acceptance conditions. We propose new acceptance strategies and we demonstrate that they outperform the other conditions. We also provide insight into why some conditions work better than others and investigate correlations between the properties of the negotiation scenario and the efficacy of acceptance conditions.

Later, we adopt a more principled approach by applying optimal stopping theory to calculate the optimal decision on the acceptance of an offer. We approach the decision of whether to accept as a sequential decision problem, by modeling the bids received as a stochastic process. We determine the optimal acceptance policies for particular opponent classes and we present an approach to estimate the expected range of offers when the type of opponent is unknown. We show that the proposed approach is able to find the optimal time to accept, and improves upon all existing acceptance strategies; a minimal sketch of such a stopping rule is given below.

Another principal component of a negotiating agent's strategy is its ability to take the opponent's preferences into account. The quality of an opponent model can be measured in two different ways. One is to use the agent's performance as a benchmark for the model's quality. We evaluate and compare the performance of a selection of state-of-the-art opponent modeling techniques in negotiation. We provide an overview of the factors influencing the quality of a model and we analyze how the performance of opponent models depends on the negotiation setting. We identify a class of simple and surprisingly effective opponent modeling techniques that did not receive much previous attention in the literature.

The other way to measure the quality of an opponent model is to directly evaluate its accuracy by using similarity measures. We review all methods to measure the accuracy of an opponent model and we then analyze how changes in accuracy translate into performance differences. Moreover, we pinpoint the best predictors for good performance. This leads to new insights concerning how to construct an opponent model, and what we need to measure when optimizing performance.

Finally, we take two different approaches to gain more insight into effective bidding strategies. We present a new classification method for negotiation strategies, based on their pattern of concession making against different kinds of opponents. We apply this technique to classify some well-known negotiating strategies, and we formulate guidelines on how agents should bid in order to be successful, which gives insight into the bidding strategy space of negotiating agents.
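As a rough illustration of the optimal-stopping view of acceptance referred to above, the sketch below assumes the remaining offers arrive as i.i.d. utilities drawn uniformly from a known range. The thresholds it computes by backward induction are the textbook stopping rule under that simplifying assumption, not the thesis's actual policies; all function names and parameters are illustrative.

```python
# Hedged sketch: optimal stopping for acceptance, assuming remaining offers are
# i.i.d. utilities ~ Uniform(low, high). The distributional assumption and all
# names are illustrative, not the model used in the thesis.

def continuation_values(n_remaining: int, low: float = 0.0, high: float = 1.0):
    """Backward induction: v[k] is the expected utility of continuing optimally
    when k offers are still to come (the final offer must be accepted)."""
    v = [0.0] * (n_remaining + 1)
    if n_remaining >= 1:
        v[1] = (low + high) / 2.0  # last offer: plain expectation
    for k in range(2, n_remaining + 1):
        t = v[k - 1]
        # E[max(X, t)] for X ~ Uniform(low, high)
        p_below = min(max((t - low) / (high - low), 0.0), 1.0)
        mean_above = (max(t, low) + high) / 2.0
        v[k] = p_below * t + (1.0 - p_below) * mean_above
    return v


def should_accept(offer_utility: float, offers_remaining: int, v) -> bool:
    """Accept iff the current offer beats the expected value of waiting."""
    if offers_remaining == 0:
        return True
    return offer_utility >= v[offers_remaining]


v = continuation_values(n_remaining=10)
print(should_accept(0.70, offers_remaining=10, v=v))  # False: waiting is worth ~0.86 here
print(should_accept(0.70, offers_remaining=1, v=v))   # True: only one offer left after this
```

Estimating the expected range of offers when the opponent type is unknown, as the abstract describes, would amount to replacing the fixed Uniform(low, high) assumption with learned parameters.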
Furthermore, we apply optimal stopping theory again, this time to find the concessions that maximize utility for the bidder against particular opponents. We show there is an interesting connection between optimal bidding and optimal acceptance strategies, in the sense that they are mirrored versions of each other.

Lastly, after analyzing all components separately, we put the pieces back together again. We take all BOA components accumulated so far, including the best ones, and combine them to explore the space of negotiation strategies.

We compute the contribution of each component to the overall negotiation result, and we study the interaction between components. We find that combining the best agent components indeed makes the strongest agents. This shows that the component-based view of the BOA architecture not only provides a useful basis for developing negotiating agents but also provides a useful analytical tool. By varying the BOA components we are able to demonstrate the contribution of each component to the negotiation result, and thus analyze the significance of each. The bidding strategy is by far the most important to consider, followed by the acceptance conditions, and finally the opponent model.

Our results validate the analytical approach of the BOA framework to first optimize the individual components, and then to recombine them into a negotiating agent.
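Since the abstract reports a systematic recombination of the collected components, a compact way to picture that experiment is a sweep over the full B x O x A product space. The component names and the evaluate() stub below are hypothetical stand-ins for assembling an agent and running it in a tournament.

```python
# Hypothetical sketch of the mix-and-match exploration: score every B/O/A combination.
from itertools import product

bidding_strategies = ["time_dependent", "tit_for_tat", "boulware"]
opponent_models = ["none", "frequency_model", "bayesian_model"]
acceptance_strategies = ["ac_next", "ac_time", "ac_combi"]


def evaluate(b: str, o: str, a: str) -> float:
    """Stand-in for assembling the agent and returning its average tournament utility."""
    return 0.0  # placeholder


results = {
    combo: evaluate(*combo)
    for combo in product(bidding_strategies, opponent_models, acceptance_strategies)
}
best_combo = max(results, key=results.get)
print("strongest B/O/A combination:", best_combo)
```

Averaging the scores while holding one component fixed and varying the other two is one simple way to estimate the per-component contributions the abstract refers to.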

    Identity maintenance and adaptation : multilevel analysis of response to loss

    Series title and numbering from publisher's list. Includes bibliographical references (p. [1]-[3]). Fellowship support provided by the Organization Studies dept. of the Sloan School of Management; research sponsored by the International Motor Vehicle Project. Steven A. [sic] Freeman

    The Case for Banning (and Mandating) Ransomware Insurance

    Ransomware attacks are becoming increasingly pervasive and disruptive. Not only are they shutting down (or at least “holding up”) businesses and local governments all around the country, they are also disrupting institutions in many sectors of the U.S. economy, from school systems to medical facilities to critical elements of the U.S. energy infrastructure and the food supply chain. Ransomware attacks are also growing more frequent and the ransom demands more exorbitant. Those ransom payments are increasingly being covered by insurance. That insurance offers coverage for a variety of cyber-related losses, including many of the costs arising out of ransomware attacks, such as the costs of hiring expert negotiators, the costs of recovering data from backups, the legal liabilities for exposing sensitive customer information, and the ransom payments themselves. Some commentators have expressed concern with this market phenomenon. Specifically, the concern is that the presence of insurance is making the ransomware problem worse, on the following theory: because there is ransomware insurance that covers ransom payments, and because paying the ransom is often far cheaper than paying the restoration costs and business interruption costs also covered under the policy, there is an increased tendency to pay the ransom, and a willingness to pay higher amounts. This fact, known by the criminals, increases their incentive to engage in ransomware attacks in the first place. The demand for insurance then increases, and the cycle continues. This Article demonstrates that the picture is not as simple as this story would suggest. Insurance offers a variety of pre-breach and post-breach services that are aimed at reducing the likelihood and severity of a ransomware attack. Thus, over the long term, cyber insurance has the potential to lower ransomware-related costs. But we are not there yet. This Article discusses ways to help ensure that ransomware insurance is a force for good. Among our suggestions are a limited ban on indemnity for ransomware payments, with exceptions for cases involving threats to life and limb, coupled with a mandate that property/casualty insurers provide coverage for the other costs of ransomware attacks. We also explain how a government regulator could serve a coordinating function to help cyber insurers internalize the externalities associated with the insurers’ decisions to reimburse ransomware payments, a role that is played by reinsurers in the context of kidnap-and-ransom insurance.

    The Case for Banning (and Mandating) Ransomware Insurance

    Ransomware attacks are becoming increasingly pervasive and disruptive, and ransom demands are becoming more exorbitant. Payments for ransom costs are increasingly being covered by insurance, which may offer coverage for a variety of cyber-related losses. Some commentators have expressed concern over this market phenomenon. Specifically, the concern is that the presence of insurance is making the ransomware problem worse, based on the following theory: because there is ransomware insurance that covers ransom payments, and because paying the ransom is often far cheaper than paying the restoration and business interruption costs covered under the policy, there is an increased tendency to pay the ransom, and a willingness to pay higher amounts. This fact, known by the criminals, increases their incentive to engage in ransomware attacks, which increases the demand for insurance. And the cycle continues. This Article demonstrates that the picture is not as simple as this story would suggest. Insurance offers a variety of pre-breach and post-breach services that are aimed at reducing the likelihood and severity of a ransomware attack. Thus, over the long term, cyber insurance has the potential to lower ransomware-related costs, even without government intervention. As recent research has shown, however, insurers have not yet fully embraced their potential role as ex ante and ex post regulators of cyber risk, a role for which they are especially well suited. This Article discusses reasons why that might be the case and offers suggestions for how government intervention may help. Among these suggestions is a limited ban on indemnity for ransomware payments, with exceptions for cases involving threats to life and limb, which would be an expanded version of what is already in place with the Office of Foreign Assets Control’s (“OFAC”) sanctions program. We also explain how a government regulator, such as OFAC, could serve a coordinating function to help cyber insurers internalize the externalities associated with the insurers’ decisions to reimburse ransomware payments, a role that is played by reinsurers in the context of kidnap-and-ransom insurance. Finally, we consider the idea of a federal mandate requiring property and casualty insurers to provide coverage for the costs of ransomware attacks while excluding coverage for the ransomware payments themselves.

    Unveiling AI Aversion: Understanding Antecedents and Task Complexity Effects

    Artificial Intelligence (AI) has generated significant interest due to its potential to augment human intelligence. However, user attitudes towards AI are diverse, with some individuals embracing it enthusiastically while others harbor concerns and actively avoid its use. This two-essay dissertation explores the reasons behind user aversion to AI. In the first essay, I develop a concise research model to explain users' AI aversion based on the theory of effective use and adaptive structuration theory. I then employ an online experiment to test my hypotheses empirically. The multigroup analysis by structural equation modeling shows that users' perceptions of human dissimilarity, AI bias, and social influence strongly drive AI aversion. Moreover, I find a significant difference between the simple and the complex task groups. This study reveals why users avoid using AI by systematically examining the factors related to technology, user, task, and environment, thus making a significant contribution to the emerging field of AI aversion research. Next, while trust and distrust have been recognized as influential factors shaping users' attitudes towards IT artifacts, their intricate relationship with task characteristics and their impact on AI aversion remain largely unexplored. In my second essay, I conduct an online randomized controlled experiment on Amazon Mechanical Turk to bridge this critical research gap. My comprehensive analytic approach, including structural equation modeling (SEM), ANOVA, and PROCESS conditional analysis, allowed me to shed light on the intricate web of factors influencing users' AI aversion. I discovered that distrust and trust mediate between task complexity and AI aversion. Moreover, this study unveiled intriguing differences in these mediated relationships between subjective and objective task groups. Specifically, my findings demonstrate that, for objective tasks, task complexity can significantly increase aversion by reducing trust and significantly decrease aversion by reducing distrust. In contrast, for subjective tasks, task complexity only significantly increases aversion by enhancing distrust. By considering various task characteristics and recognizing trust and distrust as vital mediators, my research not only pushes the boundaries of the human-AI literature but also significantly contributes to the field of AI aversion research.

    Negotiations in buyer-seller relationships

    This research provides a basis for consideration of the nature of inter-personal interaction between buyers and sellers in a marketing context. It brings together models of business relationship development and negotiation. Modern businesses recognise that some relationships are more profitable than others. As a result, the focus is now on retention of customers, greater openness and closer relationships between organisations, and agreements leading towards more mutually beneficial outcomes between partners. This emphasises the strategic importance of inter-personal relationships and, specifically, negotiation behaviour. Indeed, negotiation in marketing is a core competence which is vital to ensuring the longevity of business relationships. Despite recognition of this, there is very little research into negotiations in the context of relationship marketing. Existing models of negotiation present a range of approaches, from the extremes of the highly adversarial and competitive to integration and solution-building between the parties. Outcome success increases in importance to the negotiating parties as relationships develop into partnerships and resource investment increases. Interpersonal interaction is characterised by exchange of information across a broad range of issues specific to the dyadic relationship. The process and nature of exchange becomes increasingly integrative. One of the significant features of this work is its observation and exploration of real and substantive negotiations between buyers and sellers. In order to examine the nature of these interactions, this thesis develops and tests a coding mechanism applicable to real-life negotiations, supported by interview and questionnaire instruments. Negotiations have been categorised into Early, Mid and Partner stages of relational development. The findings of the analyses indicate distinct patterns of negotiator behaviour at different stages of relational development. This has implications for the development of marketing theory as well as for the behavioural stances adopted by individuals engaging in negotiations. The findings can aid decision-making in developing business relationships and also provide a means of recognising individual negotiator competences. This leads to more effectively targeted preparation and planning for interactions as well as skills training and, ultimately, outcome success.

    Proceedings of the 17th International Conference on Group Decision and Negotiation


    Content and Context: Three Essays on Information in Politics

    This dissertation explores the implications of information asymmetries in three specific political environments: primary campaign speeches, negotiating behavior, and testimony delivered in congressional hearings. First, dog whistling can dramatically affect the outcome of elections, despite observers never being sure it actually occurred. I build a model that addresses how a whistle operates, and explore its implications for candidate competition. I find that whistling lets candidates distinguish themselves from competitors in the minds of voters. Second, political negotiation frequently looks like two sides staring each other down, where neither side wishes to concede, claiming that doing so would incur the wrath of voters. Little theory or evidence exists to explain how voters allocate blame for different outcomes. We conduct a laboratory experiment to investigate how anticipation of blame drives negotiating behavior, and how observers allocate blame. We find that the presence of an observer has little effect on standoff outcomes but appears to shorten the duration of standoffs. Third, while congressional hearings give legislators a national stage on which to score political points by publicly chastising high-level bureaucrats, and give lobbyists a forum to demonstrate their access and importance to policymakers, it is less clear how well hearings serve the purposes of oversight. I address this question through automated text analysis of hearings in the 105th-112th Congresses. I show that the oversight function of hearings is only effective when it is least likely to be used: when the congressional committee and the bureaucrat agree on policy.

    Considering stakeholders’ preferences for scheduling slots in capacity constrained airports

    Airport slot scheduling has attracted the attention of researchers as a capacity management tool at congested airports. Recent research work has employed multi-objective approaches for scheduling slots at coordinated airports. However, the central question on how to select a commonly accepted airport schedule remains. The various participating stakeholders may have multiple and sometimes conflicting objectives stemming from their decision-making needs. This complex decision environment renders the identification of a commonly accepted solution rather difficult. In this presentation, we propose a multi-criteria decision-making technique that incorporates the priorities and preferences of the stakeholders in order to determine the best compromise solution
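The abstract does not spell out the proposed technique, so the sketch below is only a generic stand-in for how a compromise schedule might be selected: it scores candidate slot schedules on stakeholder objectives, weights them by elicited priorities, and picks the schedule closest to the ideal point (compromise-programming style). All names and numbers are invented for illustration.

```python
# Generic compromise-selection sketch (not the authors' specific technique).
from typing import Dict, List

# Each candidate schedule scored on stakeholder objectives (higher is better),
# e.g. airline displacement, airport throughput, coordinator fairness.
candidates: List[Dict[str, float]] = [
    {"airlines": 0.80, "airport": 0.60, "coordinator": 0.70},
    {"airlines": 0.65, "airport": 0.85, "coordinator": 0.60},
    {"airlines": 0.70, "airport": 0.70, "coordinator": 0.75},
]

# Stakeholder priorities elicited beforehand (sum to 1 here).
weights = {"airlines": 0.4, "airport": 0.4, "coordinator": 0.2}

# Ideal point: the best observed score on each objective.
ideal = {k: max(c[k] for c in candidates) for k in weights}


def compromise_distance(candidate: Dict[str, float]) -> float:
    """Weighted squared distance from the ideal point (smaller is better)."""
    return sum(weights[k] * (ideal[k] - candidate[k]) ** 2 for k in weights)


best = min(candidates, key=compromise_distance)
print("selected compromise schedule:", best)
```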

    Is Winning Everything? Why Campaign Consultants Operate in the American Political System

    Is Winning Everything? Why Campaign Professionals Operate in the American Political System. There has been a public fascination with campaign consultants for quite some time. Political scientists, though, have paid little attention to them. Existing research shows that these consultants tend to help candidates win higher percentages of the vote. Despite such research, the study of campaign consultants is largely without theory. This dissertation advances the understanding of campaign professionals by systematically examining why consultants operate in the American political system. Using new survey data, I demonstrate that there are two major motivations for why individuals become and remain consultants: financial considerations and the desire to see ideologically preferred candidates elected to public office. With this in mind, how do risk-averse consultants maximize their performance in each area? Theoretically, this dissertation utilizes the Behavioral Theory of the Firm (BTOF) as a way to understand how risk and performance are related. Consultants and consulting firms make decisions based on a variety of factors, including how others in their specialization have recently performed, their aspirations and expectations, and how they have buffered themselves from exogenous shocks in their environment. The findings indicate that consultants deal with four types of risk: potential client electability, opponent quality, potential client résumé strength, and financial considerations; BTOF does a very good job explaining the first three. After examining the determinants of risk, I test BTOF as a predictor of consultant revenue and consulting firm winning percentage, the latter using a second new data set. The theory performs well, indicating that increased risk tends to lead to greater performance in both areas. This dissertation demonstrates the portability of BTOF into the elections literature and provides a unique look into the world of a rarely examined political group.