
    What to bid and when to stop

    Negotiation is an important activity in human society, and is studied by various disciplines, ranging from economics and game theory to electronic commerce, social psychology, and artificial intelligence. Traditionally, negotiation is a necessary, but also time-consuming and expensive, activity. Therefore, in the last decades there has been a large interest in the automation of negotiation, for example in the setting of e-commerce. This interest is fueled by the promise of automated agents eventually being able to negotiate on behalf of human negotiators.

    Every year, automated negotiation agents are improving in various ways, and there is now a large body of negotiation strategies available, all with their unique strengths and weaknesses. For example, some agents are able to predict the opponent's preferences very well, while others focus more on having a sophisticated bidding strategy. The problem, however, is that there is little incremental improvement in agent design, as the agents are tested in varying negotiation settings, using a diverse set of performance measures. This makes it very difficult to meaningfully compare the agents, let alone their underlying techniques. As a result, we lack a reliable way to pinpoint the most effective components in a negotiating agent.

    There are two major advantages of distinguishing between the different components of a negotiating agent's strategy. First, it allows the study of the behavior and performance of the components in isolation. For example, it becomes possible to compare the preference learning component of all agents, and to identify the best among them. Second, we can proceed to mix and match different components to create new negotiation strategies, e.g. by replacing the preference learning technique of an agent and then examining whether this makes a difference. Such a procedure enables us to combine the individual components to systematically explore the space of possible negotiation strategies.

    To develop a compositional approach to evaluate and combine the components, we identify structure in most agent designs by introducing the BOA architecture, in which we can develop and integrate the different components of a negotiating agent. We identify three main components of a general negotiation strategy: a bidding strategy (B), possibly an opponent model (O), and an acceptance strategy (A). The bidding strategy considers what concessions it deems appropriate given its own preferences, and takes the opponent into account by using an opponent model. The acceptance strategy decides whether offers proposed by the opponent should be accepted.

    The BOA architecture is integrated into a generic negotiation environment called Genius, which is a software environment for designing and evaluating negotiation strategies. To explore the negotiation strategy space of the negotiation research community, we extend the Genius repository with various existing agents and scenarios from the literature. Additionally, we organize a yearly international negotiation competition (ANAC) to harvest even more strategies and scenarios. ANAC also acts as an evaluation tool for negotiation strategies, and encourages the design of new negotiation strategies and scenarios.

    We re-implement agents from the literature and ANAC and decouple them to fit into the BOA architecture without introducing any changes in their behavior. For each of the three components, we manage to find and analyze the best ones for specific cases, as described below. We show that the BOA framework leads to significant improvements in agent design by winning ANAC 2013, which had 19 participating teams from 8 international institutions, with an agent that is designed using the BOA framework and is informed by a preliminary analysis of the different components.

    In every negotiation, one of the negotiating parties must accept an offer to reach an agreement. Therefore, it is important that a negotiator employs a proficient mechanism to decide under which conditions to accept. When contemplating whether to accept an offer, the agent is faced with the acceptance dilemma: accepting the offer may be suboptimal, as better offers may still be presented before time runs out. On the other hand, accepting too late may prevent an agreement from being reached, resulting in a break-off with no gain for either party. We classify and compare state-of-the-art generic acceptance conditions. We propose new acceptance strategies and we demonstrate that they outperform the other conditions. We also provide insight into why some conditions work better than others and investigate correlations between the properties of the negotiation scenario and the efficacy of acceptance conditions.

    Later, we adopt a more principled approach by applying optimal stopping theory to calculate the optimal decision on the acceptance of an offer. We approach the decision of whether to accept as a sequential decision problem, modeling the bids received as a stochastic process. We determine the optimal acceptance policies for particular opponent classes and we present an approach to estimate the expected range of offers when the type of opponent is unknown. We show that the proposed approach is able to find the optimal time to accept, and improves upon all existing acceptance strategies.

    Another principal component of a negotiating agent's strategy is its ability to take the opponent's preferences into account. The quality of an opponent model can be measured in two different ways. One is to use the agent's performance as a benchmark for the model's quality. We evaluate and compare the performance of a selection of state-of-the-art opponent modeling techniques in negotiation. We provide an overview of the factors influencing the quality of a model and we analyze how the performance of opponent models depends on the negotiation setting. We identify a class of simple and surprisingly effective opponent modeling techniques that did not receive much previous attention in the literature.

    The other way to measure the quality of an opponent model is to directly evaluate its accuracy by using similarity measures. We review all methods to measure the accuracy of an opponent model and then analyze how changes in accuracy translate into performance differences. Moreover, we pinpoint the best predictors of good performance. This leads to new insights concerning how to construct an opponent model, and what we need to measure when optimizing performance.

    Finally, we take two different approaches to gain more insight into effective bidding strategies. We present a new classification method for negotiation strategies, based on their pattern of concession making against different kinds of opponents. We apply this technique to classify some well-known negotiating strategies, and we formulate guidelines on how agents should bid in order to be successful, which gives insight into the bidding strategy space of negotiating agents. Furthermore, we apply optimal stopping theory again, this time to find the concessions that maximize utility for the bidder against particular opponents. We show that there is an interesting connection between optimal bidding and optimal acceptance strategies, in the sense that they are mirrored versions of each other.

    Lastly, after analyzing all components separately, we put the pieces back together again. We take all BOA components accumulated so far, including the best ones, and combine them to explore the space of negotiation strategies. We compute the contribution of each component to the overall negotiation result, and we study the interaction between components. We find that combining the best agent components indeed makes the strongest agents. This shows that the component-based view of the BOA architecture not only provides a useful basis for developing negotiating agents but also serves as a useful analytical tool. By varying the BOA components we are able to demonstrate the contribution of each component to the negotiation result, and thus analyze the significance of each. The bidding strategy is by far the most important component to consider, followed by the acceptance conditions, and finally the opponent model. Our results validate the analytical approach of the BOA framework: first optimize the individual components, and then recombine them into a negotiating agent.
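    The component decomposition described above lends itself to a compositional implementation. The sketch below is a minimal illustration of that idea, assuming hypothetical class and method names; it is not the Genius/BOA API or the authors' code, only a way to show how a bidding strategy (B), an opponent model (O), and an acceptance strategy (A) can be developed and swapped independently.

```python
# Minimal, illustrative sketch of a BOA-style agent (hypothetical names only).
from dataclasses import dataclass
from typing import Optional


@dataclass
class Bid:
    own_utility: float  # utility of this bid for our own agent, in [0, 1]


class OpponentModel:
    """O: estimates the opponent's preferences from the bids it sends."""

    def update(self, opponent_bid: Bid) -> None:
        pass  # e.g. frequency-based preference learning

    def estimated_opponent_utility(self, bid: Bid) -> float:
        return 1.0 - bid.own_utility  # toy stand-in estimate


class BiddingStrategy:
    """B: decides which concession to make, possibly consulting O."""

    def propose(self, t: float, model: OpponentModel) -> Bid:
        return Bid(own_utility=1.0 - 0.5 * t)  # simple time-based concession


class AcceptanceStrategy:
    """A: decides whether the opponent's latest offer should be accepted."""

    def accepts(self, t: float, offered: Bid, planned: Bid) -> bool:
        return offered.own_utility >= planned.own_utility or t > 0.98


class BOAAgent:
    """Composes interchangeable B, O and A components into one agent."""

    def __init__(self, b: BiddingStrategy, o: OpponentModel, a: AcceptanceStrategy):
        self.b, self.o, self.a = b, o, a

    def respond(self, t: float, opponent_bid: Bid) -> Optional[Bid]:
        self.o.update(opponent_bid)
        planned = self.b.propose(t, self.o)
        if self.a.accepts(t, opponent_bid, planned):
            return None  # accepting the opponent's offer ends the negotiation
        return planned  # otherwise send a counter-offer
```

    With such a structure, exploring the strategy space amounts to constructing the agent with different combinations of B, O, and A components and measuring how each combination performs.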

    Did the first Covid-19 national lockdown lead to an increase in domestic abuse in London?

    On March 23rd 2020, the UK, following close behind a number of other countries, went into its first national lockdown in a bid to stop the spread of Covid-19. Boris Johnson told people to stay at home and save lives. But what happens when home isn’t safe? This paper uses data from the Metropolitan Police to examine the impact of the first lockdown on domestic abuse in the 32 boroughs of the London Metropolitan area. Using a before-and-after approach, and controlling for other factors, we show that domestic abuse crimes rose during lockdown. We find this increase is greater for some crimes and populations than others and is consistent across the whole lockdown period. Once lockdown restrictions are eased, rates decline but remain slightly higher than pre-lockdown levels up to three months later.
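    As a rough illustration of the before-and-after approach mentioned above, the sketch below fits a count model with a lockdown indicator and borough controls. The data, column names, and the Poisson specification are assumptions for illustration only; they are not the Metropolitan Police data or the paper's exact model.

```python
# Hedged sketch of a before/after comparison with controls (illustrative only).
import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: monthly domestic-abuse counts per borough, with a lockdown flag.
df = pd.DataFrame({
    "borough":  ["Camden", "Camden", "Hackney", "Hackney"] * 3,
    "lockdown": [0, 1, 0, 1] * 3,
    "crimes":   [80, 95, 70, 88, 78, 97, 69, 85, 82, 99, 71, 90],
})

# Effect of lockdown on counts, controlling for borough-level differences.
model = smf.poisson("crimes ~ lockdown + C(borough)", data=df).fit(disp=0)
print(model.params["lockdown"])  # positive coefficient -> rise during lockdown
```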

    BioANS: bio-inspired ambient intelligence protocol for wireless sensor networks

    This paper describes the BioANS (Bio-inspired Autonomic Networked Services) protocol, which uses a novel utility-based service selection mechanism to drive autonomicity in sensor networks. Due to the increasing complexity of sensor network applications, self-configuration abilities, in terms of service discovery and automatic negotiation, have become core requirements. Further, as such systems are highly dynamic due to mobility and/or unreliability, runtime self-optimisation and self-healing are required. However, the mechanism that implements this must be lightweight, because sensor nodes have limited resources, and scalable, because some applications can require thousands of nodes. BioANS incorporates some characteristics of natural emergent systems, and these contribute to its overall stability whilst it remains simple and efficient. We show not only that the BioANS protocol implements autonomicity, in that it allows a dynamic network of sensors to continue to function under demanding circumstances, but also that the overheads incurred are reasonable. Moreover, state-flapping between requester and provider, message loss and randomness are not only tolerated but utilised to advantage in the new protocol.
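    To make the utility-based selection idea concrete, here is a small sketch in which a requester scores advertised providers by a weighted utility and picks the best one, with a little randomness so that requests do not all converge on a single node. The attribute names, weights, and noise term are illustrative assumptions, not the actual BioANS message format or metrics.

```python
# Illustrative utility-based service selection (not the actual BioANS protocol).
import random

def utility(provider, weights):
    """Weighted sum of the normalised quality attributes a node advertises."""
    return sum(weights[k] * provider[k] for k in weights)

def select_provider(providers, weights, noise=0.05):
    """Choose the provider with the highest (slightly randomised) utility.
    The small perturbation spreads load across near-equivalent providers."""
    return max(providers, key=lambda p: utility(p, weights) + random.uniform(0, noise))

providers = [
    {"name": "node-3", "battery": 0.9, "precision": 0.6, "latency": 0.7},
    {"name": "node-7", "battery": 0.4, "precision": 0.9, "latency": 0.8},
]
weights = {"battery": 0.5, "precision": 0.3, "latency": 0.2}
print(select_provider(providers, weights)["name"])
```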

    Collusion via signaling in open ascending auctions with multiple objects and complementarities

    Collusive equilibria exist in open ascending auctions with multiple objects if the number of bidders is sufficiently small relative to the number of objects, even with large complementarities in the buyers' utility functions. The bidders collude by dividing the objects among themselves while keeping the prices low. Hence the complementarities are not realized.

    An Expressive Model for the Web Infrastructure: Definition and Application to the BrowserID SSO System

    The web constitutes a complex infrastructure and, as demonstrated by numerous attacks, rigorous analysis of standards and web applications is indispensable. Inspired by successful prior work, in particular the work by Akhawe et al. as well as Bansal et al., in this work we propose a formal model for the web infrastructure. Unlike prior works, which aim at automatic analysis, our model is not yet directly amenable to automation; however, it is much more comprehensive and accurate with respect to the standards and specifications. As such, it can serve as a solid basis for the analysis of a broad range of standards and applications. As a case study, and another important contribution of our work, we use our model to carry out the first rigorous analysis of the BrowserID system (a.k.a. Mozilla Persona), a recently developed complex real-world single sign-on system that employs technologies such as AJAX, cross-document messaging, and HTML5 web storage. Our analysis revealed a number of very critical flaws that could not have been captured in prior models. We propose fixes for the flaws, formally state relevant security properties, and prove that the fixed system, in a setting with a so-called secondary identity provider, satisfies these security properties in our model. The fixes for the most critical flaws have already been adopted by Mozilla, and our findings have been rewarded by the Mozilla Security Bug Bounty Program. Comment: An abridged version appears in S&P 201

    The role of search engine optimization in search marketing

    This paper examines the impact of search engine optimization (SEO) on the competition between advertisers for organic and sponsored search results. The results show that a positive level of search engine optimization may improve the search engine's ranking quality and thus the satisfaction of its visitors. In the absence of sponsored links, the organic ranking is improved by SEO if and only if the quality provided by a website is sufficiently positively correlated with its valuation for consumers. In the presence of sponsored links, the results are accentuated and hold regardless of the correlation. When sponsored links serve as a second chance to acquire clicks from the search engine, low-quality websites have a reduced incentive to invest in SEO, giving an advantage to their high-quality counterparts. As a result of the high expected quality on the organic side, consumers begin their search with an organic click. Although SEO can improve consumer welfare and the payoff of high-quality sites, we find that the search engine's revenues are typically lower when advertisers spend more on SEO and thus less on sponsored links. Modeling the impact of the minimum bid set by the search engine reveals an inverse U-shaped relationship between the minimum bid and search engine profits, suggesting an optimal minimum bid that is decreasing in the level of SEO activity. © 2013 INFORMS