236 research outputs found

    The United Kingdom in the European Community: The diplomacy of the UK government towards the Single European Act, 1984-5

    This dissertation examines the policy-making of the United Kingdom towards the Single European Act (SEA) from June 1984 to December 1985. The SEA codified the practice of foreign policy coordination and began a process of liberalising the Single Market of the European Community (EC). The literature has identified the SEA as an important milestone in the process of European integration. Controversy surrounds the question of how Margaret Thatcher could sign the SEA yet afterwards say she did not like it. This research contributes a multi-archival, multilingual analysis of the UK government’s decision making and diplomacy in the negotiations that led to the SEA. This dissertation argues that the UK government’s approach to the SEA went through two phases. In the first phase, Thatcher unsuccessfully attempted to lead the EC, in cooperation with Germany and France, into formalising foreign policy coordination. In the second phase, Thatcher withheld her commitment to the ongoing talks until the shape of the SEA had become clear, while the Foreign Secretary and diplomats negotiated its clauses. Using the SEA as a lens makes it possible to comment on the broader theme of Margaret Thatcher’s views on European integration and adds a piece to the puzzle of the history of the relationship between the UK and the EC.

    A New Formalism, Method and Open Issues for Zero-Shot Coordination

    In many coordination problems, independently reasoning humans are able to discover mutually compatible policies. In contrast, independently trained self-play policies are often mutually incompatible. Zero-shot coordination (ZSC) has recently been proposed as a new frontier in multi-agent reinforcement learning to address this fundamental issue. Prior work approaches the ZSC problem by assuming players can agree on a shared learning algorithm but not on labels for actions and observations, and proposes other-play as an optimal solution. However, until now, this "label-free" problem has only been informally defined. We formalize this setting as the label-free coordination (LFC) problem by defining the label-free coordination game. We show that other-play is not an optimal solution to the LFC problem, as it fails to consistently break ties between incompatible maximizers of the other-play objective. We introduce an extension of the algorithm, other-play with tie-breaking, and prove that it is optimal in the LFC problem and an equilibrium in the LFC game. Since arbitrary tie-breaking is precisely what the ZSC setting aims to prevent, we conclude that the LFC problem does not reflect the aims of ZSC. To address this, we introduce an alternative informal operationalization of ZSC as a starting point for future work.
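
    To make the tie-breaking issue concrete, here is a minimal sketch (our illustration, not the paper's code) of the other-play objective in a 2x2 one-shot coordination game: both pure conventions score the same once action labels are averaged over, so independently trained pairs can land on incompatible maximizers. All identifiers below are assumptions for illustration.

    import itertools
    import numpy as np

    # Symmetric coordination game: payoff 1 iff both players pick the same action.
    M = np.array([[1.0, 0.0],
                  [0.0, 1.0]])

    def other_play_value(p1, p2):
        """Expected payoff of (p1, p2), averaged over relabelings of p2's actions."""
        vals = []
        for perm in itertools.permutations(range(len(p2))):
            p2_relabelled = np.empty_like(p2)
            p2_relabelled[list(perm)] = p2   # action a is renamed to perm[a]
            vals.append(p1 @ M @ p2_relabelled)
        return np.mean(vals)

    # Both pure conventions tie at 0.5 under other-play, yet they are mutually
    # incompatible in cross-play (pairing one with the other yields payoff 0):
    print(other_play_value(np.array([1.0, 0.0]), np.array([1.0, 0.0])))  # 0.5
    print(other_play_value(np.array([0.0, 1.0]), np.array([0.0, 1.0])))  # 0.5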

    Incentivizing honest performative predictions with proper scoring rules

    Proper scoring rules incentivize experts to accurately report beliefs, assuming predictions cannot influence outcomes. We relax this assumption and investigate incentives when predictions are performative, i.e., when they can influence the outcome of the prediction, such as when making public predictions about the stock market. We say a prediction is a fixed point if it accurately reflects the expert's beliefs after that prediction has been made. We show that in this setting, reports maximizing expected score generally do not reflect an expert's beliefs, and we give bounds on the inaccuracy of such reports. We show that, for binary predictions, if the influence of the expert's prediction on outcomes is bounded, it is possible to define scoring rules under which optimal reports are arbitrarily close to fixed points. However, this is impossible for predictions over more than two outcomes. We also perform numerical simulations in a toy setting, showing that our bounds are tight in some situations and that prediction error is often substantial (greater than 5-10%). Lastly, we discuss alternative notions of optimality, including performative stability, and show that they incentivize reporting fixed points. (Accepted for the 39th Conference on Uncertainty in Artificial Intelligence, UAI 2023.)
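
    As a toy illustration of the gap between score-maximizing reports and fixed points (our construction, not the paper's experiments), suppose a public report p shifts the true probability of a binary outcome to q(p) = 0.3 + 0.2p. The fixed point solves q(p) = p, i.e. p = 0.375, while maximizing the expected quadratic (Brier-style) score yields a different report:

    import numpy as np

    def q(p):
        """Assumed outcome probability after a public report p."""
        return 0.3 + 0.2 * p   # the report nudges the outcome upward

    def expected_score(p):
        """Expected quadratic score of reporting p when the outcome occurs w.p. q(p)."""
        qp = q(p)
        return qp * (1 - (1 - p) ** 2) + (1 - qp) * (1 - p ** 2)

    grid = np.linspace(0.0, 1.0, 10_001)
    p_opt = grid[np.argmax(expected_score(grid))]      # ~0.333
    p_fix = grid[np.argmin(np.abs(q(grid) - grid))]    # 0.375
    print(f"score-maximizing report: {p_opt:.3f}, fixed point: {p_fix:.3f}")

    Here the expected score expands to 0.7 + 0.4p - 0.6p^2, maximized at p = 1/3, so the optimal report understates the expert's post-report belief of 0.375.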

    The Evidentialist's Wager

    Suppose that an altruistic agent who is uncertain between evidential decision theory (EDT) and causal decision theory (CDT) finds herself in a situation where these theories give conflicting verdicts. We argue that even if she has significantly higher credence in CDT, she should nevertheless act in accordance with EDT. First, we claim that the appropriate response to normative uncertainty is to hedge one's bets. That is, if the stakes are much higher on one theory than another, and the credences you assign to each of these theories are not very different, then it is appropriate to choose the option that performs best on the high-stakes theory. Second, we show that, given the assumption of altruism, the existence of correlated decision makers will increase the stakes for EDT but leave the stakes for CDT unaffected. Together these two claims imply that whenever there are sufficiently many correlated agents, the appropriate response is to act in accordance with EDT.
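
    A back-of-the-envelope sketch of the hedging argument (all numbers below are purely illustrative, not the paper's): if EDT is true, an altruist's choice is evidence about what N correlated agents choose, so the stakes scale with N; if CDT is true, they do not. Even with much higher credence in CDT, the credence-weighted case for following EDT turns positive once N is large enough.

    # Illustrative credences and stakes; none of these values come from the paper.
    credence_cdt, credence_edt = 0.8, 0.2
    stakes_cdt = 1.0               # value at stake per decision if CDT is true
    stakes_edt_per_agent = 1.0     # value at stake per correlated agent if EDT is true

    def value_of_following_edt(n_correlated: int) -> float:
        """Credence-weighted gain from taking EDT's recommendation over CDT's."""
        gain_if_edt = credence_edt * stakes_edt_per_agent * n_correlated
        loss_if_cdt = credence_cdt * stakes_cdt
        return gain_if_edt - loss_if_cdt

    for n in (1, 10, 100):
        print(n, value_of_following_edt(n))   # negative at n=1, positive from n=5 on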

    Similarity-based cooperative equilibrium

    As machine learning agents act more autonomously in the world, they will increasingly interact with each other. Unfortunately, in many social dilemmas like the one-shot Prisoner's Dilemma, standard game theory predicts that ML agents will fail to cooperate with each other. Prior work has shown that one way to enable cooperative outcomes in the one-shot Prisoner's Dilemma is to make the agents mutually transparent to each other, i.e., to allow them to access one another's source code (Rubinstein 1998, Tennenholtz 2004) -- or weights in the case of ML agents. However, full transparency is often unrealistic, whereas partial transparency is commonplace. Moreover, it is challenging for agents to learn their way to cooperation in the full transparency setting. In this paper, we introduce a more realistic setting in which agents only observe a single number indicating how similar they are to each other. We prove that this allows for the same set of cooperative outcomes as the full transparency setting. We also demonstrate experimentally that cooperation can be learned using simple ML methods. (Published at NeurIPS 2023; 32 pages, 9 figures.)
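
    A minimal sketch of the setting (an assumed threshold policy, not the paper's equilibrium construction): each agent observes a single similarity score s in [0, 1] and cooperates in the one-shot Prisoner's Dilemma only when s is high enough. Identical agents then cooperate with each other, while dissimilar pairs defect.

    import random

    # Standard PD payoffs, keyed by (my_move, their_move) -> my payoff.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1}

    def policy(similarity: float, threshold: float = 0.9) -> str:
        return "C" if similarity >= threshold else "D"

    def average_payoff(draw_similarity, n: int = 10_000) -> float:
        """Mean payoff when both agents run the same threshold policy."""
        total = 0
        for _ in range(n):
            s = draw_similarity()          # both agents observe the same score
            total += PAYOFF[(policy(s), policy(s))]
        return total / n

    print(average_payoff(lambda: 1.0))     # identical agents: mutual cooperation, 3.0
    print(average_payoff(random.random))   # mostly dissimilar pairs: close to 1.2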