
    Impossibility theorems involving weakenings of expansion consistency and resoluteness in voting

    A fundamental principle of individual rational choice is Sen's γ axiom, also known as expansion consistency, stating that any alternative chosen from each of two menus must be chosen from the union of the menus. Expansion consistency can also be formulated in the setting of social choice. In voting theory, it states that any candidate chosen from two fields of candidates must be chosen from the combined field of candidates. An important special case of the axiom is binary expansion consistency, which states that any candidate chosen from an initial field of candidates and chosen in a head-to-head match with a new candidate must also be chosen when the new candidate is added to the field, thereby ruling out spoiler effects. In this paper, we study the tension between this weakening of expansion consistency and weakenings of resoluteness, an axiom demanding the choice of a single candidate in any election. As is well known, resoluteness is inconsistent with basic fairness conditions on social choice, namely anonymity and neutrality. Here we prove that even significant weakenings of resoluteness, which are consistent with anonymity and neutrality, are inconsistent with binary expansion consistency. The proofs make use of SAT solving, with the correctness of a SAT encoding formally verified in the Lean Theorem Prover, as well as a strategy for generalizing impossibility theorems obtained for special types of voting methods (namely majoritarian and pairwise voting methods) to impossibility theorems for arbitrary voting methods. This proof strategy may be of independent interest for its potential applicability to other impossibility theorems in social choice.
    Forthcoming in Mathematical Analyses of Decisions, Voting, and Games, eds. M. A. Jones, D. McCune, and J. Wilson, Contemporary Mathematics, American Mathematical Society, 202
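    In standard choice-theoretic notation (our gloss, not taken verbatim from the abstract), writing C(A) for the set of candidates chosen from a field A, the two axioms can be stated as follows:

```latex
% Sen's gamma (expansion consistency): a common winner of two fields of
% candidates remains a winner of the combined field.
x \in C(A) \cap C(B) \;\Longrightarrow\; x \in C(A \cup B)

% Binary expansion consistency: the special case where the second field is a
% head-to-head match \{x, y\} against a single new candidate y.
x \in C(A) \;\wedge\; x \in C(\{x, y\}) \;\Longrightarrow\; x \in C(A \cup \{y\})
```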

    Metadata Schema x-econ Repository

    Since May 2017, the x-hub project partners OVGU Magdeburg, University of Vienna, and GESIS have had a new repository at their disposal, called x-econ (https://x-econ.org). The service is dedicated to experimental economics research projects, providing user-friendly archiving and provision of experimental economics research data. The repository x-econ offers all core functionalities of a modern repository and is undergoing continuous optimization aimed at enhancing and improving its functionality. x-econ is also one pillar of the multidisciplinary repository x-science (https://x-science.org). The present documentation, which is primarily based on the GESIS Technical Reports on datorium 2014|03 and da|ra 4.0, lists and explains the metadata elements used to describe research information.

    Continuous virtual implementation: Complete information

    A social choice rule (SCR) is a mapping from preference profiles to lotteries over outcomes. When preference profiles are close to being common knowledge among players, an SCR is continuously virtually fully implementable if there exists a mechanism such that all its equilibrium outcomes are arbitrarily close to the outcomes recommended by the SCR. When there are at least three players and a domain condition is satisfied, we obtain the following result: any SCR is continuously virtually fully implementable in Bayesian Nash equilibria, as well as in interim correlated rationalizable strategies, by a finite mechanism

    Aggregating credences into a belief

    This thesis proposes a new research topic, heterogeneous belief aggregation: how we should aggregate multiple individual credences on logically connected issues into a collective binary belief. We argue that heterogeneous belief aggregation is worth studying because there are many situations where credences and binary beliefs are more appropriate as inputs and outputs of aggregation procedures, respectively. The main problem is that heterogeneous belief aggregation is vulnerable to a dilemma like the discursive dilemma or the lottery paradox: issue-wise independent procedures might not ensure deductive closure and consistency. Confronting this situation, we have two main questions: how to formulate and generalize the dilemma, and what kinds of aggregation procedures can avoid the dilemma and obtain rational collective beliefs. To answer the first question, we employ the axiomatic approach to deal with general aggregation procedures as in judgment aggregation and social choice theory. We investigate which kinds of individual and collective rationality requirements and which properties of aggregation procedures should be imposed on heterogeneous belief aggregation, and which of their combinations are impossible. We mainly assume deductive closure rather than completeness, in contrast with most of the judgment aggregation literature. Moreover, we address impossibility results without anonymity conditions, which cannot be considered in belief binarization. This leads to three kinds of impossibility results, and we also determine the necessary and sufficient agenda condition for each of the results. Furthermore, we analyze similarities and differences between our proofs and other related proofs and conclude that the problem of heterogeneous belief aggregation is not reducible to the other related problems. Moreover, we show that our methods can be applied to other similar impossibilities. For the second question, we explore specific heterogeneous belief aggregation procedures and their properties. There are two kinds of heterogeneous belief aggregation procedures: collective belief binarization combined with a probabilistic opinion pooling method, and direct rules. As for collective belief binarization, belief binarization theories are applicable. To this end, we first analyze the existing threshold-based procedures, especially those that relax the Lockean thesis and preserve rationality. We categorize them as local-threshold rules - where thresholds depend on probability measures - and world-threshold rules - where thresholds are applied not to an issue but to a possible world. Their characteristics are captured by the properties of local monotonicity and world monotonicity, respectively. We compare and relate these properties to other existing properties, such as stability in the stability theory of belief, and to new properties to be introduced. Whether some existing rational procedures, like the camera shutter rule, satisfy these properties is an interesting and philosophically important question. We provide geometrical characterizations of some of the properties to answer this question. Furthermore, we propose that convexity norms should be discussed in the context of belief binarization. We introduce various kinds of convexity norms and examine whether the relevant procedures satisfy them. What is more, we propose two novel kinds of belief binarization methods that preserve rationality but are not based on thresholds: distance-based binarization and epistemic-utility-based binarization.
The first is a holistic method that minimizes the distance from a given probability measure to the resulting binary belief. The second is based on an accuracy norm minimizing expected inaccuracy. We devise novel ways to measure the required distances and inaccuracies. Moreover, we study distance minimization with Bregman divergence, utility maximization with strictly proper scores, and their relationship. Direct heterogeneous belief aggregation rules are also proposed and studied with respect to thresholds, distances, and epistemic utility. We provide a new classification and characterization of them. Furthermore, we investigate some norms that are especially relevant in social contexts, such as various unanimity norms and convexity norms interpreted in social contexts, as well as commutativity norms, which govern the relationship between direct rules and combinations of probabilistic opinion pooling and collective belief binarization. Putting all this together, we conclude that heterogeneous belief aggregation is a philosophically fruitful topic that deserves attention. Heterogeneous belief aggregation can be seen as a general framework in which not only heterogeneous belief aggregation but also probabilistic opinion pooling, judgment aggregation, and belief binarization are studied in connection with each other. First, studying heterogeneous belief aggregation is interesting in itself and cannot be reduced to other research fields: we can deal with different rationality norms in social contexts and address properties characteristic of heterogeneous belief aggregation. Moreover, it is not only the direct rules but also the different possible combinations of methods from different research areas that make this whole endeavor more than the sum of its parts. Second, this framework bridges independently developed research areas: on the one hand, we can apply well-developed formal theories from formal epistemology, such as belief binarization theories and epistemic decision theories, to the belief aggregation problem; on the other hand, it enables us to add social contexts to belief binarization problems and epistemic decision theories, which can thereby be extended to cover social beliefs as well. Our theory of heterogeneous belief aggregation can be applied to the (collective) belief binarization problem and epistemic (collective) decision theory. In this way, the thesis fills, or at least narrows, the gap between individual epistemology and collective epistemology.
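    As a concrete illustration of the dilemma described in this abstract, the following minimal Python sketch (hypothetical numbers, not taken from the thesis) applies issue-wise linear pooling followed by a Lockean threshold rule and ends up with a collective belief set that is not deductively closed: both conjuncts clear the threshold while their conjunction does not.

```python
# Two agents report credences on issues p, q, and the conjunction p&q
# (hypothetical, individually coherent numbers).
credences = [
    {"p": 0.80, "q": 0.75, "p&q": 0.60},
    {"p": 0.70, "q": 0.85, "p&q": 0.55},
]

# Probabilistic opinion pooling: issue-wise linear averaging.
pooled = {
    issue: sum(c[issue] for c in credences) / len(credences)
    for issue in credences[0]
}

# Lockean threshold rule: believe an issue iff its pooled credence reaches 0.7.
THRESHOLD = 0.7
beliefs = {issue for issue, value in pooled.items() if value >= THRESHOLD}

print(pooled)   # {'p': 0.75, 'q': 0.8, 'p&q': 0.575}
print(beliefs)  # p and q are believed but p&q is not,
                # so the collective belief set is not deductively closed.
```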

    An axiomatic re-characterization of the Kemeny rule

    The Kemeny rule is one of the most well-studied decision rules. In this paper we show that the Kemeny rule is the only rule that is unbiased, monotone, strongly tie-breaking, strongly gradual, and weighed tournamental. We also show that these conditions are logically independent.
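    For reference, the Kemeny rule is standardly defined as selecting the ranking(s) that minimize the total Kendall tau distance to the voters' ballots. The brute-force Python sketch below illustrates that definition on a toy profile; it is illustrative only and does not implement the axioms listed in the abstract.

```python
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    """Number of candidate pairs that the two rankings order differently."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(
        1
        for a, b in combinations(r1, 2)
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0
    )

def kemeny(profile):
    """Return all rankings minimizing total Kendall tau distance to the profile."""
    candidates = profile[0]
    best_score, best = None, []
    for ranking in permutations(candidates):
        score = sum(kendall_tau(ranking, ballot) for ballot in profile)
        if best_score is None or score < best_score:
            best_score, best = score, [ranking]
        elif score == best_score:
            best.append(ranking)
    return best

# Three voters ranking candidates a, b, c from best to worst.
profile = [("a", "b", "c"), ("b", "c", "a"), ("a", "c", "b")]
print(kemeny(profile))  # [('a', 'b', 'c')], at total distance 3
```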

    The Routledge Handbook of Philosophy of Economics

    The most fundamental questions of economics are often philosophical in nature, and philosophers have, since the very beginning of Western philosophy, asked many questions that current observers would identify as economic. The Routledge Handbook of Philosophy of Economics is an outstanding reference source for the key topics, problems, and debates at the intersection of philosophical and economic inquiry. It captures this field of countless exciting interconnections, affinities, and opportunities for cross-fertilization. Comprising 35 chapters by a diverse team of contributors from all over the globe, the Handbook is divided into eight sections: I. Rationality; II. Cooperation and Interaction; III. Methodology; IV. Values; V. Causality and Explanation; VI. Experimentation and Simulation; VII. Evidence; VIII. Policy. The volume is essential reading for students and researchers in economics and philosophy who are interested in exploring the interconnections between the two disciplines. It is also a valuable resource for those in related fields like political science, sociology, and the humanities.

    Identifying Choice Correspondences: A General Method and an Experimental Implementation

    We introduce a general method for experimentally identifying the sets of best alternatives of decision makers in each choice set, i.e., their choice correspondences. In contrast, most experiments force the choice of a single alternative in each choice set. The method allows decision makers to choose several alternatives, provides a small incentive for each alternative chosen, and then randomly selects one for payment. We derive two conditions under which the method may recover the choice correspondence. First, the choice correspondence is recovered when the incentive to choose several alternatives becomes small. Second, we can at least partially identify the choice correspondence by obtaining supersets and subsets for each choice set. We illustrate the method with an experiment in which subjects choose between four paid tasks. In the latter case, we can retrieve the full choice correspondence for 18% of subjects and bound it for another 40%. Using the limit result, we show that 40% of all observed choices in the experiment can be rationalized by complete, reflexive, and transitive preferences, i.e., satisfy the Weak Axiom of Revealed Preference (WARP hereafter). Weakening the classical model, incomplete preferences or just-noticeable-difference preferences do not rationalize more choice correspondences. Going beyond it, however, we show that complete, reflexive, and transitive preferences with menu-dependent choices rationalize 96% of observed choices. Having elicited choice correspondences allows us to conclude that indifference is widespread in the experiment. These results pave the way for exploring various behavioral models with a unified method.
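    The rationalizability test mentioned in this abstract can be illustrated with a short sketch based on the textbook formulation of WARP for choice correspondences (our gloss, not the authors' implementation): if x and y both belong to menus A and B, x is chosen from A, and y is chosen from B, then x must also be chosen from B. The data below are hypothetical.

```python
# Hypothetical elicited choice correspondence: menu -> set of chosen alternatives.
choice = {
    frozenset({"a", "b"}): {"a"},
    frozenset({"b", "c"}): {"b", "c"},        # indifference between b and c
    frozenset({"a", "b", "c"}): {"a"},
}

def satisfies_warp(choice):
    """WARP: if x, y are in A and B, x in C(A) and y in C(B), then x in C(B)."""
    for menu_a, chosen_a in choice.items():
        for menu_b, chosen_b in choice.items():
            common = menu_a & menu_b
            for x in chosen_a & common:
                # Some alternative in the overlap is chosen from B, yet x is not.
                if (chosen_b & common) and x not in chosen_b:
                    return False
    return True

print(satisfies_warp(choice))  # True: these choices are WARP-consistent
```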

    Learning Dynamics and Reinforcement in Stochastic Games

    The theory of Reinforcement Learning provides learning algorithms that are guaranteed to converge to optimal behavior in single-agent learning environments. While these algorithms often do not scale well to large problems without modification, a vast amount of recent research has combined them with function approximators with remarkable success in a diverse range of large-scale and complex problems. Motivated by this success in single-agent learning environments, the first half of this work aims to study convergent learning algorithms in multi-agent environments. The theory of multi-agent learning is itself a rich subject; classically, however, it has confined itself to learning in iterated games where there are no state dynamics. In contrast, this work examines learning in stochastic games, where agents play one another in a temporally extended game that has nontrivial state dynamics. We do so by first defining two classes of stochastic games: Stochastic Potential Games (SPGs) and Global Stochastic Potential Games (GSPGs). We show that both classes of games admit pure Nash equilibria, as well as further refinements of their equilibrium sets. We discuss possible applications of these games in the context of congestion and traffic routing scenarios. Finally, we define learning algorithms that (1) converge to pure Nash equilibria and (2) converge to further refinements of Nash equilibria. In the final chapter we combine a simple type of multi-agent learning - individual Q-learning - with neural networks in order to solve a large-scale vehicle routing and assignment problem. Individual Q-learning is a heuristic learning algorithm that, even in small multi-agent problems, does not provide convergence guarantees. Nonetheless, we observe good performance of this algorithm in this setting.
    PhD thesis, Mathematics, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155158/1/johnholl_1.pd
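    Individual Q-learning, mentioned in the abstract above, amounts to each agent running ordinary tabular Q-learning on its own reward signal while treating the other agents as part of the environment. The sketch below is a generic illustration of that idea; the environment interface, reward structure, and hyperparameters are placeholders, not those of the thesis.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # placeholder hyperparameters
ACTIONS = [0, 1]                         # placeholder per-agent action set

def epsilon_greedy(q, state):
    """Pick a random action with probability EPSILON, else the greedy one."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def individual_q_learning(env, n_agents, episodes=1000):
    """Each agent keeps its own Q-table and updates it from its own reward,
    ignoring the others; as noted above, this carries no convergence guarantee."""
    q_tables = [defaultdict(float) for _ in range(n_agents)]
    for _ in range(episodes):
        state, done = env.reset(), False          # assumed env interface
        while not done:
            actions = [epsilon_greedy(q_tables[i], state) for i in range(n_agents)]
            next_state, rewards, done = env.step(actions)   # joint transition
            for i in range(n_agents):
                best_next = max(q_tables[i][(next_state, a)] for a in ACTIONS)
                target = rewards[i] + GAMMA * best_next * (not done)
                q_tables[i][(state, actions[i])] += ALPHA * (
                    target - q_tables[i][(state, actions[i])]
                )
            state = next_state
    return q_tables
```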

    Stability and Robustness in Misspecified Learning Models

    We present an approach to analyze learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. Our main results provide general criteria to determine—without the need to explicitly analyze learning dynamics—when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs). The key ingredient underlying these criteria is a novel “prediction accuracy” ordering over subjective models that refines existing comparisons based on Kullback-Leibler divergence. We show that these criteria can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to identify and analyze a natural class of environments, including costly information acquisition and sequential social learning, where unlike most settings the literature has focused on so far, long-run beliefs can fail to be robust to the details of the true data generating process or agents’ perception thereof. In particular, even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can lead to extreme failures of learning
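    As background for the "prediction accuracy" ordering described above, the standard Kullback-Leibler comparison that it refines can be illustrated with a small sketch (hypothetical numbers): each subjective model is scored by the KL divergence from the true outcome distribution to the distribution it predicts, and with a fixed data-generating process Bayesian updating concentrates on the model with the smaller divergence.

```python
from math import log

def kl_divergence(p, q):
    """KL(p || q) for distributions given as dicts over the same outcomes."""
    return sum(p[x] * log(p[x] / q[x]) for x in p if p[x] > 0)

# True data-generating distribution over a binary outcome (hypothetical).
truth = {"high": 0.7, "low": 0.3}

# Outcome distributions predicted by two misspecified subjective models.
model_a = {"high": 0.6, "low": 0.4}
model_b = {"high": 0.9, "low": 0.1}

scores = {name: kl_divergence(truth, pred)
          for name, pred in [("model_a", model_a), ("model_b", model_b)]}
print(scores)                        # model_a ~ 0.022, model_b ~ 0.154
print(min(scores, key=scores.get))   # model_a is the KL-closest model
```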

    Incentive-Compatible Inference from Subjective Opinions Without Common Belief Systems

    Peer-prediction mechanisms elicit information about unverifiable or subjective states of the world. Existing mechanisms in the class are designed so that participants maximize their expected payments when reporting honestly. However, these mechanisms do not account for participants desiring influence over how reports are used. When participants want the conclusions drawn from reports to reflect their own opinion, the inference procedure must be subjected to incentive-compatibility constraints to ensure honesty. In this paper, I develop mechanisms without payments for discerning the true answer to a binary question, even in the presence of a false consensus. I first characterize all continuous, neutral, and anonymous mechanisms in this setting that can be implemented in interim-rationalizable strategies. Using this representation, I optimize across the class of mechanisms for accuracy in distinguishing the true state. Because the mechanism does not require knowledge of the distribution of agent types and is neutral between both outcomes, it can serve as a test for bias in the surveyed population.