
    Evolutionary Tournament-Based Comparison of Learning and Non-Learning Algorithms for Iterated Games

    Evolutionary tournaments have been used effectively as a tool for comparing game-playing algorithms. For instance, in the late 1970s, Axelrod organized tournaments to compare algorithms for playing the iterated prisoner's dilemma (PD) game. These tournaments capture the dynamics in a population of agents that periodically adopt relatively successful algorithms in the environment. While these tournaments have provided us with a better understanding of the relative merits of algorithms for iterated PD, our understanding is less clear about algorithms for playing iterated versions of arbitrary single-stage games in an environment of heterogeneous agents. While the Nash equilibrium solution concept recommends that rational players use equilibrium strategies in general-sum games, learning algorithms like fictitious play may be preferred for playing against sub-rational players. In this paper, we study the relative performance of learning and non-learning algorithms in an evolutionary tournament where agents periodically adopt relatively successful algorithms in the population. The tournament is played over a testbed composed of all structurally distinct 2×2 conflicted games with ordinal payoffs: a baseline, neutral testbed for comparing algorithms. Before analyzing results from the evolutionary tournament, we discuss the testbed, our choice of representative learning and non-learning algorithms, and the relative rankings of these algorithms in a round-robin competition. The results from the tournament highlight the advantage of learning algorithms over players using static equilibrium strategies for repeated plays of arbitrary single-stage games. These results are likely to be of more benefit than static analysis of equilibrium strategies when choosing decision procedures for an open, adapting agent society consisting of a variety of competitors.
    Keywords: Repeated Games, Evolution, Simulation
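
    The contrast between static equilibrium play and belief-based learning can be made concrete with a small sketch. Below is a minimal illustration of fictitious play in a single 2×2 game, where each agent best-responds to the empirical frequency of its opponent's past actions; the payoff matrices and the prisoner's-dilemma example are illustrative stand-ins, not the paper's testbed of all structurally distinct conflicted games.

```python
import numpy as np

def fictitious_play(payoff_row, payoff_col, rounds=1000):
    """Two fictitious-play agents in a 2x2 game: each round, every agent
    best-responds to the empirical mix of the opponent's past actions."""
    row_beliefs = np.ones(2)   # row player's counts of column player's actions
    col_beliefs = np.ones(2)   # column player's counts of row player's actions
    history = []
    for _ in range(rounds):
        a_row = int(np.argmax(payoff_row @ (row_beliefs / row_beliefs.sum())))
        a_col = int(np.argmax((col_beliefs / col_beliefs.sum()) @ payoff_col))
        row_beliefs[a_col] += 1
        col_beliefs[a_row] += 1
        history.append((a_row, a_col))
    return history

# Illustrative ordinal payoffs for a prisoner's-dilemma-like game
# (action 0 = cooperate, 1 = defect); not the paper's full testbed.
R = np.array([[3, 1], [4, 2]])   # row player's payoffs
C = np.array([[3, 4], [1, 2]])   # column player's payoffs
print("last joint actions:", fictitious_play(R, C)[-3:])
```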

    Synergistic Team Composition

    Effective teams are crucial for organisations, especially in environments that require teams to be constantly created and dismantled, such as software development, scientific experiments, crowd-sourcing, or the classroom. Key factors influencing team performance are the competences and personalities of team members. Hence, we present a computational model to compose proficient and congenial teams based on individuals' personalities and their competences to perform tasks of different nature. To this end, we extend Wilde's post-Jungian method for team composition, which solely employs individuals' personalities. The aim of this study is to create a model to partition agents into teams that are balanced in competences, personality and gender. Finally, we present some preliminary empirical results obtained when analysing student performance. Results show the benefits of a more informed team composition that exploits individuals' competences in addition to information about their personalities.
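
    As a rough illustration of competence-aware team composition, the greedy sketch below splits agents into teams while balancing the teams' summed competences. The Agent fields, the single personality score, and the balancing rule are assumptions for illustration only; they do not reproduce the extended post-Jungian method or its personality and gender balancing.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Agent:
    name: str
    competence: float   # task-relevant skill score in [0, 1] (illustrative)
    personality: float  # single personality score (not used by this balancer)

def compose_teams(agents: List[Agent], n_teams: int) -> List[List[Agent]]:
    """Greedy balancing: give the strongest remaining agent to the team
    whose summed competence is currently lowest."""
    teams: List[List[Agent]] = [[] for _ in range(n_teams)]
    for agent in sorted(agents, key=lambda a: a.competence, reverse=True):
        weakest = min(teams, key=lambda t: sum(a.competence for a in t))
        weakest.append(agent)
    return teams

people = [Agent("a1", 0.9, 0.4), Agent("a2", 0.7, 0.8),
          Agent("a3", 0.5, 0.2), Agent("a4", 0.3, 0.9)]
for i, team in enumerate(compose_teams(people, 2)):
    print(f"team {i}:", [a.name for a in team])
```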

    Welfare, Labor Supply and Heterogeneous Preferences: Evidence for Europe and the US

    Following the report of the Stiglitz Commission, measuring and comparing well-being across countries has gained renewed interest. Yet, analyses that go beyond income and incorporate non-market dimensions of welfare most often rely on the assumption of identical preferences to avoid the difficulties related to interpersonal comparisons. In this paper, we suggest an international comparison based on individual welfare rankings that fully retain preference heterogeneity. Focusing on the consumption-leisure trade-off, we estimate discrete choice labor supply models using harmonized microdata for 11 European countries and the US. We retrieve preference heterogeneity within and across countries and analyze several welfare criteria which take into account that differences in income are partly due to differences in tastes. The resulting welfare rankings clearly depend on how the alternative metrics treat preference heterogeneity normatively. We show that these differences can indeed be explained by estimated preference heterogeneity across countries rather than by demographic composition.
    Keywords: welfare measures, preference heterogeneity, labor supply, Beyond GDP
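
    The consumption-leisure trade-off in a discrete choice labor supply model can be sketched with a conditional logit over a small set of hours alternatives. The utility specification, the coefficients, and the budget numbers below are illustrative assumptions, not the estimated models of the paper.

```python
import numpy as np

def choice_probabilities(consumption, leisure, beta_c, beta_l):
    """Conditional logit over discrete hours alternatives, with
    U_j = beta_c * consumption_j + beta_l * leisure_j + e_j and
    type-I extreme value errors e_j."""
    utility = beta_c * np.asarray(consumption) + beta_l * np.asarray(leisure)
    expu = np.exp(utility - utility.max())   # subtract max for numerical stability
    return expu / expu.sum()

# Illustrative alternatives: 0, 20 or 40 hours of weekly work.
hours = np.array([0.0, 20.0, 40.0])
wage, nonlabor_income = 15.0, 100.0
consumption = nonlabor_income + wage * hours   # weekly consumption per alternative
leisure = 80.0 - hours                         # weekly leisure per alternative
probs = choice_probabilities(consumption / 100.0, leisure / 10.0,
                             beta_c=0.8, beta_l=0.5)
print({int(h): round(float(p), 3) for h, p in zip(hours, probs)})
```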

    Contextual Centrality: Going Beyond Network Structures

    Centrality is a fundamental network property which ranks nodes by their structural importance. However, structural importance may not suffice to predict successful diffusions in a wide range of applications, such as word-of-mouth marketing and political campaigns. In particular, nodes with high structural importance may contribute negatively to the objective of the diffusion. To address this problem, we propose contextual centrality, which integrates structural positions, the diffusion process, and, most importantly, nodal contributions to the objective of the diffusion. We perform an empirical analysis of the adoption of microfinance in Indian villages and weather insurance in Chinese villages. Results show that the contextual centrality of the first-informed individuals has higher predictive power for the eventual adoption outcomes than other standard centrality measures. Interestingly, when the product of the diffusion rate p and the largest eigenvalue λ1 is larger than one and the diffusion period is long, contextual centrality scales linearly with eigenvector centrality. This approximation reveals that contextual centrality identifies scenarios where a higher diffusion rate of individuals may negatively influence the cascade payoff. Further simulations on synthetic and real-world networks show that contextual centrality has the advantage of selecting an individual whose local neighborhood generates a high cascade payoff when pλ1 < 1. Under this condition, stronger homophily leads to higher cascade payoff. Our results suggest that contextual centrality captures more complicated dynamics on networks and has significant implications for applications such as information diffusion, viral marketing, and political campaigns.
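
    A simplified walk-sum proxy helps illustrate how a centrality measure can weight structural reach by nodal contributions to the cascade payoff. The formula, the synthetic network, and the payoff vector below are illustrative assumptions and not necessarily the paper's exact definition of contextual centrality.

```python
import numpy as np
import networkx as nx

def walk_sum_score(G, payoff, p, k_max=20):
    """Walk-sum proxy: node i's score aggregates the payoffs of nodes
    reachable via walks of length <= k_max, discounted by the diffusion
    rate p per step (assumed formula, for illustration only)."""
    A = nx.to_numpy_array(G)
    y = np.asarray(payoff, dtype=float)
    score = np.zeros_like(y)
    walk = np.eye(len(y))              # (p * A)^0
    for _ in range(k_max + 1):
        score += walk @ y
        walk = p * A @ walk
    return score

G = nx.erdos_renyi_graph(50, 0.1, seed=1)
payoff = np.random.default_rng(1).normal(size=50)   # nodal contributions, +/- sign
lam1 = np.linalg.eigvalsh(nx.to_numpy_array(G)).max()
p = 0.5 / lam1                                      # keep p * lambda_1 < 1
scores = walk_sum_score(G, payoff, p)
print("best first-informed node:", int(np.argmax(scores)))
```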

    A Distance-Based Test of Association Between Paired Heterogeneous Genomic Data

    Due to rapid technological advances, a wide range of different measurements can be obtained from a given biological sample, including single nucleotide polymorphisms, copy number variation, gene expression levels, DNA methylation and proteomic profiles. Each of these distinct measurements provides the means to characterize a certain aspect of biological diversity, and a fundamental problem of broad interest concerns the discovery of shared patterns of variation across different data types. Such data types are heterogeneous in the sense that they represent measurements taken at very different scales or described by very different data structures. We propose a distance-based statistical test, the generalized RV (GRV) test, to assess whether there is a common and non-random pattern of variability between paired biological measurements obtained from the same random sample. The measurements enter the test through distance measures which can be chosen to capture particular aspects of the data. An approximate null distribution is proposed to compute p-values in closed form and without the need to perform costly Monte Carlo permutation procedures. Compared to the classical Mantel test for association between distance matrices, the GRV test has been found to be more powerful in a number of simulation settings. We also report on an application of the GRV test to detect biological pathways in which genetic variability is associated with variation in gene expression levels in ovarian cancer samples, and present results obtained from two independent cohorts.
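
    The flavor of a distance-based association statistic can be sketched by double-centering (Gower centering) two distance matrices and correlating the resulting inner-product matrices, with a permutation p-value standing in for the paper's closed-form approximate null. The Euclidean distances and the simulated data below are illustrative assumptions, not the GRV test as defined in the paper.

```python
import numpy as np

def gower_center(D):
    """Turn a distance matrix into a doubly-centred inner-product matrix."""
    S = -0.5 * np.asarray(D, dtype=float) ** 2
    row = S.mean(axis=1, keepdims=True)
    return S - row - row.T + S.mean()

def rv_statistic(D1, D2):
    """RV-type association between two distance matrices on the same samples."""
    S1, S2 = gower_center(D1), gower_center(D2)
    return np.sum(S1 * S2) / np.sqrt(np.sum(S1 * S1) * np.sum(S2 * S2))

def permutation_pvalue(D1, D2, n_perm=999, seed=0):
    """Monte Carlo permutation p-value; the GRV test replaces this step
    with an approximate null distribution evaluated in closed form."""
    rng = np.random.default_rng(seed)
    obs, n = rv_statistic(D1, D2), D1.shape[0]
    exceed = sum(rv_statistic(D1, D2[np.ix_(perm, perm)]) >= obs
                 for perm in (rng.permutation(n) for _ in range(n_perm)))
    return (exceed + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
X, Y = rng.normal(size=(30, 5)), rng.normal(size=(30, 8))   # two paired data types
Dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Dy = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
print(rv_statistic(Dx, Dy), permutation_pvalue(Dx, Dy, n_perm=199))
```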

    Targeting Conservation Investments in Heterogeneous Landscapes: A distance function approach and application to watershed management

    To achieve a given level of an environmental amenity at least cost, decision-makers must integrate information about spatially variable biophysical and economic conditions. Although the biophysical attributes that contribute to supplying an environmental amenity are often known, the way in which these attributes interact to produce the amenity is often unknown. Given the difficulty in converting multiple attributes into a unidimensional physical measure of an environmental amenity (e.g., habitat quality), analyses in the academic literature tend to use a single biophysical attribute as a proxy for the environmental amenity (e.g., species richness). A narrow focus on a single attribute, however, fails to consider the full range of biophysical attributes that are critical to the supply of an environmental amenity. Drawing on the production efficiency literature, we introduce an alternative conservation targeting approach that relies on distance functions to cost-efficiently allocate conservation funds across a spatially heterogeneous landscape. An approach based on distance functions has the advantage of not requiring a parametric specification of the amenity function (or cost function), but rather only requiring that the decision-maker identify important biophysical and economic attributes. We apply the distance-function approach empirically to an increasingly common, but little studied, conservation initiative: conservation contracting for water quality objectives. The contract portfolios derived from the distance-function application have many desirable properties, including intuitive appeal, robust performance across plausible parametric amenity measures, and the generation of ranking measures that can be easily used by field practitioners in complex decision-making environments that cannot be completely modeled.
    Working Paper # 2002-01
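
    One simple nonparametric way to operationalize a distance-to-frontier ranking is a free-disposal-hull style output score, sketched below: a parcel is penalized if a no-more-expensive peer delivers proportionally more of every biophysical attribute. The scoring rule and the toy data are illustrative assumptions, not the paper's distance-function specification.

```python
import numpy as np

def fdh_output_score(attributes, costs):
    """For each parcel i, the largest factor by which some peer j with
    cost_j <= cost_i dominates i in every biophysical attribute.
    Scores are >= 1; a score of 1 means i lies on the observed frontier."""
    A = np.asarray(attributes, dtype=float)   # parcels x attributes
    c = np.asarray(costs, dtype=float)
    scores = np.ones(len(c))
    for i in range(len(c)):
        peers = A[c <= c[i]]                  # parcels no more expensive than i
        scores[i] = np.min(peers / A[i], axis=1).max()
    return scores

# Illustrative data: four parcels, two biophysical attributes, contract costs.
attributes = [[10.0, 3.0], [8.0, 4.0], [9.0, 2.5], [6.0, 5.0]]
costs = [100.0, 90.0, 120.0, 80.0]
scores = fdh_output_score(attributes, costs)
print("fund parcels in this order:", np.argsort(scores).tolist())
```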

    Dominance relations when both quantity and quality matter, and applications to the comparison of US research universities and worldwide top departments in economics

    In this article, we propose an extension of the concept of stochastic dominance, used intensively in economics, to the comparison of composite outcomes for which both quality and quantity matter. Our theory also allows us to require unanimity of judgement among new classes of functions. We apply this theory to the ranking of US research universities, thereby providing a new tool to scientometricians (and the academic communities) who typically aim to compare research institutions taking into account both the volume of publications and the impact of these articles. Another application is provided for comparing and ranking academic departments when one takes into account both the size of the department and the prestige of each member.
    Keywords: Ranking, dominance relations, citations
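
    On the quality dimension alone, the classical dominance check can be sketched by comparing empirical survival functions of per-article citation counts; the paper's contribution is the joint quantity-and-quality extension, which this illustration does not reproduce. The citation data below are invented for the example.

```python
import numpy as np

def first_order_dominates(x, y):
    """True if the empirical distribution of x first-order stochastically
    dominates that of y: P(X > t) >= P(Y > t) at every threshold t, with
    strict inequality somewhere."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    grid = np.unique(np.concatenate([x, y]))
    surv_x = np.array([(x > t).mean() for t in grid])
    surv_y = np.array([(y > t).mean() for t in grid])
    return bool(np.all(surv_x >= surv_y) and np.any(surv_x > surv_y))

# Illustrative per-article citation counts for two departments.
dept_a = [0, 2, 5, 8, 12, 20, 35]
dept_b = [0, 1, 3, 4, 6, 9, 15]
print(first_order_dominates(dept_a, dept_b))   # True for these toy data
```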

    Interbank markets and multiplex networks: centrality measures and statistical null models

    The interbank market is considered one of the most important channels of contagion. Its network representation, where banks and claims/obligations are represented by nodes and links (respectively), has received a lot of attention in the recent theoretical and empirical literature for assessing systemic risk and identifying systemically important financial institutions. Different types of links, for example in terms of maturity and collateralization of the claim/obligation, can be established between financial institutions. Therefore, a natural representation of the interbank structure which takes more features of the market into account is a multiplex, where each layer is associated with a type of link. In this paper we review the empirical structure of the multiplex and the theoretical consequences of this representation. We also investigate the betweenness and eigenvector centrality of a bank in the network, comparing its centrality properties across different layers and with Maximum Entropy null models.
    Comment: To appear in the book "Interconnected Networks", A. Garas and F. Schweitzer (eds.), Springer Complexity Series
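
    Comparing a bank's centrality across layers can be sketched with networkx by computing eigenvector and betweenness centrality on each layer separately. The two-layer graph below is synthetic and merely stands in for the empirical maturity/collateralization layers; it is not the paper's data or its null models.

```python
import networkx as nx

# Synthetic two-layer multiplex on the same set of banks, one layer per
# link type (e.g. short- vs long-maturity claims); not empirical data.
layers = {
    "short_term": nx.gnp_random_graph(8, 0.4, seed=1),
    "long_term": nx.gnp_random_graph(8, 0.3, seed=2),
}

for name, layer in layers.items():
    eig = nx.eigenvector_centrality_numpy(layer)
    btw = nx.betweenness_centrality(layer)
    print(f"{name}: top bank by eigenvector = {max(eig, key=eig.get)}, "
          f"by betweenness = {max(btw, key=btw.get)}")
```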