    Beyond foraging: behavioral science and the future of institutional economics

    Institutions affect economic outcomes, but variation in them cannot be directly linked to environmental factors such as geography, climate, or available technology. Game-theoretic approaches, based as they typically are on foraging-only assumptions, do not provide an adequate foundation for understanding the intervening role of politics and ideology; nor does the view that culture and institutions are entirely socially constructed. Understanding what institutions are and how they influence behavior requires an approach that is in part biological, focusing on cognitive and behavioral adaptations for social interaction favored in the past by group selection. These adaptations, along with their effects on canalizing social learning, help to explain uniformities in political and social order, and are the bedrock upon which cultural and institutional variability is built.

    Are there new models of computation? Reply to Wegner and Eberbach

    Wegner and Eberbach [Weg04b] have argued that there are fundamental limitations to Turing machines as a foundation of computability, and that these can be overcome by so-called super-Turing models such as interaction machines, the π-calculus, and the $-calculus. In this paper we contest Wegner and Eberbach's claims.

    Society-in-the-Loop: Programming the Algorithmic Social Contract

    Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug, and maintain an algorithmic social contract: a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems, and for monitoring compliance with the agreement. In short, 'SITL = HITL + Social Contract.'
    Comment: (in press), Ethics of Information Technology, 201