
    Knowledge and Blameworthiness

    Blameworthiness of an agent or a coalition of agents is often defined in terms of the principle of alternative possibilities: for the coalition to be responsible for an outcome, the outcome must take place and the coalition should have had a strategy to prevent it. In this article we argue that in settings with imperfect information, not only should the coalition have had a strategy, but it also should have known that it had a strategy, and it should have known what the strategy was. The main technical result of the article is a sound and complete bimodal logic that describes the interplay between knowledge and blameworthiness in strategic games with imperfect information.
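    The distinction in the abstract can be made concrete in a toy model (all names and the one-shot game below are illustrative, not taken from the article): classical blameworthiness only requires that some preventing action existed, while the epistemic version requires one fixed action that prevents the bad outcome in every state the agent cannot distinguish from the actual one.

```python
def outcome(state, action):
    """Hypothetical game: the bad outcome occurs unless the action matches the state."""
    return "bad" if action != state else "ok"

def classically_blameworthy(actual_state, actions):
    # Principle of alternative possibilities: the outcome occurred,
    # and some available action would have prevented it.
    happened = outcome(actual_state, None) == "bad"
    could_prevent = any(outcome(actual_state, a) == "ok" for a in actions)
    return happened and could_prevent

def knowingly_blameworthy(actual_state, indistinguishable, actions):
    # Epistemic strengthening: one single action must prevent the outcome
    # in ALL states the agent considers possible, i.e. the agent knew
    # what the preventing strategy was.
    happened = outcome(actual_state, None) == "bad"
    known_strategy = any(
        all(outcome(s, a) == "ok" for s in indistinguishable) for a in actions
    )
    return happened and known_strategy

# With two indistinguishable states, a preventing action exists in each state,
# but no single action works in both: blameworthy classically, not knowingly.
print(classically_blameworthy("s1", ["s1", "s2"]))              # True
print(knowingly_blameworthy("s1", ["s1", "s2"], ["s1", "s2"]))  # False
```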

    Learning backward induction: a neural network agent approach

    This paper addresses the question of whether neural networks (NNs), a realistic cognitive model of human information processing, can learn to backward induce in a two-stage game with a unique subgame-perfect Nash equilibrium. The NNs were found to predict the Nash equilibrium approximately 70% of the time in new games. Similarly to humans, the neural network agents were also found to suffer from subgame and truncation inconsistency, supporting the contention that they are appropriate models of general learning in humans. The agents were found to behave in a boundedly rational manner as a result of the endogenous emergence of decision heuristics. In particular, a very simple heuristic, socialmax, which chooses the cell with the highest social payoff, explains their behavior approximately 60% of the time, whereas the ownmax heuristic, which simply chooses the cell with the maximum payoff for that agent, fares worse, explaining behavior roughly 38% of the time, albeit still significantly better than chance. These two heuristics were found to be ecologically valid for the backward induction problem, as they predicted the Nash equilibrium in 67% and 50% of the games respectively. Compared to various standard classification algorithms, the NNs were found to be only slightly more accurate than standard discriminant analyses. However, the latter do not model the dynamic learning process and have an ad hoc postulated functional form. In contrast, a NN agent's behavior evolves with experience and is capable of taking on any functional form according to the universal approximation theorem.
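    Backward induction in a two-stage game, and the two heuristics the abstract compares against it, can be sketched as follows (the payoff matrix is invented for illustration; the paper's actual games are not specified here). Player 1 moves first, player 2 responds, and each terminal cell carries payoffs (p1, p2).

```python
# Hypothetical two-stage game: player 1 picks L/R, player 2 replies l/r.
game = {
    "L": {"l": (3, 1), "r": (0, 0)},
    "R": {"l": (1, 2), "r": (2, 4)},
}

def backward_induce(game):
    # Stage 2: in each subgame, player 2 best-responds (maximizes own payoff).
    best_reply = {
        a1: max(subgame, key=lambda a2: subgame[a2][1])
        for a1, subgame in game.items()
    }
    # Stage 1: player 1 anticipates those replies and maximizes own payoff.
    a1_star = max(game, key=lambda a1: game[a1][best_reply[a1]][0])
    return a1_star, best_reply[a1_star]

print(backward_induce(game))  # ('L', 'l') -- the subgame-perfect equilibrium

# The two heuristics from the abstract, applied to the flattened payoff cells:
cells = {(a1, a2): game[a1][a2] for a1 in game for a2 in game[a1]}
socialmax = max(cells, key=lambda c: sum(cells[c]))  # highest joint payoff
ownmax = max(cells, key=lambda c: cells[c][0])       # highest own (player 1) payoff
print(socialmax, ownmax)  # ('R', 'r') ('L', 'l')
```

    In this example ownmax happens to coincide with the equilibrium while socialmax does not, illustrating how the heuristics can agree with backward induction in some games and diverge in others.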

    Collaborative decision making by ensemble rule based classification systems


    Equilibria with social security

    We model pay-as-you-go (PAYG) social security systems as the outcome of majority voting within a standard OLG model with production and an exogenous population growth rate. At each point in time individuals work, save, consume and invest, taking the social security policy as given. The latter consists of a tax on current wages transferred to the elderly. When they vote, individuals have to make two choices: whether they want to keep the commitment made by the previous generation by paying the elderly the promised amount of benefits, and which amount they want to be paid themselves next period. We show that when the growth rate of the population is high enough compared to the productivity of capital, there exists an equilibrium where PAYG pensions are voted into existence and maintained. PAYG systems are kept even when everybody knows that they will surely be abandoned, and that some generation will pay and not be paid back. We characterize the steady state and dynamic properties of these equilibria and study their welfare properties. Equilibria achieved by voting are typically inefficient; they may be so due to overaccumulation or, in other cases, due to underaccumulation. On the other hand, the efficient steady states turn out to be dynamically unstable, so we are presenting an unpleasant alternative for policy making.
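    The comparison between population growth and the productivity of capital driving this result can be illustrated numerically (this is the standard Aaron-type comparison, not the paper's actual equilibrium condition; the numbers are invented). A PAYG contribution is repaid by a larger next generation, so its implicit gross return is (1+n), while the same amount saved earns the market return (1+r).

```python
def payg_pension(tau, wage, n):
    # Next period there are (1+n) times as many workers, each paying tau*wage,
    # so the PAYG transfer per retiree grows with population growth n.
    return tau * wage * (1 + n)

def funded_pension(tau, wage, r):
    # Saving the same contribution instead earns the return on capital r.
    return tau * wage * (1 + r)

# High population growth relative to capital productivity: PAYG is attractive.
print(payg_pension(0.10, 100, n=0.03) > funded_pension(0.10, 100, r=0.01))  # True
# Low growth, productive capital: funded saving dominates.
print(payg_pension(0.10, 100, n=0.01) > funded_pension(0.10, 100, r=0.04))  # False
```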

    Durable-Goods Monopoly with Varying Cohorts

    Keywords: mechanism design, pricing, optimal stopping

    The Audit Logic: Policy Compliance in Distributed Systems

    We present a distributed framework where agents can share data along with usage policies. We use an expressive policy language including conditions, obligations and delegation. Our framework also supports the refinement of policies. Policies are not enforced a priori. Instead, policy compliance is checked using an a posteriori auditing approach. Policy compliance is shown by a (logical) proof that the authority can systematically check for validity. Tools for automatically checking and generating proofs are also part of the framework.
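    The a posteriori idea can be sketched in miniature (all data structures below are illustrative, not the paper's policy language or proof system): actions are logged as they happen, and an auditor later checks each logged action against the owner's policy, following delegation chains back to the original authority.

```python
policy = {"owner": {"read", "share"}}             # rights granted by the data owner
delegations = {"alice": "owner", "bob": "alice"}  # who received rights from whom

def authority_chain(agent):
    """Follow delegations back to the root authority for this data."""
    while agent in delegations:
        agent = delegations[agent]
    return agent

def audit(log):
    """A-posteriori check: return the logged actions that violate the policy."""
    return [
        (agent, action)
        for agent, action in log
        if action not in policy.get(authority_chain(agent), set())
    ]

log = [("alice", "read"), ("bob", "share"), ("bob", "delete")]
print(audit(log))  # [('bob', 'delete')]
```

    Nothing here blocks "delete" when it occurs; the violation only surfaces when the audit is run, which is the essential difference from a priori enforcement.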