    Attainable and Relevant Moral Exemplars Are More Effective than Extraordinary Exemplars in Promoting Voluntary Service Engagement

    The present study aimed to develop effective moral educational interventions, grounded in social psychology, that use stories of moral exemplars. We tested whether motivation to engage in voluntary service as a form of moral behavior was better promoted by attainable and relevant exemplars or by unattainable and irrelevant ones. Experiment 1, conducted in a lab, showed that stories of attainable exemplars promoted engagement in voluntary service among undergraduate students more effectively than stories of unattainable exemplars or non-moral stories. Experiment 2, a classroom-level quasi-experiment in a middle school, demonstrated that peer exemplars, whom students perceive as attainable and relevant, promoted service engagement better than historical figures in moral education classes.

    Penalty-regulated dynamics and robust learning procedures in games

    Starting from a heuristic learning scheme for N-person games, we derive a new class of continuous-time learning dynamics consisting of a replicator-like drift adjusted by a penalty term that renders the boundary of the game's strategy space repelling. These penalty-regulated dynamics are equivalent to players keeping an exponentially discounted aggregate of their ongoing payoffs and then using a smooth best response to pick an action based on these performance scores. Owing to this inherent duality, the proposed dynamics satisfy a variant of the folk theorem of evolutionary game theory and converge to (arbitrarily precise) approximations of Nash equilibria in potential games. Motivated by applications to traffic engineering, we exploit this duality further to design a discrete-time, payoff-based learning algorithm which retains these convergence properties and only requires players to observe their in-game payoffs; moreover, the algorithm remains robust in the presence of stochastic perturbations and observation errors, and it does not require any synchronization between players.
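    The score-then-smooth-best-response structure described above can be sketched in a few lines. The sketch below is illustrative only: it assumes a logit choice rule for the smooth best response, and the names (discount, temperature, payoff_fn) are placeholders; the paper's exact update and its convergence guarantees are not reproduced here.

        import numpy as np

        def logit_choice(scores, temperature, rng):
            """Smooth best response: sample an action from a softmax over the scores."""
            z = scores / temperature
            p = np.exp(z - z.max())
            p /= p.sum()
            return rng.choice(len(scores), p=p)

        def payoff_based_learning(payoff_fn, n_players, n_actions, steps=10_000,
                                  discount=0.99, temperature=0.1, seed=0):
            """Each player keeps an exponentially discounted aggregate of its own
            realized payoffs per action and plays a logit best response to it."""
            rng = np.random.default_rng(seed)
            scores = np.zeros((n_players, n_actions))
            for _ in range(steps):
                actions = [logit_choice(scores[i], temperature, rng)
                           for i in range(n_players)]
                payoffs = payoff_fn(actions)             # player i observes only payoffs[i]
                for i in range(n_players):
                    scores[i] *= discount                # exponential discounting of the past
                    scores[i, actions[i]] += payoffs[i]  # reinforce the action actually played
            return scores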

    Perturbed Learning Automata in Potential Games

    This paper presents a reinforcement learning algorithm and provides conditions for global convergence to Nash equilibria. For several reinforcement learning schemes, including the ones proposed here, ruling out convergence to action profiles that are not Nash equilibria may not be trivial unless the step-size sequence is tailored to the specifics of the game. In this paper, we sidestep these issues by introducing a new class of reinforcement learning schemes in which the strategy of each agent is perturbed by a state-dependent perturbation function. Contrary to prior work on equilibrium selection in games, where perturbation functions depend on the global state, the perturbation function here is assumed to be local, i.e., it depends only on the strategy of each agent. We provide conditions under which the strategies of the agents converge almost surely to an arbitrarily small neighborhood of the set of Nash equilibria. We further specialize the results to a class of potential games.
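    As a rough illustration of the local, strategy-dependent perturbation idea, the sketch below perturbs an agent's mixed strategy toward the uniform distribution by an amount that depends only on that strategy, and then applies a standard linear reward-inaction style update. The particular perturbation function, the update rule, and all names are illustrative assumptions, not the paper's scheme.

        import numpy as np

        def perturbed(x, lam):
            """Mix a mixed strategy x with the uniform distribution; lam may
            depend on x itself, so the perturbation is local to the agent."""
            return (1.0 - lam) * x + lam / len(x)

        def automaton_step(x, payoff_fn, step_size=0.01, rng=None):
            """One update of a toy perturbed learning automaton: sample an action
            from the perturbed strategy, observe a payoff in [0, 1], and move the
            strategy toward the action that was played."""
            rng = rng or np.random.default_rng()
            lam = 0.05 * (1.0 - x.max())        # illustrative strategy-dependent perturbation
            sigma = perturbed(x, lam)
            a = rng.choice(len(x), p=sigma)
            u = payoff_fn(a)                    # payoff assumed to lie in [0, 1]
            e = np.zeros(len(x)); e[a] = 1.0
            return x + step_size * u * (e - x)  # linear reward-inaction style step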

    Central limit theorems for a hypergeometric randomly reinforced urn

    We consider a variant of the randomly reinforced urn in which several balls can be drawn simultaneously and balls of different colors can be added simultaneously. More precisely, at each time-step, the conditional distribution of the number of extracted balls of a given color, given the past, is assumed to be hypergeometric. We prove central limit theorems in the sense of stable convergence and of almost sure conditional convergence, which are stronger than convergence in distribution. The results provide asymptotic confidence intervals for the limit proportion, whose distribution is generally unknown. Moreover, we also consider the case of several urns subject to common random factors.
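    The sampling step described above (a hypergeometric number of extracted balls of each color, followed by reinforcement) can be simulated directly. The toy two-color simulation below uses an illustrative reinforcement rule, adding a fixed number of balls per drawn ball of each color; the paper's reinforcement scheme and its limit theorems are not reproduced here.

        import numpy as np

        def simulate_urn(red=5, blue=5, draw_size=3, reinforcement=2,
                         steps=10_000, seed=0):
            """Two-color urn: at each step the number of red balls among the
            draw_size extracted balls is hypergeometric given the current
            composition; each color is then reinforced in proportion to the draws."""
            rng = np.random.default_rng(seed)
            proportions = np.empty(steps)
            for t in range(steps):
                k = rng.hypergeometric(red, blue, draw_size)   # red balls in the sample
                red += reinforcement * k
                blue += reinforcement * (draw_size - k)
                proportions[t] = red / (red + blue)
            return proportions   # the proportion of red settles toward a random limit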