
    Learning and Games

    Part of the Volume on the Ecology of Games: Connecting Youth, Games, and Learning. In this chapter, I argue that good video games recruit good learning and that a game's design is inherently connected to designing good learning for players. I start with a perspective on learning, now common in the Learning Sciences, which argues that people primarily think and learn through experiences they have had, not through abstract calculations and generalizations. People store these experiences in memory -- and human long-term memory is now viewed as nearly limitless -- and use them to run simulations in their minds to prepare for problem solving in new situations. These simulations help them form hypotheses about how to proceed in the new situation based on past experiences. The chapter also discusses the conditions experience must meet if it is to be optimal for learning and shows how good video games can deliver such optimal learning experiences. Issues covered include: identity and learning; models and model-based thinking; the control of avatars and "empathy for a complex system"; distributed intelligence and cross-functional teams for learning; motivation and ownership; emotion in learning; and situated meaning, that is, the ways in which games represent verbal meaning through images, actions, and dialogue, not just other words and definitions.

    Building Machines That Learn and Think Like People

    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
    Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar
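    As a point of reference for the "deep neural networks trained end-to-end" that the authors contrast with causal, compositional models, here is a minimal sketch (not from the paper) of a tiny network fit by gradient descent on a toy XOR task; the architecture, learning rate, and task are arbitrary illustrations.

```python
# Minimal sketch (not from the paper): a tiny network trained end-to-end by
# gradient descent on a toy pattern-recognition task (XOR). Sizes, learning
# rate, and the task itself are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of squared error through both layers
    dp = (p - y) * p * (1 - p)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    # Gradient descent update of all parameters ("end-to-end")
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]
```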

    What is Computational Intelligence and where is it going?

    What is Computational Intelligence (CI) and what are its relations with Artificial Intelligence (AI)? A brief survey of the scope of CI journals and books with "computational intelligence" in their title shows that at present it is an umbrella for three core technologies (neural, fuzzy and evolutionary), their applications, and selected fashionable pattern recognition methods. At present CI has no comprehensive foundations and is more a bag of tricks than a solid branch of science. A change of focus from methods to challenging problems is advocated, with CI defined as a part of computer and engineering sciences devoted to the solution of non-algorithmizable problems. In this view AI is a part of CI focused on problems related to higher cognitive functions, while the rest of the CI community works on problems related to perception and control, or lower cognitive functions. Grand challenges on both sides of this spectrum are addressed.

    Cognitive abilities and behavior in strategic-form games

    This paper investigates the relation between cognitive abilities and behavior in strategic-form games with the help of a novel experiment. The design allows us first to measure the cognitive abilities of subjects without confound and then to evaluate their impact on behavior in strategic-form games. We find that subjects with better cognitive abilities show more sophisticated behavior and make better use of information on the cognitive abilities and preferences of opponents. Although we do not find evidence for Nash behavior, observed behavior is remarkably sophisticated, as almost 80% of subjects behave near-optimally and outperform Nash behavior with respect to expected payoffs.
    Keywords: cognitive ability; behavior; strategic-form games; experiments; preferences; sophistication
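    To make the payoff comparison concrete, the sketch below (not from the paper) computes expected payoffs and the best response in a small strategic-form game; the 3x3 payoff matrix and the opponent's empirical mixed strategy are hypothetical illustrations, not the games used in the experiment.

```python
# Minimal sketch (not from the paper): expected payoffs in a strategic-form game.
import numpy as np

# Row player's payoff matrix: payoffs[i, j] = payoff when row plays i, column plays j.
payoffs = np.array([
    [3.0, 0.0, 5.0],
    [2.0, 2.0, 2.0],
    [0.0, 4.0, 1.0],
])

# Hypothetical empirical distribution of the opponent's observed choices.
opponent_mix = np.array([0.5, 0.3, 0.2])

# Expected payoff of each row strategy against that belief, and the best response.
expected = payoffs @ opponent_mix
best_response = int(np.argmax(expected))

print("Expected payoffs per strategy:", expected)
print("Best response to the empirical mix:", best_response)
```

    Comparing the expected payoff of this empirical best response with the expected payoff of the strategy a Nash equilibrium would prescribe, against the same empirical mix, is the kind of comparison behind the "outperform Nash behavior" finding.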

    Agents for educational games and simulations

    This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulation (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.

    Neuroeconomics: How Neuroscience Can Inform Economics

    Neuroeconomics uses knowledge about brain mechanisms to inform economic analysis and roots economics in biology. It opens up the "black box" of the brain, much as organizational economics adds detail to the theory of the firm. Neuroscientists use many tools, including brain imaging, behavior of patients with localized brain lesions, animal behavior, and recording of single-neuron activity. The key insight for economics is that the brain is composed of multiple interacting systems. Controlled systems ("executive function") interrupt automatic ones. Emotions and cognition both guide decisions. Just as prices and allocations emerge from the interaction of two processes, supply and demand, individual decisions can be modeled as the result of two (or more) processes interacting. Indeed, "dual-process" models of this sort are better rooted in neuroscientific fact, and more empirically accurate, than single-process models (such as utility maximization). We discuss how brain evidence complicates standard assumptions about basic preferences, to include homeostasis and other kinds of state dependence. We also discuss applications to intertemporal choice, risk and decision making, and game theory. Intertemporal choice appears to be domain-specific and heavily influenced by emotion. The simplified β-δ model of quasi-hyperbolic discounting is supported by activation in distinct regions of limbic and cortical systems. In risky decision making, imaging data tentatively support the idea that gains and losses are coded separately, and that ambiguity is distinct from risk, because it activates fear and discomfort regions. (Ironically, lesion patients who do not receive fear signals in prefrontal cortex are "rationally" neutral toward ambiguity.) Game theory studies show the effect of brain regions implicated in "theory of mind", correlates of strategic skill, and effects of hormones and other biological variables. Finally, economics can contribute to neuroscience because simple rational-choice models are useful for understanding highly evolved behavior like motor actions that earn rewards, as well as Bayesian integration of sensorimotor information.
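    For reference, the quasi-hyperbolic (β-δ) discounting model mentioned above is conventionally written as below; the notation (u for instantaneous utility, c_t for consumption in period t) is the standard one from the discounting literature, not taken from this article.

```latex
% Quasi-hyperbolic (beta-delta) discounted utility from the perspective of period t:
% beta < 1 captures present bias; beta = 1 recovers standard exponential discounting.
U_t = u(c_t) + \beta \sum_{k=1}^{\infty} \delta^{k}\, u(c_{t+k}),
\qquad 0 < \beta \le 1, \quad 0 < \delta < 1.
```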

    Post-Turing Methodology: Breaking the Wall on the Way to Artificial General Intelligence

    This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to “suitable branches of thought” reflected the engineering level of his time. In fact, the Turing “imitation game” employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone, Turing unknowingly constructed “the wall” that excludes any possibility of transition from a complex observable phenomenon to an abstract image or concept. It is, therefore, sensible to factor in new requirements for AI (artificial intelligence) maturity assessment when approaching the Turing test. Such AI must support all forms of communication with a human being, and it should be able to comprehend abstract images and specify concepts as well as participate in social practices.