An Inductive Method of Measuring Students' Cognitive and Affective Processes via Self-Reports in Digital Learning Environments
Student affect can play a profoundly important role in students' post-school lives. Understanding students' affective states within online learning environments in particular has become an important matter of research, as digital tutoring systems have the potential to intervene at the moment that students are struggling and becoming frustrated, bored or disengaged. However, despite the importance of assessing students' affective states, there is no clear consensus about what emotions are most important to assess, nor how these emotions can be best measured.
This dissertation investigates students' self-reports of their emotions and causal attributions of those emotions collected while they are solving math problems within a mathematics tutoring system. These self-reports are collected in two conditions: through limited-choice Likert response and through open-response text boxes. The conditions are combined with students' cognitive attributions to describe epistemic (neither purely affective nor purely cognitive) emotions in order to explain the relationship between observable student behaviors in the MathSpring.org tutoring system and student affect. These factors include beliefs, expectations, motivations, and perceptions of ability and control. A special emphasis of this dissertation is on analyzing the role of causal attributions for the events and appraisals of the learning environment, as possible causes of student behaviors, performance, and affect.
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models.

Comment: In press at Behavioral and Brain Sciences. Open call for commentary
proposals (until Nov. 22, 2016).
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar
Towards a framework for investigating tangible environments for learning
External representations have been shown to play a key role in mediating cognition. Tangible environments offer the opportunity for novel representational formats and combinations, potentially increasing representational power for supporting learning. However, we currently know little about the specific learning benefits of tangible environments, and have no established framework within which to analyse the ways that external representations work in tangible environments to support learning. Taking external representation as the central focus, this paper proposes a framework for investigating the effect of tangible technologies on interaction and cognition. Key artefact-action-representation relationships are identified, and classified to form a structure for investigating the differential cognitive effects of these features. An example scenario from our current research is presented to illustrate how the framework can be used as a method for investigating the effectiveness of differential designs for supporting science learning.
Talking about routines in the field: the emergence of organizational capabilities in a new cellular phone network company
No abstract available.
The propositional nature of human associative learning
The past 50 years have seen an accumulation of evidence suggesting that associative learning depends on high-level cognitive processes that give rise to propositional knowledge. Yet, many learning theorists maintain a belief in a learning mechanism in which links between mental representations are formed automatically. We characterize and highlight the differences between the propositional and link approaches, and review the relevant empirical evidence. We conclude that learning is the consequence of propositional reasoning processes that cooperate with the unconscious processes involved in memory retrieval and perception. We argue that this new conceptual framework allows many of the important recent advances in associative learning research to be retained, but recast in a model that provides a firmer foundation for both immediate application and future research.
The Mode of Computing
The Turing Machine is the paradigmatic case of computing machines, but there
are others, such as Artificial Neural Networks, Table Computing,
Relational-Indeterminate Computing and diverse forms of analogical computing,
each of which is based on a particular underlying intuition of the phenomenon of
computing. This variety can be captured in terms of system levels,
re-interpreting and generalizing Newell's hierarchy, which includes the
knowledge level at the top and the symbol level immediately below it. In this
re-interpretation the knowledge level consists of human knowledge and the
symbol level is generalized into a new level that here is called The Mode of
Computing. Natural computing performed by the brains of humans and non-human
animals with a developed enough neural system should be understood in terms of
a hierarchy of system levels too. By analogy from standard computing machinery
there must be a system level above the neural circuitry levels and directly
below the knowledge level that is named here The mode of Natural Computing. A
central question for Cognition is the characterization of this mode. The Mode
of Computing provides a novel perspective on the phenomena of computing,
interpreting, the representational and non-representational views of cognition,
and consciousness.

Comment: 35 pages, 8 figures
Reframing the L2 learning experience as narrative reconstructions of classroom learning
In this study we investigate the situated and dynamic nature of the L2 learning experience through a newly-purposed instrument called the Language Learning Story Interview, adapted from McAdams' life story interview (2007). Using critical case sampling, data were collected from an equal number of learners of various L2s (e.g., Arabic, English, Mandarin, Spanish) and analyzed using qualitative comparative analysis (Rihoux & Ragin, 2009). Through our data analysis, we demonstrate how language learners construct overarching narratives of the L2 learning experience and what the characteristic features and components that make up these narratives are. Our results provide evidence for prototypical nuclear scenes (McAdams et al., 2004) as well as core specifications and parameters of learners' narrative accounts of the L2 learning experience. We discuss how these shape motivation and language learning behavior.
Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future
Clark offers a powerful description of the brain as a prediction machine, which offers progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).