
    Explicit learning in ACT-R

    A popular distinction in the learning literature is that between implicit and explicit learning. Although many studies elaborate on the nature of implicit learning, little attention is paid to explicit learning. The unintentional aspect of implicit learning corresponds well to the mechanistic view of learning employed in architectures of cognition. But how can deliberate, intentional, explicit learning be accounted for? This chapter argues that explicit learning can be explained by strategies that exploit implicit learning mechanisms. This idea is explored and modelled using the ACT-R theory (Anderson, 1993). An explicit strategy for learning facts in ACT-R’s declarative memory is rehearsal, a strategy that uses ACT-R’s activation learning mechanisms to gain deliberate control over what is learned. In the same spirit, strategies for explicit procedural learning are proposed. Procedural learning in ACT-R involves generalisation of examples; explicit learning rules can create and manipulate these examples. An example of these explicit rules is discussed. These rules are general enough to model the learning of three different tasks. Furthermore, the last of these models can explain the difference between adults and children in the discrimination-shift task.
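    As a rough illustration of how rehearsal can exploit an implicit mechanism (a sketch of ours, not the chapter’s actual model), the code below uses ACT-R’s standard base-level learning equation, B = ln(Σ_j t_j^(−d)): every presentation of a fact, including a deliberate rehearsal, adds a term to the sum, so rehearsing raises activation and hence the odds of later retrieval. The rehearsal times and the conventional decay d = 0.5 are illustrative assumptions.

        import math

        def base_level_activation(presentation_times, now, decay=0.5):
            # ACT-R base-level learning: B = ln(sum_j (now - t_j)^-decay),
            # summing over all past presentations of the fact.
            return math.log(sum((now - t) ** -decay
                                for t in presentation_times if now > t))

        study_only     = [0.0]                      # one initial study
        with_rehearsal = [0.0, 20.0, 60.0, 120.0]   # plus hypothetical rehearsals (seconds)

        now = 300.0
        print(base_level_activation(study_only, now))      # lower activation
        print(base_level_activation(with_rehearsal, now))  # higher activation: easier retrieval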

    Explicit and Implicit Processes in Human Aversive Conditioning

    The ability to adapt to a changing environment is central to an organism’s success. The process of associating two stimuli (as in associative conditioning) requires very little in the way of neural machinery. In fact, organisms with only a few hundred neurons show conditioning that is specific to an associated cue. This type of learning is commonly referred to as implicit learning: it can take place in the absence of the subject’s ability to describe it. One example of learning that is thought to be implicit is delay conditioning, which consists of a single cue (a tone, for example) that starts before, and then overlaps with, an outcome (such as a pain stimulus). In addition to associating sensory cues, humans routinely link abstract concepts with an outcome. This more complex learning is often described as explicit, since subjects are able to describe the link between the stimulus and the outcome. An example of conditioning that requires this type of knowledge is trace conditioning, which includes a separation of a few seconds between the cue and the outcome. Explicit learning is often proposed to involve a separate system, but the degree of separation between implicit associations and explicit learning is still debated. We describe aversive conditioning experiments in human subjects used to study the degree of interaction between explicit and implicit systems. We do this in three ways. First, we show that if a higher-order task (in this case a working memory task) is performed during conditioning, it reduces not only explicit learning but also implicit learning. Second, we describe the area of the brain involved in explicit learning during conditioning and confirm that it is active during both trace and delay conditioning. Third, using functional magnetic resonance imaging (fMRI), we describe hemodynamic activity changes in perceptual areas of the brain that occur during delay conditioning and persist after the learned association has faded. From these studies, we conclude that there is a strong interaction between explicit and implicit learning systems, with one often directly changing the function of the other.

    Robust learning with implicit residual networks

    In this effort, we propose a new deep architecture utilizing residual blocks inspired by implicit discretization schemes. As opposed to standard feed-forward networks, the outputs of the proposed implicit residual blocks are defined as the fixed points of appropriately chosen nonlinear transformations. We show that this choice leads to improved stability of both forward and backward propagation, has a favorable impact on generalization power, and allows the robustness of the network to be controlled with only a few hyperparameters. In addition, the proposed reformulation of ResNet does not introduce new parameters and can potentially lead to a reduction in the number of required layers due to improved forward stability. Finally, we derive a memory-efficient training algorithm, propose a stochastic regularization technique, and provide numerical results in support of our findings.
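    A minimal sketch of the core idea, assuming a contractive nonlinear map so that plain fixed-point iteration converges (the paper’s own layer definition, solver, and training procedure are not reproduced here): a standard residual block computes y = x + f(x), whereas an implicit residual block defines its output as the fixed point of y = x + f(y).

        import numpy as np

        rng = np.random.default_rng(0)
        W = 0.1 * rng.standard_normal((8, 8))   # small weights keep f roughly contractive
        b = np.zeros(8)

        def f(y):
            # nonlinear transformation inside the block (illustrative choice)
            return np.tanh(W @ y + b)

        def explicit_residual(x):
            return x + f(x)                      # standard ResNet block: y = x + f(x)

        def implicit_residual(x, n_iter=50):
            # output defined implicitly as the fixed point of y = x + f(y),
            # computed here by naive fixed-point iteration
            y = x.copy()
            for _ in range(n_iter):
                y = x + f(y)
            return y

        x = rng.standard_normal(8)
        print(explicit_residual(x))
        print(implicit_residual(x))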

    Implicit learning of recursive context-free grammars

    Context-free grammars are fundamental for the description of linguistic syntax. However, most artificial grammar learning experiments have explored learning of simpler finite-state grammars, while studies exploring context-free grammars have not assessed awareness and implicitness. This paper explores the implicit learning of context-free grammars employing features of hierarchical organization, recursive embedding and long-distance dependencies. The grammars also featured the distinction between left- and right-branching structures, as well as between centre- and tail-embedding, both distinctions found in natural languages. People acquired unconscious knowledge of relations between grammatical classes even for dependencies over long distances, in ways that went beyond learning simpler relations (e.g. n-grams) between individual words. The structural distinctions drawn from linguistics also proved important, as performance was greater for tail-embedding than for centre-embedding structures. The results suggest the plausibility of implicit learning of complex context-free structures, which model some features of natural languages. They support the relevance of artificial grammar learning for probing mechanisms of language learning, and challenge existing theories and computational models of implicit learning.
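    To make the structural contrast concrete (with toy rules of our own, not the grammars used in the paper), centre-embedding nests each dependency inside the recursion, so its two halves end up far apart, while tail-embedding closes each dependency locally before recursing:

        import random

        def centre_embed(depth):
            # A -> a_i A b_i : the matching a/b pair wraps around the recursion,
            # producing nested long-distance dependencies (a a a ... b b b)
            if depth == 0:
                return []
            i = random.randint(1, 3)
            return [f"a{i}"] + centre_embed(depth - 1) + [f"b{i}"]

        def tail_embed(depth):
            # A -> a_i b_i A : each a is followed immediately by its matching b
            if depth == 0:
                return []
            i = random.randint(1, 3)
            return [f"a{i}", f"b{i}"] + tail_embed(depth - 1)

        print(" ".join(centre_embed(3)))   # all a's first, matching b's in reverse order
        print(" ".join(tail_embed(3)))     # each dependency closed before the next opens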

    Implicit and explicit learning in ACT-R

    A useful way to explain the notions of implicit and explicit learning in ACT-R is to define implicit learning as learning by ACT-R's learning mechanisms, and explicit learning as the result of learning goals. This idea is consistent with the usual notion of implicit learning as unconscious and always active, and of explicit learning as intentional and conscious. Two models are discussed to illustrate this point. The first, a model of a classical implicit memory task, the SUGARFACTORY scenario of Berry & Broadbent (1984), shows how ACT-R can model implicit learning. The second, a model of the so-called Fincham task (Anderson & Fincham, 1994), exhibits both implicit and explicit learning.
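    For readers unfamiliar with the SUGARFACTORY scenario, the sketch below uses the commonly cited form of its task dynamics (an assumption on our part; see Berry & Broadbent, 1984, for the exact procedure, and note this is not the ACT-R model itself): on each trial the participant chooses a workforce, and the plant's output depends on both that choice and the previous output, which is what makes the underlying rule hard to verbalise even when control performance improves.

        import random

        def sugar_factory_step(workers, prev_output):
            # commonly cited dynamics: output = 2*workers - previous output,
            # plus a random perturbation, clipped to the range 1..12
            # (workforce and output both expressed on a 1..12 scale)
            output = 2 * workers - prev_output + random.choice([-1, 0, 1])
            return max(1, min(12, output))

        # Even a constant workforce yields varying output, because the previous
        # output feeds back into the next trial.
        random.seed(0)
        out = 6
        for trial in range(5):
            out = sugar_factory_step(workers=7, prev_output=out)
            print(trial + 1, out)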