8,595 research outputs found

    Repetition learning is neither a continuous nor an implicit process

    Learning advances through repetition. A classic paradigm for studying this process is the Hebb repetition effect: Immediate serial recall performance improves for lists presented repeatedly as compared to nonrepeated lists. Learning in the Hebb paradigm has been described as a slow but continuous accumulation of long-term memory traces over repetitions [e.g., Page & Norris, Phil. Trans. R. Soc. B 364, 3737–3753 (2009)]. Furthermore, it has been argued that Hebb repetition learning requires no awareness of the repetition, thereby being an instance of implicit learning [e.g., Guérard et al., Mem. Cogn. 39, 1012–1022 (2011); McKelvie, J. Gen. Psychol. 114, 75–88 (1987)]. While these assumptions match the data from a group-level perspective, another picture emerges when analyzing data on the individual level. We used a Bayesian hierarchical mixture modeling approach to describe individual learning curves. In two preregistered experiments, using a visual and a verbal Hebb repetition task, we demonstrate that 1) individual learning curves show an abrupt onset followed by rapid growth, with a variable time for the onset of learning across individuals, and that 2) learning onset was preceded by, or coincided with, participants becoming aware of the repetition. These results imply that repetition learning is not implicit and that the appearance of a slow and gradual accumulation of knowledge is an artifact of averaging over individual learning curves.
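    The averaging artifact described in this abstract can be made concrete with a small simulation sketch. All parameters below are illustrative assumptions, not the authors' Bayesian hierarchical mixture model: each simulated individual jumps abruptly from a baseline to a learned accuracy level at a person-specific onset, yet the group mean rises smoothly.

```python
import numpy as np

# Hypothetical illustration, not the authors' model: step-like individual
# learning curves with variable onsets average into a smooth group curve.
rng = np.random.default_rng(0)
n_subjects, n_repetitions = 50, 20
onsets = rng.integers(2, 15, size=n_subjects)    # person-specific learning onset
baseline, learned = 0.4, 0.9                     # assumed accuracy before/after onset

reps = np.arange(1, n_repetitions + 1)
individual = np.where(reps[None, :] >= onsets[:, None], learned, baseline)
group_mean = individual.mean(axis=0)

print(np.round(group_mean, 2))  # gradual-looking rise despite abrupt individual curves
```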

    A walk in the statistical mechanical formulation of neural networks

    Neural networks are nowadays both powerful operational tools (e.g., for pattern recognition, data mining, error-correcting codes) and complex theoretical models at the focus of scientific investigation. As a research field, neural networks are handled and studied by psychologists, neurobiologists, engineers, mathematicians, and theoretical physicists. In theoretical physics in particular, the key instrument for the quantitative analysis of neural networks is statistical mechanics. From this perspective, we first review attractor networks: starting from ferromagnets and spin-glass models, we discuss the underlying philosophy and retrace the path paved by Hopfield and by Amit, Gutfreund, and Sompolinsky. One step further, we highlight the structural equivalence between Hopfield networks (modeling retrieval) and Boltzmann machines (modeling learning), thereby establishing a deep bridge linking two inseparable aspects of biological and robotic spontaneous cognition. As a sideline of this walk, we derive two alternative ways (with respect to the original Hebb proposal) to recover the Hebbian paradigm, stemming from ferromagnets and from spin glasses, respectively. Further, as these notes are intended for an engineering audience, we also highlight the mappings between ferromagnets and operational amplifiers and between antiferromagnets and flip-flops (since neural networks built from op-amps and flip-flops are particular spin glasses, and the latter are in turn combinations of ferromagnets and antiferromagnets), hoping that such a bridge serves as a concrete prescription for capturing the beauty of robotics from the statistical mechanical perspective. Comment: Contribution to the proceedings of the conference NCTA 2014. Contains 12 pages, 7 figures.
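    To make the Hebbian paradigm mentioned in this abstract concrete, here is a minimal sketch of Hebbian storage and retrieval in a Hopfield attractor network. Network size, number of patterns, noise level, and update count are assumptions chosen for illustration, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the Hebb prescription in a Hopfield attractor network
# (sizes and noise level are illustrative assumptions).
rng = np.random.default_rng(1)
N, P = 100, 5                                  # neurons, stored patterns
xi = rng.choice([-1, 1], size=(P, N))          # random binary patterns

W = (xi.T @ xi) / N                            # Hebb rule: sum of pattern outer products
np.fill_diagonal(W, 0.0)                       # no self-coupling

state = xi[0] * rng.choice([1, -1], size=N, p=[0.9, 0.1])  # noisy cue of pattern 0
for _ in range(10 * N):                        # asynchronous spin updates
    i = rng.integers(N)
    state[i] = 1 if W[i] @ state >= 0 else -1

overlap = state @ xi[0] / N                    # close to 1 means the memory was retrieved
print(f"overlap with stored pattern: {overlap:.2f}")
```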

    A Manifesto of Nodalism

    This paper proposes the notion of Nodalism as a means of describing contemporary culture and of understanding my own creative practice in electronic music composition. It draws on theories and ideas from Kirby, Bauman, Bourriaud, Deleuze, Guattari, and Gochenour to demonstrate how networks of ideas, or connectionist neural models of cognitive behaviour, can be used to contextualize and understand contemporary electronic music and can become a creative tool for its creation.

    Hebb's Accomplishments Misunderstood

    Amit's effort to provide stronger theoretical and empirical support for Hebb's cell-assembly concept is admirable, but we have serious reservations about the perspective presented in the target article. For Hebb, the cell assembly was a building block; by contrast, the framework proposed here eschews the need to fit the assembly into a broader picture of its function.

    View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation

    The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and relatively robust against identity-preserving transformations like depth rotations. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple- and complex-cell operations. While simulations of these models recapitulate the ventral stream's progression from early view-specific to late view-tolerant representations, they fail to generate the most salient property of the intermediate representation for faces found in the brain: mirror-symmetric tuning of the neural population to head orientation. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules can provide approximate invariance at the top level of the network. While most of the learning rules do not yield mirror-symmetry in the mid-level representations, we characterize a specific biologically plausible Hebb-type learning rule that is guaranteed to generate mirror-symmetric tuning to faces at intermediate levels of the architecture.
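    As a pointer to what a Hebb-type rule looks like in practice, the sketch below implements a generic Hebbian update with Oja-style normalization. It illustrates the rule family only and is not the specific rule characterized in the paper; the input dimensions, sample count, and learning rate are assumptions.

```python
import numpy as np

# Generic Hebb-type update with Oja-style normalization; an illustration of
# the rule family only, NOT the paper's specific mirror-symmetry rule.
rng = np.random.default_rng(2)
n_inputs, n_samples, lr = 20, 5000, 0.01
X = rng.standard_normal((n_samples, n_inputs))
X[:, 0] *= 2.0                                  # assumed dominant input direction

w = rng.standard_normal(n_inputs)
w /= np.linalg.norm(w)
for x in X:
    y = w @ x                                   # postsynaptic response
    w += lr * y * (x - y * w)                   # Hebbian term y*x plus a normalizing decay

alignment = abs(w[0]) / np.linalg.norm(w)       # Oja's rule converges to the top principal direction
print(f"alignment with dominant input direction: {alignment:.2f}")
```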