    TRACX2: a connectionist autoencoder using graded chunks to model infant visual statistical learning

    Even newborn infants are able to extract structure from a stream of sensory inputs; yet how this is achieved remains largely a mystery. We present a connectionist autoencoder model, TRACX2, that learns to extract sequence structure by gradually constructing chunks, storing these chunks in a distributed manner across its synaptic weights, and recognizing these chunks when they re-occur in the input stream. Chunks are graded rather than all-or-nothing in nature. As chunks are learnt, their component parts become more and more tightly bound together. TRACX2 successfully models the data from five experiments in the infant visual statistical learning literature, including tasks involving forward and backward transitional probabilities, low-salience embedded chunk items, part-sequences and illusory items. The model also captures performance differences across ages through the tuning of a single learning-rate parameter. These results suggest that infant statistical learning is underpinned by the same domain-general learning mechanism that operates in auditory statistical learning and, potentially, in adult artificial grammar learning.
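    The core idea of the abstract can be sketched in a few lines: an autoencoder trained on a recurring pair of elements comes to reconstruct that pair far better than a novel pairing, and low reconstruction error then serves as the signal that a familiar chunk has re-occurred. The sketch below is a minimal illustration of this principle, not the TRACX2 architecture itself; the vocabulary size, hidden-layer size, and learning rate are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    V = 4  # toy "syllable" vocabulary; a pair is two concatenated one-hots,
           # loosely echoing TRACX2's left/right input banks (sizes made up)

    def onehot(i):
        v = np.zeros(V)
        v[i] = 1.0
        return v

    def pair(i, j):
        return np.concatenate([onehot(i), onehot(j)])

    # Train the autoencoder on one recurring pair -- a statistical "word".
    word = pair(0, 1)
    H = 3                                  # hidden units: a graded chunk code
    W1 = rng.normal(0.0, 0.1, (H, 2 * V))
    W2 = rng.normal(0.0, 0.1, (2 * V, H))
    lr = 0.05

    for _ in range(800):
        h = np.tanh(W1 @ word)             # graded chunk representation
        y = W2 @ h                         # reconstruction of the pair
        e = y - word
        W2 -= lr * np.outer(e, h)
        W1 -= lr * np.outer((W2.T @ e) * (1.0 - h**2), word)

    def recon_error(x):
        return float(np.sum((W2 @ np.tanh(W1 @ x) - x) ** 2))

    # The trained pair is reconstructed far better than a novel pairing:
    # low error signals that a familiar chunk has re-occurred.
    assert recon_error(pair(0, 1)) < recon_error(pair(0, 2))
    ```

    Gradedness falls out naturally: the chunk is a pattern of continuous hidden activations whose binding strengthens with exposure, rather than a discrete symbol that either exists or does not.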

    The spectro-contextual encoding and retrieval theory of episodic memory.

    The spectral fingerprint hypothesis, which posits that different frequencies of oscillations underlie different cognitive operations, provides one account for how interactions between brain regions support perceptual and attentive processes (Siegel et al., 2012). Here, we explore and extend this idea to the domain of human episodic memory encoding and retrieval. Incorporating findings from the synaptic to cognitive levels of organization, we argue that spectrally precise cross-frequency coupling and phase-synchronization promote the formation of hippocampal-neocortical cell assemblies that form the basis for episodic memory. We suggest that both cell assembly firing patterns and the global pattern of brain oscillatory activity within hippocampal-neocortical networks represent the contents of a particular memory. Drawing upon the ideas of context reinstatement and multiple trace theory, we argue that memory retrieval is driven by internal and/or external factors which recreate the frequency-specific oscillatory patterns that occur during episodic encoding. These ideas are synthesized into a novel model of episodic memory (the spectro-contextual encoding and retrieval theory, or "SCERT") that provides several testable predictions for future research.

    Effects of labelling on object perception and categorisation in infants

    How do labels impact object perception and categorisation? This question has been the focus of substantial theoretical debate, particularly in the developmental literature, with conflicting results. Specifically, whether labels for objects act as additional perceptual features or instead as referential pointers to category concepts has been the subject of intense debate. In this thesis, we attempted to shed new light on this question, combining empirical results from both infants and adults with neurocomputational models. First, we developed a dual-memory neurocomputational model of long-term learning inspired by Westermann and Mareschal's (2014) model, to test predictions of the two main theories of labelling and categorisation against existing infant data, and to generate predictions for a follow-up study. Our modelling work suggested that for the empirical designs considered and age groups tested, labels were processed as object features, as opposed to having a more referential role. We then focused on explicitly testing potential attentional effects of auditory labels during categorisation in an empirical study. More precisely, we studied the interaction between feature salience, feature diagnosticity, and auditory labels in a categorisation task. Surprisingly, we found that 15-month-old infants and adults could learn labelled categories in which the salient feature (the head of line-drawn novel animals) was non-diagnostic of category membership but the non-salient feature (the tail) was, without adopting a different pattern of looking compared to participants in a control group. Although our data did not provide clear evidence for a true null effect, this finding was once again more compatible with the theory that labels act as features, not referents. This finding also led us to reconsider the use of eye movements and looking times as a proxy for learning, as it seemed that participants could learn more without looking more.
Given our empirical results on the salience and diagnosticity of features, and given the methodological differences in the handling of feature salience and diagnosticity in the categorisation literature, we developed a simple autoencoder model to further study the impact of salience differences between features in the context of a categorisation task, with or without a label. Our simulations suggested that larger disparities in salience between different features of an object can result in differences in learning speed and in the compactness of categories in internal representations, hinting that future empirical studies should consider feature salience in their design. Overall, then, this thesis provides some evidence in favour of the labels-as-features theory through the use of empirical eye-tracking data on infants and adults, and neurocomputational modelling. This thesis also raises new questions about the importance of feature salience in categorisation tasks, and about the interpretation of eye movement and looking time data in general.
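The salience effect described above can be illustrated with a toy autoencoder in which salience is modelled simply as input gain (an illustrative simplification; the thesis's actual stimulus coding is not specified here). With two features of different gain, the higher-gain "salient" feature drives larger gradients and is reconstructed accurately sooner, so a fixed training budget leaves the subtle feature lagging behind.

```python
import numpy as np

rng = np.random.default_rng(1)

# 32 two-feature "objects"; feature 0 is salient (gain 3), feature 1 subtle.
X = rng.choice([-1.0, 1.0], size=(32, 2))
X[:, 0] *= 3.0

W1 = rng.normal(0.0, 0.1, (2, 2))   # encoder
W2 = rng.normal(0.0, 0.1, (2, 2))   # decoder
lr = 0.02

for _ in range(100):                # deliberately limited training budget
    H = X @ W1.T                    # hidden representations
    Y = H @ W2.T                    # reconstructions
    E = Y - X
    W2 -= lr * (E.T @ H) / len(X)
    W1 -= lr * ((E @ W2).T @ X) / len(X)

Y = (X @ W1.T) @ W2.T
# Per-feature error, normalised by feature magnitude so the gain cancels out.
rel_err = np.mean(np.abs(Y - X), axis=0) / np.mean(np.abs(X), axis=0)

# The salient feature is learnt faster: after the same number of steps its
# relative reconstruction error is clearly smaller than the subtle feature's.
assert rel_err[0] < rel_err[1]
```

The design choice worth noting is that nothing category-specific is built in: the speed difference emerges purely from the gradient dynamics, which is the sense in which salience disparities alone can shape what the network represents first.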

    Biomedical ontology alignment: An approach based on representation learning

    While representation learning techniques have shown great promise in application to a number of different NLP tasks, they have had little impact on the problem of ontology matching. Unlike past work that has focused on feature engineering, we present a novel representation learning approach that is tailored to the ontology matching task. Our approach is based on embedding ontological terms in a high-dimensional Euclidean space. This embedding is derived on the basis of a novel phrase retrofitting strategy through which semantic similarity information becomes inscribed onto the fields of pre-trained word vectors. The resulting framework also incorporates a novel outlier detection mechanism based on a denoising autoencoder that is shown to improve performance. An ontology matching system derived using the proposed framework achieved an F-score of 94% on an alignment scenario involving the Adult Mouse Anatomical Dictionary and the Foundational Model of Anatomy ontology (FMA) as targets. This compares favorably with the best performing systems on the Ontology Alignment Evaluation Initiative anatomy challenge. We performed additional experiments on aligning FMA to the NCI Thesaurus and to SNOMED CT based on a reference alignment extracted from the UMLS Metathesaurus. Our system obtained overall F-scores of 93.2% and 89.2% for these experiments, thus achieving state-of-the-art results.
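    The retrofitting idea at the heart of this approach can be sketched with the classic word-level update of Faruqui et al.: each vector is iteratively pulled toward its ontology-linked neighbours while staying anchored to its original pre-trained embedding, so that synonymous terms end up closer in the space. This is a simplified, word-level stand-in for the paper's phrase-level strategy; the terms, synonymy edges, and random vectors below are all invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy pre-trained vectors (random stand-ins for real word embeddings).
    terms = ["heart", "cardiac_muscle", "myocardium", "kidney"]
    q = {t: rng.normal(size=8) for t in terms}

    # Synonymy edges from a hypothetical ontology: myocardium ~ cardiac_muscle.
    edges = {"myocardium": ["cardiac_muscle"], "cardiac_muscle": ["myocardium"]}

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Retrofitting: average each vector with its neighbours, anchored to the
    # original embedding q[t] so unrelated terms are left untouched.
    v = {t: q[t].copy() for t in terms}
    for _ in range(10):
        for t, nbrs in edges.items():
            v[t] = (q[t] + sum(v[n] for n in nbrs)) / (1 + len(nbrs))

    before = cosine(q["myocardium"], q["cardiac_muscle"])
    after = cosine(v["myocardium"], v["cardiac_muscle"])
    assert after > before   # synonyms are drawn together in the embedding space
    ```

    In a matching system built on such vectors, candidate alignments can then be scored by cosine similarity between term embeddings, with an outlier detector (a denoising autoencoder in the paper) filtering out implausible matches.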