14,633 research outputs found

    Investigation of sequence processing: A cognitive and computational neuroscience perspective

    Get PDF
    Serial order processing, or sequence processing, underlies many human activities such as speech, language, skill learning, planning and problem solving. Investigating the neural bases of sequence processing enables us to understand serial order in cognition and also helps in building intelligent devices. In this article, we review various cognitive issues related to sequence processing with examples. Experimental results that give evidence for the involvement of various brain areas will be described. Finally, a theoretical approach based on statistical models and the reinforcement learning paradigm is presented. These theoretical ideas are useful for studying sequence learning in a principled way. This article also suggests a two-way process diagram integrating experimentation (cognitive neuroscience) and theory/computational modelling (computational neuroscience). This integrated framework is useful not only in the present study of serial order, but also for understanding many other cognitive processes.

    Modelling the Lexicon in Unsupervised Part of Speech Induction

    Full text link
    Automatically inducing the syntactic part-of-speech categories for words in text is a fundamental task in Computational Linguistics. While the performance of unsupervised tagging models has been slowly improving, current state-of-the-art systems make the obviously incorrect assumption that all tokens of a given word type must share a single part-of-speech tag. This one-tag-per-type heuristic counters the tendency of Hidden Markov Model based taggers to overgenerate tags for a given word type, but it is clearly incompatible with basic syntactic theory. In this paper we extend a state-of-the-art Pitman-Yor Hidden Markov Model tagger with an explicit model of the lexicon. In doing so we are able to incorporate a soft bias towards inducing few tags per type. We develop a particle filter for drawing samples from the posterior of our model and present empirical results showing that our model is competitive with, and faster than, the state of the art without making any unrealistic restrictions. Comment: To be presented at the 14th Conference of the European Chapter of the Association for Computational Linguistics.
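
    A minimal sketch of the idea behind the soft bias described above (not the authors' Pitman-Yor lexicon model or particle filter): a sparse symmetric Dirichlet prior over the tags a word type may take concentrates posterior mass on few tags without hard-coding the one-tag-per-type constraint. All names and numbers are illustrative.

    ```python
    import numpy as np

    def type_tag_posterior(tag_counts, alpha=0.1):
        """Posterior mean over tags for one word type under a symmetric
        Dirichlet(alpha) prior; a small alpha prefers few tags per type
        without forbidding ambiguity outright."""
        counts = np.asarray(tag_counts, dtype=float)
        return (counts + alpha) / (counts.sum() + alpha * len(counts))

    # A word type seen 40 times as NOUN, twice as VERB, never as ADJ.
    print(type_tag_posterior([40, 2, 0], alpha=0.1))   # sparse prior keeps mass on NOUN
    print(type_tag_posterior([40, 2, 0], alpha=10.0))  # flat prior leaks mass to every tag
    ```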

    Visually grounded learning of keyword prediction from untranscribed speech

    Full text link
    During language acquisition, infants have the benefit of visual cues to ground spoken language. Robots similarly have access to audio and visual sensors. Recent work has shown that images and spoken captions can be mapped into a meaningful common space, allowing images to be retrieved using speech and vice versa. In this setting of images paired with untranscribed spoken captions, we consider whether computer vision systems can be used to obtain textual labels for the speech. Concretely, we use an image-to-words multi-label visual classifier to tag images with soft textual labels, and then train a neural network to map from the speech to these soft targets. We show that the resulting speech system is able to predict which words occur in an utterance, acting as a spoken bag-of-words classifier, without seeing any parallel speech and text. We find that the model often confuses semantically related words, e.g. "man" and "person", making it even more effective as a semantic keyword spotter. Comment: 5 pages, 3 figures, 5 tables; small updates, added link to code; accepted to Interspeech 2017.
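
    A minimal sketch of the training setup described above, assuming a pretrained multi-label image tagger supplies the soft targets; the architecture, sizes, and names are illustrative stand-ins, not the authors' system.

    ```python
    import torch
    import torch.nn as nn

    VOCAB, N_MELS, HIDDEN = 1000, 40, 256   # illustrative sizes

    class SpeechTagger(nn.Module):
        """Maps an utterance (mel-spectrogram) to per-word scores."""
        def __init__(self):
            super().__init__()
            self.rnn = nn.GRU(N_MELS, HIDDEN, batch_first=True)
            self.out = nn.Linear(HIDDEN, VOCAB)

        def forward(self, mels):              # mels: (batch, frames, N_MELS)
            _, h = self.rnn(mels)             # final hidden state: (1, batch, HIDDEN)
            return self.out(h.squeeze(0))     # logits: (batch, VOCAB)

    model = SpeechTagger()
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    # One training step: the paired image's soft tags (probabilities from the
    # visual classifier) are the targets; no transcription is used anywhere.
    mels = torch.randn(8, 500, N_MELS)        # dummy batch of spectrograms
    soft_tags = torch.rand(8, VOCAB)          # dummy soft labels from the image tagger
    loss = bce(model(mels), soft_tags)
    loss.backward()
    optim.step()
    ```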

    A Multi-disciplinary Approach to the Investigation of Aspects of Serial Order in Cognition

    Get PDF
    Serial order processing or sequence processing underlies many human activities such as speech, language, skill learning, planning, problem solving, etc. Investigating the neural bases of sequence processing enables us to understand serial order in cognition and helps us build intelligent devices. In the current paper, various cognitive issues related to sequence processing will be discussed with examples. Some of the issues are: distributed versus local representation, pre-wired versus adaptive origins of representation, implicit versus explicit learning, fixed/flat versus hierarchical organization, timing aspects, order information embedded in sequences, primacy versus recency in list learning, and aspects of sequence perception such as recognition, recall and generation. Experimental results that give evidence for the involvement of various brain areas will be described. Finally, theoretical frameworks based on Markov models and the reinforcement learning paradigm will be presented. These theoretical ideas are useful for studying sequential phenomena in a principled way.
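
    A minimal sketch of the kind of statistical (Markov) model such frameworks build on: estimating first-order transition probabilities from observed symbol sequences. The toy sequences are made up for illustration.

    ```python
    from collections import Counter, defaultdict

    def fit_first_order_markov(sequences):
        """Estimate P(next symbol | current symbol) from a list of sequences."""
        counts = defaultdict(Counter)
        for seq in sequences:
            for cur, nxt in zip(seq, seq[1:]):
                counts[cur][nxt] += 1
        return {cur: {nxt: n / sum(nxts.values()) for nxt, n in nxts.items()}
                for cur, nxts in counts.items()}

    # Toy serial-order data; the fitted model predicts the likely next element.
    model = fit_first_order_markov(["ABCAB", "ABCCA", "BCABC"])
    print(model["A"])   # {'B': 1.0}; in this toy data A is always followed by B
    ```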

    Linguistic unit discovery from multi-modal inputs in unwritten languages: Summary of the "Speaking Rosetta" JSALT 2017 Workshop

    Get PDF
    We summarize the accomplishments of a multi-disciplinary workshop exploring the computational and scientific issues surrounding the discovery of linguistic units (subwords and words) in a language without orthography. We study the replacement of orthographic transcriptions by images and/or translated text in a well-resourced language to help unsupervised discovery from raw speech. Comment: Accepted to ICASSP 2018.

    From Word to Sense Embeddings: A Survey on Vector Representations of Meaning

    Get PDF
    Over the past years, distributed semantic representations have proved to be effective and flexible keepers of prior knowledge to be integrated into downstream applications. This survey focuses on the representation of meaning. We start from the theoretical background behind word vector space models and highlight one of their major limitations: the meaning conflation deficiency, which arises from representing a word with all its possible meanings as a single vector. Then, we explain how this deficiency can be addressed through a transition from the word level to the more fine-grained level of word senses (in their broader acceptation) as a method for modelling unambiguous lexical meaning. We present a comprehensive overview of the wide range of techniques in the two main branches of sense representation, i.e., unsupervised and knowledge-based. Finally, this survey covers the main evaluation procedures and applications for this type of representation, and provides an analysis of four of its important aspects: interpretability, sense granularity, adaptability to different domains, and compositionality. Comment: 46 pages, 8 figures. Published in the Journal of Artificial Intelligence Research.
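
    A minimal sketch of the unsupervised branch of sense representation surveyed above: cluster the contextualised occurrences of a word so that each cluster centroid acts as one sense vector, instead of conflating all senses into a single word vector. The embeddings here are random stand-ins for real contextual embeddings.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def induce_sense_vectors(occurrence_embeddings, n_senses=2, seed=0):
        """Cluster per-occurrence embeddings of one word; each centroid is a sense vector."""
        X = np.asarray(occurrence_embeddings)
        km = KMeans(n_clusters=n_senses, n_init=10, random_state=seed).fit(X)
        return km.cluster_centers_, km.labels_

    # Dummy contextual embeddings for occurrences of "bank" (river vs. finance contexts).
    rng = np.random.default_rng(0)
    occurrences = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(5, 1, (20, 50))])
    sense_vectors, assignments = induce_sense_vectors(occurrences, n_senses=2)
    print(sense_vectors.shape, assignments[:5])   # (2, 50) sense vectors; cluster id per occurrence
    ```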

    Extracting finite structure from infinite language

    Get PDF
    This paper presents a novel connectionist memory-rule based model capable of learning the finite-state properties of an input language from a set of positive examples. The model is based upon an unsupervised recurrent self-organizing map [T. McQueen, A. Hopgood, J. Tepper, T. Allen, A recurrent self-organizing map for temporal sequence processing, in: Proceedings of the Fourth International Conference on Recent Advances in Soft Computing (RASC2002), Nottingham, 2002] with laterally interconnected neurons. A derivation of functional equivalence theory [J. Hopcroft, J. Ullman, Introduction to Automata Theory, Languages and Computation, vol. 1, Addison-Wesley, Reading, MA, 1979] is used that allows the model to exploit similarities between the future context of previously memorized sequences and the future context of the current input sequence. This bottom-up learning algorithm binds functionally related neurons together to form states. Results show that the model is able to learn the Reber grammar [A. Cleeremans, D. Servan-Schreiber, J. McClelland, Finite state automata and simple recurrent networks, Neural Computation 1 (1989) 372–381] perfectly from a randomly generated training set and to generalize to sequences beyond the length of those found in the training set.
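
    For reference, a minimal generator for the Reber grammar mentioned above, using its standard transition table; this is an illustration of the finite-state language being learned, not the authors' model or training set.

    ```python
    import random

    # Standard Reber grammar: state -> list of (emitted symbol, next state) choices.
    REBER = {
        0: [("T", 1), ("P", 2)],
        1: [("S", 1), ("X", 3)],
        2: [("T", 2), ("V", 4)],
        3: [("X", 2), ("S", 5)],
        4: [("P", 3), ("V", 5)],
        5: [("E", None)],           # terminal emission
    }

    def generate_reber(rng=random):
        """Generate one positive example; strings always start with B and end with E."""
        out, state = ["B"], 0
        while state is not None:
            symbol, state = rng.choice(REBER[state])
            out.append(symbol)
        return "".join(out)

    print([generate_reber() for _ in range(3)])   # e.g. ['BTSSXSE', 'BPVVE', 'BTXXVVE']
    ```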