
    Neural blackboard architectures of combinatorial structures in cognition

    Human cognition is unique in the way in which it relies on combinatorial (or compositional) structures. Language provides ample evidence for the existence of combinatorial structures, but they can also be found in visual cognition. To understand the neural basis of human cognition, it is therefore essential to understand how combinatorial structures can be instantiated in neural terms. In his recent book on the foundations of language, Jackendoff described four fundamental problems for a neural instantiation of combinatorial structures: the massiveness of the binding problem, the problem of 2, the problem of variables, and the transformation of combinatorial structures from working memory to long-term memory. This paper aims to show that these problems can be solved by means of neural ‘blackboard’ architectures. For this purpose, a neural blackboard architecture for sentence structure is presented. In this architecture, neural structures that encode for words are temporarily bound in a manner that preserves the structure of the sentence. It is shown that the architecture solves the four problems presented by Jackendoff. The ability of the architecture to instantiate sentence structures is illustrated with examples of sentence complexity observed in human language performance. Similarities exist between the architecture for sentence structure and blackboard architectures for combinatorial structures in visual cognition, derived from the structure of the visual cortex. These architectures are briefly discussed, together with an example of a combinatorial structure in which the blackboard architectures for language and vision are combined. In this way, the architecture for language is grounded in perception.

    A neural blackboard architecture of sentence structure

    We present a neural architecture for sentence representation. Sentences are represented in terms of word representations as constituents. A word representation consists of a neural assembly distributed over the brain. Sentence representation does not result from associations between neural word assemblies. Instead, word assemblies are embedded in a neural architecture in which the structural (thematic) relations between words can be represented. Arbitrary thematic relations between arguments and verbs can be represented. Arguments can consist of nouns and phrases, as in sentences with relative clauses. A number of sentences can be stored simultaneously in this architecture. We simulate how probe questions about thematic relations can be answered. We discuss how differences in sentence complexity, such as the difference between subject-extracted versus object-extracted relative clauses and the difference between right-branching versus center-embedded structures, can be related to the underlying neural dynamics of the model. Finally, we illustrate how memory capacity for sentence representation can be related to the nature of reverberating neural activity, which is used to store information temporarily in this architecture.
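A symbolic toy can make the idea of "probe questions about thematic relations" concrete. The sketch below is purely illustrative and assumes a dictionary-based stand-in for the paper's neural bindings; the actual model uses neural dynamics, not symbolic structures.

```python
# Illustrative stand-in for temporary binding of word representations to
# thematic roles (the real model binds neural assemblies, not strings).

def bind_sentence(verb, agent, patient):
    """Bind word representations into a thematic-relation structure."""
    return {"verb": verb, "agent": agent, "patient": patient}

def answer_probe(binding, role):
    """Answer a probe question such as 'who is the agent?'."""
    return binding[role]

# Several sentences can be stored simultaneously.
memory = [bind_sentence("chases", "dog", "cat"),
          bind_sentence("sees", "cat", "mouse")]

# Probe: who does the chasing in the first stored sentence?
print(answer_probe(memory[0], "agent"))  # dog
```

The point of the architecture is that such role bindings are held temporarily, by reverberating activity, rather than stored as fixed associations.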

    Video Fill In the Blank using LR/RL LSTMs with Spatial-Temporal Attentions

    Given a video and a description sentence with one missing word (we call it the "source sentence"), the Video-Fill-In-the-Blank (VFIB) problem is to find the missing word automatically. The contextual information of the sentence, as well as visual cues from the video, are important to infer the missing word accurately. Since the source sentence is broken into two fragments: the sentence's left fragment (before the blank) and the sentence's right fragment (after the blank), traditional Recurrent Neural Networks cannot encode this structure accurately because of the many possible variations of the missing word in terms of its location and type in the source sentence. For example, a missing word can be the first word or be in the middle of the sentence, and it can be a verb or an adjective. In this paper, we propose a framework to tackle the textual encoding: two separate LSTMs (the LR and RL LSTMs) are employed to encode the left and right sentence fragments, and a novel structure is introduced to combine each fragment with an "external memory" corresponding to the opposite fragment. For the visual encoding, end-to-end spatial and temporal attention models are employed to select discriminative visual representations to find the missing word. In the experiments, we demonstrate the superior performance of the proposed method on the challenging VFIB problem. Furthermore, we introduce an extended and more generalized version of VFIB, which is not limited to a single blank. Our experiments indicate the generalization capability of our method in dealing with such more realistic scenarios.
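The left-to-right/right-to-left fragment encoding can be sketched with two simple recurrent passes. This is a minimal numpy sketch, with plain RNN cells standing in for the paper's LSTMs; all names, dimensions, and weights are invented for illustration.

```python
import numpy as np

# Sketch: encode the two fragments of a fill-in-the-blank sentence with
# separate forward (LR) and backward (RL) recurrent passes. Simple tanh
# RNN cells replace the paper's LSTMs; dimensions are illustrative.

rng = np.random.default_rng(0)
DIM = 8
W_in = rng.standard_normal((DIM, DIM)) * 0.1
W_rec = rng.standard_normal((DIM, DIM)) * 0.1

def encode(fragment_vectors, reverse=False):
    """Run a simple RNN over word vectors; reverse=True reads right-to-left."""
    h = np.zeros(DIM)
    seq = reversed(fragment_vectors) if reverse else fragment_vectors
    for x in seq:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

# Random stand-ins for word embeddings: "the dog ____ the ball".
left = [rng.standard_normal(DIM) for _ in ("the", "dog")]    # before the blank
right = [rng.standard_normal(DIM) for _ in ("the", "ball")]  # after the blank

# LR encoder reads the left fragment forward; RL encoder reads the
# right fragment backward, so both passes end adjacent to the blank.
h_left = encode(left)
h_right = encode(right, reverse=True)
blank_repr = np.concatenate([h_left, h_right])  # joint query for the blank
print(blank_repr.shape)  # (16,)
```

In the full model, each fragment encoding is further combined with an "external memory" built from the opposite fragment before scoring candidate words.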

    Specificity and definiteness in sentence and discourse structure

    In this paper, I argue that this informally given list of characteristics covers only a certain subclass of specific indefinites. […] In particular, I dispute the definition of specific indefinites as "the speaker has the referent in mind" as rather confusing if one is working with a semantic theory. Furthermore, I discuss "relative specificity", i.e. cases in which the specific indefinite does not exhibit wide, but intermediate or narrow scope behavior. Based on such data, I argue that specificity expresses a referential dependency between introduced discourse items. Informally speaking, the specificity of the indefinite expression something [...] expresses that the reference of the expression depends on the reference of another expression, here, on the expression a monk, not the speaker.

    Mathematical Foundations for a Compositional Distributional Model of Meaning

    We propose a mathematical framework for a unification of the distributional theory of meaning in terms of vector space models, and a compositional theory for grammatical types, for which we rely on the algebra of Pregroups, introduced by Lambek. This mathematical framework enables us to compute the meaning of a well-typed sentence from the meanings of its constituents. Concretely, the type reductions of Pregroups are 'lifted' to morphisms in a category, a procedure that transforms meanings of constituents into a meaning of the (well-typed) whole. Importantly, meanings of whole sentences live in a single space, independent of the grammatical structure of the sentence. Hence the inner-product can be used to compare meanings of arbitrary sentences, as it is for comparing the meanings of words in the distributional model. The mathematical structure we employ admits a purely diagrammatic calculus which exposes how the information flows between the words in a sentence in order to make up the meaning of the whole sentence. A variation of our 'categorical model' which involves constraining the scalars of the vector spaces to the semiring of Booleans results in a Montague-style Boolean-valued semantics.
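A tiny numerical instance can show how a type reduction turns word meanings into a sentence meaning. In the sketch below, nouns are vectors and a transitive verb is a matrix (a deliberate simplification of the full tensor construction); the vectors and the choice of a one-dimensional sentence space are invented for illustration.

```python
import numpy as np

# Toy composition in the distributional-compositional style: nouns are
# vectors, a transitive verb is a matrix pairing subjects (rows) with
# objects (columns), and the pregroup type reduction becomes matrix
# contraction. All numbers here are made up.

dogs = np.array([1.0, 0.0])
cats = np.array([0.0, 1.0])
chase = np.array([[0.1, 0.9],
                  [0.8, 0.2]])

def sentence_meaning(subj, verb, obj):
    """The reduction n · (n^r s n^l) · n collapses to subj @ verb @ obj."""
    return subj @ verb @ obj

m1 = sentence_meaning(dogs, chase, cats)  # "dogs chase cats"
m2 = sentence_meaning(cats, chase, dogs)  # "cats chase dogs"
# Both meanings land in the same (here one-dimensional) sentence space,
# so they can be compared directly, regardless of grammatical structure.
print(m1, m2)  # 0.9 0.8
```

Constraining the scalars to Booleans, as in the paper's variation, would replace the sums and products above with logical or/and.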

    Syntactic and semantic contributions of pitch accents during sentence comprehension

    Syntactic, semantic, and prosodic cues all establish expectations that guide sentence comprehension. In the prosodic domain, pitch accents can assign contrastive focus and resolve a syntactically ambiguous phrase. However, can prosodic focus marking (by pitch accenting) influence the interpretation of a sentence in the presence of syntactic and semantic cues? Our auditory experiment revolved around the sentence (in German) “Yesterday the policeman arrested the thief, not the murderer”. A pitch accent on either POLICEMAN or THIEF placed one of those arguments in contrastive focus with the ellipsis structure (“the murderer”). The two contrasted arguments could contain violations: in the syntax condition, the grammatical case of the article in the ellipsis structure mismatched the focused constituent in the main clause (nominative vs. accusative). In the semantic condition, the thematic roles of the contrasted words were incongruent (typical agent vs. patient roles of “arrest”). Visual comprehension questions probed the agent/patient role of the arguments in the sentence (subject or object), followed by a button-press response. Reaction times showed that if the pitch accent marked syntactic information that mismatched the syntactic information in the ellipsis structure, responses were delayed. The direction of the semantic effect depended on the focused noun. The response patterns showed that participants were led by the syntactic information to make their syntactic judgements, despite a conflicting expectation established by prosody. The experiment shows that pitch accents establish a syntactic expectation during sentence comprehension. However, these expectations are overwritten by incoming syntactic information to yield an interpretation of the sentence.

    Beyond Q-Resolution and Prenex Form: A Proof System for Quantified Constraint Satisfaction

    We consider the quantified constraint satisfaction problem (QCSP), which is to decide, given a structure and a first-order sentence (not assumed here to be in prenex form) built from conjunction and quantification, whether or not the sentence is true on the structure. We present a proof system for certifying the falsity of QCSP instances and develop its basic theory; for instance, we provide an algorithmic interpretation of its behavior. Our proof system places the established Q-resolution proof system in a broader context, and also allows us to derive QCSP tractability results.
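The decision problem itself is easy to state in code: evaluate a quantified conjunctive sentence on a finite structure by recursion. The sentence encoding below (nested tuples) is invented for this sketch, and note that the paper's contribution is a proof system certifying falsity, not this naive evaluator.

```python
# Miniature QCSP evaluator: decide a sentence built from conjunction and
# quantification (not necessarily prenex) over a finite structure.
# The tuple-based sentence format is an illustrative encoding.

def holds(structure, sentence, env):
    kind = sentence[0]
    if kind == "atom":       # ("atom", relation_name, (var, ...))
        _, rel, args = sentence
        return tuple(env[v] for v in args) in structure["relations"][rel]
    if kind == "and":        # ("and", phi, psi)
        return holds(structure, sentence[1], env) and holds(structure, sentence[2], env)
    if kind == "exists":     # ("exists", var, phi)
        _, v, phi = sentence
        return any(holds(structure, phi, {**env, v: a}) for a in structure["domain"])
    if kind == "forall":     # ("forall", var, phi)
        _, v, phi = sentence
        return all(holds(structure, phi, {**env, v: a}) for a in structure["domain"])
    raise ValueError(f"unknown connective: {kind}")

# Structure: domain {0, 1} with the equality relation.
K = {"domain": [0, 1],
     "relations": {"eq": {(0, 0), (1, 1)}}}

# ∀x ∃y eq(x, y) is true on K; ∃x ∀y eq(x, y) is false.
phi = ("forall", "x", ("exists", "y", ("atom", "eq", ("x", "y"))))
print(holds(K, phi, {}))  # True
```

Because quantifiers may nest under conjunctions, this evaluation handles non-prenex sentences directly, which is exactly the generality the paper's proof system targets.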

    THE SYNTACTIC OF ENGLISH SENTENCE STRUCTURE IN BRAD BIRD'S THE INCREDIBLES

    This study adopts a syntactic-structure approach, a branch of linguistics concerned with sentence structure. Syntactic analysis aims to reveal the units of a sentence: the hierarchy in the ordering of its elements, how surface ambiguities arise, and the relatedness of certain sentences (Chomsky, 1957). The main purpose of this study is to analyze selected scenes of the movie script in terms of clause structure and phrase structure, using tree diagrams to show how sentence structure contributes to the clarity of meaning in the text. English sentence structure is a system of clauses and phrases, with rules for shortening or expanding a sentence, and can be described through the elements of each sentence (Berry, 1977; Halliday, 2002). The study was conducted using a descriptive qualitative design. Of the 106 sentences identified, 64 are simple sentences (60.37%), 15 are compound sentences (14.15%), 22 are complex sentences (20.75%), and 5 are compound-complex sentences (4.71%); simple sentences are thus the most frequent type and compound-complex sentences the least frequent. The study also identifies the phrase structure rules portrayed in the tree diagrams: three patterns occur across the four sentence types, namely S → NP-VP, S → NP-Mod-VP, and S → NP-Aux-VP, with the first pattern occurring most often and the second least often.
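The reported percentages can be recomputed from the counts as a quick consistency check. The snippet below uses only figures stated in the abstract; truncation to two decimals reproduces the published values.

```python
# Recompute the sentence-type distribution from the study's counts.
counts = {"simple": 64, "compound": 15,
          "complex": 22, "compound-complex": 5}
total = sum(counts.values())
assert total == 106  # matches the abstract

for kind, n in counts.items():
    pct = int(n / total * 10000) / 100  # truncate to two decimal places
    print(f"{kind}: {n} ({pct}%)")
```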